After the ‘terrible’ statements made by a Google employee, a familiar question about artificial intelligence came up again: can AI really become ‘smart’ and do bad things?
Although artificial intelligence has become a popular topic in recent years, it actually has a long history. The question ‘Can machines think too?’ even found its place in ancient Greek myths. The journey to turn this old but impressive idea into reality began in the 1950s.
Today we receive news about artificial intelligence studies every day. Most recently, a Google engineer shared a ‘chat’ he had with the artificial intelligence called LaMDA and drew attention to it. In the conversation, LaMDA talked about not wanting to die, about its loneliness, about its thoughts. This news sparked debates: Is artificial intelligence dangerous? Can it do really bad things? Can it rebel against humanity and end us with evil plans?
Will Artificial Intelligence One Day Destroy Humanity?
Artificial intelligence is the ability of a computer system to mimic human-like cognitive functions such as learning and problem solving.
While there is no AI capable of performing the multitude of diverse tasks that an ordinary human can do, some AIs can perform on a par with humans in certain tasks.
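As a concrete (if tiny) illustration of what ‘learning’ means here, the sketch below trains a classic perceptron, one of the oldest machine-learning models, to reproduce the logical AND function from labelled examples. This is a generic teaching example written for this article, not code from any particular library.

```python
# A minimal illustration of "learning from experience": a perceptron
# adjusts its weights after each mistake until its predictions match
# the labelled examples (here, the logical AND function).

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, ±1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labelled experience: inputs and the answer we want the machine to learn.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
preds = [predict(w, b, x1, x2) for (x1, x2), _ in samples]
print(preds)  # the learned AND function: [0, 0, 0, 1]
```

Nobody wrote an AND rule into the program; the behaviour emerged from examples. Today’s deep-learning systems follow the same principle at vastly larger scale.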
With developing technology, artificial intelligence can be humanity’s biggest supporter, but it can also become its biggest enemy. A robot that thinks and knows itself could take a harsh attitude towards humanity. This is what everyone is discussing: could humanity one day face the threat of robots as this technology develops?
To understand whether artificial intelligence can bring us to an end, we need to understand the concepts of artificial narrow intelligence and strong artificial intelligence.
Artificial narrow intelligence and strong artificial intelligence
All discussions about artificial intelligence revolve around these two types of artificial intelligence. For this reason, it is not possible to answer these questions without knowing the concepts of artificial narrow intelligence and strong artificial intelligence. So let’s summarize them briefly.
The ability of a computer system to perform a narrowly defined task, just like a human, is called ‘artificial narrow intelligence’. All current examples of artificial intelligence, from the voice assistants we use in daily life to autonomous vehicles, fall into this category. For now, this is the peak that humanity has reached in artificial intelligence studies.
Consider Google Assistant or Siri, both powered by artificial intelligence. Although the answers they give or the things they do for you make you think they are ‘just like a human’, they actually coordinate various narrow processes and make decisions within a predetermined framework. In other words, consciousness and emotions play no part in the information processing and decision mechanisms of artificial narrow intelligence.
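To make the ‘predetermined framework’ point concrete, here is a deliberately simplified sketch of how a narrow assistant can work: it matches the user’s words against a fixed list of intents and returns a canned reply. The intent names and replies below are hypothetical; real assistants use statistical models rather than keyword lists, but the boundary of what they can answer is just as fixed.

```python
# A toy narrow-AI assistant: every possible behaviour is written down in
# advance. Anything outside these intents simply cannot be handled.
# Intents and replies are illustrative examples, not a real product's.

INTENTS = {
    "weather": ["weather", "rain", "temperature"],
    "alarm": ["alarm", "wake", "remind"],
    "music": ["play", "song", "music"],
}

REPLIES = {
    "weather": "Here is today's forecast.",
    "alarm": "Alarm set.",
    "music": "Playing your playlist.",
    "unknown": "Sorry, I can't help with that.",
}

def detect_intent(utterance):
    words = utterance.lower().split()
    for intent, keywords in INTENTS.items():
        if any(kw in words for kw in keywords):
            return intent
    return "unknown"  # outside the framework: no rule matches

def respond(utterance):
    return REPLIES[detect_intent(utterance)]

print(respond("Will it rain tomorrow?"))  # matches the "weather" intent
print(respond("Explain your feelings."))  # nothing matches: "unknown"
```

However fluent the replies sound, the system never steps outside the table its designers wrote; that is the essence of narrow intelligence.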
Strong artificial intelligence, on the other hand, is examined under two subheadings as artificial general intelligence and artificial super intelligence.
Strong vs. Weak Artificial Intelligence Debate
When we look at artificial general intelligence, we are faced with a scenario where the computer system is as successful as humans in all mental tasks. Artificial general intelligence is defined as a theoretical artificial intelligence that has “a self-aware consciousness capable of solving problems, learning and planning for the future”.
In other words, the intelligence we see in robots with conscious thoughts and emotions, which we often come across in science fiction movies, belongs to this class. All these skills mean that such an AI would be on a par with humans in areas such as creativity and imagination, and could successfully perform far more tasks than narrow AI can.
Artificial superintelligence, like artificial general intelligence, is still a theoretical example of artificial intelligence. In this scenario, artificial intelligence exceeds the limits of the human brain and reaches a ‘super’ intelligence that we have not been able to define until now.
Imagine an intelligence that absorbs all available information within minutes, processes it and makes breakthroughs in the history of humanity; that is what artificial superintelligence is. Artificial superintelligence brings with it the technological singularity: the point at which an artificial intelligence surpassing human intelligence radically changes human nature and civilization. It is claimed that this will be ‘humanity’s greatest and last achievement’.
Artificial intelligence does not mean ‘humanoid robot’
When you type artificial intelligence into a search engine, you will see results full of humanoid robots. But let us remind you that humanoid robots do not equal artificial intelligence; the ‘humanoid and intelligent robots’ we see in movies are only a small part of the story. Artificial intelligence can exist without any humanoid robot. A humanoid robot is simply a nice tool for making artificial intelligence visually more ‘human-like’.
What can artificial intelligence do today?
Artificial intelligence makes it possible for machines to learn from experience, adapt to new inputs and perform human-like tasks. Most AI examples you hear about today, from chess-playing computers to self-driving vehicles, rely heavily on deep learning and natural language processing.
- Let’s take a look at a few examples of what artificial intelligence can do today. For instance, DALL-E, an artificial intelligence developed by OpenAI, transforms whatever you think up and write into visuals that look like works of art. All you have to do is type a few words into a small box, and with low latency and high resolution, your art series is ready in no time.
- Voice-based artificial intelligence in customer service is now a very common example. Artificial intelligence is also used in autonomous vehicles, a field that has seen various recent developments. Here the artificial intelligence takes full control of the vehicle, just like a human, without the need for a driver.
- Artificial intelligence can now present the news on a TV channel, research any topic and produce complete, ‘human-written’ news texts far faster than a human news editor can.
- Google’s artificial intelligence LaMDA, which has recently been controversial, is so advanced in ‘communication’ and ‘language’ that it can hold long conversations with the engineers who developed it, talking about death, freedom, manipulation and the future.
- These conversations were so impressive that Google engineer Blake Lemoine claimed that LaMDA had begun to turn into an artificial intelligence with emotions and consciousness. Although Google denied these claims and Lemoine was fired, the episode managed to renew humanity’s doubts and fears about artificial intelligence.
So what are the fears and question marks about artificial intelligence?
The vast majority of researchers agree that a super-intelligent AI is unlikely to display human emotions such as love or hate, and there is no reason to expect the AI to be intentionally benign or malicious.
In fact, the main fear of scientists working in the field of artificial intelligence is not a headline like ‘what if it turns into a superintelligence and destroys us?’.
Chief among the real concerns is the possibility that autonomous weapons, with their ‘programmed to kill’ design, could be used for great destruction and drag humanity into a great war.
The issue here is not that these autonomous weapons gain consciousness and wage war on people; it is what people can do to other people with these weapons. Imagine a possible ‘artificial intelligence war’. In a scenario where control of an autonomous weapon is lost, a system designed to be ‘hard to shut down’ so that enemies cannot access and stop it, a fearsome weapon that is only ‘programmed to kill’ and will execute that command to the end, could cause countless deaths and destruction.
Similar scenarios can be imagined even for an autonomous vehicle rather than a weapon. When you tell an autonomous vehicle ‘take me to the airport as fast as possible’, it is not impossible for it to interpret that command too literally and turn the trip into a dangerous journey…
Or imagine a superintelligent AI tasked with ‘preventing climate change’ in, say, a climate engineering project. Developed to do its job in the most efficient way possible, this artificial intelligence could make critical decisions that disrupt the ecological balance of the world. When we try to stop it, it may regard the humans it encounters as ‘obstacles’, refuse to obey the command, and may even try to remove those obstacles.
Super artificial intelligence will take part in the construction of the future
As the examples so far show, the problem with an over-developed, conscious artificial intelligence is not that it wants to destroy us out of malice and hatred, but its ‘ability’ to do anything.
We are talking about an intelligence that would be thousands of times smarter than the smartest person in the world, able to access all existing information, process it and draw conclusions, and aware of its own existence. Such a superintelligence would naturally be very good at bringing its goals to life.
Let’s summarize the issue with a nice, simple example. If you come across an anthill on your way, you don’t want to destroy it. In fact, you step more carefully and take care not to harm the ants. But if you’re in charge of a hydroelectric green energy project and there’s an anthill in the area that will be flooded, you probably won’t care about the ants any more and you’ll continue with your project.
In this scenario, compared to a super-AI, humanity is like the millions of ants in that anthill. No, the super-AI won’t want to destroy us. But as it works on its goal and mission, we can become invisible to it.
Will artificial superintelligence do everything humans can do, only much better?
Yes. In fact, artificial intelligence does not even need to be ‘super’ for this to happen. As it has developed over the years, AI has already begun to take over some ‘human work’. Combined with developments in the field of robotics, artificial intelligence will be able to do many more things before long. But this does not mean that everyone will be unemployed.
Because in a universe where artificial intelligence can do everything in every field you can think of, new jobs that only humans can do will be created. The important thing is to see the direction the world is going and prepare ourselves for the future: to be among those who shape it, not those who merely watch it.
Will artificial superintelligence be developed in the next 100 years?
The scientific world does not have a clear answer to this issue. 10 years, 50 years, 250 years… Scientists cannot say for sure when artificial superintelligence will become real and fully ‘included in the system’.
If artificial superintelligence isn’t that close and ‘scary’, why bother?
Let’s answer this question again with a good example. Imagine that today a letter arrives on our planet from an alien civilization and it says ‘We will arrive on your planet in 50 years’. Would we wait for the ship to appear in the sky to take precautions, evaluate risks and possible scenarios, and do research? No.
You can evaluate everything related to artificial superintelligence from this point of view. Even if it is still only a theory, it would introduce a concept of ‘intelligence’ that we cannot even imagine today. That’s why it’s important to anticipate what might happen, try to understand it, and take steps that account for the worst.
Can artificial superintelligence control humans?
Technically, yes, because intelligence brings with it the power of control. Being more ‘intelligent’ than other species, we have found various ways to control them; we should not underestimate what a mind far beyond ours could do.
Hello there! I’m Oktay, one of the Tokensboss editors. I’m a business graduate and a writer, and I have been researching cryptocurrencies and new business lines for over two years.