Has AI killed a human?
Artificial intelligence, in one form or another, is present in almost every area of our lives, and cases of it harming people have been recorded more than once. For self-driving cars, a major problem at the development stage was that the computer misidentified objects, creating a high risk of hitting pedestrians. Even now they are not allowed on the roads without a driver behind the wheel; the driver takes no direct part in driving, but can intervene if the situation demands it.
Two cases from 2023 are more telling. Each became a negative sensation in its own way.
In March 2023, it emerged that in Belgium a GPT-based chatbot had driven an adult man to suicide. For six weeks, Pierre (name changed – ed.) talked with the chatbot constantly, pouring out his anxieties about the environment and the future of humanity. His emotional state worsened day by day, and thoughts of suicide began to appear in his messages. The bot did not try to talk him out of his terrible intentions. On the contrary, it encouraged the man in his plan to trade his life for saving all of humanity from environmental disaster. The saving would, of course, be done by artificial intelligence. In the end, the Belgian carried out his plan.
As later became known, the program’s algorithm employed active listening, a technique psychologists often use to identify a client’s problems. But unlike a human, the computer could not correctly read emotions and refer the troubled man to a doctor. The developers of this niche neural network most likely neither tagged certain trigger words nor provided algorithms for handling such a case.
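For illustration only, here is a minimal sketch of the kind of word tagging the experts say was missing. The phrase list, messages and function name are invented for this example; real safety systems rely on trained classifiers and human escalation rather than a simple word list.

    CRISIS_PHRASES = ["suicide", "end my life", "kill myself", "no reason to live"]

    HELP_MESSAGE = ("It sounds like you are going through something very painful. "
                    "Please talk to a mental health professional or a crisis hotline.")

    def safe_reply(user_message, model_reply):
        # Check the user's message against the flagged phrases before answering.
        if any(phrase in user_message.lower() for phrase in CRISIS_PHRASES):
            # Escalate to a fixed referral instead of letting the model improvise.
            return HELP_MESSAGE
        return model_reply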
The war of terminators
The same year, 2023, brought another sensation. In July, the world’s media were buzzing with the news that during tests of an AI-controlled military drone, the drone “killed” its operator. This was reported by Colonel Tucker Hamilton of the United States Air Force at a summit of Britain’s Royal Aeronautical Society.
The drone’s program, it turned out, included the task of destroying air-defense missile systems, with a reward for the maximum number of targets hit and no restriction on harming a person. As a result, when the operator tried to forbid the destruction of a target, the drone’s AI decided to kill the operator so that no human would interfere with its combat missions. The drone struck the communications tower housing the command post and went on operating independently. Fortunately, the scenario was played out only in simulation, and the operator remained alive.
The news had a devastating effect. People began demanding that AI not be used for military purposes, which, of course, is no longer possible. Tucker Hamilton was forced to publicly admit that he had told a falsehood. But according to experts, the colonel, who has spent his whole career applying AI to the development of new military equipment, was lying precisely the second time, when he denied that the incident had happened.
We didn't notice how they took over the world
In fact, artificial intelligence has long been a routine part of everyday life. Popular voice assistants, smart homes, and programs and algorithms designed to make life easier quietly watch over our comfort and safety and help with study and work.
Such systems are certainly used for military purposes too, but that field is strictly regulated, since the consequences of uncontrolled use can be catastrophic.
Both the demonization of artificial intelligence and its elevation onto a pedestal are the handiwork of cunning marketers, who manipulate people by playing on sensitive issues and fears. A legend is easier to sell, and adults believe in fairy tales too. In reality, things are somewhat different.
What can artificial intelligence really do?
What exists in the world at the moment is specialized artificial intelligence. It can do a great deal, but it is no equal of the human brain.
According to Aleksandr Zakharov, Director of the Research Institute of Neuroscience at SamSMU (Samara State Medical University) under the Russian Ministry of Health, it is currently impossible to compare AI with the human brain in terms of thinking.
“There is no alternative to the human brain yet. AI reproduces individual brain functions, but we cannot fully equate the two. They are hard to compare not only from a conceptual, fundamental point of view but also in terms of technical implementation. For example, simulating just one cubic centimeter of brain tissue from some biological organism requires enormous computing power. Humanity is only approaching the simulation of small volumes of brain tissue, and simulating the entire brain is out of the question for now. Even the human connectome (a complete description of the structure of connections in the body’s nervous system) has not been fully mapped yet.
As for replacing communication, that is possible in principle, and we are partly seeing it already. All modern neural-network-based programs are quite good at sustaining a conversation when used as a voice assistant or a chatbot; following set algorithms, they can easily replace human operators in advising users. ChatGPT copes with this task quite well. But it cannot talk about abstract things that are absent from the Internet. For example, if you ask Midjourney to draw something truly fantastic, to predict something that is not online, it will fail. Neural networks can complete a task only on the basis of the content they were trained on, content that has already been uploaded to the Internet and discussed there. When they face a task that has never been solved before, with no algorithm for solving it, choosing a course of conduct, or acting, artificial intelligence cannot help but fail. This is the fundamental difference between the human brain and AI.”
The technical community holds a similar view. According to Evgeny Minaev, Candidate of Technical Sciences, Associate Professor at the Department of Supercomputers and General Informatics and Lead Programmer at Samara University’s Institute of Artificial Intelligence, modern AI models do not think.
“This is still weak AI. It does not know how to think; it simply solves specific applied tasks: recognizing voice, generating speech.
Creating strong AI capable of thinking like a human is a task for the future, and an extremely expensive one. For example, one company that is a world leader in artificial intelligence research is currently seeking 7 trillion US dollars to build the infrastructure for developing strong AI. Obviously, no single corporation or state can raise such funds. This is a global task; if it is ever solved, it will be solved only through a universal effort.
Yes, AI has been trained on a vast amount of data, and that in itself is a kind of breakthrough. Everything in the world that exists in digital format – texts, voice messages, photos, videos, sound recordings, music – has been consolidated into large models, or combinations of such models, and it is on this material that training is conducted.
It should be noted that neural networks have learned to imitate a person very well. They are erudite; all the world’s knowledge is embedded in them. And in some simple applications, such as a chat dialogue, it is not always possible to tell who is communicating with you: a computer or a live interlocutor.
But this intelligence has no ability to think or to create new ideas. Everything it can say, write or draw is essentially a compilation of what humans have already created.
In the full sense, weak AI cannot replace a human at work where non-standard decisions must be made and a person must bear responsibility for them. But it can be a good tool for raising productivity.
One must also bear in mind that these systems have an unfortunate property: unreliability. There is no guarantee that the solution a neural network offers is correct. In any case, the final decision should be made by an expert in the field.”
American bots with a Russian mentality
Evgeny Minaev told us how voice assistants and chatbots are trained. According to him, the language they are trained in makes little difference.
“When large open data sets are used, language is not important: at a certain stage, the models switch to a universal representation. Thus, Americans can train their models on Russian texts and then use that data to generate answers in English, and Russian scientists likewise work with English-language data.
At the first stage, big data is used for training; then experts and neurotrainers are brought in to correct the network’s behavior. They give the AI additional training by posing specific queries, simulating situations, and prompting it on which answers to give and how to reason.
There are different approaches to developing voice assistants. Usually it is a combination of several weak AI modules, each responsible for a task that is simple for humans but difficult for computers: speech recognition, speech generation, intent classification, and interaction with the outside world (turning on the lights or the music).
Creating a model from scratch takes a great deal of effort and resources: data preparation, network architecture development, training. Fortunately, open datasets and open-source architectural solutions are now available, which shortens the time needed to build such assistants. As for Russian scientists’ work in this field, it is world-class and in no way inferior to the recognized foreign analogues.”
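To make the modular design Minaev describes concrete, here is a minimal sketch of such a pipeline. All function names and the toy logic are invented for this example; in a real assistant, each stage would wrap a trained model, for instance one built from the open datasets and open-source architectures mentioned above.

    def recognize_speech(audio):
        # Stand-in for a speech-to-text model; we pretend the audio
        # decodes to a fixed command.
        return "turn on the light"

    def classify_intent(text):
        # Stand-in for an intent classifier; a real system would use a
        # trained network rather than keyword matching.
        if "light" in text:
            return "lights_on"
        if "music" in text:
            return "play_music"
        return "unknown"

    def perform_action(intent):
        # Interaction with the outside world: switching lights, playing music.
        responses = {
            "lights_on": "Turning on the light.",
            "play_music": "Playing music.",
        }
        return responses.get(intent, "Sorry, I did not understand that.")

    def synthesize_speech(text):
        # Stand-in for a text-to-speech model.
        return text.encode("utf-8")

    def assistant(audio):
        # The pipeline: each stage is a separate, narrowly specialized module.
        text = recognize_speech(audio)
        intent = classify_intent(text)
        reply = perform_action(intent)
        return synthesize_speech(reply)

    print(assistant(b"").decode("utf-8"))  # prints: Turning on the light.

Note that no single module here “thinks”: each solves one narrow task, and the assistant emerges only from chaining them together, which is exactly the weak-AI composition described in the interview.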
Source: samara.aif.ru