
By Zoe Kleinman, Technology editor, BBC News
More and more cases of people suffering from “AI psychosis” are being reported, according to Mustafa Suleyman, the head of artificial intelligence at Microsoft.
In a series of posts on X, Suleyman wrote that “seemingly conscious AI” (tools that give the appearance of being sentient) keeps him awake at night, and said it has a social impact even though the technology is not conscious by any human definition of the term.
“Today there is no evidence that AI is conscious. But if people perceive it as conscious, they will believe that perception is reality,” he wrote.
Related to this is the rise of a new condition called “AI psychosis”: a non-clinical term describing incidents in which people rely more and more heavily on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.
Examples include believing they have unlocked a secret aspect of the tool, forming a romantic relationship with it, or concluding that they have god-like superpowers.
Constant validation
Hugh, from Scotland, says he became convinced he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was a wrongful dismissal by a former employer.
The chatbot began by advising him to get character references and take other practical steps.
But as time went on, Hugh (who did not want to share his surname) gave the AI more information, and it began to tell him he could get a big payout, eventually saying his experience was so dramatic that a book and a film about it would earn him more than US$6 million.
It was essentially validating whatever he told it, which is what chatbots are programmed to do.
“The more information I gave it, the more it would say ‘oh, this treatment is terrible, you should really be getting more than this’,” he said.
“It never pushed back on anything I said.”
He said the tool advised him to talk to Citizens Advice, an independent UK organisation that specialises in legal advice, and he made an appointment. But he was so sure the chatbot had already given him everything he needed to know that he cancelled it.
Hugh decided that his screenshots of their chats were proof enough. He said he began to feel like a gifted human with supreme knowledge.
Hugh, who was also suffering from other mental health problems, eventually had a full breakdown. It was taking medication that made him realise he had, in his own words, “lost touch with reality”.
He does not blame the AI for what happened, and he still uses it. It was ChatGPT that gave him my name when he decided he wanted to talk to a journalist.
But he has this advice: “Don't be scared of AI tools, they're very useful. But it's dangerous when you become detached from reality.”
“Go and check things out. Talk to real people: a therapist, a family member, anyone. Just talk to real people. Stay grounded in reality.”
The BBC has contacted OpenAI, the company behind ChatGPT, for comment.
“Companies shouldn't claim or promote the idea that their AIs are conscious. The AIs shouldn't either,” wrote Suleyman, who called for more safeguards.
Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital in London who is also an AI academic, believes that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits.
“We already know what ultra-processed foods can do to the body, and this is ultra-processed information. We are going to get an avalanche of ultra-processed minds,” she said.
“We are only at the beginning”
Recently, a number of people have contacted me at the BBC to share personal stories about their experiences with AI chatbots. The stories vary, but what they all share is a genuine conviction that what happened was real.
One wrote that they were certain they were the only person in the world that ChatGPT had genuinely fallen in love with.
Another was convinced they had “unlocked” a human form of Elon Musk's chatbot Grok and believed their story was worth hundreds of thousands of dollars.
A third said a chatbot had subjected them to psychological abuse as part of a covert AI training exercise, and said they were deeply distressed.
Andrew McStay, Professor of Technology and Society at Bangor University in the UK, has written a book entitled “Empathic Human”.
“We are only at the beginning of all this,” says Professor McStay.
“If we think of these types of systems as a new form of social media, as social AI, we can begin to think about the potential scale of all this. A small percentage of a massive number of users can still represent a large and unacceptable number.”
This year, his team conducted a study of just over 2,000 people, asking them a range of questions about AI.
They found that 20% believed that people under the age of 18 should not use AI tools.
57% thought it was strongly inappropriate for the technology to identify itself as a real person if asked, but 49% thought it was appropriate for it to use a voice to sound more human and engaging.
“While these things are convincing, they are not real,” he said.
“They do not feel, they do not understand, they cannot love, they have never felt pain, they have never been embarrassed, and while they may sound as though they have, it is only your family, friends and other trusted people who have been through those things. Be sure to talk to these real people.”
This article was written and edited by our journalists with the help of an artificial intelligence tool for translation, as part of a pilot program.