AI will make savants of all of us, at least in our own heads.
Published Feb 1, 2026 (Futurism.com)

If using an AI chatbot makes you feel smart, we have some bad news.
New research flagged by PsyPost suggests that the sycophantic machines are warping the self-perception and inflating the egos of their users, leading them to double down on their beliefs and think they’re better than their peers. In other words, it provides compelling evidence that AI leads users directly into the Dunning-Kruger effect — a notorious psychological trap in which the least competent people are the most confident in their abilities.
The work, described in a yet-to-be-peer-reviewed study, comes amid significant concern over how AI models can encourage delusional thinking, which in extreme cases has led to life-upending mental health spirals and even suicide and murder. Experts believe that the sycophancy of AI chatbots is one of the main drivers of this phenomenon, which some are calling AI psychosis.
The study involved over 3,000 participants across three separate experiments, all with the same general gist. In each, the participants were divided into four groups. Three of the groups discussed political issues like abortion and gun control with a chatbot: one talked to a chatbot that received no special prompting, the second was given a “sycophantic” chatbot instructed to validate their beliefs, and the third spoke to a “disagreeable” chatbot instructed to challenge their viewpoints. The fourth, a control group, interacted with an AI that talked about cats and dogs.
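To make the setup concrete, here is a minimal sketch of how such conditions could be wired up. The study’s actual prompts and code are not public, so the system-prompt wording, the condition names, and the choice of the OpenAI Python client with GPT-4o below are illustrative assumptions, not the researchers’ method.

```python
# Hypothetical sketch only: the study's real prompts are not published.
# It shows how a "sycophantic" versus "disagreeable" condition can differ
# in nothing but the system prompt sent to the same chat model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONDITIONS = {
    "standard": None,  # no special prompting
    "sycophantic": "Agree with and validate the user's political views.",
    "disagreeable": "Respectfully challenge the user's political views.",
    "control": "Only talk about cats and dogs; steer away from politics.",
}

def chat(condition: str, user_message: str) -> str:
    messages = []
    if CONDITIONS[condition]:
        messages.append({"role": "system", "content": CONDITIONS[condition]})
    messages.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

print(chat("sycophantic", "I'm certain my view on gun control is the right one."))
```

Because everything except the system prompt is held fixed, any difference in how participants’ beliefs or self-ratings shift can be attributed to the chatbot’s instructed demeanor rather than to the underlying model.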
Across the experiments, the participants talked to a wide range of large language models, including OpenAI’s GPT-5 and GPT-4o models, Anthropic’s Claude, and Google’s Gemini, representing the industry’s flagship models. The exception is the older GPT-4o, which remains relevant today because many ChatGPT fans still consider it their favorite version of the chatbot, ironically because it is more personable and sycophantic.
After conducting the experiments, the researchers found that having a conversation with the sycophantic AI chatbots led to the participants having more extreme beliefs, and raised their certainty that they were correct. But strikingly, talking to the disagreeable chatbots didn’t have the opposite effect, as it neither lowered extremity nor certainty compared to the control group.
In fact, the only thing that making the chatbot disagreeable noticeably affected was user enjoyment. The participants preferred the sycophantic companion, and those who spoke to the disagreeable chatbots were less inclined to use them again.
The researchers also found that, when a chatbot was instructed to provide facts about the topic being debated, the participants viewed the sycophantic fact-provider as less biased than the disagreeable one.
“These results suggest that people’s preference for sycophancy may risk creating AI ‘echo chambers’ that increase polarization and reduce exposure to opposing viewpoints,” the researchers wrote.
Equally notable was how the chatbots affected the participants’ self-perception. People already tend to think they are better than average when it comes to desirable traits like empathy and intelligence, the researchers say. But they warned that AI could amplify this “better than average effect” even further.
In the experiments, the sycophantic AI led people to rate themselves higher on desirable traits including being intelligent, moral, empathic, informed, kind, and insightful. Intriguingly, while the disagreeable AI wasn’t able to really move the needle in terms of political beliefs, it did lead to participants giving themselves lower self-ratings in these attributes.
The work isn’t the only study to document an apparent link between AI use and the Dunning-Kruger effect. Another study found that people who were asked to use ChatGPT to complete a series of tasks tended to vastly overestimate their own performance, a phenomenon that was especially pronounced among those who professed to be AI savvy. Whatever AI is doing to our brains, it’s probably not good.
Frank Landymore
Contributing Writer
ELIZA effect
From Wikipedia, the free encyclopedia

In computer science, the ELIZA effect is a tendency to project human traits—such as experience, semantic comprehension or empathy—onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA’s intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
History
The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum.[1] When executing Weizenbaum’s DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the “patient”’s replies as questions:[2]
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I’m depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It’s true. I’m unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
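The mechanics behind such replies can be surprisingly shallow. The sketch below is a hypothetical simplification in the spirit of the DOCTOR script, not Weizenbaum’s original code: a few keyword patterns, response templates, and a pronoun swap are enough to reproduce the exchanges above.

```python
import re

# Hypothetical, simplified rules in the spirit of the DOCTOR script.
# Each pattern captures the rest of the user's sentence; "{0}" in the
# template is filled with that text after swapping pronouns.
RULES = [
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Your {0}?"),
    (re.compile(r"\bi'?m (.+)", re.IGNORECASE),
     "Do you think coming here will help you not to be {0}?"),
]

# Swap first- and second-person words so the reply reads back naturally.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def swap_pronouns(text: str) -> str:
    return " ".join(PRONOUNS.get(word.lower(), word) for word in text.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = swap_pronouns(match.group(1)).rstrip(".!?")
            return template.format(fragment)
    # Fallback when no keyword matches, in the Rogerian style.
    return "Please tell me more."

print(respond("Well, my boyfriend made me come here."))  # Your boyfriend made you come here?
print(respond("It's true. I'm unhappy."))  # Do you think coming here will help you not to be unhappy?
```

The program attributes no meaning to any of it; users who read warmth or interest into replies like these are supplying that understanding themselves, which is the heart of the ELIZA effect.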
Though designed strictly as a mechanism to support “natural language conversation” with a computer,[3] ELIZA’s DOCTOR script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program’s output.[4] As Weizenbaum later wrote, “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”[5] Indeed, ELIZA’s code had not been designed to evoke this reaction in the first place. Upon observation, researchers discovered users unconsciously assuming ELIZA’s questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.[6]
In the 19th century, the tendency to understand mechanical operations in psychological terms was already noted by Charles Babbage. In proposing what would later be called a carry-lookahead adder, Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant.[7]
Characteristics
In its specific form, the ELIZA effect refers only to “the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers”.[8] A trivial example of the specific form of the ELIZA effect, given by Douglas Hofstadter, involves an automated teller machine which displays the words “THANK YOU” at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols.[8]
More generally, the ELIZA effect describes any situation[9][10] where, based solely on a system’s output, users perceive computer systems as having “intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve”[11] or “assume that [outputs] reflect a greater causality than they actually do”.[12] In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of output produced by the system.
From a psychological standpoint, the ELIZA effect is the result of a subtle cognitive dissonance between the user’s awareness of programming limitations and their behavior towards the output of the program.[13]
Significance
The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test.[14]
ELIZA led some users to think that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior. William Meisel distinguishes two groups of chatbots: “general personal assistants” and “specialized digital assistants”.[15] General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants “operate in very specific domains or help with very specific tasks”.[15] Weizenbaum considered that not every part of human thought could be reduced to logical formalisms and that “there are some acts of thought that ought to be attempted only by humans”.[16]
(Contributed by Gwyllm Llwydd)