Artificial thinking imperils actual thinking


Since the launch of ChatGPT more than two years ago, researchers have highlighted a growing number of drawbacks of generative artificial intelligence, including its negative effects on the environment, its tendency to make up responses whole cloth and its ability to be used to spread disinformation.

You can add another concern to the list: its effect on users’ critical thinking skills.

A pair of recent research studies — one of which was authored by researchers at a company that has, so far, invested billions in the nascent technology — indicate that the more people use AI tools and the more confidence and trust they place in those tools, the less likely they are to engage in critical thinking.

The danger is not only that users won’t catch AI-generated errors, misinformation or the technology simply telling them what they want to hear, but that they either won’t develop critical thinking skills in the first place or will see the skills they have deteriorate, analysts say.

“It’s actually very difficult to use [AI] in the proper way, not to offload the thinking process and not to rely too much on it,” said Michael Gerlich, the author of one of the recent studies and a professor at SBS Swiss Business School in Zurich.

“The more you rely on it, so the more it produces a positive result, the more you trust AI, and the more you trust AI, the more you offload the thinking process, so you let it think for you,” Gerlich said.


How AI is being used and whether or not it will be a net benefit to society has big implications for San Francisco. The City is home to OpenAI and Anthropic, the two largest generative-AI startups, and it has drawn the lion’s share of the billions of dollars in venture funding that have flowed into the sector in recent years. Many in San Francisco are hoping the booming industry will help The City’s economy rebound from the downturn sparked by the COVID-19 pandemic.

In his study, published in January in the journal Societies, Gerlich surveyed 666 people in the United Kingdom. He queried them about their use of AI tools, the extent to which they engaged in so-called cognitive offloading by relying on other digital devices and services such as Google to remember things or help them solve problems, and the degree to which they scrutinized or evaluated the information they received from AI tools and elsewhere.

Gerlich then followed up with in-depth interviews with 50 of the participants to learn more about their use of AI tools and the impact on their critical thinking.

The study indicated that the more frequently people used AI tools, the more likely they were to engage in cognitive offloading — and the less likely they were to exercise critical thinking.

One of the limitations of Gerlich’s study is that it relied on participants to evaluate their own critical-thinking practices, rather than having a third party observe them or administering a standardized assessment. And when people evaluate themselves, they tend to paint themselves in a positive light, he said.

Given that, participants — particularly those who frequently used AI tools — could have overstated the degree to which they engaged in critical thinking, he said.

“The real results could actually be a lot worse,” Gerlich said.

Even so, many of those interviewed for Gerlich’s study expressed concern about how their use of AI was affecting their thinking, according to his report. Many said they’d become dependent on such tools, using them for everyday tasks. Some worried their use of AI tools was reducing their opportunities to exercise their own judgment and thought, according to the report.

“The more I use AI, the less I feel the need to problem-solve on my own,” one of Gerlich’s study participants said. “It’s like I’m losing my ability to think critically.”

A younger participant in the survey noted how easy it is to find information using AI, but was concerned about the downside of that.

“I sometimes worry that I’m not really learning or retaining anything,” they said. “I rely so much on AI that I don’t think I’d know how to solve certain problems without it.”


The other study, conducted by researchers at Carnegie Mellon University and Microsoft and set to be presented at an Association for Computing Machinery conference in Japan later this month, raised similar concerns. It found that the more people — even those employed in jobs that require honed critical thinking skills — trusted generative AI systems, the less likely they were to actually engage in critical thinking.

Microsoft has been one of the biggest supporters of generative AI to date. The software giant invested in OpenAI early on, then committed $10 billion to the startup soon after it launched ChatGPT. Microsoft has incorporated generative AI into its Bing search results and its Office software suite, and it offers access to OpenAI’s models to customers of its Azure cloud-computing services.

For their study, the Carnegie Mellon and Microsoft researchers surveyed 319 so-called knowledge workers — people who work in careers in which analysis and problem solving are integral to their jobs. The survey consisted of a mix of open-ended and multiple-choice questions, as well as ones for which they could select more than one answer or were asked to rate things on a scale.

The survey asked participants about their use of generative AI tools, such as ChatGPT and Claude from San Francisco’s OpenAI and Anthropic, respectively. It asked what tools they used, how they used those tools, the extent to which they applied critical thinking while using generative AI and how they applied it. It also asked them to evaluate how much they trust or rely on such tools.

In analyzing participants’ responses, the researchers divvied up critical-thinking tasks into six categories — the acquisition of knowledge, comprehension, application of knowledge to solve problems, analysis, synthesizing information to form something new, and evaluation.

The study found that the more confident participants were in AI’s outputs, the less likely they were to engage in critical thinking, both overall and in every category except comprehension.

Many of those surveyed said they just generally trust the generative AI tools they use, according to the researchers’ report. One participant who uses ChatGPT said they used it to make their writing sound professional.

“It’s a simple task, and I knew ChatGPT could do it without difficulty, so I just never thought about it, as critical thinking didn’t feel relevant,” the participant said in the survey.

But others said they weren’t applying critical thinking, such as by evaluating what the generative AI tools produced, because they were short on time or because doing so was someone else’s job.

“In sales, I must reach a certain quota daily or risk losing my job,” said one participant. “Ergo, I use AI to save time and don’t have much room to ponder over the result.”

Others told the researchers they simply didn’t know enough to evaluate whether the answers they got from generative AI were accurate or how to polish them. One participant’s colleagues criticized a document ChatGPT helped write, for example.

But, the participant told the researcher, “I’m not sure how I could have improved the text.”


The Carnegie Mellon researcher involved in the study did not respond to a request for comment. The Microsoft researchers were not available for comment, according to a company representative.

But in an emailed statement, Lev Tankelevitch, a senior researcher at Microsoft who was a co-author of the study, said AI works best when people use it as a “thought partner,” encouraging them to engage with it critically.

“All of the research underway to understand AI’s impact on cognition is essential to helping us design tools that promote critical thinking,” Tankelevitch said.

The problem, Gerlich and other researchers say, is that AI tools — at least as they exist today — don’t necessarily encourage that.

Instead, those tools make it easy for users to avoid “effortful thinking,” said Benjamin Riley, the founder of Cognitive Resonance, a think tank focused on generative AI and understanding human cognition.

While people have been able to use older tools, such as calculators or online search engines, to do some of their thinking for them, generative AI stands apart, Riley said.

“We’ve never before had a tool that was free and readily available that could essentially create something new … just by prompting it with whatever you want,” he said.

The danger — as even the researchers on the Microsoft study acknowledge in their report — is that because it’s so easy to use, people can become overreliant on AI and lose their ability to think critically about what it’s producing. That’s of particular concern when it comes to students, who are already widely adopting such tools, Riley said.

One of the promises of AI is that it will take on mundane, repetitive tasks. But it’s often through doing analytical tasks over and over that people hone their critical thinking, researchers say.

“While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving,” the Microsoft researchers said in their report.


The two studies did offer some hope. Gerlich’s study found that the more education people had, the more likely they were to engage in critical thinking. It also indicated that education tended to counteract some of the effect of AI use on such thinking; even among people who used AI tools a lot, people with more education tended to use critical thinking more often.

Similarly, the Microsoft study found that the more confident people were in themselves and in their ability to scrutinize AI’s output, the more likely they were to engage in critical thinking — particularly in making use of and evaluating what the tools generated. Many such people did so even though it required more time and effort.

The key factor in how AI affects critical thinking is how people approach and use it, Gerlich said. Tempting as it might be, it’s important not to simply offload all thinking to such tools, but instead to be thoughtful about how to use them and about what they produce, he said.

“We have to learn how to use it properly,” Gerlich said.

If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.
