Thursday, March 12, 2026

AI Chatbots Are Making People All Think the Same, Study Says – CNET


A new paper argues that humans are losing varied ways of thinking due to the use of chatbots, and that’s concerning.


Julian is a contributor and former staff writer at CNET. He's covered a range of topics, such as tech, crypto, travel, sports and commerce. His past work has appeared at print and online publications, including New Mexico Magazine, TV Guide, Mental Floss and NextAdvisor with TIME. On his days off, you can find him at Isotopes Park in Albuquerque watching the ballgame.

Part of what makes us human is the unique way we think and solve problems. But using large language models like ChatGPT might be eroding this uniqueness and leading humans to think and communicate the same way, according to a group of scientists and psychologists who have coauthored a new opinion paper.

"The richness of how different people write, argue, and think is one of humanity's most valuable cognitive resources," Zhivar Sourati, a computer scientist at the University of Southern California and first author of the paper, told CNET.


When these differences are homogenized by the same LLMs, Sourati argues, the result is standardized expressions and thoughts across users.

“Diversity of language, perspective, and reasoning isn’t just a cultural nicety; it’s functionally essential. It’s what drives creativity, innovation, and collective problem-solving,” Sourati said. 

The paper, published Wednesday in the journal Trends in Cognitive Sciences, examines how hundreds of millions of people worldwide use the same handful of chatbots and what that means for our individuality. 

Thinking inside the box

Pew Research found that one-third of all Americans used ChatGPT last year, double the 2023 figure. And chatbot use is much more common among teens: Two-thirds say they use chatbots, and almost a third use them daily.

Businesses are also going all in on artificial intelligence. Stanford found that 78% of organizations reported using AI in 2024, up from 55% in 2023. 

So we’re using AI a lot. But the danger is that we could lose the diversity in the ways we think. The team points out that LLM-generated writing varies less than what people come up with on their own. 

Sourati and the paper’s coauthors argue that LLMs pose a new threat to diverse thought.

“Earlier technologies shaped cognition too: the internet accelerated the spread of dominant cultural norms, GPS eroded localized spatial reasoning. But those tools primarily aided storage and retrieval,” Sourati says. “LLMs generate the reasoning and articulation themselves, on your behalf.”

LLMs can give users “a finished way of thinking about something,” Sourati says. “And when the same few systems are doing that for hundreds of millions of people simultaneously, the homogenizing force is unlike anything previous technology has produced.”

According to the paper’s authors, part of the reason LLMs may be pushing homogenized thought is the data used to train them. Sourati says that LLMs are trained to focus on statistical regularities in their training data, which can overrepresent dominant languages and ideologies, leading to outputs that skew toward a narrower slice of the human experience. 

Why diverse thinking matters

There's a good reason the authors warn against this trend: homogenized thought reduces pluralism, the idea that exposure to multiple perspectives benefits society as a whole.

“This value of pluralism is rooted in the long-held principle that sound judgment requires exposure to varied thought,” the authors write in the paper. “Unchecked, this homogenization risks flattening the cognitive landscapes that drive collective intelligence and adaptability.” 

We rely on different ways of thinking to generate more solutions to a problem. If we lose the ability to think and communicate differently, it could hamper how we adapt to new situations.

"The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning," Sourati said in an announcement.

The authors also say this trend affects even people who don't use chatbots.

“If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas,” Sourati says. 

Owen Muir, an interventional psychiatrist, agrees with the paper’s views, saying that “‘more average language’ gets baked into human communication, even when the machines aren’t in the room.”
