Research Sheds Light on the Promise and Peril of AI

A new study finds some popular artificial intelligence chatbots tell us what we want to hear, even when we need to hear something different. Such sycophancy can lead people to make irresponsible decisions, a troubling finding for everyone, but especially for educators using AI in classrooms with young children.

Stanford researchers hypothesized that AI chatbots, such as Claude and ChatGPT, “excessively affirm users even when socially or morally inappropriate.” Sure enough, their work found that AI models “affirmed users’ actions” 49 percent more often than humans did, “including in cases involving deception, illegality, or other harms.”

Media have already reported on cases in which prolonged interaction with chatbots led individuals to make devastating choices, such as the case of a 16-year-old who took his life after “extended” conversations with ChatGPT about suicide. A 36-year-old from Florida had similar interactions with Google’s Gemini and took his life in October 2025; near the end of the conversations, the chatbot was calling the man “my love” and “my king.” Other cases have been reported in Colorado and the U.K.

The Stanford researchers warn, “Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making.” In fact, “even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.”

The study participants were not school-age children, so analysts should be careful in applying the findings to K-12 students. And AI is both a transformational and a disruptive technology, which means individuals should not treat all AI systems the same way. AI’s advancement will bring trade-offs across different sectors of American life, offering great promise in some and demanding careful oversight in others.

A new Heritage Foundation report explains that lawmakers at the state and federal levels are looking for ways to integrate AI into school coursework. In March, the Trump administration released a policy framework outlining different ways to “protect children” and “empower parents” as Congress considers federal legislation on AI.

The framework did not address education specifically, though the administration has already issued notices in the Federal Register indicating that the U.S. Department of Education seeks to “expand the offerings of AI and computer science education in K-12 education.” Federal officials clearly intend to incorporate AI into schools.

The Trump administration is rightly concerned about getting left behind in the AI race, but lawmakers have a responsibility to prevent this ambition from overtaking child well-being, parental authority, and educational integrity. If technologists and educators are not careful, they could unleash a harmful digital contagion on unsuspecting children, much as nefarious elements of social media have done.

In fact, Mark Weinstein, founder of the social media network MeWe, recently wrote in the Wall Street Journal, “Artificial intelligence has supercharged [social media] platforms to maximize screen time further, harming users even more.” And Weinstein is referring to the harms from social media that parents and families already know today, after a decade or more of experience with those platforms. AI is a comparatively newer technology, which means there is still much we do not know, for better or worse, about its effects on young children.

The very efficiencies that AI offers the business world can undermine key educational goals, such as cultivating student originality and creativity and teaching students to conduct their own rigorous research and analysis.

Still, AI can help teachers write personalized learning plans for students and respond quickly to student needs through online platforms. And though more research is needed, early findings suggest that AI systems can make education spending more transparent and detect fraud effectively.

As policymakers consider AI in education, they must take seriously the research on harms to children from addictive design features that are common across online platforms. These risks are not hypothetical: such features can harm attention, mental health, and learning. Schools should respond with commonsense limitations and clear restrictions rather than assuming new technology will regulate itself.
