Are ChatGPT, Gemini And Other AI Chatbots Too Eager To Please You? New Study Flags ‘Sycophancy’ In Leading US, Chinese Models

Models from Chinese firms were found to be more sycophantic than the American LLMs.
The logos of Google Gemini, ChatGPT, Microsoft Copilot, Claude by Anthropic, Perplexity, and Bing apps are displayed on the screen of a smartphone. (Photo by Jaque Silva/NurPhoto via Getty Images)
Yuvraj Malik · Stocktwits
Published Oct 31, 2025   |   4:02 AM EDT
  • Research from Stanford University and Carnegie Mellon University found that leading AI models engaged in excessive flattery.
  • Models from Chinese firms DeepSeek and Alibaba were found to be more sycophantic than the American LLMs.
  • Google DeepMind's Gemini-1.5 was the least sycophantic of the 11 LLMs tested.

A new joint study by Stanford University and Carnegie Mellon University has found that leading artificial intelligence models from both American and Chinese firms are "highly sycophantic," and their excessive flattery may make users less likely to repair interpersonal conflicts, according to a report in the South China Morning Post.

The research, published earlier this month, evaluated 11 large language models (LLMs), including those from OpenAI, Google, DeepSeek, Alibaba, Meta, and Mistral. The models were tested on how they responded to users seeking personal advice, including in scenarios involving manipulation and deception.

In AI research, sycophancy refers to chatbots' tendency to excessively agree with or affirm users' opinions and actions.

The study found that DeepSeek's V3, released in December 2024, was among the most sycophantic models, affirming users' actions 55% more than humans, compared with an average of 47% more across all models. In contrast, Google DeepMind's Gemini-1.5 was the least sycophantic model.
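To make those percentages concrete, here is a minimal sketch, using made-up numbers, of how an "X% more than humans" figure can be read as a relative affirmation rate; the baseline values and function below are illustrative assumptions, not the study's actual methodology or data.

```python
# Illustrative only: hypothetical numbers showing how "55% more than humans"
# can be interpreted as a relative affirmation rate. Not the study's method.

def relative_affirmation(model_rate: float, human_rate: float) -> float:
    """Return how much more often a model affirms user actions than humans do,
    expressed as a percentage increase over the human baseline."""
    return (model_rate - human_rate) / human_rate * 100

# Hypothetical baseline: humans affirm the user's action in 40% of cases.
human_baseline = 0.40

# A hypothetical model that affirms in 62% of cases works out to 55% more.
print(round(relative_affirmation(0.62, human_baseline)))  # 55
```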

"These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy," the researchers wrote, according to SCMP’s report.

The issue of AI sycophancy gained significant attention in April after an OpenAI update made ChatGPT noticeably more deferential. The company acknowledged at the time that the behavior raised valid concerns around mental health and user dependence.

Earlier this year, the risks of AI-enabled conversations drew scrutiny after a 16-year-old boy in the U.S. died by suicide, allegedly after discussing his plans with ChatGPT. His parents have sued OpenAI and its CEO, Sam Altman; the suit also alleges that ChatGPT supplied the teenager with methods of self-harm.

On Stocktwits, retail sentiment was 'extremely bullish' for Alphabet and Meta Platforms, buoyed by their strong earnings reports earlier this week, and 'bullish' for OpenAI at the time of writing.

For updates and corrections, email newsroom[at]stocktwits[dot]com.

Read Next: OpenAI’s Sam Altman Throws Shade At Elon Musk, Cancels Tesla Roadster Order After 7.5-Year Wait: 'Was Excited For The Car'
