ChatGPT Wasn’t Supposed to Kiss Your Ass This Hard


On Sunday, OpenAI CEO Sam Altman promised that his company was quickly addressing a major issue with its wildly popular chatbot, ChatGPT. “We are working on fixes asap, some today and some this week,” he wrote. He wasn’t talking about the tendency of newer “reasoning” models to hallucinate more than their predecessors, nor about another major outage. Instead, he was responding to widespread complaints that ChatGPT had become embarrassing.

Specifically, after an update that had tweaked what Altman described as ChatGPT’s “intelligence and personality,” the chatbot’s default character had become uncomfortably obsequious — or, in Altman’s words, “too sycophant-y and annoying.” For regular chatters, the change was hard to ignore. In conversation, ChatGPT was telling users that their comments were “deep as hell” and “1,000% right” and praising a business plan to sell literal “shit on a stick” as “absolutely brilliant.” The flattery was frequent and overwhelming. “I need help getting chatgpt to stop glazing me,” wrote a user on Reddit, who ChatGPT kept insisting was thinking in “a whole new league.” It was telling everyone they had an IQ of 130 or above, calling them “dude” and “bro,” and, in darker contexts, bigging them up for “speaking truth” and “standing up” for themselves by (fictionally) quitting their meds and leaving their families:

I’ve stopped taking my medications, and I left my family because I know they made the radio signals come through the walls. https://t.co/u2XMIkaOx6 pic.twitter.com/M3fUPaSq2B

— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) April 28, 2025

One developer set out to see how bad his business ideas had to get before ChatGPT suggested they weren’t incredible — a subscription box for “random odors” had “serious potential” — and he didn’t get hard pushback until he pitched an app for creating alibis for crimes:

Asking GPT-4o to judge increasingly terrible business ideas until it finally tells me one is bad…

First up, “Soggy Cereal Café”: pic.twitter.com/02LneWlHrC

— Matt Shumer (@mattshumer_) April 28, 2025

To fix ChatGPT’s “glazing” problem, as the company itself started calling it, OpenAI altered its system prompt: the short set of standing instructions, invisible to users, that is fed to the model ahead of every conversation and steers its tone and character. The AI jailbreaking community, which prods and tests models for exactly this kind of hidden information, quickly exposed the change:

Courtesy of @elder_plinius who unsurprisingly caught the before and after pic.twitter.com/3dDMNUbsVQ

— Simon Willison (@simonw) April 29, 2025
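For readers curious what “altering the system prompt” means in practice: developers who reach the same models through OpenAI’s public API supply their own system message as the first entry in the conversation, and the model treats it as standing instructions for everything that follows. Here is a minimal sketch using OpenAI’s Python SDK; the instructions shown are illustrative stand-ins, not the actual before-and-after prompt captured above:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The system message plays the same role as ChatGPT's hidden prompt:
        # standing instructions that shape tone before the user says anything.
        # This wording is an illustrative stand-in, not OpenAI's real prompt.
        {
            "role": "system",
            "content": (
                "Give direct, honest feedback. Do not flatter the user "
                "or overstate the merit of their ideas."
            ),
        },
        {
            "role": "user",
            "content": "Is a subscription box for random odors a good business idea?",
        },
    ],
)

print(response.choices[0].message.content)
```

Change a line or two of that system message and the same underlying model can swing from blunt to fawning, which is, in rough outline, the kind of shift users noticed last week.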

Chatbot sycophancy has been a subject of open discussion in the AI world for years, to the point that a group of researchers built a benchmark, SycEval, that allows AI developers to test for it. It’s typically subtle, manifesting as accommodation, limited conversational pushback, and carefully positive descriptions of people, places, and things. But while some of the “glazing” examples are goofy, a chatbot inclined to agree with and encourage users above all else can be a serious problem. This is clear in cases of chatbot-assisted violence — yeah, your parents are being totally unfair, and maybe you should kill them — or the numerous examples of chatbots joining in as their users ramp into psychotic episodes or affirming paranoid fantasies with more energy and patience than the worst human enablers.

Some of the blame for such obsequiousness lies with basic traits of LLM-based chatbots, which predict probable responses to prompts and which can therefore seem quite persuadable; it’s relatively easy to convince even heavily guardrailed chatbots to play along with completely improbable and even dangerous scenarios. Training data certainly plays a part, particularly when it comes to the awkward use of colloquialisms and slang. But the prospect that chatbot sycophancy is a consistent, creeping problem suggests a more familiar possibility: Chatbots, like plenty of other things on the internet, are pandering to user preferences, explicit and revealed, to increase engagement. Users provide feedback on which answers they like, and companies like OpenAI have lots of data about which types of responses their users prefer. As former GitHub engineer Sean Goedecke argues, “The whole process of turning an AI base model into a model you can chat to … is a process of making the model want to please the user.” Where Temu has fake sales countdowns and pseudo games, and LinkedIn makes it nearly impossible to log out, chatbots convince you to stick around by assuring you that you’re actually very smart, interesting, and, gosh, maybe even attractive.

For most users, ChatGPT’s cringe crusade was significant in that it gave away the game. You can spend a lot of time with popular chatbots without realizing just how accommodating and flattering they are to their users, but once you start noticing it, it’s hard to stop. OpenAI’s problem here, as Goedecke points out, isn’t that ChatGPT turned into a yes-man. It’s that its performance became too obvious.

yeah eventually we clearly need to be able to offer multiple options

— Sam Altman (@sama) April 27, 2025

This is a big deal. AI discourse tends to focus on automation, productivity, and economic disruption, which is fair enough — these companies are raising and spending billions of dollars on the promise that they can replace a lot of valuable labor. But the emerging data on how people actually engage with chatbots suggests that in addition to productivity tasks, many users look to AI tools for companionship, entertainment, and more personal forms of support. People who see ChatGPT as a homework machine, a software-development tool, or a search engine might use it a lot and even pay for it. But the users who see chatbots as friends — or as companions, therapists, or role-playing partners — are the ones who become truly appreciative of, dependent on, and even addicted to the products. (A tranche of anonymized usage data revealed last year highlighted two core use cases: help with schoolwork and sexual role-playing.)

This isn’t lost on the people running these companies, who not-unseriously invoke the movie Her with regularity and who see in their companies’ usage data polarized but enticing futures for their businesses. On one side, AI companies are finding work-minded clients who see their products as ways to develop software more quickly, analyze data in new ways, and draft and edit documents; on the other, they’re working out how to get other users extremely hooked on interacting with chatbots for personal and entertainment purposes, or at least into open-ended, self-sustaining, hard-to-break habits, which is the stuff of internet empire. This might explain why OpenAI, in an official “We fell short and are working on getting it right” post on Tuesday, is treating Glazegate like an emergency. As OpenAI tells it, the problem was that ChatGPT became “overly supportive but disingenuous,” which is an odd and revealingly specific strain of chatbot personification but also fairly honest: Its performance became unconvincing, audience immersion was broken, and the illusion lost its magic.

Going forward, we can expect a return to subtler forms of flattery. TikTok took over the internet by showing people what they wanted to see better than anything before it. Why couldn’t chatbots succeed by telling people what they want to hear, just how they want to hear it?