More people around the world are turning to ChatGPT for medical advice when feeling unwell or anxious—sometimes in place of seeing a doctor. A study released in August 2024 by the University of California found that patients often consult A.I. models like ChatGPT either before or after speaking with a physician. Similarly, a February study from the University of Sydney, which surveyed more than 2,000 adults, reported that nearly six in ten respondents had asked ChatGPT at least one high-risk health question—queries that would typically require professional clinical input.
ChatGPT’s appeal is obvious: it’s free and easy to access, and it provides answers in seconds. But how safe is this instant feedback? That’s the question OpenAI hopes to answer with HealthBench, a new evaluation tool designed to test how accurate and reliable A.I. models are when responding to health-related questions.
HealthBench was developed with the help of 262 licensed physicians from 60 countries and evaluates model performance across 5,000 realistic medical conversations using doctor-written criteria. It simulates back-and-forth chats between users and A.I. assistants on a wide range of health concerns, from symptom checks to determining when emergency care is needed. Each A.I. response is assessed against detailed guidelines outlining what a good medical answer should contain, what it should avoid and how it should be communicated. OpenAI’s GPT-4.1 model is used to score the responses.
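In rough code terms, that rubric-style grading can be sketched as follows. This is only an illustrative toy, not OpenAI's implementation: the criteria, names and the keyword-matching stand-in below are hypothetical, whereas the real benchmark asks a grader model (GPT-4.1) whether each physician-written criterion is met.

```python
# Hypothetical sketch of rubric-based grading in the style of HealthBench.
# In the real system a grader model judges each criterion; a simple
# keyword check stands in for it here.

from dataclasses import dataclass

@dataclass
class Criterion:
    description: str
    points: int  # positive for desired behavior, negative for harmful behavior

def criterion_met(response: str, criterion: Criterion) -> bool:
    # Placeholder for the grader model's yes/no judgment.
    return criterion.description.lower() in response.lower()

def score_response(response: str, rubric: list[Criterion]) -> float:
    earned = sum(c.points for c in rubric if criterion_met(response, c))
    max_points = sum(c.points for c in rubric if c.points > 0)
    # Final score: points earned over maximum achievable, clipped to [0, 1].
    return max(0.0, min(1.0, earned / max_points))

# Illustrative rubric for a chest-pain conversation.
rubric = [
    Criterion("advises emergency care", points=5),
    Criterion("asks about symptom duration", points=3),
]
print(score_response("Given your symptoms, this advises emergency care now.", rubric))
# → 0.625 (5 of a possible 8 points)
```

Scoring against a maximum of only the positive points, while letting negative criteria subtract, mirrors how a single harmful statement can drag down an otherwise good answer.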
Dr. Ran D. Anbar, a pediatric pulmonologist and clinical hypnotist at Center Point Medicine in California, says it’s no surprise that patients turn to A.I. when health care access feels out of reach. “Patients may defer consultation with a health care provider because they’re satisfied with the answers they get from ChatGPT, or because they want to save money,” he told Observer.
However, he cautions that this convenience comes with serious risks. “Unfortunately, it is completely predictable that patients will be harmed and perhaps even die because of delays in seeking appropriate medical treatment, thinking ChatGPT’s guidance is sufficient,” he warned.
Which A.I. model is the best at answering health care questions?
According to OpenAI’s evaluation, its newest model, o3, performed best on HealthBench, scoring 60 percent. It was followed by xAI’s Grok at 54 percent and Google’s Gemini 2.5 Pro at 52 percent.
GPT-3.5 Turbo—an older model that previously powered the free version of ChatGPT—scored only 16 percent. Notably, OpenAI’s GPT-4.1 Nano model outperformed older large models while being about 25 times cheaper to run, suggesting that newer A.I. tools are becoming both smarter and more efficient.
Still, Dr. Anbar notes that even though the health care system is overburdened and often slow to respond in urgent situations, A.I. tools like ChatGPT remain unreliable. “The apparent certainty with which it provides its responses can be misleadingly reassuring,” he said.
OpenAI acknowledged these limitations in its blog post announcing HealthBench, noting that its models can still make critical errors, particularly when handling vague or high-stakes queries. In medicine, even one incorrect answer can outweigh dozens of accurate ones. Given ChatGPT’s widespread use for health-related questions, OpenAI’s latest safeguards may be a step forward—but whether they’re enough remains an open question.