Say Hi to Doctor ChatGPT 


AI has already changed how we search, work, and create—but for some, it’s also quietly transforming how we heal. From persistent back pain to mysterious jaw issues, users are crediting ChatGPT with helping them solve complex health problems.


“I’m hearing more and more stories of ChatGPT helping people fix their longstanding health issues. We still have a long way to go, but it shows how AI is already improving people’s lives in meaningful ways,” said OpenAI president Greg Brockman on X.

Allie K. Miller, a Fortune 500 AI advisor and angel investor, shared a personal example of how ChatGPT helped her in a moment of discomfort. After a rough bout of food poisoning and an ongoing electrolyte imbalance, she was out dining with friends when she began to feel unwell. Instead of leaving, she took photos of the menu, uploaded them to ChatGPT and Claude, and asked "what to eat based on low electrolytes".

Both tools gave the same recommendation. She then followed up with more questions, explored alternatives, and eventually ordered exactly what was suggested—plus some extra vegetables. 

The result? Her headache and the “weirdness” went away. She admitted that had her symptoms been more severe, she would’ve consulted a specialist. But, for a quick, situational solution that let her stay and enjoy time with friends, she saw it as a “good use of AI”. 

Another user on X said he subscribed to ChatGPT Pro for two months to run deep research on his rare disease, and in that time he learned things that even his cardiologists couldn't tell him. "It has improved my life a lot."

“We need to find some way to make sure people get to use ChatGPT for healthcare, whether or not they can afford it; I’m hopeful it can really help!” said OpenAI chief Sam Altman. 

Similarly, one Reddit user shared how ChatGPT helped him overcome a decade of chronic low back pain—a condition shaped by bad posture, prolonged sitting, and gym injuries. After seeing "7 or 8 different physios" over the years, he said most treatments addressed symptoms without offering clear explanations. Each therapist had a different theory: one pointed to lateral imbalance, another to weak deep core muscles, and yet another suggested dry needling. The result was confusion and inconsistent progress.

The turning point came when he discovered a programme called Low Back Ability (LBA), which focused on strengthening rather than avoiding back movement. But even then, the "vague" explanations left him unsure of how each exercise was meant to help.

That’s when he turned to ChatGPT, feeding it “pages of context”—his pain history, past exercises, and the full LBA plan. The outcome? “It finally clicked.” ChatGPT broke down exactly why his back hurt in specific ways, how each movement helped, and guided him in building a gradual, safe routine.

He stayed consistent, asked follow-up questions, adjusted the plan, and over the next few weeks, saw “tightness and pain go down by 60–70%.” 

Separately, another Reddit user claimed that ChatGPT helped resolve a chronic jaw issue that had persisted for over five years. The user described a long history of jaw clicking, likely caused by a boxing injury.

“Every time I opened my mouth wide it would pop or shift,” they wrote. Despite trying various workarounds and even undergoing two MRIs and a consultation with an ENT, no lasting solution was found.

The user eventually asked ChatGPT about the issue. “It gave me a detailed explanation saying the disc in my jaw was probably just slightly displaced but still movable,” they said. The AI then suggested a specific technique involving slow mouth opening while keeping the tongue on the roof of the mouth and monitoring for symmetry.

“I followed the instructions for maybe a minute max, and suddenly… no click. Still no clicking today.”

The post quickly gained traction and was later shared on X by LinkedIn co-founder Reid Hoffman. "Reddit user shares how ChatGPT fixed a medical issue they had for 5 years. Replies are flooded with users who had the same condition and finally found answers too," Hoffman wrote. "Superagency!"

“Every prescription and medical report I receive now goes through ChatGPT. I can say with confidence that if ChatGPT were available earlier, my parents would still be alive,” shared another user on X. 

Others have turned to ChatGPT in far more urgent situations. Flavio Adamo, a Python developer at Exifly, shared on X that ChatGPT's o4-mini model saved his life after it urged him to seek immediate medical attention based on the symptoms he described. Doctors later told him that arriving just 30 minutes later could have cost him an organ. "So yeah. AI literally saved me," he said.

Altman responded to the post, saying, “Really happy to hear!”

As his story gained attention, users asked which model he had used. Adamo replied, "o4-mini btw." He did not share details about the medical condition, but the post adds to the conversation around AI's role in personal health decisions.

Stanford researchers similarly found that ChatGPT scored about 92% on diagnostic tasks, outperforming physicians who scored in the mid-70s, but physicians using ChatGPT as a diagnostic aid did not significantly improve their accuracy. This suggests that while AI has strong diagnostic potential, effective integration and training are needed for doctors to leverage it fully.

The Risks of Self-Diagnosis with ChatGPT

While these stories highlight ChatGPT's potential, experts warn of significant risks associated with using AI for self-diagnosis. "Using artificial intelligence for diagnosis and even for prescriptions, one has to be really cautious, because physical examination is missing," said Dr CN Manjunath, senior cardiologist and director of the Sri Jayadeva Institute of Cardiovascular Sciences and Research, Bengaluru.

He further emphasised that, despite the widespread use of technology in healthcare, physical evaluation remains a cornerstone of accurate diagnosis. Though medications may alleviate symptoms, he advised always following up with a qualified medical practitioner for comprehensive care.

He explained that once a particular diagnosis has been made, patients can use ChatGPT for follow-up questions. Manjunath said he does not currently use ChatGPT or any other such tool himself, relying instead on reputed journals.

However, he remains optimistic that AI tools can be beneficial, particularly in remote or underserved regions with limited access to medical professionals, provided such use is supervised by a medical professional. AI can provide valuable support, but it should not be used to make clinical decisions independently.

“Decision-making is more important than interventions, and treatment should not be more harmful than the disease itself,” he said. 

Dr Sharon Baisil, assistant professor in Community Medicine at MOSC Medical College, told AIM that ChatGPT tends to hallucinate and can confidently present false information as true. He said the rate of such inaccuracies can be significant, ranging from 10% to as much as 30–35%.

He further added that, unlike human doctors who are trained in breaking bad news and delivering difficult diagnoses with sensitivity, ChatGPT lacks emotional intelligence and may bluntly present alarming possibilities, potentially causing distress.

Moreover, he explained that while doctors typically focus on ruling out common conditions first due to their higher probability, ChatGPT uses a symptom-based approach. In rare instances, this might lead to quicker identification of a rare disease, but this is uncommon.

The future of AI in healthcare lies in its potential to improve clinical practice, expand accessibility, and empower patients, but only with robust safeguards and human expertise at the helm.
