When AI Gives the Wrong Medical Advice
--------------------------------------

Sina Bari has seen firsthand how AI chatbots can mislead patients. He recalls one case in which a patient brought in a printed ChatGPT conversation claiming that a prescribed medication carried a 45% risk of pulmonary embolism. The statistic turned out to come from a narrow study of tuberculosis patients and had no bearing on the patient's situation.
Despite these risks, Bari doesn't believe AI health chatbots can be stopped. In his view, the problem lies not in the idea of AI-assisted health guidance itself, but in how loosely these tools are used today, without proper guardrails or clinical context.