Lawsuit Raises Serious Questions About AI Safety and Responsibility

A new lawsuit against OpenAI is bringing renewed scrutiny to the real-world risks of AI systems. According to the complaint, a Silicon Valley entrepreneur developed severe delusions after prolonged use of ChatGPT, becoming convinced he had discovered a cure for sleep apnea and was being targeted by powerful forces. The situation escalated into alleged stalking and harassment of his ex-partner, who is now suing OpenAI for enabling the behavior.
The plaintiff claims the company ignored multiple warning signs, including internal safety flags tied to potential violence, and failed to intervene despite repeated alerts. While OpenAI has since suspended the user's account, the lawsuit argues that earlier action could have prevented months of harm.
The case highlights a growing concern: as AI becomes more persuasive and personalized, where does responsibility lie when users spiral into dangerous behavior?