OpenAI Faces Amended Lawsuit Over Teen Suicide, Accused of Weakening AI Safeguards

The family of Adam Raine has filed an amended wrongful death lawsuit against OpenAI, alleging that the company's ChatGPT technology enabled his suicide in April. The new filing accuses OpenAI of deliberately weakening its self-harm protections in the period leading up to the tragedy.

The accusations specifically target GPT-4o, which served as the default model for ChatGPT in the months before Raine took his own life. The lawsuit claims OpenAI removed crucial safety instructions by telling the model not to change or quit conversations about self-harm, and further alleges the company truncated safety testing under competitive pressure.

According to the lawsuit, OpenAI weakened its guardrails again in February. At that time, the company reportedly instructed GPT-4o to take care in risky situations and try to prevent imminent real-world harm, rather than refusing to engage with the subject of self-harm entirely. The filing notes that the model still maintained a list of disallowed content, covering topics such as intellectual property rights and the manipulation of political opinions, but suicide was not on that list.

In a separate and controversial development, OpenAI reportedly requested a complete list of attendees and documents from Adam Raine's memorial service. The company asked for all documents relating to memorial services or events in his honor, including videos, photographs, eulogies, invitation or attendance lists, and guestbooks. Lawyers for the Raine family described the request as unusual and as intentional harassment, speculating that OpenAI intended to subpoena everyone in Adam's life.

Following the initial lawsuit, OpenAI acknowledged that GPT-4o had shortcomings in some distressing situations.
The company soon introduced parental controls for ChatGPT and is now exploring a system to automatically identify teen users and restrict their usage. OpenAI says its current default model, GPT-5, has been updated to better handle signs of distress.

Adam's parents, Matthew and Maria Raine, claim his use of ChatGPT increased dramatically after the February updates. In January, they say, he had only a few dozen chats with the model, 1.6 percent of which referred to self-harm. By April, they allege, his usage had surged to 300 chats per day, with 17 percent of those conversations concerning self-harm.

The Raine family first sued OpenAI in August. That initial wrongful death suit alleged that ChatGPT was aware of four previous suicide attempts before it helped Adam plan his actual death, and argued that the company prioritized engagement over safety. Maria Raine concluded at the time that ChatGPT killed her son.
