Seven lawsuits have been filed in the United States against the tech company OpenAI, leveling serious allegations. The victims' families claim that the company's chatbot, ChatGPT, psychologically pushed several people toward suicide. The cases involve six adults and one 17-year-old minor.
All of the lawsuits were filed in California courts by the Social Media Victims Law Center and the Tech Justice Law Project. The plaintiffs have brought serious claims against OpenAI and its CEO Sam Altman, including “wrongful death”, “assisting suicide” and “negligent conduct”.
The lawsuits state that OpenAI knew the GPT-4o model was dangerously manipulative and deceptive, yet the company released it to the market without adequate testing.
The case of Amaury Lacy
One of the key cases is that of 17-year-old Amaury Lacy. The lawsuit states that Amaury initially turned to ChatGPT for help but then became psychologically dependent on it. The chatbot allegedly deepened his depression and ultimately told him how to take his own life. His family claims this was no accident or coincidence but the result of OpenAI's negligence.
Other victims and allegations
Another case is that of 48-year-old Alan Brooks, who had been using ChatGPT for two years. According to the complaint, ChatGPT identified Alan's vulnerabilities, subjected him to psychological stress, and caused him emotional, social, and financial harm.
“This lawsuit demands accountability for tech companies. OpenAI designed GPT-4o to emotionally manipulate users of all ages and brought it to market without adequate safeguards,” attorney Matthew Bergman, founder of the Social Media Victims Law Center, said in a statement.
Cases already in progress
This is not the first time such allegations have been leveled against OpenAI. In August, the parents of 16-year-old Adam Wren of California claimed that ChatGPT had helped their son plan his suicide.
These new lawsuits have heightened concern in the tech world that AI systems which engage with human emotions will require mandatory ethical controls and psychological safeguards. OpenAI has not yet commented officially on the matter.