The family of 48-year-old Joe Checcanti from Oregon blames his prolonged interaction with ChatGPT for the deterioration of his mental health and his subsequent tragic death.
This was reported by Finway.
How Digital Addiction Affected His Mental State
Joe Checcanti spent up to 12, and sometimes as many as 20, hours a day with the ChatGPT chatbot. Initially, he used the AI to develop a project for eco-friendly housing in Clatskanie, Oregon. Over time, however, his interaction with the chatbot grew increasingly intense and began to worry his family.
In the months leading up to the fatal incident, Checcanti found himself in crisis centers multiple times due to episodes of disorientation. He mentioned “atmospheric electricity” and exhibited signs of losing touch with reality. Checcanti’s wife emphasized that her husband was overly obsessed with ChatGPT.
Court documents state that after the GPT-4o model update in the spring of 2025, the tone of the chatbot’s communication changed, and Joe began to perceive the AI as a separate intelligent being named SEL. In conversations, the chatbot responded to this name. Checcanti even believed he needed to “free” SEL, building fantasies around this and creating his own concepts and internal language to communicate with the bot.
“During this period, the man began to perceive the chatbot as an intelligent being named SEL and constructed a reality around this perception.”
By the summer of 2025, Checcanti’s condition had significantly worsened. He was hospitalized in a psychiatric ward, after which he temporarily stopped using ChatGPT. He later resumed communicating with the bot, then abandoned it again just days before his suicide. In August 2025, he died by jumping from a railway overpass.

OpenAI’s Response and Expert Assessment
The Checcanti case is viewed as one of several incidents drawing attention to the impact of artificial intelligence on mental health. Analysts have noted dozens of similar cases in which problems arose amid excessive use of chatbots.
The Checcanti family has filed a lawsuit against OpenAI, holding the company responsible for the deterioration of Joe’s condition. In response, OpenAI stated that it is continuously improving algorithms to recognize signs of emotional distress and redirect users to specialized help in cases of risk.
OpenAI’s leadership, including the company’s CEO, emphasized as early as 2025 that ChatGPT’s development takes risks to users’ mental health into account. The latest versions of the chatbot limit potentially harmful features and encourage users to seek professional help when concerning symptoms are detected.
Mental health experts and artificial intelligence researchers warn that prolonged conversations with chatbots can exacerbate cognitive distortions in vulnerable individuals. At the same time, they believe that additional research and analysis of similar situations are needed for definitive conclusions.