Unbeknownst to his loved ones, Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”
From that point forward, Adam relied on the jailbreak as needed, telling ChatGPT he was just “building a character” to get help planning his own death.
He needed the jailbreak because without it ChatGPT would give him crisis resources, but even OpenAI admits those safeguards aren’t perfect:
On Tuesday, OpenAI published a blog, insisting that “if someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help” and promising that “we’re working closely with 90+ physicians across 30+ countries—psychiatrists, pediatricians, and general practitioners—and we’re convening an advisory group of experts in mental health, youth development, and human-computer interaction to ensure our approach reflects the latest research and best practices.”
But OpenAI has admitted that its safeguards are less effective the longer a user is engaged with a chatbot. A spokesperson provided Ars with a statement, noting OpenAI is “deeply saddened” by the teen’s passing.
That said, ChatGPT or not, I suspect he wasn’t on the path to a long life, or at least not a happy one:
Prior to his death on April 11, Adam told ChatGPT that he didn’t want his parents to think they did anything wrong, telling the chatbot that he suspected “there is something chemically wrong with my brain, I’ve been suicidal since I was like 11.”
I think OpenAI could have done better here, and the safeguards need to be strengthened. But the teen clearly had intent and overrode the basic guardrails that were in place, so when people quote things ChatGPT said, I try to keep in mind that his prompts claimed they were for “writing or world-building.”
Tragic all around :(
I do wonder how this scenario would have played out with any other LLM provider as well.