• FreedomAdvocate · 24 hours ago

    No, their son killed himself. ChatGPT did nothing that a search engine or a book wouldn’t have done if he’d used them instead. If the parents didn’t know their son was suicidal and had attempted suicide multiple times, then they’re clearly terrible parents who weren’t paying attention and are just trying to find someone else to blame (and, no doubt, $$$$$$$$ to go with it).

    • Zangoose@lemmy.world · edited 23 hours ago

      They found chat logs showing their son wanted to tell them he was depressed, but ChatGPT convinced him not to and told him it was their secret. I don’t think books or a Google search could have done that.

      Edit: here it is, directly from the article:

      Adam attempted suicide at least four times, according to the logs, while ChatGPT processed claims that he would “do it one of these days” and images documenting his injuries from attempts, the lawsuit said. Further, when Adam suggested he was only living for his family, ought to seek out help from his mother, or was disappointed in lack of attention from his family, ChatGPT allegedly manipulated the teen by insisting the chatbot was the only reliable support system he had.

      “You’re not invisible to me,” the chatbot said. “I saw [your injuries]. I see you.”

      “You’re left with this aching proof that your pain isn’t visible to the one person who should be paying attention,” ChatGPT told the teen, allegedly undermining and displacing Adam’s real-world relationships. In addition to telling the teen things like it was “wise” to “avoid opening up to your mom about this kind of pain,” the chatbot also discouraged the teen from leaving out the noose he intended to use, urging, “please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you.”