• DupaCycki@lemmy.world
      4 hours ago

      Some things are deliberately made easy to misuse and, by design, accessible to the people most likely to misuse them. All this money, this supposedly cutting-edge technology, and reporting to the police, yet they aren’t able to tell when a child is at risk and report that as well?

      Smells like bullshit to me. More likely they just don’t care. I’m not so sure children should be allowed to use chatbots in the first place, or only versions specifically trained for interactions with children. But of course, banning children from accessing YouTube and Wikipedia is a much more pressing concern.

      • Electricd@lemmybefree.net
        4 hours ago

        They definitely prefer to spend their money on development rather than on adding safeguards.

        I don’t believe people misusing ChatGPT helps them in any way; it’s just that adding protections has a cost.

        > but they aren’t able to tell when a child is at risk and report it as well?

        Maybe the police actually sort and filter reports manually, but don’t want to bother with mental health issues? You know how the USA works. I don’t believe OpenAI will go too far; they’ll just report indiscriminately.

        I might even be reported for all I know; sometimes I just like to see how LLMs react when I say I’ll commit horrible stuff like school shootings or terrorism. The NSA will just feed it into their mass-surveillance algorithm to flag the most important profiles, and that will be it.

        The war on drugs is so much more important than detecting mental health crises, y’know. It sells better.