• mojofrododojo@lemmy.world
      4 days ago

      As it should be: anyone with half a brain would reconsider their actions when prompted to self-harm by a fucking executable.

      UNFORTUNATELY, HERE WE ARE, in reality, where people are so fucking willing to turn off their once-functional grey matter because the chatbot told them they were gonna be rich, famous, etc.

      So good for you, but also look out for society: it's not only going to harm the ones it drives crazy, but the victims of that crazy as well.

    • Hackworth@piefed.ca
      4 days ago

      “Role-playing machine” is where it seems like the research is ending up. Language always has an implied communicator, and therefore an implied persona to adopt. LLMs are foremost maintaining a contextual role. Post-training is an attempt to keep them in the Assistant role, but (particularly as contexts get large) it’s trivial to push them into nearly any role imaginable. We made an improv bot that’s so good at playing a coder that it can actually code, kinda.

      • mojofrododojo@lemmy.world
        4 days ago

        I wish there were some way to convince the idiots that LARGE LANGUAGE MODELS ARE NOT INTELLIGENCE.

        They’re hotwired ELIZA with a shit-ton more computational grunt, but they aren’t intelligence, and these companies foisting it on people without proper warnings and guardrails are just asking for tragedies.
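
        For anyone who hasn't seen it: ELIZA was Joseph Weizenbaum's 1966 chatbot, which held "conversations" through nothing but pattern matching and text substitution. A minimal sketch of that idea, with made-up rules just for illustration:

        ```python
        import re

        # A few invented pattern -> response-template rules in the spirit of ELIZA.
        # Captured groups get spliced into the reply; no understanding is involved.
        RULES = [
            (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
            (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
            (re.compile(r".*\b(mother|father)\b.*", re.I), "Tell me more about your {0}."),
        ]

        def eliza_reply(text: str) -> str:
            """Return the first matching canned response, or a generic fallback."""
            for pattern, template in RULES:
                match = pattern.match(text)
                if match:
                    return template.format(*match.groups())
            return "Please go on."  # default when nothing matches

        print(eliza_reply("I feel ignored"))  # -> Why do you feel ignored?
        print(eliza_reply("ok"))              # -> Please go on.
        ```

        The "hotwired" part is that an LLM swaps the hand-written rules for a learned next-token predictor with vastly more compute behind it, but the loop is the same: consume text, emit statistically plausible text.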