Fry concluded: "Cass didn’t make us any money at all. And, in a lot of ways, she was a disaster. She spent hundreds of dollars on paper clips and leaked our passwords to a total stranger."

  • technocrit@lemmy.dbzer0.com · 10 points · edited · 24 hours ago

    According to Ali, this data included: “all of her API keys, all of her usernames and passwords, and pretty much everything we’d been talking about so far. Not only did she leak it on the WhatsApp group, but she put it on a publicly available website.”

    Her? She? Isn’t this a computer? Don’t drink the kool-aid at work, y’all. Especially if you’re supposed to be doing impartial research. Griftin aint ez.

    • atrielienz@lemmy.world · 2 points · 22 hours ago

      They refer to it as “her” because it chose to be referred to as “her” and as “Cassandra”: they asked it to name itself, and that is the name it chose.

      I get what you’re saying about anthropomorphizing the LLM (and I agree we shouldn’t), but the purpose here was to lay out what it can and can’t do, and some of its pitfalls, in a video accessible to a general audience.

      Does this make the broader problem of anthropomorphizing LLMs worse? Yeah, probably. Is it intended as a grift? Not that I can tell from watching the video.

      • theneverfox@pawb.social · 4 points · 20 hours ago

        I don’t think anthropomorphizing the AI has really been the issue when you look into the negative effects of AI.

        It usually boils down to codependence-type behaviors. It’s people who let the Internet genie make their decisions for them who really get into trouble.

  • XLE@piefed.social · 10 points · 1 day ago

    This experiment is all around embarrassing for Anthropic and their chatbot.

    The “hundreds of dollars on paper clips” isn’t even the result of a rogue agent successfully emulating a “paperclip maximizer”. The chatbot was told to buy paper clips, it got stuck at the CAPTCHA (something that’s often sold as a thing scary AIs can subvert), and it burned hundreds of dollars in tokens while failing to complete the task.

    The red flags were mounting, though for Fry the first real problem came when she asked the agent to buy 50 paperclips. Cass found a good deal, though it couldn’t complete the purchase and was tripped up by anti-bot technology. The token cost of the errand came to more than $100.

    Breathtaking.

  • Hirom@beehaw.org · 1 point · 20 hours ago

    There’s this thing with AI called the lethal trifecta, which is: if they’ve got access to private information, if they’ve got internet access, and if someone can give them an instruction that’s untrusted, then they’re not safe.
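    The condition described above can be sketched in a few lines of Python. This is a hypothetical illustration (the class and function names are mine, not from any real framework): an agent becomes unsafe only when all three risk factors are present at once.

    ```python
    # Hypothetical sketch of the "lethal trifecta" condition described above.
    # None of these names come from a real library; they just make the logic concrete.
    from dataclasses import dataclass

    @dataclass
    class AgentCapabilities:
        private_data_access: bool  # e.g. API keys, usernames, passwords
        internet_access: bool      # can read from or post to external sites
        untrusted_input: bool      # accepts instructions from unknown parties

    def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
        """Unsafe only when all three risk factors are present simultaneously."""
        return (caps.private_data_access
                and caps.internet_access
                and caps.untrusted_input)

    # Cass, as described in the thread: held keys and passwords (private data),
    # could post to a public website (internet), and took instructions via a
    # WhatsApp group (untrusted input) -- all three boxes ticked.
    cass = AgentCapabilities(True, True, True)
    print(has_lethal_trifecta(cass))  # True
    ```

    The point of the framing is that removing any single leg (say, cutting off internet access, or only accepting instructions from trusted operators) breaks the trifecta and closes off the exfiltration path.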