• XLE@piefed.social
    18 hours ago

    Every example we have of Anthropic’s behavior paints a picture of an immoral company that pretends to be moral. It’s bad enough that they continue doing harm, but then they dress it up with phrases like “AI Safety” and “Information Security”. (And every press release they create to describe how scary good their system is, tends to be followed up by a sudden cash infusion from an openly morally bankrupt company like Google or Amazon.)

    I reserve zero empathy for the people on the abuser side of an abusive dynamic. Maybe Elon Musk is autistic too. I don’t really care. Only Moloch knows their hearts. I’ll judge them for their actions.

    • Hackworth@piefed.ca
      18 hours ago

      I did find an update on that funding, btw. Anthropic already took money from Qatar (the QIA), though the amount isn’t known - likely around $100M. The UAE deal has yet to happen, but if it does, it would be “hundreds of millions”.

      • XLE@piefed.social
        17 hours ago

        Interesting. I appreciate you doing the digging to check. It’s frustrating that people spent so much time looking at the one red line Anthropic hadn’t crossed that they never looked at all the red lines already crossed - in the very article about those supposed red lines. Such is PR, I guess.

        I suppose you saw that “He Will Not Divide Us 2.0” letter from OpenAI and Google employees who promised to stand behind Anthropic. Never mind the fact that OpenAI split… Doesn’t anybody know Google already does mass surveillance of Americans?

        …I ramble.