  • pineapplelover@lemmy.dbzer0.com · ↑8 · 1 hour ago

    Dude the only guardrails are

    1. No fully automated killings

    2. No mass surveillance

    You could literally do anything else; you could even automate killing people as long as a person approves.

    Trump booted Anthropic because they couldn’t lift these two guardrails. Fuck me

  • pnelego@lemmy.world · ↑2 · 44 minutes ago

    I’m wondering if this is a play for a future bailout. OpenAI knows they are fucked, and instead of just going away like most companies do when they fail, they are embedding themselves in the government to secure a bailout under the guise of being a critical defence vendor.

    Furthermore, I’m not convinced the researchers and critical personnel will work for a company that does this. I think we’re about to see the biggest jumping of a ship so far in the industry.

  • raskal@sh.itjust.works · ↑92 · 19 hours ago

    Canada recently had its second-worst school shooting ever. The killer had many interactions with ChatGPT that warranted banning her account. A whistleblower has claimed that they wanted to inform Canada’s police of these comments but were denied by OpenAI’s management.

    They had a chance to stop the deaths of 8 people, most of whom were young children, but failed to do anything.

    FUCK CHATGPT AND THOSE BASTARDS THAT RUN IT

    • jagungal@aussie.zone · ↑11 · 15 hours ago

      Why would you not contact police? I understand that this is a systemic failure and the blame does not lie with that employee, but if it were me I’d rather be out of a job than have those deaths on my conscience for the rest of my life.

      • Kissaki@feddit.org · ↑2 · edited · 5 hours ago

        In my eyes some blame does lie with them. A systemic failure is a failure of many parts. An employee taking notice and then following bad instructions is one of them.

        I don’t know what information they had, but if they were at the point of intending to share, it seems like whistleblowing would have been the just and moral thing to do even if it means ignoring immediate authoritative structure.

      • Takios@discuss.tchncs.de · ↑12 ↓1 · 9 hours ago

        It’s probabilities. If you report it you’re 100% out of a job but have only maybe prevented something bad from happening. If you don’t report, you keep your job but maybe something bad happens. Reliance on a job for survival shifts the decision even further toward the course of action that’ll keep you your job.

  • trackball_fetish@lemmy.wtf · ↑13 · 14 hours ago

    Anyone stockpiling ai prompt vulnerabilities for when we’ll eventually need them to fight off some deathbots?

    • Credibly_Human@lemmy.world · ↑11 ↓3 · 13 hours ago

      This is a nonsensical and unrealistic fear/threat to put at the top of your list.

      The biggest problems are happening right now, not in some ’90s sci-fi film.

      One of those threats is automated weaponry and mass surveillance, but not in the comic-relief way you talk about it.

      • trackball_fetish@lemmy.wtf · ↑1 ↓1 · 7 hours ago

        Pray tell the purpose of your comment, Brutus

        You take issue with referring to these machines as deathbots? I’m allowed to poke fun at things that will eventually be used to attempt murdering me you absolute anthropomorphic dunce cap.

        I wasn’t referring to some far off scenario, more for when this situation happens

        I can assure you not only that I live somewhere where these very things are above me daily, but that I’m out here working my ass off in unspeakable ways to prevent exactly the aforementioned scenario for people like yourself

        Direct your anger elsewhere, the energy could be spent doing something useful

      • vacuumflower@lemmy.sdf.org · ↑1 ↓1 · 7 hours ago

        It’s a trope that every problem posed by the plot has a solution of difficulty level properly fit to the audience.

        A culture of arcade games, unfortunately, has such long-standing effects.

        While we are playing a roguelike. With no respawns.

    • ILikeBoobies@lemmy.ca · ↑4 ↓2 · 10 hours ago

      A machine is more expensive and less expendable than a human. You don’t need to worry about killbots.

      • bearboiblake@pawb.social · ↑7 · 6 hours ago

        Sorry, but this is a stupid take. Humans can refuse to fire on a crowd of innocent people. Killbots cannot. The unquestioning loyalty is worth more than money can buy.

          • ArmchairAce1944@discuss.online · ↑1 · 2 hours ago

            The reason shooting people was too difficult is that many of the Einsatzgruppen members broke down psychologically, and some became so murderous that they might not have been fit to reenter civilian society. They used gas chambers because it was sufficiently distanced from the actual act of killing (it just involved rounding people up into a room and having some guy with a canister dump the stuff into a vent; none of the actual killers even had to see the results of their actions, as the cleanup was done by another group) that they could do it without creating that same problem.

        • dejova281@lemmy.world · ↑1 ↓1 · 6 hours ago

          Brainwashing is a thing, just look at the modern despots and their foot soldiers.

    • Chaotic Entropy@feddit.uk · ↑37 · 19 hours ago

      Sam Altman is just some fail-upward money guy; he’s eventually been removed from basically every prior position he has held.

      • PolarKraken@lemmy.dbzer0.com · ↑20 · 18 hours ago

        Seems like his career has largely been lying and making impossible promises, so. The folks who do that well always manage to exit the stage before the magic tincture is revealed to just be piss 🤷‍♂️

  • cloudskater@piefed.blahaj.zone · ↑83 ↓4 · 23 hours ago

    I cannot believe this is what it took for a boycott to go more mainstream. Tell me more about how so many people have no respect for the environment or the artists whose work they gleefully consume.

  • perishthethought@piefed.social · ↑169 ↓2 · 1 day ago

    mainstream

    I’ll believe that when my sisters start saying this. Till then, it’s just us privacy fans screaming in a dark cave, enjoying the echo.

    • criscodisco@lemmy.world · ↑2 · 2 hours ago

      I had a coworker tell me how cool Copilot was because he asked it a question and it found the answer in an email in his outlook mailbox. I thought, “you needed AI to search your email?”

      We are probably cooked.

    • Xorg_Broke_Again@sh.itjust.works · ↑88 ↓1 · 1 day ago

      It’s always like this. We get a ton of articles on how everyone is suddenly boycotting/deleting [insert thing] but when you ask someone in real life, they usually have no idea what you’re talking about.

      • EldritchFemininity@lemmy.blahaj.zone · ↑6 · 13 hours ago

        The one thing I will say is that there does seem to be a generalized dislike for AI that has all the investors and upper-management types nervous. Even their own studies show that people generally either don’t care about AI in their products or actively dislike it and find it intrusive. There was a study by a phone company from this past summer or fall that concluded that 80% of their users had no interest in AI or found that it actively made their experience worse, and there have been plenty of pretty damning reports about how useful it’s been in various industries (just look at Microslop). That is not conducive to convincing investors to fund your product and does not show a viable path to making a profit in the future.

        We’ve seen similar things happening recently with car manufacturers walking back their big touchscreens (with some help from regulation in civilized places that care about things like “pedestrian fatalities” - like Europe) due to consumer sentiment. They tried for nearly a decade to push bigger and bigger screens into cars and remove physical buttons, and now they’re moving in the other direction. Completely anecdotal evidence, but the last time I went to buy a car I told the salesman at the dealership that I wasn’t interested in cars newer than a certain year because that was when they increased the size of the screen and put them in a more obnoxious spot on the dashboard, and he said that he heard similar sentiments from practically everybody who came in looking to buy a car - everybody hated the bigger screens.

      • The Quuuuuill@slrpnk.net · ↑28 ↓1 · 1 day ago

        so explain it to them gently. you won’t reach everyone, but you’ll reach more people than accepting this status quo

    • XLE@piefed.social · ↑13 · edited · 19 hours ago

      Anthropic is scum, accepting money from foreign dictators, forcing their software on minorities while insisting it was conscious and had emotions just like them, praising the Trump administration, making up scary stories to get more funding…

      …In many ways, they’re worse than OpenAI. They’re just running with the same playbook that Sam Altman used to use to pretend he was a good guy.

      • Vlyn@lemmy.zip · ↑2 · 17 hours ago

        I mean they praised the Trump administration for benefiting their business, which is… fair? I guess?

        If you do ask Claude Sonnet 4.6 about Trump it leans quite negative, as it should.

        • XLE@piefed.social · ↑3 · 16 hours ago

          I missed when sucking up to the Trump administration and echoing Cold War style nationalism was “fair”. If that’s the case, OpenAI’s behavior is fair.

          Fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems.

          Our strong preference is to continue to serve the Department and our warfighters

          Dario “Warfighter” Amodei

          • Vlyn@lemmy.zip · ↑1 · 7 hours ago

            I missed when sucking up to the Trump administration and echoing Cold War style nationalism was “fair”. If that’s the case, OpenAI’s behavior is fair.

            It’s just capitalism. Anthropic pushed against the administration and now they are about to be branded as “supply chain risk”. OpenAI bent over and are going to get billions in funding that they sorely need (and hopefully don’t get, let them fail).

            You miss the mark though: Anthropic only praised the administration, but that’s just words to give the Twitter pedo in chief a pat on the head. OpenAI actually signed a contract and they are providing their service. Massive difference.

            • XLE@piefed.social · ↑3 ↓1 · edited · 2 hours ago

              They both signed the contract. They both allegedly hold the exact same set of red lines. One of them just gets to pretend to be the virtuous company with the virtuous capitalist CEO, despite showing tons of red flags that should have you scrambling to be as concerned about them as OpenAI.

              If you read their statement, Good Guy Anthropic is totally cool with

              • Mass surveillance of non-Americans
              • Targeted surveillance of Americans
              • Semi-autonomous bombings
              • Fully autonomous bombings… in the future
              • The exact same Red Scare BS that Sam Altman talks about
              • Vlyn@lemmy.zip · ↑1 · 23 minutes ago

                They literally didn’t sign the new contract and now they are getting punished for it. What are you even talking about?

        • XLE@piefed.social · ↑6 · 19 hours ago

          Sorry, not quite, but close. From 404 media

          When users confronted Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group.

          • Hackworth@piefed.ca · ↑3 · 19 hours ago

            Oh, that guy! To be fair, that’s one employee, not Anthropic’s actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can’t seem to find anything about whether or not they went through with those contracts.

            • XLE@piefed.social · ↑6 · edited · 18 hours ago

              Jason Clinton is Anthropic’s Deputy Chief Information Security Officer. That means Jason knew better, and he was using his position as a moderator (and supposedly a security expert) to try gaslighting a vulnerable minority into believing his favorite toy was “secure” when it was not.

              • Hackworth@piefed.ca · ↑3 · 18 hours ago

                I mean, I’m not gonna defend him. But fucking up a discord that you’re a mod of isn’t really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we’ve decided to call it. And I assume he is part of the same vulnerable minority?

                • XLE@piefed.social · ↑2 · edited · 18 hours ago

                  Every example we have of Anthropic’s behavior paints a picture of an immoral company that pretends to be moral. It’s bad enough that they continue doing harm, but then they dress it up with phrases like “AI Safety” and “Information Security”. (And every press release they create to describe how scary good their system is, tends to be followed up by a sudden cash infusion from an openly morally bankrupt company like Google or Amazon.)

                  I reserve zero empathy for the people on the abuser side of an abusive dynamic. Maybe Elon Musk is autistic too. I don’t really care. Only Moloch knows their hearts. I’ll judge them for their actions.

  • 𝕲𝖑𝖎𝖙𝖈𝖍🔻𝕯𝖃 (he/him)@lemmy.world · ↑16 ↓1 · 18 hours ago

    Glad that I’ve switched platforms. sam altman should probably be in prison or something.

    I’ve been using Venice lately, they claim (I have done zero research to determine if this is true) that they’re privacy focused. They do run uncensored models, which is a big plus.

    That said, I find myself using the lying machine less these days. It was like a fun video game when I first got my hands on it, entertaining for a while, and I’m moving on. Maybe I’m not imaginative enough to use it to the fullest potential, but I’m having more fulfillment actually writing and actually drawing (even though I am very bad at both).

  • SpiceDealer@lemmy.dbzer0.com · ↑19 ↓3 · 19 hours ago

    I’d argue that an armed uprising would have a greater effect than a smaller internet-based boycott but I’m just some random guy on some niche internet forum so… who’s to say?

    • GreenBeard@lemmy.ca · ↑2 · 12 hours ago

      Quite frankly we don’t have the organizational infrastructure for that. An army, including a rebellion, marches on its stomach. Small protest organization feeds into larger-scale organization down the road. We’ve got to start somewhere.