The Swedish prime minister, Ulf Kristersson, has come under fire after admitting that he regularly consults AI tools for a second opinion in his role running the country.

Kristersson, whose Moderate party leads Sweden’s centre-right coalition government, said he used tools including ChatGPT and the French service Le Chat. His colleagues also used AI in their daily work, he said.

  • JasSmith@sh.itjust.works · 2 days ago

    “We didn’t vote for Google” says angry luddite who doesn’t like his politicians using Google.

    “We didn’t vote for an iPhone” says angry luddite who doesn’t like his politicians using iPhones.

    This is such a silly argument. Politicians can and will use tools as they see fit during their tenure.

    • crapwittyname@feddit.uk · 2 days ago

      Gonna have to heavily disagree with this take. First, “Luddite” isn’t the insult you seem to think it is - the Luddites were pretty righteous people.
      Second, and I think this is the most important thing, comparing a politician’s use of Google, an iPhone and an LLM is a big fat false equivalence.

      • vandsjov@feddit.dk · 2 days ago

        comparing a politician’s use of Google, an iPhone and an LLM is a big fat false equivalence.

        To some degree, searching with Google and using an LLM can have the same issue: Google serves up results that can change your view on things, just as an LLM can. The difference is that with Google results you should be getting human-created material, and you can often learn the authors’ political views and take that into account in your research, whereas an LLM is much more of a black box: you can’t tell what shaped its answers or how they’ve been influenced by the LLM’s creators.

        I do agree that bringing the iPhone into this discussion is, at best, far-fetched; the same could be said of any technology brand/model.

      • JasSmith@sh.itjust.works · 2 days ago

        They are all tools people use to find, collate and present data. They do it in different ways, but they are all under the control of the user. If you find iPhones too distinct, then consider Google. Both Google and ChatGPT serve content determined by an opaque algorithm. The content may or may not be real. Some of it is completely false. It is up to the user to make an informed determination themselves.

        • Mesophar@pawb.social · 2 days ago

          The difference is that a search engine result (before they started adding LLM results) will give you individual articles and pages with the information you’re looking up. You will get a lot of fake results and sponsored articles that push certain viewpoints or agendas, but in theory you can find the sources for that information on those pages (I say “in theory” because not every article lists where its information was sourced from, but at the very least you can usually find the author’s name).

          For the results from an LLM, you get an amalgamation of all that data, spit out in a mix of verified and fake information. It can hallucinate information, report fabrications as facts, and miss the context of what you’re asking entirely. (Yes, a search result can miss what you’re asking as well, but it’s usually more immediately evident.) Depending on how it’s used, the longer the session goes on, the more likely the answers are to be tailored to what it expects you want. If it’s used simply for “what is the current exchange rate between country A and country B”, you might get a wrong answer, but it’s probably an isolated mistake.

          If you start asking it for a second opinion, for it to appraise what you’re saying and give you feedback, you’ll get answers further and further from impartiality and more and more in line with your own pattern of thinking.
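
          The drift described here has a concrete mechanism: chat-style LLM APIs are stateless, so each request re-sends the entire conversation history, and the user’s earlier framing conditions every new completion. A minimal sketch of that loop, assuming the OpenAI Python SDK (the model name and prompts are purely illustrative):

          ```python
          # Minimal sketch, assuming the OpenAI Python SDK (pip install openai)
          # and an OPENAI_API_KEY in the environment; model name is illustrative.
          from openai import OpenAI

          client = OpenAI()

          # Chat models are stateless: the caller keeps the history
          # and re-sends all of it with every request.
          messages = [
              {"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Policy X is clearly a failure. Appraise my argument."},
          ]

          first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
          messages.append({"role": "assistant", "content": first.choices[0].message.content})

          # The follow-up is answered in the context of the framing loaded above,
          # which is why long "second opinion" sessions can drift toward agreement.
          messages.append({"role": "user", "content": "So you agree it should be scrapped?"})
          second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
          print(second.choices[0].message.content)
          ```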

          • JasSmith@sh.itjust.works · 2 days ago

            For the results from an LLM, you get an amalgamation of all that data, spit out in a mix of verified and fake information. It can hallucinate information, report fabrications as facts, and miss the context of what you’re asking entirely.

            I don’t agree with your delineation. Both LLMs and Google serve a mix of verified and fake information. Both “hallucinate” information. Much of what Google serves now is actually created by LLMs. Both serve fabrications as facts and miss the context of what one is “asking” entirely. Both serve a mix of human-created and LLM-generated content, and neither provides any way to tell the difference.

            • Mesophar@pawb.social · 2 days ago

              Before the advent of LLMs it was a different playground. I agree that LLM output has now poisoned search engines as well, but there are non-Google search engines that are slightly better at filtering those sorts of results.

              I still think it’s an important distinction. A search engine will list a variety of results, and you can select which ones you trust. It gives you more control over the information you ultimately ingest, letting you avoid sources you don’t trust.

              If you use LLMs in conjunction with other tools, then it’s just another tool in your toolbox and these downsides can be mitigated, I suppose. If you rely entirely on the LLM, though, they only compound.

              • JasSmith@sh.itjust.works · 17 hours ago

                If you use LLMs in conjunction with other tools, then it’s just another tool in your toolbox and these downsides can be mitigated, I suppose. If you rely entirely on the LLM, though, they only compound.

                I think I broadly agree. Both can provide a list of sources and citations if used correctly. Both can be used to find poor-quality data. It is up to the user to use their judgement to consume reputable and valid information.

    • YknsNMo000@thelemmy.club · 2 days ago

      Remember when Merkel used an iPhone and then it turned out the Americans were spying on her for industrial-intelligence purposes?

      Of course I don’t fucking want politicians to use a machine from a company directly linked to a surveillance scandal. I’m not fucking stupid.