- cross-posted to:
- technology@lemmy.world
The Swedish prime minister, Ulf Kristersson, has come under fire after admitting that he regularly consults AI tools for a second opinion in his role running the country.
Kristersson, whose Moderate party leads Sweden’s centre-right coalition government, said he used tools including ChatGPT and the French service LeChat. His colleagues also used AI in their daily work, he said.
The difference is that a search engine (before they started adding LLM results) gives you individual articles and pages containing the information you're looking up. You will still get plenty of fake results and sponsored articles pushing particular viewpoints or agendas, but in theory you can trace the sources for that information on those pages (I say in theory because not every article lists where its information came from, but at the very least you can usually find the author's name).
With an LLM, you get an amalgamation of all that data spat out as a mix of verified and fake information. It can hallucinate information, report fabrications as facts, and miss the context of what you're asking entirely. (Yes, a search result can miss what you're asking as well, but it's usually more immediately obvious.) And the longer a session goes on, the more likely the answers are to be tailored to what the model expects you want to hear. If you use it simply for "what is the current exchange rate between country A and country B", you might get a wrong answer, but it's probably an isolated mistake.
But if you start asking it for a second opinion, to appraise what you're saying and give you feedback, the answers drift further and further from impartiality and closer and closer to mimicking your own pattern of thinking.
I don’t agree with your delineation. Both LLMs and Google serve up a mix of verified and fake information. Both “hallucinate” information, and much of what Google serves now is itself created by LLMs. Both present fabrications as facts and miss the context of what one is “asking” entirely. Both mix human-written content with LLM-generated content, and neither provides any way to tell the difference.
Before the advent of LLMs it was a different playground. I agree that LLM-generated content has now poisoned search engines as well, but there are non-Google search engines that are slightly better at filtering that sort of material out.
I think it is still an important distinction. A search engine lists a variety of results from which you can choose the ones you trust. That gives you more control over the information you ultimately ingest and lets you avoid sources you don’t trust.
If you use LLMs in conjunction with other tools, then they are just another tool in your toolbox and these downsides can be mitigated, I suppose. If you rely entirely on the LLM, though, the problem only compounds.
I think I broadly agree. Both can provide a list of sources and citations if used correctly, and both can surface poor-quality data. It is up to the user to exercise judgement and consume only reputable, valid information.