A new study from the Columbia Journalism Review showed that AI search engines and chatbots, such as OpenAI’s ChatGPT Search, Perplexity, DeepSeek Search, Microsoft Copilot, Grok and Google’s Gemini, are wrong far too often.
When LLMs are wrong, they are only ever confidently wrong. They don’t know any other way to be wrong.
They do not know right from wrong; they only know the probability of the next word.
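For what it’s worth, “probability of the next word” is literal: a causal language model only outputs a distribution over its vocabulary for the next token. Here’s a minimal sketch of that idea, assuming the small GPT-2 model from Hugging Face transformers (the model, prompt, and top-k value are just placeholders for illustration, not anything from the study):

```python
# Minimal sketch: a causal LM only scores "what token comes next", nothing more.
# Model name, prompt, and k below are arbitrary illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Is X legal in state1? The answer is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probability distribution over the *next* token only; there is no notion of
# "true" or "false" anywhere in here, just which continuation is most likely.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r:>12}  p={prob:.3f}")
```

Sampling from that distribution is all that “answering” ever is, which is why the model’s confidence and its correctness have nothing to do with each other.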
LLMs are a brute-forced imitation of intelligence. They do not think, and they are not intelligent.
But, I mean, people today believe that 5G vaccines made the frogs gay.
We only notice when they are wrong, but they can also be right just by accident.
It’s all hallucinations. It’s just that some of them happen to be right.
LLMs hallucinate. In other words, water is wet.
They are, in the end, BS-generation machines trained so heavily that they accidentally happen to be right often enough.
Then again, the search engines themselves have also been proven to be wrong, inaccurate, and just plain irrelevant. I’ve asked Google questions out of curiosity about things I need to know about my state, and its results always pull up different states that don’t apply to mine.
Well, that’s common, but the big thing is you can see what you are working with. There’s a big difference in at least knowing you need to try a different site. Say:
Google: Law about X in state1
Top result: Law about X in state3: it’s illegal
Result two pages in: here’s a list of each state and whether law X is legal there… (state1: legal)
Versus ChatGPT:
Is X legal in state1?
ChatGPT: No
Narrator: it was legal in state1
I’m confidently wrong a lot of the time too. But I mainly do that just to fuck with people.