A new study from the Columbia Journalism Review found that AI search engines and chatbots, such as OpenAI’s ChatGPT Search, Perplexity, DeepSeek Search, Microsoft Copilot, Grok and Google’s Gemini, are just wrong, way too often.
When LLMs are wrong, they are confidently wrong. They don’t know any other way to be wrong.
They do not know right from wrong; they only know the probability of the next word.
LLMs are a brute-forced imitation of intelligence. They do not think, and they are not intelligent.
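
To make "they only know the probability of the next word" concrete, here’s a minimal sketch of what that looks like under the hood, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (neither is part of the CJR study; they’re just for illustration):

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# "transformers" library and the small public "gpt2" checkpoint
# (chosen purely for illustration).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [batch, sequence, vocabulary]

# All the model produces is a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob.item():.3f}")
```

The model never answers the question; it only ranks which token is most likely to come next. A fluent, confident continuation and a correct one are produced by exactly the same process.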
But I mean people today believe that 5G vaccines made the frogs gay.
We only notice when they are wrong, but they can also be right just by accident.
It’s all hallucinations. It’s just that some of them happen to be right.