Whenever someone at work says “ChatGPT says this” or “Claude says this” or “I asked Gemini and…”, whatever they say after that point is just static, and I never take them seriously as a person again.
I appreciate the honesty when they say it’s an AI response and not genuine knowledge.
When I tell someone “an LLM told me that…”, it’s usually followed by “Let’s see if there’s any truth to it.” An AI response should always be treated as a suggestion, not an answer.
Hell, Google’s AI still doesn’t know which day the F1 GP is on this week. It was wrong by a whole week a while back. Now it’s only off by a day.
An AI response should always be treated as a suggestion, not an answer
Exactly. An AI response can be a great way to get started on a topic you know little about, but it’s never a definitive answer. You have to verify whether it’s actually true. Whether it works. Never trust it blindly.
Whenever someone at work says “ChatGPT says this” or “Claude says this” or “I asked Gemini and…” whatever they say after that point is just static and I never take them seriously as a person again.
red flag
i dunno dude. i used to be a real piece of shit.