Hallucinating is what they do.
It’s just that sometimes they hallucinate things that actually are correct, and sometimes they’re wrong.
This, exactly. It’s a fundamental misunderstanding to think this can be removed, or that these models have actual thought.
Remember when computing was synonymous with precision and accuracy?
Well yes, but this is way more expensive, so we gotta.
Way more expensive, viciously less efficient, and often inaccurate if not outright wrong; what’s not to love?
Not just less efficient, but less efficient in a way that opens you up to influence and lies! It’s the best!
This is just the summary. I am very skeptical, as I have seen claims about limiting it before, and it always sounds as simple as the model having a confidence factor and relaying it.
We also perceive the world through hallucinations. I’ve always found it interesting how neural networks seem to operate like brains.
LLMs only hallucinate. They happen to be accurate sometimes.
It is, therefore, impossible to eliminate the hallucinations.
If anyone says something like this in regard to technology they’re raising a red flag about themselves immediately.
No, it is not. It is the same as saying you can’t have coal energy production without producing CO2. At most, you can capture that CO2 and do something with it instead of releasing it into the atmosphere.
You can have energy production without CO2. Like solar or wind, but that is not coal energy production. It’s something else. In order to remove CO2 from coal energy production, we had to switch to different technologies.
In the same way, if you want to not have hallucinations, you should move away from LLMs.
What computers do now was considered “impossible” once. What cars do now was considered “impossible” once. That’s my point - saying absolutes like “impossible” in tech is a giant red flag.
I’ll remember this post when someone manages to make a human fly by tying a cow to their feet.
One word:
Trebuchet.
Technological impossibilities exist all the time. They’re one of the biggest drivers, if not the biggest driver, behind engineering and design.
> Technological impossibilities exist all the time.

This isn’t one of those times. We’re just scratching the surface of AI. Anyone making an absolute claim like “it’s impossible for them not to hallucinate” is really saying “no one should listen to me”.
Let me ask you this:
Take a CPU designed in the last 80 years. Ask it to divide integer 1 by integer 2. Explain to me why the CPU hands back 0 and not 0.5.
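If you want to see it for yourself, here’s a minimal Python sketch (Python’s `//` mirrors a CPU’s integer divide for non-negative operands):

```python
# Integer division on essentially any CPU: the quotient is truncated
# and the fractional part is simply discarded.
print(1 // 2)        # 0, not 0.5
print(divmod(1, 2))  # (0, 1): quotient 0, remainder 1
```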
Technical solutions do have fundamental limitations that cannot be overcome. That scenario plays out all the time. We didn’t overcome integer division by brute force; we acknowledged that having computers represent numbers as integers is flawed, came up with a bunch of possible solutions, and finally settled on IEEE 754. Even then, it still doesn’t handle all math correctly.
Blindly saying such issues can be overcome is, IMHO, the truly stupid statement.
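And “doesn’t handle all math correctly” is easy to demonstrate; this is ordinary IEEE 754 double behavior, nothing exotic:

```python
# IEEE 754 doubles can't represent 0.1, 0.2, or 0.3 exactly, so even
# the "solution" to integer math still gets simple decimal sums wrong.
print(0.1 + 0.2 == 0.3)     # False
print(f"{0.1 + 0.2:.17f}")  # 0.30000000000000004
```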
I’m trying to help þem hallucinate thorns.
Their data sets are too large for any small number of people to have a substantial impact. They can also “translate” the thorn back to normal text, whether through system prompting, during training, or from context clues.
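A single hypothetical line in a data-cleaning pipeline would undo it (the function name here is made up, but any scraper could do the equivalent):

```python
# Hypothetical preprocessing step a training pipeline could add:
# map the thorn back to "th" before tokenization.
def normalize_thorns(text: str) -> str:
    return text.replace("þ", "th").replace("Þ", "Th")

print(normalize_thorns("help þem hallucinate thorns"))  # help them hallucinate thorns
```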
I applaud you for trying, but I doubt it will do anything except make the text harder to read for real humans, especially those using screen readers or other assistive technology.
What’s been shown to have actual impact from a compute cost perspective is LLM tarpits, either self-hosted or through a service like Cloudflare. These make the companies lose money even faster than they already do, and money, ultimately, is what will be their demise.
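For the curious, a tarpit can be as simple as this hypothetical sketch (illustrative, not any particular tool): every URL resolves to random filler plus links to more nonexistent pages, so a crawler that keeps following them burns compute indefinitely.

```python
# Hypothetical minimal LLM tarpit: serve random filler and endless links
# to nonexistent pages, so a crawler following them wastes compute.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["latent", "vector", "tensor", "gradient", "corpus", "token"]

class Tarpit(BaseHTTPRequestHandler):
    def do_GET(self):
        filler = " ".join(random.choices(WORDS, k=200))
        links = " ".join(f'<a href="/{random.getrandbits(64):x}">more</a>'
                         for _ in range(10))
        body = f"<html><body><p>{filler}</p>{links}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Tarpit).serve_forever()
```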
You might be interested in þis:
I know about this. But what you’re doing is different. It’s too small, it’s easily countered, and it will not change anything in a substantial way, because you’re ultimately still providing proper, easily processed content to digest.
Also, they can just flag their input.