- cross-posted to:
- technology@lemmy.world
If an LLM can’t be trusted with a fast food order, I can’t imagine what it is reliable enough for. I really expected this to be the easy use case for these things.
It sounds like most orders still worked, so I guess we’ll see if other chains come to the same conclusion.
Why can’t a trillion-dollar AI say “Sir, that’s not reasonable”?
Because they prioritize profit above all else.
Because they train these to be your cheerleader, not some back-talking reasonable person.