A salesman for an AI consulting company made the comment that we don’t expect perfection from humans, so why should we expect it from AI? He was smug about it, too, like it was his big gotcha. Joke’s on him, I’m the one that talked the bosses out of spending money with them.
“Is your AI accountable for mistakes? All these idiots are…”
That’s such a bad argument too. The whole point of technology is to help perfect the output of humans. Why would we buy technology that is known not to do that?
“You can get pretty good results most of the time and save money on labor!” Not like our whole business model is focused on expertise and compliance or anything. Surely our clients won’t mind a few little mistakes here and there, as a treat.
The neat part is that we can’t even claim that they’re little mistakes or that there are few of them.
If we can’t expect better from an AI than from a human, why should we use the AI (other than so you don’t have to pay workers)?
I think there’s an important semantic difference between worse performance and correctness. Tools, like AI, can underperform compared to humans and still be very useful and worth investing in, but only as long as they perform correctly.
Yeah, the ‘but’ is the entire problem. In my experience, LLM chatbots are like if you made a 12yo a junior admin and fed them speed. Very quick to give you a confident answer, but wrong more often than not. The worst part is that a lot of what I’m doing is coding, and it gets basic commands and syntax wrong.
Like there’s a big shortage of unemployed humans
Unless you plan on enslaving them, please refer to my previous comment RE: paying humans.