The question is whether the AI or the human is more prone to mistakes, and it's hard to answer that without real-world tests, unfortunately.
It's like self-driving cars: of course they're going to be involved in crashes where people die, but humans are such terrible drivers that the computers still come out ahead (except for Tesla, which is really just mislabeled lane assist).
And when someone does die, and someone will, do we roll it out everywhere anyway? As long as there's profit in it!