Going further: the biggest problem with AI, and therefore the biggest factor in deciding whether it is suitable for a task, is that the consequences of its failures are distributed roughly uniformly across severity. It is not, like a person, more likely to err in small, forgivable ways than in grievous ones.
In other words, unlike humans, who actively try to avoid their nastiest and deadliest possible mistakes, when AI fails it can fail just as easily in the most horrible and deadly ways as in the most trivial ones.
That’s why there are so many instances of LLMs giving advice any human would recognise as obviously dangerous, like telling people to put glue on pizza to make it look good, or telling people with suicidal thoughts to kill themselves. Unlike a human, an AI has no mechanism for flagging an output it is about to produce as “obviously dangerous” and generating a different output instead.
This is why using AI to generate fluff filler for e-mails is fine, but using it in systems where errors can easily cost lives is not.
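To make the intuition concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the Beta-distributed “human-like” severities, the uniform “AI-like” severities, and the 0.9 “catastrophic” threshold are assumptions, not measurements of any real system. It just shows how much more often catastrophic-severity errors turn up when severity is flat instead of skewed toward the trivial:

```python
import random

# Toy model: every error gets a severity on a 0..1 scale.
# "Human-like" errors are skewed toward low severity, standing in for
# people actively steering away from their worst possible mistakes.
# "AI-like" errors are uniform: a catastrophic failure is drawn as
# readily as a trivial one.

N = 100_000
CATASTROPHIC = 0.9  # made-up threshold for a "catastrophic" error

human_errors = [random.betavariate(1, 9) for _ in range(N)]  # mass near 0
ai_errors = [random.uniform(0, 1) for _ in range(N)]         # flat

def share_catastrophic(severities):
    """Fraction of errors at or above the catastrophic threshold."""
    return sum(s >= CATASTROPHIC for s in severities) / len(severities)

print(f"human-like: {share_catastrophic(human_errors):.4%} catastrophic")
print(f"ai-like:    {share_catastrophic(ai_errors):.4%} catastrophic")
```

With those made-up numbers, essentially none of the human-like errors cross the threshold, while roughly 10% of the uniform ones do, which is the whole argument in miniature.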