The family of the girl critically injured in the mass shooting in Tumbler Ridge, B.C., has launched a civil lawsuit against artificial intelligence firm OpenAI.
Did the gun convince the guy that it was a good idea to shoot people, or collaborate? Did the shoes give him ideas or tips on how to do it?
Would it be valid, then, to say that a search engine is responsible when someone searches how to do a crime?
How about a forum where people talk about the subject, even if they themselves weren’t going to participate in the crimes?
The chatbot is just another avenue to finding information you want to find.
I did read the article, and apparently they're suing because OpenAI had flagged the account as a potential harm to self or others, but they had already banned the original account. What more do you want them to do? Report them to the thought police?
If somebody on a forum was helping to plot ways to commit a crime, that person should probably be at least questioned. OpenAI’s chatbot is that “somebody” in this case.
False equivalence. Tools are not people. Are we going after magic 8 balls too?
Come on, don’t be so dishonest. Compare similar things. This “tool” is designed to create humanlike realtime communication, and it’s run by a billionaire rapist who could just as easily have groomed the killer himself (thanks to it being a black-box “live service”, we don’t know where the grooming came from, do we?).
I remember your previous comment from another thread:
Vulnerable people don’t get to outsource responsibility.
But apparently billionaires do.
The tool isn’t sentient; it operates on learned weights and produces output that mimics its training set. LLMs are pretty impressive at what they can output, but it would be dishonest to attribute human qualities to them. There are decades of AI techniques that have been implemented, to varying degrees, in attempts to achieve the same thing. It is on the technical basis, and the technical basis alone, that we should be carefully considering legal constraints.
How much a CEO is worth, how trustworthy they are, what circles they run in, shouldn’t be part of that consideration.
That doesn’t mean I think Altman isn’t a turd who can suck a fat one.
Like I said, it is built to be human-LIKE. Of course it’s not human or sentient, but Sam Altman sells ChatGPT with humanizing language, ascribes human attributes to it, and personally subsidized the grooming of people into suicide and homicide.