A bill under consideration in New York would provide a private right of action, allowing people to file lawsuits against chatbot owners who violate the law.
Most of the medical information coming up these days is garbage and you should be going to a known, reputable site and searching their database. LLMs have been trained on absolute garbage. There is nothing of value being kept from anyone here.
That actually depends on the LLM.
Specialized medical LLMs can be quite accurate.
I’m sure the quality of the LLM output does vary a lot based on the size of the scope it covers and the training data set.
However, I believe that if it were possible to get an LLM to be "quite accurate" in any context, that would open an easy path to profitability for that tool, yet I don't think we have seen that materialize anywhere.
I believe the best they can manage is being "more accurate than the mean," which is still not accurate enough to reliably make anyone money*.
*Nvidia notwithstanding