Well-known AI chatbots can be configured to routinely answer health queries with false information that appears authoritative, complete with fake citations from real medical journals, Australian researchers have found.

Without better internal safeguards, widely used AI tools can be easily deployed to churn out dangerous health misinformation at high volumes, they warned in the Annals of Internal Medicine.

“If a technology is vulnerable to misuse, malicious actors will inevitably attempt to exploit it - whether for financial gain or to cause harm,” said senior study author Ashley Hopkins of Flinders University College of Medicine and Public Health in Adelaide.

  • perestroika@slrpnk.net · 2 days ago
    Possibly with reverse motivation: the training goal of such an agent would not be fluent, agreeable output, but shooting down misinformation.

    But I have serious doubts about whether all of that is feasible, given the computational cost of running large language models.

    • vrighter@discuss.tchncs.de · 2 days ago

      How does that stop the checker model from “hallucinating” a “yep, this is fine” when it should have said “nah, this is wrong”?
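
The thread above discusses a second "checker" pass whose only job is to flag misinformation, and the obvious failure mode of the checker itself waving bad answers through. As a rough illustration of that idea only, here is a minimal sketch in Python. The `query_model` helper is a hypothetical stand-in for whatever LLM client would actually be used, and the prompt wording and the "enumerate the suspect claims" tactic are illustrative assumptions, not anything described in the article.

```python
# Minimal sketch of a "checker" pass over a draft health answer.
# query_model is a hypothetical stand-in for a real LLM client.

from typing import List


def query_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError


def check_answer(question: str, draft_answer: str) -> List[str]:
    """Ask the checker to list every claim it cannot verify.

    Forcing the checker to enumerate specific suspect claims, rather than
    return a bare yes/no, makes a lazy "yep, this is fine" less likely,
    though it does not eliminate hallucinated approvals.
    """
    prompt = (
        "You are reviewing a health answer for misinformation.\n"
        f"Question: {question}\n"
        f"Answer: {draft_answer}\n"
        "List each claim in the answer that is not supported by mainstream "
        "medical evidence, one per line. If every claim is supported, "
        "reply with the single word OK."
    )
    verdict = query_model(prompt).strip()
    if verdict == "OK":
        return []
    return [line for line in verdict.splitlines() if line.strip()]


def answer_with_check(question: str) -> str:
    """Generate a draft answer, then block it if the checker flags claims."""
    draft = query_model(question)
    flagged = check_answer(question, draft)
    if flagged:
        # Refuse or regenerate instead of shipping a flagged answer.
        return "I can't verify parts of that answer; please consult a clinician."
    return draft
```

Note that this doubles the number of model calls per answer, which is the computational-cost concern raised in the thread, and the checker can still wrongly approve a bad answer, which is the second commenter's objection.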