• ExcessShiv@lemmy.dbzer0.com
    17 hours ago

    Sifting through information to find out what’s true and what’s not, before presenting it to the public, is a pretty crucial skill for an actual journalist though. Verifying the correctness of their sources and of what they write is probably one of the most important parts of the job, regardless of whether or not they use AI tools.

    • tangeli@piefed.social
      2 hours ago

      You’re absolutely correct. But the problem is bigger than one rogue journalist. Separation of duties is a well-known requirement for robust, reliable processes immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). Holding just the journalist who used AI responsible for the publication of false statements is necessary but not sufficient.

    • just_another_person@lemmy.world
      6 hours ago

      Then maybe they shouldn’t be using these tools in the first place. Other Condé Nast employees have already been blowing the whistle about this, which is funny because Condé Nast sued all the AI companies for stealing content.

      Whether there is a news article about it or not, these shitty tools are being shoved down everyone’s throats, from developers to authors.

      • ExcessShiv@lemmy.dbzer0.com
        15 hours ago

        Then maybe they shouldn’t be using these tools in the first place

        I absolutely agree, they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility just because they were instructed to use LLM tools.