• givesomefucks@lemmy.world
    12 hours ago

    First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy cautions, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

    The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

    Both pretty reasonable

    • Mark with a Z@suppo.fi
      12 hours ago

      Sounds like they emphasize that contributors are still responsible for the changes. Some people are way too trusting in the robots’ abilities.

    • ozymandias@sh.itjust.works
      11 hours ago

      anyone using LLMs also has to check that incorrect information hasn’t been injected.

      It seems reasonable, but it’s pretty easy to miss crucial mistakes when one sentence in 300 is wrong and there are 25 cases of technically correct but misleading information.

      • glimse@lemmy.world
        10 hours ago

        Your worry would only be reasonable if it were commonplace to write 300-sentence Wikipedia articles from scratch lol

        That’s like 5x as long as the average article. Anyone submitting that much at once will raise an eyebrow.