• infeeeee@lemmy.zip · 9 hours ago

    Saved you a click:

    After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

    First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”
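
    One way to make that accuracy check concrete (a minimal sketch of my own, not anything the policy prescribes; the sentences are invented) is to diff the LLM’s suggestion against your original, word by word, so a silent change of meaning stands out before you save:

    ```python
    # Toy illustration: word-level diff of an LLM "refinement" against the
    # original, so every change can be checked against the cited sources.
    # Standard library only; the example sentences are made up.
    import difflib

    original = "The bridge was completed in 1932 and carries four lanes of traffic."
    llm_edit = "The bridge, completed in 1931, carries six lanes of traffic."

    for token in difflib.ndiff(original.split(), llm_edit.split()):
        if token.startswith(("+ ", "- ")):
            print(token)  # surfaces the year and lane-count changes for review
    ```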

    The second exemption is for translation assistance. Editors can use AI tools for a first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with writing refinements, anyone using LLMs has to check that incorrect information hasn’t been injected.
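
    For the translation case, one injected-error check could look like this toy heuristic (mine, not Wikipedia’s; the sentences are invented): numbers are language-independent, so any figure in the translation with no counterpart in the source deserves a close look. It’s no substitute for actual fluency.

    ```python
    # Toy heuristic: flag numbers that appear in the translation but not in
    # the source text. Catches only one narrow class of injected detail.
    import re

    # German source: "The bridge was opened in 1932 and is 1,280 metres long."
    source = "Die Brücke wurde 1932 eröffnet und ist 1.280 Meter lang."
    translation = "The bridge opened in 1932 and is 1,280 metres long, costing $4 million."

    def number_tokens(text: str) -> set[str]:
        # Strip grouping separators so "1.280" and "1,280" compare equal.
        return {re.sub(r"[.,]", "", tok) for tok in re.findall(r"\d[\d.,]*", text)}

    extra = number_tokens(translation) - number_tokens(source)
    if extra:
        print("Figures with no counterpart in the source:", extra)  # {'4'}
    ```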

    • Rioting Pacifist@lemmy.world · 9 hours ago

      AIbros: we’re creating God!!!

      AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit

      • halcyoncmdr@piefed.social · 8 hours ago

        The takeaway from all LLM-based AI is that the user needs to be smart enough to do whatever they’re asking for anyway. All output needs to be verified before being used or relied upon.

        The “AI” is just streamlining the process to save time.

        Relying on it otherwise is stupid and just proves instantly that you are incompetent.

        • Zagorath@quokk.au · 3 hours ago

          > the user needs to be smart enough to do whatever they’re asking anyway

          I’m gonna say that’s ideal but not quite necessary. What’s needed is that the user is capable of properly verifying the output. Anyone who could do the task themselves definitely can, but the skill extends more broadly: verifying a result is easier than obtaining it. Think of how film critics don’t necessarily need to be filmmakers, or of the P vs NP question in computer science.
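
          As a toy sketch of that verify-versus-find gap (my illustration, not anything from this thread), subset sum makes it concrete: checking a proposed answer is a single cheap pass, while finding one from scratch means searching the subsets.

          ```python
          # Toy P-vs-NP-flavoured example: verification is cheap, search is not.
          from itertools import combinations

          nums, target = [3, 9, 8, 4, 5, 7], 15

          def verify(candidate: list[int]) -> bool:
              # Fast: one pass over the candidate subset.
              return sum(candidate) == target and all(n in nums for n in candidate)

          def find() -> list[int] | None:
              # Slow in general: may try up to 2**len(nums) subsets.
              for r in range(1, len(nums) + 1):
                  for combo in combinations(nums, r):
                      if sum(combo) == target:
                          return list(combo)
              return None

          print(verify([3, 4, 8]))  # True, checked in one pass
          print(find())             # [8, 7] here; any subset summing to 15 would do
          ```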

          • Pyro@programming.dev · 2 hours ago

            But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.

            • Redjard@reddthat.com · 50 minutes ago

              If you don’t have the ability, then you’d do what you would have done 5 years ago: not do it.
              Either submit without it, or don’t submit at all.

            • Zagorath@quokk.au · 51 minutes ago

              At the risk of sounding like an overly obsequious AI… You know what, you’re completely right. I’m honestly not sure what use case I was imagining when I wrote that last comment.

              • Redjard@reddthat.com · 47 minutes ago

                Making text flow naturally, grouping and ordering information, good writing.

                You can verify that two texts have the same facts and information, yet one reads way better than the other. But writing a text that reads well is quite hard.

      • youcantreadthis@quokk.au · 6 hours ago

        Fucking hate those anti-human filth pushing slop into everything. I want to take one apart with power tools.

      • XLE@piefed.social · 5 hours ago

        I don’t think AI users would honestly say it does reformatting either: if you tell a chatbot to reformat text without changing it, it will change the text, because it does not understand the concept of not changing text. Getting burned by that once should be enough to learn the lesson.
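
        One cheap guard against getting burned that way (a toy sketch, not an established tool; the strings are invented) is to treat “reformat only” as a testable claim and check that the words survived untouched:

        ```python
        # Toy check: a "reformatted" text should be identical to the input
        # once whitespace is normalised; anything else means the model
        # changed the words. Standard library only.
        import re

        def _normalise(s: str) -> str:
            # Collapse runs of whitespace so only the words are compared.
            return re.sub(r"\s+", " ", s).strip()

        def same_words(before: str, after: str) -> bool:
            return _normalise(before) == _normalise(after)

        original = "Item one.   Item two.\nItem three."
        chatbot = "Item one. Item two. Item three, among others."  # silently extended

        print(same_words(original, chatbot))  # False -> the model changed the text
        ```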

    • MissesAutumnRains@lemmy.blahaj.zone · 8 hours ago

      Seems pretty reasonable to use it as a grammar checker. As long as it’s not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.

    • errer@lemmy.world · 7 hours ago

      Wikipedia probably wants to sell LLM developers access to its content for training. That access is only valuable if Wikipedia remains a high-quality, slop-free source.

      I think even AI zealots agree there should be silos of fully human-generated content to train from. Training slop on slop makes the slop even worse.

    • FauxPseudo @lemmy.world · 5 hours ago

      Seems like there should be a third exception for occasions where the article is about LLM-generated text. Editors should be able to quote it when it’s appropriate for an article.

      • Zagorath@quokk.au · 2 hours ago

        That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.

        Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.