• Qwel@sopuli.xyz
    5 hours ago

    https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models

    https://en.wikipedia.org/wiki/Wikipedia:LLM-assisted_translation

    The two related “policies” are rather short; you should read them if you haven’t.

    AI shouldn’t be altering databases of knowledge, especially when it is so inconsistent.

    The policy only allows usage as an auto-translator (a task at which LLMs are no worse than the old-style auto-translators that were always allowed) and as a spellcheck/grammar check (where they are also no worse than other allowed options).

    None of those tools were previously seen as altering Wikipedia by themselves. The goal is for LLMs to be used and regarded the same way.

    To be clear, there have always been Articles for Creation submissions of clearly Google-translated text, and they have always been dismissed as slop. To get an auto-translated article accepted, you need to clean it up until all the information is correct and the grammar is good enough. This is a rather standard workflow for translations. The same thing should apply to LLMs.

    The new issue here is that LLMs can “organically” change information when asked to translate. When a classic auto-translator changes the information, it often (not always) leaves a notable mess in the grammar. LLMs insert their errors much more cleanly. This is acknowledged by both texts, and, well, the texts will change if that becomes a recurring issue.