For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.

  • Qwel@sopuli.xyz

    Most of the errors aren’t so bad, but it’s definitely nice to correct them.


    You need to know Wikipedia’s system a bit though, because ChatGPT suggests these kind of things:

    Want me to draft a crisp correction note you can paste on the article’s talk page?

    Using LLMs when interacting with other editors is “strongly frowned upon”, and you can get banned if you refuse to stop. That applies especially if you are editing many pages at once after discovering a batch of issues.