- cross-posted to:
- technology@lemmy.world
For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.
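The daily routine is simple enough to automate. Here is a minimal sketch (an assumption on my part; the experiment was done by hand in the ChatGPT interface) that pulls the day's featured article title from Wikimedia's public feed endpoint and builds the error-hunting prompt:

```python
import json
from datetime import date
from urllib.request import urlopen

# Wikimedia REST API feed endpoint; "tfa" is "Today's featured article".
FEED_URL = "https://en.wikipedia.org/api/rest_v1/feed/featured/{:%Y/%m/%d}"


def tfa_title(day: date) -> str:
    """Return the title of Wikipedia's featured article for the given day."""
    with urlopen(FEED_URL.format(day), timeout=10) as resp:
        feed = json.load(resp)
    return feed["tfa"]["title"]


def error_hunt_prompt(title: str) -> str:
    """Build a prompt asking the model to find an error in the article.
    The wording here is illustrative, not the exact prompt used."""
    return (
        f"Read the Wikipedia article '{title}' and point out any factual "
        "errors, quoting the specific sentence and explaining why it is wrong."
    )


print(error_hunt_prompt(tfa_title(date(2025, 10, 5))))
```

Each day's prompt would then be sent to the model (e.g. via the OpenAI API), with the reply checked by hand before touching the article.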
Most of the errors aren’t serious, but it’s still satisfying to correct them.
You do need to know Wikipedia’s system a bit, though, because of the kinds of edits ChatGPT suggests.
Using LLMs when interacting with other editors is “strongly frowned upon” on Wikipedia, and you can get banned if you refuse to stop, especially if you are editing a lot of pages because you have just discovered a lot of issues.