Amanda Cooper lost the election. It was a very close contest, decided by only 31 votes once preferences were distributed, but Bart Mellish did win. Google knows this: all the top results show Bart Mellish’s official pages, social media, and news and encyclopaedia articles about him.
I can only assume its AI crawler saw the first preference votes and a reasoning model decided that must mean Amanda Cooper won, because no actual written source could have reported that.
I tried the search again a few minutes later and got the correct result.


I wonder if you’d receive the correct answer if you went to Gemini directly? I tried the same thing with Lumo, which uses open source models and is usually less accurate than the major players, and it reasoned its way to the correct answer at the first time of asking. My understanding is that Google’s AI Overview is not quite the same thing as Gemini, so perhaps it is more error prone too. I haven’t used Google Search for a long time and didn’t realise the AI Overview was still this bad.
Here’s the reasoning output from Lumo:
Oof. Lumo was wrong too, in a different way. There’s no federal electorate of Aspley. The suburb of Aspley is split between Lilley and Petrie.
I guess it can make sense if you interpret those lines as referring to the federal electorate that contains the suburb of Aspley, as opposed to a federal electorate called Aspley. The final answer was fine, so we’d never have known about that possible hallucination if I hadn’t checked the reasoning output!