Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
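As a hypothetical illustration of the kind of flawed snippet described above (not from the study itself), consider a classic off-by-one bug: a model asked to complete or continue code like `last_item` below may simply reproduce the faulty index instead of correcting it.

```python
# Hypothetical buggy snippet: the kind of code a model might be asked to complete.
def last_item(items):
    # Bug: len(items) is one past the final index, so this raises IndexError.
    return items[len(items)]

# The fix a human (or an attentive model) would make:
def last_item_fixed(items):
    # Correct: the final element lives at index len(items) - 1.
    return items[len(items) - 1]
```

The point of the finding is that a completion model trained to continue text plausibly has no inherent pressure to prefer `last_item_fixed` over echoing `last_item`'s mistake.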
In my experience it will write three paragraphs about the mistake, what went wrong and how to fix it, only to then output the exact same code, or something very close to it, with the same bug. And once you get into that loop, it’s basically impossible to get out of it.
The easiest way I’ve found to get out of that loop is to get mad at the AI so it hangs up on you.
I ran into that problem too, but you can at least occasionally tell it to “change the code further” and that can work.
Often YOU have to fix its code yourself, though, since it’s far from perfect…