Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
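To make the failure mode concrete, here's a hypothetical prompt-and-completion pair of the kind such studies test: a snippet with an off-by-one bug, followed by the completion a model plausibly produces. The function names and the specific bug are illustrative, not taken from the research.

```python
# The flawed "prompt" code: the loop silently drops the last element.
def sum_scores(scores):
    total = 0
    for i in range(len(scores) - 1):  # bug: skips the final score
        total += scores[i]
    return total

# Asked to complete a sibling function, a model will often mirror
# the surrounding style, bug included, rather than correct it:
def average_scores(scores):
    total = 0
    for i in range(len(scores) - 1):  # parroted off-by-one
        total += scores[i]
    return total / len(scores)
```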
To be fair, if you give me a shit codebase and expect me to add features with no time to fix what's already broken, I will also just add more shit to the pile. Because obviously that's how you want your codebase to look.
And if you do that without ever saying you want to refactor, I likely won't stand up for you in the next round of layoffs. If I wanted to make the codebase worse, I'd use AI.