Sorry, but a study of 16 developers isn’t a big enough sample to draw any meaningful conclusions, especially given the massive range of skill and experience levels among developers.
I’m a developer and I use AI - not much, but when I think it can help, based on the suggestions it gives me since it’s integrated into Visual Studio. It doesn’t slow me down; it speeds me up. It could slow you down if you rely on it to do everything, but in that case you’re just a bad or lazy developer.
AI is a tool to use. Like with all tools, there are right ways and wrong ways and inefficient ways and all other ways to use them. You can’t say that they slow people down as a whole just because they slow some people down.
It’s a very small sample, so there may be biases. If the sample were larger and more representative of programmers in general, the findings would carry more weight. For now, this study leaves us with more questions than answers.
The bigger issue I see is that it also results in less experienced coders creating code that might work, but that they don’t understand.
And in reality it doesn’t work, or only works in very specific scenarios, so it fails with nobody around who actually wrote it to understand why it failed.
Just like Stack Overflow then haha. It’s usually either
“I copied this person’s code exactly, why doesn’t it work in my completely different codebase?”
or
“I copied this person’s code exactly and it works in mine! I don’t want to touch it in case I break it, cause I don’t get it”
haha
If you already know what you’re doing, AI generating code is redundant. If you don’t know what you’re doing, it might work for you, up to the point you’re spending all your time debugging hallucinatory code.
“If you already know what you’re doing, AI generating code is redundant.”
Nah, it can be really useful for people who do know what they’re doing, since it can handle the “Charlie work” (IASIP reference, if you don’t know it): things like unit tests and documentation, and it does them pretty damn well.
Yup, yesterday I decided it was finally time to write unit/feature tests on a project that had gotten way too far along with 0 tests. Just over an hour of babysitting Codex and now I’ve got 800 tests covering 16 models and 60+ endpoints (8,000 lines of code) in the exact code style I wanted, based on my AGENTS.md. Honestly, reading through the tests, I wouldn’t know that I didn’t write them!
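For anyone who hasn’t seen what that kind of generated endpoint test looks like, here’s a minimal sketch in pytest against a hypothetical FastAPI app with a /users route. It’s not the commenter’s actual stack or code, just the general shape these tools tend to produce: assert the status code, assert the response structure, cover the obvious validation edge case.

```python
# Minimal sketch of an AI-generated endpoint test.
# Hypothetical FastAPI app and /users route; not the project described above.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


@app.get("/users/{user_id}")
def read_user(user_id: int):
    # Stand-in handler so the sketch is self-contained.
    return {"id": user_id, "name": f"user-{user_id}"}


client = TestClient(app)


def test_read_user_returns_expected_shape():
    # Generated tests usually check status code and response structure.
    response = client.get("/users/42")
    assert response.status_code == 200
    body = response.json()
    assert body["id"] == 42
    assert "name" in body


def test_read_user_rejects_non_integer_id():
    # ...plus the obvious validation edge case (FastAPI returns 422 here).
    response = client.get("/users/not-a-number")
    assert response.status_code == 422
```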
Now, try to use AI to solve a novel problem, or even a not-so-novel one with any logical complexity, and hours can easily be wasted trying to guide it, correcting its mistakes, and then eventually rolling it all back and doing it yourself, because you were smart enough all along, just being lazy! 😬