“No Duh,” say senior developers everywhere.
The article explains that vibe-coded output is often close to functional but not quite there, requiring developers to go in and hunt down the problems, resulting in a net slowdown of development rather than a productivity gain.
If it weren’t for all the AI hype about it doing everyone’s jobs, LLMs would be widely considered an amazing advancement in human-computer interaction and human assistance. They are so much better than using a search engine to parse web forums and Stack Overflow, but that’s not going to pay back the hundreds of billions being invested in building them out. My experience is like yours - I use AI chat mainly as a huge information index, and occasionally as a helpful sounding board, but it isn’t much good beyond that.
The hallucinations (more accurately, bullshitting), and the fact that they need fresh training data while discouraging people from contributing to the very communities that produce it, make this highly debatable.
I agree that it’s certainly debatable. However, in my experience, the information extracted about, say, what might cause a strange error message in some R output has been at least as reliable as a random Stack Overflow post - and I get that answer instantly rather than after significant effort with a search engine. For esoteric problems it can often surface actual links better than a search engine, too. This, however, is merely a relative improvement, not the world-changing event AI boosters claim, and it’s one of the only use cases where AI provides a clear advantage. Generating broken code isn’t useful to me.
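To make that concrete, here’s the kind of cryptic R error I mean (an illustrative example of my own, not one from the article):

    # A function accidentally indexed like a vector - a classic head-scratcher:
    f <- function(x) x + 1
    f[1]
    #> Error in f[1] : object of type 'closure' is not subsettable
    # The message is opaque unless you know R calls functions "closures";
    # an LLM will immediately explain that f is a function, so you meant f(1).

That kind of explanation arrives in seconds, where a search engine would have me digging through forum threads to find someone who hit the same wording.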