cross-posted from: https://lemmy.world/post/44116850

The insane AI push is purely driven by fear of being left behind.

No one is actually stopping to ask whether it is all worth it.

  • leoj@piefed.zip · 4 hours ago

    I followed AI developments in the beginning, but it felt like really effective use cases were always just out of reach.

    The last time I used AI was before “agentic” AI was a thing (it was just around the corner).

    Can anyone clue me in, is AI still making forward progress? I feel like if there was a massive change or breakthrough it would be HUGE news, but I also imagine slow incremental progress could eventually build up to being a breakthrough.

    I understand that it is still way too prone to errors and hallucinations to be trusted with serious tasks, but have there been any noteworthy improvements?

    • Zak@lemmy.world · 2 hours ago

      LLM-based coding agents have become useful to the point that people are building large software projects without humans writing or reviewing code directly. The naive approach to that will result in disaster if used in a production environment, but practices to improve reliability are evolving.

      Popular opinion seems to be that Claude Opus 4.5 was the tipping point for this.

      • DireTech@sh.itjust.works · 20 minutes ago

        I like AI. I think it’s great for quick references or as a starting point, but I’ve already seen projects scrapped and restarted because a bunch of junior devs used AI with no understanding, and management gave up on them after a year in which the number of significant bugs never decreased. Take one down, feed it to the AI, two more bugs in the tracker.