When the AI bubble pops, what will remain? GPUs at fire-sale prices, skilled applied statisticians looking for work, and open-source models that already do impressive things but will grow far more impressive once optimized:

  • kescusay@lemmy.world · 2 days ago

    Venture capital drying up.

    Here’s the thing… No LLM provider’s business is making a profit. None of them. Not OpenAI. Not Anthropic. Not even Google (they’re profitable in other areas, obviously). OpenAI optimistically believes it might start being profitable in 2029.

    What’s keeping them afloat? Venture capital. And what happens when those investors decide to stop throwing good money after bad?

    BOOM.

    • very_well_lost@lemmy.world · 1 day ago

      OpenAI optimistically believes it might start being profitable in 2029.

      Which is absolutely buck wild when you consider they’ve already signed contracts to spend another trillion dollars over the next five years.

      How the fuck is a company that has $5 billion in revenue today going to grow that revenue by at minimum $995 billion by 2029? There’s just no fucking way, man…
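
      A back-of-the-envelope sketch of what those figures imply, assuming revenue would have to reach roughly $1 trillion a year by 2029, starting from the ~$5 billion cited above (the four-year horizon is an assumption, not something the thread pins down):

      ```python
      # Rough sanity check of the growth rate implied by the comment above.
      # Assumes revenue must climb from ~$5B/year to ~$1T/year by 2029;
      # the 4-year horizon is an assumption.
      current_revenue = 5e9    # ~$5 billion per year today
      target_revenue = 1e12    # ~$1 trillion per year by 2029
      years = 4

      multiple = target_revenue / current_revenue
      annual_growth = multiple ** (1 / years)

      print(f"Required revenue multiple: {multiple:.0f}x")           # 200x
      print(f"Implied annual growth: ~{annual_growth:.1f}x per year"
            f" ({(annual_growth - 1) * 100:.0f}% compounded)")       # ~3.8x, ~276%
      ```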

      • kescusay@lemmy.world · 22 hours ago

        On top of that, there’s so much AI slop all over the internet now that the training data for their models is going to get worse, not better.

        • very_well_lost@lemmy.world · 22 hours ago

          To an extent, I think that’s already happening. GPT-5 released with a huge amount of hype, but when users started playing with it, it was incredibly underwhelming, and flat-out worse than 4 in many cases… all while burning through even more tokens than ever. Definitely seems like the capabilities of this technology have hit a plateau that won’t be solved with more training.

          • kescusay@lemmy.world · 37 minutes ago

            I’m a software developer, and my company is piloting the use of LLMs via Copilot right now. All of them suck to varying degrees, but the consensus is that GPT-5 is the worst of the bunch. (To be fair, no one has tested Grok, but that’s because no one in the company wants to.)