• unpossum@sh.itjust.works
    6 days ago

Available on Ollama cloud at https://ollama.com/library/deepseek-v4-pro.

According to the article/blog, they self-report lagging frontier models (i.e., OpenAI and Anthropic) by 3–6 months. My personal experience is that GPT 5.3 codex and later models are genuinely useful in daily programming, so in a few months there should be plenty of good coding-agent options beyond Claude and codex.

    • nymnympseudonym@piefed.social
      2 days ago

      Ollama cloud

In these times I can’t fault anyone for not having enough blood to sell to buy a GPU, but… IMO the utility of an open-weight model is that you can run it locally/privately.

      • unpossum@sh.itjust.works
        2 days ago

I looked into running GLM 5.1 locally, but the cooling system alone would probably cost more than a car, so I shelved it 😅