(image from a netizen on b2 lmfao)

I personally use Kimi K2.5 the most as it’s quite well-rounded and they have a good mobile app.

My use case is extremely boring: troubleshooting game mods, searching, summarising, brainstorming, etc. I have experimented with openclaw using K2.5, which is pretty dope but very unreliable; still, it saved me a few hours of work by organizing my files.

At some point when I upgrade my computer I’m going to try to switch to local models exclusively.

  • PeeOnYou [he/him]@lemmygrad.ml · 3 days ago

    DeepSeek is kinda old now; they're supposed to release an update any day, but until then I'd probably steer clear of it, since it's outdated and gives a lot of wrong answers at this point.

    Qwen is fabulous for me though.

    • davel@lemmygrad.ml · 3 days ago

      Are LLM years even faster than dot-com years were, or am I, a dotard, slowing down?

      • Loki@lemmygrad.ml (OP) · 3 days ago

        Oh, like 10x faster at least, and that pace is basically doubling every year; there's been more AI progress in the last two months than in all of 2023.

        • DonLongSchlong@lemmygrad.ml · 2 days ago

          As someone who hasn't followed AI at all, besides reading about it while scrolling by, what does "AI progress" look like? More ways to apply it, or just "better"?

          • Loki@lemmygrad.ml (OP) · 2 days ago

            Both at the same time

            The US has been pretty dominant in software applications, while China has been dominant in physical applications (robotics and industrial automation).

            China also focuses a lot more on improving the fundamental architecture and solving the challenges that come with that, whereas the US is mostly focusing on scale.