(image from a netizen on b2 lmfao)
I personally use Kimi K2.5 the most as it’s quite well-rounded and they have a good mobile app.
My use case is extremely boring: troubleshooting game mods, searching, summarising, brainstorming, etc. I've also experimented with openclaw using K2.5, which is pretty dope but very unreliable; that said, it did save me a few hours of work by organizing my files.
At some point when I upgrade my computer I’m going to try to switch to local models exclusively.


Are LLM years even faster than dot-com years were, or am I, a dotard, slowing down?
Yeah, for sure… DeepSeek was released only a year ago and it's already way outdated.
Oh, like 10x faster at least, and the pace itself is basically doubling every year; there's been more AI progress in the last two months than in all of 2023.
As someone who didn't follow AI at all, beyond reading about it while scrolling by, what does "AI progress" look like? More application methods? Or just "better"?
Both at the same time
The US has been pretty dominant in software applications and China has been dominant in physical applications (robotics and industrial automation).
And China focuses a lot more on improving the fundamental architecture and solving the challenges that come with that, whereas the US is mostly focusing on scale.