I love how corpos can just change the rules at will.
Edit: New prices:
https://docs.github.com/en/copilot/reference/copilot-billing/models-and-pricing
And if you compare against the old pricing structure, some models' prices are increasing by 27x.
Just as open-weight models are getting good. Qwen 3.6 27B just dropped with claimed performance approaching Opus 4.6, and it can run on an M-series Mac. I tested it today on an M4 Pro with Ollama and Cline and was impressed with its reasoning, but it was slow. Going to try llama.cpp tomorrow and mess around with tweaking it for speed.
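For anyone who wants to try it, a rough sketch of the setup (the exact model tag, GGUF filename, and quant are guesses on my part; check the Ollama library and Hugging Face for the real names):

```shell
# Ollama route: pull and chat with the model
# (tag "qwen3.6:27b" is a placeholder; verify the actual tag in the Ollama library)
ollama pull qwen3.6:27b
ollama run qwen3.6:27b "Write a binary search in Python"

# llama.cpp route: run a quantized GGUF with all layers offloaded to Metal
# (-ngl 99 pushes every layer to the GPU; filename/quant are placeholders)
llama-server -m qwen3.6-27b-Q4_K_M.gguf -ngl 99 --ctx-size 8192
```

On Apple Silicon, llama.cpp's Metal backend is usually where the speed tweaking happens: quant choice (Q4_K_M vs Q5/Q6) and context size are the big levers for tokens/sec vs quality.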
https://ai.rs/ai-developer/qwen-3-6-27b-local-coding-model
AI coding agents are useful, but it’s time for the cloud-based models to chill out so we can get cheap RAM again to run our shit locally.
It’s almost like buying all the RAM so most people can only afford subscription services is the point.
Think of it like a happy little coincidence