- cross-posted to:
- technology@lemmy.zip
Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances.
An epic and still-growing thread in the Fedora forums lays out one of the goals for the next release: the Fedora AI Developer Desktop Objective. It is causing some discontent, and at least one Fedora contributor, SUSE’s Fernando Mancera, has resigned.



I think it is good to have optional support for local models that lets people use them in an offline, private, and easy way. There are a lot of non-technical folks using Linux nowadays, and many chose it for privacy and greater control over their data.
Depending on the implementation, it could hook into certain OS contexts and events to actually be helpful.
Either way, I don’t see the cat going back in the bag with regard to LLMs. That said, I run Debian everywhere except my work machine, which is Ubuntu.
Preemptive compliance with the tech fascism of the US?
What a ridiculous and spineless way to live.
In this case, the “bag” is a sucking black hole, and venture capitalists are throwing physics-defying amounts of money into it to drag the LLMs out. As soon as they stop, the “cat” goes back in the “bag”.
Local LLM models are an exception, but they are also atrocious by comparison. Most users might get some limited utility from an LLM, maybe, but it is being accommodated and foisted everywhere like it’s the invention of the mouse. It is nowhere near as paradigm-shifting, yet it is being hyped, advertised, and marketed more aggressively than any product in history. The roaring hype makes everyone think that if they don’t get on board too, they’ll be left in the dust, so now well-meaning projects are getting bloated up for it as well.
Many of us just want this technology to get the fuck away from us until it is worth using or dies already. Is that so very much to ask?
Using an LLM model that isn’t super advanced is actually quite freeing, in my opinion. The generated output is always mediocre at best, but it’s usually good enough for boilerplate and can be decent if you need to get yourself unstuck. It also isn’t good enough to lull you into just letting the LLM do all the work for you, since it makes obvious mistakes.
“It’s just good enough for some things once in a while, but is too bad to rely on in any serious way,” doesn’t sound like a great use of my electricity, but I guess I’ve wasted electricity on less. Still, doing it on purpose seems worse.
I mean, it sounds like a tool they occasionally find useful and don’t use otherwise. I’m not sure how “occasionally use a tool good enough for my purposes” is a waste. Whether it’s the most efficient application of that electricity is a different question, but without knowing their particular scenarios I can’t really compare whether other tools use less electricity for the same purpose.
(Yes, of course, “just do it all in your brain” is even more efficient, but if that’s an argument against utilities, you probably shouldn’t waste electricity on Lemmy either)
If I have a tool that consumes resources whether I use it or not, and I rarely, if ever, use it, that can be a net waste. Nothing in this world exists in a vacuum. You mentioned wasting electricity yourself, then failed to count it, for example.
Using resources does not equal wasting them. I find that this tool uses an exceptional amount of resources (electrical, cognitive, and others) to achieve a goal that can typically already be achieved with tools that are older, better, more well established, and that use dramatically fewer resources.
Burning lumber in an abandoned alley would be a more efficient resource use than some of these AI applications.
They were talking about a locally hosted LLM, weren’t they? In that case, I’d be pretty confident in saying it eats resources if and when you use it, not all the time.
Indeed, and I addressed the dramatically reduced utility of those vs. the cloud ones. So that tool uses far fewer resources than its bigger siblings, but still far more than any other local software, including the OS.
If you use it a lot and it saves you a lot of time, it may not be a waste for you, but if there are dramatically more efficient tools that cover the same needs, you could be way more efficient (less wasteful) than you are. Of course, this is only true if those alternatives exist. I don’t know your specific use case, and am not talking specifically about you.
An obvious example of what I mean:
I ask a search engine to show me pictures of flowers, and a single molecule of oil is burned to power that request, giving me my results. I then ask ChatGPT to show me pictures of flowers, and it burns a tree to provide me the exact same results. Both achieved the goal, but it is hard to argue that one isn’t wasteful by comparison to the other.
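To put the comparison above into back-of-the-envelope numbers: the sketch below uses purely illustrative, assumed per-query energy figures (they are not measurements, and real values vary widely by model and provider), just to show how a per-query cost gap compounds over ordinary daily use.

```python
# Illustrative, assumed per-query energy figures in watt-hours.
# These numbers are hypothetical placeholders, not measured values.
SEARCH_WH = 0.3   # assumed cost of one search-engine query
LLM_WH = 3.0      # assumed cost of one large-model query

queries_per_day = 50  # hypothetical usage level

search_daily = SEARCH_WH * queries_per_day  # daily energy via search
llm_daily = LLM_WH * queries_per_day        # daily energy via the LLM

ratio = llm_daily / search_daily
print(f"Under these assumptions, the LLM uses {ratio:.0f}x the energy "
      f"for the same {queries_per_day} daily queries.")
```

Whatever the real per-query figures turn out to be, the point stands that a constant multiplier on every query scales linearly with usage, so a 10x per-query gap stays a 10x gap at any volume.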
ETA: an example of passive waste. Imagine I own a lawnmower but have no lawn. I never use the lawnmower, so it is not consuming resources, per se, but it is a useless tool for me and is occupying (wasting) space. A program I never run on my computer is functionally the same.
Mind: I’m not the person running the local model.
I did say that the efficiency would be a different question neither of us can answer in this case, but I fully agree with you. I merely pointed out that a local model wouldn’t be a permanent waste of electricity.
That’s relative to how much space you have. I also have games on my disk that I haven’t played in a while, so they’re more or less wasted space. But they’re not particularly large, so I can spare a few GB for them, and if I do want to play them, I can jump in spontaneously.
I hope it will be that way. In the end, it could be the incentive to improve accessibility and UI automation tools for Wayland.