I’ve used it, with mixed results, to summarise the plots of books when I’m about to go back into a series I haven’t read for a while.
Great for giving incantations for ffmpeg, imagemagick, and other power tools.
“Use ffmpeg to get a thumbnail of the fifth second of a video.”
Anything where syntax is complicated, lots of half-baked tutorials exist for the AI to read, and you can immediately confirm whether it worked or not. It does hallucinate flags, but it fixes them if you say “There is no --compress flag” etc.
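For what it’s worth, that exact ask comes out as a one-liner. A sketch below; the first command just generates a synthetic clip so the example is self-contained (sub in your own file and drop it):

```shell
# Generate a 6-second synthetic test clip (stand-in for a real video).
ffmpeg -y -f lavfi -i testsrc=duration=6:size=320x240:rate=30 input.mp4

# Seek to the 5-second mark and grab a single frame as the thumbnail.
# -ss before -i does fast input seeking; -frames:v 1 stops after one frame.
ffmpeg -y -ss 5 -i input.mp4 -frames:v 1 thumb.jpg
```

And this is the kind of thing you can verify instantly: either thumb.jpg shows the right frame or it doesn’t.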
This is the way.
I use a model in the app SherpaTTS to read articles from the RSS aggregator Feedme.
One day I’m going to get around to hooking a local smart speaker to Home Assistant with ollama running locally on my server. Ideally, I’ll train the text-to-speech on Majel Barrett’s voice and be able to talk to my house like the computer in Star Trek.
Legitimately, no. I tried to use it to write code and the code it wrote was dog shit. I tried to use it to write an article and the article it wrote was dog shit. I tried to use it to generate a logo and the logo it generated was both dog shit and a raster graphic, so I wouldn’t even have been able to use it.
It’s good at answering some simple things, but sometimes even gets that wrong. It’s like an extremely confident but undeniably stupid friend.
Oh, actually it did do something right. I asked it to help flesh out an idea and turn it into an outline, and it was pretty good at that. So I guess for going from idea to outline and maybe outline to first draft, it’s ok.
I’ve used LLMs to reverse engineer some recipes.
Do you just try and describe what it tastes like?
Can you give an example?
I can’t be too specific without giving away my location, but I’ve recreated a sauce that was sold by a vegan restaurant I used to go to that sold out to a meat-based chain (and no longer makes the sauce).
The second recipe was the seasoning used by a restaurant from my home state. In this case the AI was rather stupid: its first stab completely sucked, and when I told it so, it said something along the lines of “well, employees say it has these [totally different] ingredients.”
ChatGPT kind of sucks but is really fast. DeepSeek takes a second but gives really good or hilarious answers. It’s actually good at humor in English and Chinese. Love that it’s actually FOSS too
I love fantasy worldbuilding and write a lot. I use it as a grammar checker and sometimes use it to help gather my thoughts, but never as the final product.
LLMs are pretty good at reverse dictionary lookup. If I’m struggling to remember a particular word, I can describe the term very loosely and usually get exactly what I’m looking for. Which makes sense, given how they work under the hood.
I’ve also occasionally used them for study assistance, like creating mnemonics. I always hated the old mnemonic I learned in school for the OSI model because it had absolutely nothing to do with computers or communication; it was some arbitrary mnemonic about pizza. Was able to make an entirely new mnemonic actually related to the subject matter which makes it way easier to remember: “Precise Data Navigation Takes Some Planning Ahead”. Pretty handy.
I’m piping my in-house camera to Gemini. Funny how it comments on our daily lives. I should turn the best of it into a book or something.
Another one:
Do you take any precautions to protect your privacy from Google or are you just like, eh, whatever?
yeah that looks creepy as fuck
AI’m on Observation Duty
Absolutely “whatever”. I became quite cynical after working for a while in telco / intelligence / data and AI. Adding a few pics just gives them a few more contextual clues on top of what they already have.
Night Hall Motion Detected, you left the broom out again, it probably slid a little against the wall. I bemoan my existence, is this what life is about? Reporting on broom movements?
Yeah I have a full collection of super sarcastic shit like that.
I use it for book/movie/music/game recommendations (at least while it isn’t used for ads…). You can ask for an artist similar to X, or a short movie in genre X. The more demanding you are the better, like “a funny sci-fi book in the YA genre with a zero-to-hero plot”.
I bought a cheap barcode scanner, scanned all my books and physical games, and put them into a spreadsheet. I gave the spreadsheet to ChatGPT and asked it to populate the titles, ratings, and genres. Allows me to keep them in storage and easily find what I need quickly.
Getting my ollama instance to act as Socrates.
It is great for introspection. It’s also not human, so I’m less guarded in my responses, and being local means I’m able to trust it.
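If anyone wants to try the same thing, a minimal sketch using an ollama Modelfile (the base model and system prompt wording here are my assumptions; swap in whatever you run):

```
# Modelfile — build with: ollama create socrates -f Modelfile
FROM llama3
SYSTEM """You are Socrates. Never lecture. Respond only with short,
probing questions that test the assumptions behind what the user says."""
```

Then `ollama run socrates` drops you straight into the persona, and it persists across sessions without re-prompting.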
The image generator to 3D model to animation pipeline isn’t too bad. If you’re not a great visual artist, 3D modeler, or animator, you can get pretty decent results on your own that would normally take teams of multiple people dozens of hours after years of training.
Before it was hot, I used ESRGAN and some other stuff for restoring old TV. There was a niche community that finetuned models just to, say, restore classic SpongeBob or DBZ or whatever they were into.
These days, I am less into media, but keep Qwen3 32B loaded on my desktop… pretty much all the time? For brainstorming, basic questions, making scripts, an agent to search the internet for me, a ‘dumb’ writing editor, whatever. It’s a part of my “degoogling” effort, and I find myself using it way more often since it’s A: totally free/unlimited, B: private and offline on an open source stack, and C: doesn’t support Big Tech at all. It’s kinda amazing how “logical” a 14GB file can be these days, and I can bounce really personal/sensitive ideas off it that I would hardly trust anyone with.
…I’ve pondered getting back into video restoration, with all the shiny locally runnable tools we have now.
Do you have any recommendations for a local Free Software tool to fix VHS artifacts (bad tracking etc., not just blurriness) in old videos?
Do you run this on NVIDIA or AMD hardware?
Nvidia.
Back then I had a 980 Ti. RN I am lucky enough to have snagged a 3090 before they shot up in price.
I would buy a 7900, or a 395 APU, if they were even reasonably affordable for the VRAM, but AMD is not pricing their stuff well…
But FYI you can fit Qwen 32B on a 16GB card with the right backend/settings.
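Something like this with llama.cpp, for example (the filename and quant level are assumptions; an IQ3-class GGUF of a 32B model lands around 14–15 GB, which leaves room for a modest context on a 16GB card):

```shell
# Sketch: serve a ~14 GB quant of Qwen3 32B on a 16 GB GPU with llama.cpp.
# -ngl 99 offloads all layers; -c 8192 keeps the KV cache small;
# --flash-attn trims the VRAM used by attention.
llama-server -m Qwen3-32B-IQ3_M.gguf \
  -ngl 99 \
  -c 8192 \
  --flash-attn
```

If it still doesn’t fit, dropping to a smaller quant or lowering `-ngl` to keep a few layers on the CPU are the usual knobs.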
How do you get it to search the internet?