- cross-posted to:
- technology@lemmy.world
*LLM assistants
And no shit, Sherlock. If you thought otherwise, you are living proof of why there should be laws protecting people from predatory snake-oil vendors like this.
No they’re not, because I don’t have conversations with AI.
I wonder how close we are to getting our details leaked to LLMs because rubes with our contact info are just dumping them on those servers. Kind of like how Facebook maintains a social graph of you even if you don’t use it.
Not a leak when it’s by design.
That's the risk of centralized services. If you're that worried about it, you should be installing a local model to run offline on your PC instead.
What’s this, you say? Does it involve running LM Studio with a GLM-4.7 Flash model? And it only costs the kilowatts for my video card? And it can only go out to the internet if I allow it to?
Your PC likely won't have a million-token context window, but that's the price you pay for being in the peasant class instead of the billionaire class.
I doubt that mine does. I am running GPT-OSS-120B and Gemma4 31B on a local llama.cpp server on my PC.
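For anyone curious what that setup looks like in practice, here is a minimal sketch of launching llama.cpp's `llama-server` bound to localhost only, so nothing leaves the machine unless you open the port yourself. The model path and context size are illustrative assumptions, not the commenter's actual settings:

```shell
# Launch llama.cpp's server, listening only on the loopback interface.
# Path and --ctx-size are placeholders; adjust for your hardware.
llama-server \
  --model "$HOME/models/gpt-oss-120b.gguf" \
  --ctx-size 32768 \
  --host 127.0.0.1 \
  --port 8080

# It exposes an OpenAI-compatible API, so a plain curl works:
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"hello"}]}'
```

Binding to `127.0.0.1` is the key detail: the model only "goes out to the internet" if you deliberately expose that port.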
Doubt it, since I've never so much as loaded one up.