

Decentralized (no abuse of power and doesn’t have a single point of failure)
There is a direct server though; is it federated? The readme doesn’t say it’s federated at all
Little bit of everything!
Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech )
Gaming (Mass Effect, Witcher, and too much Satisfactory)
Sci-fi
I live for 90s TV sitcoms




I’m sure the community will rally together to defend their community against the invaders.
Oh, you mean helping each other and being empathetic are liberal traits, and they only value looking after themselves even when their community depends on cooperation? Well, I’m sure they can each farm their own food.


nerd herd
I understood that reference!
I’ve heard positive things about Dito; if I were doing it over again, I think I’d start there.


Can confirm, I host Matrix (Synapse homeserver) and Element. Voice is a pain to get set up, but I hear there are other Matrix server implementations that handle it for you more easily. It’s a process, though. You can get text chat up in a day; voice is going to come a while after that, with a lot of tinkering.
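If it helps, the container route is the quickest way I know of to get the text-chat part going. Here’s a rough docker-compose sketch, assuming you’re deploying with containers; the ports and host paths are placeholders, and you still have to run Synapse’s one-time generate step to create homeserver.yaml in the data directory before the first start:
services:
  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    volumes:
      - ./synapse-data:/data      # homeserver.yaml and signing keys live here
    ports:
      - "8008:8008"               # put a TLS reverse proxy in front of this
  element:
    image: vectorim/element-web:latest
    restart: unless-stopped
    ports:
      - "8080:80"                 # Element is just a static web client
Voice (via a TURN server like coturn) is the part that takes the extra tinkering I mentioned.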


Even if he is, it is not mind-occupying work out there. I grew up in farm country. He had a radio on; how much do you want to bet it was tuned to the local conservative talk radio constantly?


Feel for him. Not really. But the dude got wrapped up in the hype of hating the people he hates, and didn’t even realize he was included in exactly the group he was cheering for the hatred to come to.


If you’re only at 2 nodes, then I think hostPath volumes with node selectors are what you should go with. That gets you up and running in the short term, but know that the conversion later to something like Longhorn will be a process (creating the volumes, then copying all the data over, ensuring correct user access, etc.).


So you have the classic issue of data storage on Kubernetes. By design, Kubernetes is node-agnostic: you simply have a pile of compute resources available. By using your external hard drive you’ve introduced something that must be connected to a specific node, declaring that your pod must run there and only there, because it’s the only place where your external drive is attached.
So you have some decisions to make.
First, if you want to just get it started, you can do a hostPath volume. In your volumes block you have:
volumes:
  - name: immich-volume
    hostPath:
      path: /mnt/k3s/immich-media   # or whatever your path is
The gotcha is that you can only ever run that pod on the node with that drive attached, so you need a selector on the pod spec.
You’ll need to label your node with something like kubectl label node $yourNodeName anylabelname=true, for example kubectl label node $yourNodeName localDisk=true
Then you can apply a selector to your pod like:
spec:
  nodeSelector:
    localDisk: "true"   # label values are strings, so quote it
This gets you going, but remember you’re limited to one node whenever you want data storage.
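Putting the two pieces together, a minimal pod sketch looks something like this (the image, tag, and mount path are illustrative placeholders, so check the Immich docs for the real values):
apiVersion: v1
kind: Pod
metadata:
  name: immich-server
spec:
  nodeSelector:
    localDisk: "true"                                    # matches the label you put on the node
  containers:
    - name: immich-server
      image: ghcr.io/immich-app/immich-server:release    # placeholder image/tag
      volumeMounts:
        - name: immich-volume
          mountPath: /usr/src/app/upload                 # wherever the app expects its media
  volumes:
    - name: immich-volume
      hostPath:
        path: /mnt/k3s/immich-media                      # or whatever your path is
In practice you’d wrap that in a Deployment, but the nodeSelector/hostPath pairing is the same.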
For multi-node and true clusters, you need to think about your storage needs. You will have some storage that should be local, like databases and configs. Typically you want those on the local disk attached to the node. Then you may have other media, like large files that are rarely accessed. For this you may want them on a NAS or on a file server. Think about how your data will be laid out, then think about how you may want to grow with it.
For local data like databases/configs, once you are at 3 nodes, your best bet with k3s is Longhorn. Fair warning: it is a HUGE learning curve and you will screw up multiple times, but it’s the best option for managing small (<10GB) volumes that are spread across your nodes. It manages provisioning and makes sure that your pods can access the volumes underneath, without you managing nodes specifically. It’s the best way to abstract away not only compute, but also storage.
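For what it’s worth, once Longhorn is installed it registers its own StorageClass (named longhorn by default), and from there a claim is just a normal PVC; a minimal sketch, with the name and size made up:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-postgres-data     # hypothetical claim for a database volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn     # Longhorn's default StorageClass
  resources:
    requests:
      storage: 5Gi
Longhorn then replicates that volume across your nodes, so the pod that binds the claim can land anywhere.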
For larger files like media and Linux ISOs, the best option is really NFS or object storage like MinIO. You’ll want a completely separate data storage layer that hosts large files, and then, following a guide like this, you can mount NFS shares directly into your pods. This also abstracts away storage: you don’t care what node your pod is running on, just that it connects to this store and has these files available.
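Kubernetes can also mount an NFS export straight into a pod with nothing extra installed (the nodes just need the NFS client packages); a rough sketch, with the server address, export path, and media app made up:
apiVersion: v1
kind: Pod
metadata:
  name: jellyfin
spec:
  containers:
    - name: jellyfin
      image: jellyfin/jellyfin:latest
      volumeMounts:
        - name: media
          mountPath: /media              # where the app looks for your library
  volumes:
    - name: media
      nfs:
        server: 192.168.1.50             # your NAS or file server
        path: /export/media              # the exported share
For anything fancier (dynamic provisioning, quotas), that’s where a CSI driver or provisioner from a guide comes in.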
I won’t lie, it’s a huge project. It took about 3 months of tinkering for me to get to a semi-stable state, simply because it’s such a huge jump in infrastructure, but it’s 100% worth it.


Helm has worked well for me; what’s the problem you had?


Nextcloud implements WebDAV, which you can mount as a remote with rclone.
Also, many distros have an online accounts option that does the same thing.


I’m shocked you were banned with such a calm and rational style of communication.


I do self-host my own, and I even tried my hand at building something like this myself. It runs pretty well; I’m able to have it integrate with HomeAssistant and kubectl. It can be done with consumer GPUs, I have a 4000 and it runs fine. You don’t get as much context, but it’s about minimizing what the LLM needs to know while calling agents. You have one LLM context that’s running a todo list; you start a new one that is in charge of step 1, which spins off more contexts for each subtask, etc. It’s not that each agent needs its own GPU, it’s that each agent needs its own context.


I am and I do; I have no qualms with AI if I host it myself. I let it have read access to some things, and I have one that is hooked up to my HomeAssistant and can do things like enable lighting or turn on devices. It’s all gated: I control what items I expose and what I don’t. I personally don’t want it reading my emails, but since I host it, it’s really not a big deal at all. I have one that gets the status of my servers, reads the metrics, and reports to me in the morning if there were any anomalies.
I’m really sick of the “AI is just bad because AI is bad” argument. It can be incredibly useful, IF you know its limitations and understand what is wrong with it. I don’t like corporate AI at scale for moral reasons, but running it at home has been incredibly helpful. I don’t trust it to do whatever it wants; that would be insane. I do, however, let it have read permissions on services to help me sort through piles of information that I cannot manage by myself (and I know you keep harping on it, but MCP servers and APIs also have permission structures; even if it did attempt to write something, my other services would block it and it’d be reported). When I do allow write access, it’s when I’m working directly with it, and I hit a button each time it attempts to write. Think spinning up or down containers on my cluster while I am testing, or collecting info from the internet.
AI, LLMs, agentic AI: it’s a tool. It is not the hype every AI bro thinks it is, but it is another tool in the toolbelt. To completely ignore it is on par with ignoring Photoshop when it came out, or WYSIWYG editors when they arrived for designing UIs.


Everyone keeps forgetting the “if you allow it” part. They show you what commands it’s going to run. So yes, I’m okay with it; I review everything it will do.


Yes, that’s pretty much all an MCP server is; that’s what I’m trying to explain. The AI just chooses commands from a list. Each command can be disabled or enabled. Everyone is freaking out here like it has sudo access or something, when you opt into everything it does.


If you allow it to run bash commands, it requires approval before running them.


It’s not arbitrary code in this case, it’s well-defined functions, like list emails, read email, delete email. The agentic portion only decides whether those functions should be invoked.
Now, whether they should be is up for debate. Personally I would be afraid it would delete an important email that it incorrectly marks as spam, but others may see value.


Yeah, I tried Tabby too, and they had like a mandatory “we share your code” line, so I hopped out. Like, if you’re going to do that, I might as well just use Claude.


I will never understand how new accounts/people will just barge into communities and immediately assume they know what’s best for the community, and are shocked when they’re called out
That’s too bad; that’s a hard line for me. It has to have the option of federation.
Also, it’s a direct server, so it is centralized; there’s nothing decentralized about it.