Stand up a local LFS server, or figure out a different way to store large files. I generally avoid LFS.
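If you do self-host it, pointing a repo at your own server is one config file. A minimal sketch, assuming a hypothetical endpoint at lfs.example.lan (anything that speaks the LFS batch API works):

    # repo-local .lfsconfig, committed so clones pick up the same endpoint
    git config -f .lfsconfig lfs.url "https://lfs.example.lan/myrepo"
    git add .lfsconfig
    # then track the big stuff as usual
    git lfs track "*.bin"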
Why not just run bare repos on your N100? That’s what I do. I have no need for a code forge with collab features when it’s just me pushing (minimal setup sketched below).
https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server
If you want a web viewer, use a static site git viewer like https://pgit.pico.sh/
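For reference, the whole setup is roughly this; the hostname and paths are placeholders:

    # on the server: create a bare repo to push into
    ssh git@n100 'git init --bare ~/repos/myproject.git'

    # on your machine: add it as a remote and push over plain SSH
    git remote add origin git@n100:repos/myproject.git
    git push -u origin main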
While not the same thing, I use an RSS-to-email service that hits the minimal sweet spot for me.
If you want low effort and high value, get a Synology two-bay NAS. If you want full control over the host OS, run Debian or Arch with ZFS.
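The ZFS side is a couple of commands. A sketch assuming a two-disk mirror; the pool name and disk IDs are made up, and by-id paths keep the pool stable across device renumbering:

    zpool create -o ashift=12 tank mirror \
      /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
    zfs create tank/media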
I went down a similar path as you. The Proxmox community broadly argues for keeping the host an appliance with nothing extra installed on it. But the second you need to share data, like a NAS does, the tooling is a huge pain. I couldn’t find a solution that felt right.
So my solution was to make my NAS a ZFS pool on the host. Bind mounting works for CTs but not for VMs, which is an annoying feature asymmetry, so I also installed an NFS server on the host to expose the NAS (sketch at the end of this comment).
I know that’s not what you want, but I wanted to share what I did.
That feature asymmetry between CTs and VMs basically pushed CTs out of my orchestration entirely.
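Concretely, ZFS can manage the export itself once an NFS server is installed on the host (nfs-kernel-server on Debian). The dataset name and subnet below are placeholders:

    # let ZFS manage the export
    zfs set sharenfs="rw=@192.168.1.0/24" tank/media

    # or do it by hand in /etc/exports
    /tank/media  192.168.1.0/24(rw,sync,no_subtree_check)

Then the VMs mount it like any other NFS share.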
Here’s my homelab journey: https://bower.sh/homelab
Basically, containers plus a GPU are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also don’t support being split across guests (vGPU/SR-IOV). At the end of the day it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to Arch running everything with systemd and Quadlet (example unit below).
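For anyone who hasn’t seen Quadlet: you drop a .container unit file where systemd’s generator finds it and it becomes a regular service. A minimal sketch with made-up names (user unit; run systemctl --user daemon-reload, then systemctl --user start myapp):

    # ~/.config/containers/systemd/myapp.container
    [Unit]
    Description=Example app managed by Quadlet

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80
    # %h expands to the home directory; :Z relabels for SELinux
    Volume=%h/myapp/data:/usr/share/nginx/html:Z

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target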
For anyone looking for a simple RSS-to-email digest, I recommend this service: https://pico.sh/feeds