Do you have any advice or suggestions about it?
- Hardware (what should be enough for a local PC, or VPS…)
- Software (OS [Debian, Yunohost, other…], “containerization” (Docker, virtual machines?), dashboard, management, backups, VPN tunneling…)
- “Utilities” to host (Lemmy, Peertube, Matrix, Mastodon, Actual Budget, Jellyfin, Forgejo, Invidious/Piped, local Pi-Hole, email, dedicated videogame servers like for Minecraft, SearXNG, personal file storage like Drive, AI [in the future, when I can afford a rig that can run a local model decently]…)
I’m aware it’s a lot of stuff to take on, so, do you have any advice on where to start? (how to find a cheap PC to experiment with, if not get a VPS, what to test on it, what “utilities” to try self-hosting first…)
Setup:
You’ll need a domain, and you’ll need to point the root domain at your public IP with an A record (use "@" as the host name for the root). Then you can set up a subdomain for each service with a CNAME record pointing at the root domain. So "example.com" points to "123.123.123.123" with an A record, and "nextcloud.example.com" points to "example.com" (often written as "@") with a CNAME record.
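In zone-file terms, the records above might look like this (the domain, IP, and subdomains are placeholders):

```
example.com.            A      123.123.123.123
nextcloud.example.com.  CNAME  example.com.
jellyfin.example.com.   CNAME  example.com.
```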
For your domains, I recommend Cloudflare. They’re relatively easy to set up, but more importantly, they don’t charge a markup on domains.
From your router, give your server a DHCP reservation to make sure its IP address doesn’t change, then forward ports 80 and 443 to your server.
Software:
I prefer Kubuntu LTS, cause it’s super stable.
Docker and Docker Compose for sure.
NGINX Proxy Manager should be your first docker compose stack. Use “host” network mode, so it can talk to your services. Set up your SSL certificates with this, using the DNS option. Your certificate should have two domain entries, one wildcard and one for the root. So your entries would be like “*.example.com” and “example.com”. You can do that on the same cert.
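A minimal compose file for that first stack might look like this (a sketch; the image is the official one, but the paths are up to you):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    network_mode: host   # lets NPM reach services bound to 127.0.0.1
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    restart: unless-stopped
```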
Now you can set up some docker compose stacks with your services. Choose a port range for your services, like 8201, 8202, 8203, etc. Don’t publish ports for your DB; containers in the same stack can talk to each other without any ports being published. Use regular host directories (bind mounts) for your data, not named Docker volumes.
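A sketch of a service stack following that pattern (the images are real, but the service names, port, and password are illustrative):

```yaml
services:
  app:
    image: nextcloud:latest
    ports:
      - "127.0.0.1:8201:80"   # only the app gets a published port; NPM proxies to it
    volumes:
      - ./data:/var/www/html  # regular directory, not a named volume
    depends_on:
      - db
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: changeme  # placeholder, change this
    volumes:
      - ./db:/var/lib/mysql   # no ports published for the DB
    restart: unless-stopped
```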
Set up the subdomain for each service to point to its port in NPM. The address is just “127.0.0.1”, and the port is whatever you set it up as in the Docker Compose stack.
Start with Nextcloud using the “Nextcloud” docker hub image. It says it’s for advanced users, but I’ve been using it for years. It’s super easy.
All of the stuff from linuxserver.io is great, except Nextcloud, cause you can’t run Nextcloud Office with the built in server.
Next, try Immich. It’s awesome.
Then Jellyfin, Nephele WebDAV, Wordpress, Home Assistant.
Oh and RustDesk to access it remotely. You might need to get an HDMI dummy plug to make it work without a monitor. They’re super cheap.
Oooorrr, you can access it with SSH, but that’s a little more dangerous if you don’t set it up correctly.
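If you do go the SSH route, a couple of lines in sshd_config cover the usual hardening basics (a sketch; it assumes you’ve already set up key-based login, or you’ll lock yourself out):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
```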
Don’t try Podman, it’s very difficult to get working, and simply won’t work with NPM. Use the official Docker installation method, where you set up their repositories in Kubuntu.
Every once in a while, go through your docker stacks and update them. Usually that’s just a “docker compose pull” and “docker compose up -d”, but sometimes it needs manual intervention, like with Nextcloud’s upgrade script, “occ”. For that you’ll use “docker compose exec”.
Every once in a while, run “docker system prune -a --volumes” to clean up old stuff.
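The routine update pass can be wrapped in a small loop (a sketch; it assumes one directory per stack under ~/stacks and that Docker is installed):

```shell
for stack in ~/stacks/*/; do
  (cd "$stack" && docker compose pull && docker compose up -d)
done
```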
Depends if you’re hosting something public, or something private.
For public, a webserver is a simple start. It can be anything you want it to be, but as complexity increases, so does the number of potential attack vectors, so keep that in mind if you’re considering adding things like WordPress and the like.
For private, a NAS and/or a simple game server is a simple and useful start.
As for how, there’s a million ways to do it, and I’m an old stubborn BOFH who still clings to the old ways of doing it (as in, no VMs, no containers), so I’ll defer to others for that.
While purpose-built server hardware is always nice since it comes with some useful additions, the truth is that “any” machine will do. An old discarded PC will do just fine.
Copy/paste from another comment I made a while back:
Look into docker containers in general. If I was going to start from scratch in your position this is what I’d do:
Install a Linux distribution on the computer you plan to use for self hosting. This can be anything from a raspberry pi up to a custom build but I would recommend starting with something you have physical possession of. I found Debian with the KDE plasma desktop environment to be pretty familiar coming from Windows. You could technically do most of this on Windows but imo self hosting is pretty much the only thing that a casual user would find better supported through Linux than Windows. The tools are made for people who want to do things themselves and those kinds of people tend to use Linux.
Once you have a Linux distribution installed, get docker set up. Once docker is set up, install portainer as your first docker container. The steps above require some command line work, which may or may not be intimidating for you, but once you have portainer functional you will have a GUI for docker that is easier to use than CLI for most people.
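The Portainer setup described above comes down to two commands (following Portainer’s own install docs; the port and volume name can be changed to taste):

```shell
docker volume create portainer_data
docker run -d \
  -p 9443:9443 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

After that, the web UI is at https://your-server:9443.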
From this point you can find the docker installation instructions for any service you want to run. Docker containers have all the required dependencies of a given service packaged together nicely, so deploying new services is super easy once you get the hang of it. You basically just have to define where the container should store its data and what web port you want to access the service on. The rest is preconfigured for you by the people who created the container.
There’s certainly more to be said on this topic, some of which you would likely want to look into before you deploy something your whole family will be using (storage setup and backup capability, virtual machines to segregate services, remote accessibility, security, etc). However, the above is really all you need to get to the point where you can deploy pretty much anything you’d like on your local network. The rest is more about best practices and saving yourself headaches when something breaks than it is about functionality.
+1 for docker. So much easier than managing dependencies for a ton of services
One of my first self hosting projects was a Jellyfin server. Double check, but I think the main hardware requirements are just 4GB of RAM and enough hard drive space for your videos/files!
I really like immich too. It’s like Google photos, but self hosted. It’s super fast for uploading and backing up your photos over your local network. Immich also needs at least 4GB of RAM I think
Immich is not a backup solution. You need to use a separate backup solution for the stuff in Immich. :)
Hardware: either
- use whatever you have lying around, e.g. an old laptop, or
- get a used thin client like e.g. a Dell Wyse. (passive cooling = no noise)
A Raspberry Pi is needlessly expensive for self-hosting, since you’re paying for GPIO pins and other features meant for controlling custom electronics.
Hardware (what should be enough for a local PC, or VPS…)
One of my “servers” I picked up for $15, saving from electronics “recycling”. Unless you’re transcoding video or hosting something with a hefty database that eats ram, whatever you can scrounge is generally good enough.
Software (OS [Debian, Yunohost, other…], “containerization” (Docker, virtual machines?), dashboard, management, backups, VPN tunneling…)
Debian and Proxmox are pretty much my hosts for everything. I run a bunch of containers, usually LXC, though with a few Docker containers here and there.
“Utilities” to host (Lemmy, Peertube, Matrix, Mastodon, Actual Budget, Jellyfin, Forgejo, Invidious/Piped, local Pi-Hole, email, dedicated videogame servers like for Minecraft, SearXNG, personal file storage like Drive, AI [in the future, when I can afford a rig that can run a local model decently]…)
Jellyfin doesn’t have much in the way of requirements if you’re not transcoding, and if you’ve got a relatively modern iGPU on intel, you’ve got plenty of power to transcode as well. Pi-hole is also pretty lightweight.
In terms of where to find something, I’d start by checking if there are local computer recycling companies; they will resell, and I’ve found they go cheap if you go direct. Otherwise, it depends on where you are. Craigslist occasionally has worthwhile stuff, sometimes eBay, sometimes (and I hate that it’s become so popular) Facebook Marketplace. Or maybe just see when a business is getting rid of their off-lease stuff and see if you can take something home.
At this point I’m almost exclusively tiny/mini/micro. When one dies (which happened recently), I gut the useful bits and move them somewhere else, or add them to the replacement, which is how my most recent addition, a NUC, has 32 GB of RAM rather than 8 GB, and a 500 GB M.2 drive rather than a 128 GB one.
Have fun!
Every self hoster will say start with something, like… and another will disagree.
My suggestion is look at what you have and think about what you want to do, and go from there.
I personally did not do that, so take what I said with a grain of salt. I saw ads that were super targeted at me and started to get a whole lot annoyed. This annoyance got me to buy a Pi Zero and start hosting Pihole on my network. I did something and the SD card got fried, so I got a Pi 4 to replace the thing, not yet realizing I probably just needed a new SD card. I got grumpy that some ads were getting through, so I got another Pi 4 to act as a secondary Pihole.
I now can say that I have one Pi Zero 2 running WireGuard just for DNS, and two Pi 5s running Pihole; one of them also runs my Jellyfin server and sails the high seas for me, while the other has some other services doing other things. I also have a Pi 4 running HAOS, as I try so hard to get out of proprietary systems. I plan on getting another Pi 5 to be my firewall and another to act as my blog/email server.
Pihole could be something good to start with: it’s pretty simple to set up, doesn’t depend on other services, doesn’t require hefty hardware, and has a meaningful impact.
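If you run it under Docker rather than bare metal, a minimal Pi-hole stack might look like this (a sketch; the admin-password environment variable has changed between Pi-hole versions, so check the image docs):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"   # web admin UI
    environment:
      TZ: Europe/Berlin  # placeholder timezone
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```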
Hardware is too wide to tell anything useful out of the blue, depends on what you can get your hands on (as in what’s available locally) and what you actually want to run. Used corporate desktop might be fine, raspberry pi might be good too, mini-pcs are popular and so on. All have their pros and cons.
For the OS, Proxmox is a solid choice. It has both containers and ‘full’ virtual machines as options. Debian is good too.
And for the utilities, build something you actually want to use. Pihole is pretty nice. Game servers are good to practise with if you’re into that stuff. But if you just build stuff for the sake of it, you’ll of course learn on the way, but it leaves very little to actually enjoy in what you’ve built.
I really like my Immich and Nextcloud servers, and they’re well worth my time to keep up and running. But with those there’s the additional challenge of keeping them backed up. Losing the Pihole server wouldn’t be that bad, it’s easy enough to rebuild, but losing a terabyte of photos is quite another thing.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
CGNAT: Carrier-Grade NAT
DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
DNS: Domain Name Service/System
HTTP: Hypertext Transfer Protocol, the Web
IP: Internet Protocol
LTS: Long Term Support software version
NAS: Network-Attached Storage
NAT: Network Address Translation
SSH: Secure Shell for remote terminal access
SSL: Secure Sockets Layer, for transparent encryption
TLS: Transport Layer Security, supersedes SSL
VPN: Virtual Private Network
VPS: Virtual Private Server (opposed to shared hosting)
nginx: Popular HTTP server
[Thread #288 for this comm, first seen 14th May 2026, 21:10]
You’ll need a public IP. CGNAT won’t do, unless you’ll also add Tailscale (or such) in all your client devices (which might actually be OK).
Grab an old laptop from a junk heap. Ask around for discarded computers. Possibly get an old dead Android (even one with a cracked but barely functional screen) and install Termux. Buying stuff is not green. A laptop with a 5-second battery life, a completely broken screen and a nonfunctional keyboard will be perfectly good for self-hosting: temporarily plug in an external screen and keyboard until you get SSH up and running. The rest of access will happen over SSH anyway.
Install your favorite Linux distro (I like NixOS. Debian would be a good basic yet powerful choice).
Set up a webserver (nginx is a solid choice). Serve some static files in LAN.
Make your webserver accessible to the internet. Get a TLS certificate for it, for free, from Let’s Encrypt. Host a blog. Set up Vaultwarden and import your existing passwords into it. At this point, your possibilities widen: many, many services are commonly set up behind a reverse proxy (nginx the webserver decrypts the TLS, then proxies the plaintext connection to your service, so the actual service doesn’t need the hassle of TLS encryption). For example Nextcloud.
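That reverse-proxy pattern boils down to a server block like this (a sketch; the domain and port are placeholders, and the certificate paths assume Let’s Encrypt defaults):

```nginx
server {
    listen 443 ssl;
    server_name cloud.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # TLS ends here; plaintext HTTP goes to the service on a local port
        proxy_pass http://127.0.0.1:8201;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```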
Set up a Snikket server and install the Snikket app on your smartphone (Dino works well on Linux). Tell your significant other, or family, or your closest friend that this is a fallback communication channel in case your main one ever goes down. Set up a STUN/TURN server and start video calling each other.
Email is a hassle. I self-host it, along with an authoritative DNS server and several other things, but it’s not for the faint of heart.
Hardware: what you currently have on hand to play around with.
Software: start with something simple and well documented. Not quite the driver for the learning phase, in my personal opinion.
“Utilities” as you call them: What is useful to you? What do you want to play with or need to improve your personal use case?
I don’t mean to be flippant with my answers here. Do a little introspection and determine what is genuinely useful for you to self host. I personally run Technitium, Jellyfin, a portion of the “-arr”s, Immich, and Navidrome. My family uses all of these services/utilities on a daily basis, so they are useful for me to host. I have some of the services that need CPU and GPU processing power running on my gaming PC and others running on a Lenovo ThinkCentre that I got for free from the IT department at work. They have bins of PCs slated for recycling that work perfectly fine but are “outdated”.
What are you trying to do?
In the business world, this would be your business requirements. Once you have those then you can spec the technical requirements.
Without having a target, you’ll just be all over the place.
Start with one thing, get that set up, get management for it in place, backup processes, etc.
Then do the next thing.
Iceberg made a great rec - start with Jellyfin. It’s pretty easy, but touches on all sorts of stuff like storage, backups (which media is worth backing up?), etc. Plus it has a high reward - watching what you want, when you want, from almost any device.




