Once the dump was complete, we transferred it to the new server using rsync over SSH. With 248 GB of compressed chunks, this was significantly faster than any other transfer method:
rsync -avz --progress /root/mydumper_backup/ root@NEW_SERVER:/root/mydumper_backup/
That’s a bit weird. rsync -z is compression, but the mydumper export was already compressed, so -z is a slowdown (or neutral at best). Also, in my experience rsync is about as fast as scp, which is about as fast as piping anything to a TCP port on the destination, etc. rsync doesn’t win on speed but on enabling resume, so to say…
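For already-compressed data, a variant without -z that keeps rsync’s actual advantage (resumability) might look like this sketch; NEW_SERVER is the placeholder from the quoted command:

```shell
# Sketch: drop -z (the mydumper chunks are already compressed) and add
# --partial so an interrupted transfer can resume instead of restarting.
rsync -av --progress --partial /root/mydumper_backup/ \
  root@NEW_SERVER:/root/mydumper_backup/
```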
besides this: nice read!
nice writeup
Always nice to see people moving away from enshittified US services.
Ok, so if I’m reading this correctly: they migrated from an OS and MySQL version that have received no updates for at least 2 years to MySQL 8.0, which will stop getting updates in 4 days. Also, every service is running without any containerization, there is a single database for everything, it all runs on a single host, and I didn’t read one word about a backup strategy or disk encryption. Not a single word about infrastructure as code like Ansible either, so that you can reliably recreate the system. And the whole thing is hosted in Germany for a Turkish software company - sounds like very good latency.
My personal conclusion: This system WILL fail and the guy who designed it is stuck somewhere 10-20 years in the past.
every service is running without any containerization and there is a single database for everything… and it all runs on a single host and I didn’t read one word about a backup strategy or disk encryption.
Man, a paragraph that can give someone some serious PTSD flashbacks…
The number of times I’ve had to clean up a customer’s environment after they let little Billy play corporate IT and things went boom…
Sounds like my homelab has better redundancy than these guys, and my monthly bill isn’t much different than their new one. I only pay for power and networking, since I own my own hardware. I’m colocating in my city, so my latency to home is about 1ms, and I’ve got a full mirrored server in my house. Certain files are further backed up elsewhere for proper 3-2-1 backup (+ each server running raidz2 with disk encryption). Even if my home Internet goes out, I still have full access to my files at home, and all my public services stay running in the data center. If either server fails, it’s all set up with containers so it’s easy to spin up each service somewhere else.
One thing that’s tricky to get right with disk encryption (especially with encrypted /boot) is having a redundant boot partition. I was able to hack this together by having software RAID duplicate my boot partition to a second drive. Now if I remove either OS boot drive, it falls back to the remaining one. To prevent breaking EFI boot, you need to use the 1.0 metadata format, so the RAID metadata is stored at the end of the partition, not the front where EFI reads.
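A minimal sketch of that setup with mdadm, assuming /dev/sda1 and /dev/sdb1 are the two (placeholder) boot partitions; metadata format 1.0 puts the superblock at the end, so the firmware still sees a plain filesystem at the start:

```shell
# Sketch only (placeholder devices; do not run against disks you care about).
# --metadata=1.0 stores the RAID superblock at the END of the partition,
# so EFI firmware can still read the filesystem from the front.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 /dev/sdb1
mkfs.vfat /dev/md0    # EFI system partitions are FAT
```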
Not a sysadmin, just a hobbyist: is it ok to have such a large install on bare metal and not containerized?
For example the issue of MySQL 5 being unavailable would be a non-issue with a container
For example the issue of MySQL 5 being unavailable would be a non-issue with a container
So people are careless enough to “just containerize” old, possibly security-compromised software, and you call that a “non-issue”? How about upgrading and configuring for compatibility?
Yes it’s ok, in general. It’s not the most modern or efficient way of managing infrastructure but it’s worked for decades now. It all depends on what you’re hosting, for who, and for how many people.
If you’re hosting internal company infrastructure for a relatively static number of users in a single region or a set few regions, bare metal monolithic stuff is absolutely fine. It’s when you’re an app or service company whose infrastructure is the back end for a public service that needs to scale dynamically, and you’re worried about high 24/7 uptime, and latency to end users is a global issue, that things like microservice architecture, containerization, and IaC start becoming important.
The whole containerization craze is important for microservices architecture, where you split your app into different pieces. This lets you scale different parts of your app as needed, prevents your entire app from failing just because one part of it failed, allows for lifecycle management like blue/green deployments with no downtime, and lets developers work on different parts of the app and update at a faster cadence than one big release of the entire thing every time you update one small part of it. Things like that.
You can just set up containers on your bare metal server. In fact, if you’re going to install insecure services, you definitely want to containerize them. Though tbh, you need to run really far away from whatever it is you’re doing that requires MySQL 5, or at least don’t let it be reachable on the internet; it should be network-isolated, which really limits its utility.
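As a sketch of that network isolation, assuming Docker (the container and network names are made up): an internal-only network means the legacy database is reachable by other containers on that network, but never from outside the host.

```shell
# Create an internal-only network: Docker won't route external traffic to it.
docker network create --internal legacy-net

# Run the EOL MySQL 5.7 image on that network only; note there is no -p flag,
# so no port is ever published on the host.
docker run -d --name legacy-db --network legacy-net \
  -e MYSQL_ROOT_PASSWORD=changeme mysql:5.7
```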
in fact if you’re going to install insecure services you definitely want to containerize them,
While this is true, if you’re running a platform that is root by default (looking at you, docker), you’re not shielding yourself as much as you might think you are.
If you’re running an insecure app as root, you better hope they don’t also have an exploit to get out of the container after the app is popped, otherwise you’re fucked.
Don’t quote me, but as far as I know containers can’t fix the issue if the host kernel is too old.
Wha?
You do realize there are plenty of bare metal infrastructure deployments out in the world, yeah? Being in a container solves no problems in this scenario at all.
They may not. Hence hobbyist. Relax.
what it feels like charging my phone from my laptop
Wait until you learn that Hetzner can and will take your public IP away at their own will without any warning. Happened to me.
One downtime is enough for me to never use that service again, no matter how cheap it is.
Do tell
Copy-paste since I’m lazy:
“They suspected me of serving “illegal” traffic. The server was located in Germany, and they were contacted by typical DMCA lawyers who were referencing a US District Court Order.
Hetzner blindly waved that through and just took my public IPv4 address.
At no point did they try to contact me. At no point did they ask themselves whether a US District Court Order has any validity. At no point did anyone at Hetzner explain what happened. At no point did they apologize for their obvious mistake.”
Interesting, why did it happen?
They suspected me of serving “illegal” traffic. The server was located in Germany, and they were contacted by typical DMCA lawyers who were referencing a US District Court Order.
Hetzner blindly waved that through and just took my public IPv4 address.
At no point did they try to contact me. At no point did they ask themselves whether a US District Court Order has any validity. At no point did anyone at Hetzner explain what happened. At no point did they apologize for their obvious mistake.
Me running everything on a single Postgres instance on my shitbox for 0€/month
0? My energy company says I’m using power equivalent to a family of eight. And it’s just wifey, the servers and me. I had cops here asking if I grow weed 😁
So unless you steal power, it surely isn’t close to 0 😁
This is why I’m doing my homelab on low-power processors (5825U NAS boards). It runs way cooler and is way more efficient, with the same CPU performance as my 9900KF gaming PC.
I realised I don’t need my servers online 24/7, so for me that’s Raspberry Pi and equivalents, plus powering on computers on demand.
A trick I realized a few years ago: Caddy has a module you can build it with that does WOL. So I was able to run a Caddy reverse proxy that woke up my higher powered server on demand, and let it go back to sleep when I wasn’t using it. Might be a bad idea for a database server, but for my uses it was pretty simple and effective.
… that woke up my higher powered server on demand, and let it go back to sleep when I wasn’t using it.
Get a load of this guy not using his high powered server 24/7/365.
Oh wow, that’s really cool! I do use Caddy too.
Is your service/website on both (the low-powered server and the high-powered one), or only on the high-powered one? So, is it like:
- the lower powered server knows it needs help (sounds a bit surreal to me, but perhaps it’s doable)
- or the lower powered server does not serve anything, but wakes up the high-powered one when the thing is accessed?
I guess it’s the second one, but it’s very cool indeed! That way you can really have very convenient things for free, as it’s super cheap to run any hardware on demand for a good while. I don’t mind waiting a minute or even two when I need to access something very infrequently and don’t want to run my server 24/7. I do exactly that, but I wake it up via LAN manually.
The low powered server is the Caddy server; all it does is act as a reverse proxy for everything in my house, giving it an SSL cert and doing things like WOL. The Caddy config basically just says “here’s your reverse proxy target; if you don’t get a response within one second, send a WOL packet, wait a couple of seconds, then try again”.
The only requirements are a custom build of Caddy (this can be done with a Dockerfile) and having WOL enabled on the high power server.
It means the first web request for services on the high power server might take a few seconds, but everything after that is smooth.
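The retry logic described above could be sketched as a plain script outside Caddy (BIG_SERVER and the MAC address are placeholders; wakeonlan is a common standalone tool):

```shell
# Probe the upstream with a 1-second timeout; on failure, send the
# Wake-on-LAN magic packet, wait a bit, then retry the request.
if ! curl -sf --max-time 1 http://BIG_SERVER:8096/ > /dev/null; then
  wakeonlan aa:bb:cc:dd:ee:ff   # placeholder MAC of the high-power server
  sleep 5                       # give it a moment to wake from suspend
fi
curl -sf http://BIG_SERVER:8096/
```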
Is it that the high power server takes a few seconds to boot? What’s the hardware you have there? I’m curious what the average boot time is for an average high power server. I use heavily obsolete devices for my personal servers (think DDR2-era devices with Intel Atom or sometimes Core 2 Duo, usually without even SSDs). With an SSD, my desktop devices (all DDR3-era with SATA-3 disks) boot within 20…30 seconds, which is good enough for me. I assume more modern devices would be quicker, but [single-digit, I assume] seconds sounds very good. To me, that makes this feature a no-brainer. I was wondering whether I could wait minutes for something I need occasionally to boot; seconds is just too fast. I think that delay is tolerable even for a commercial/production server, where the expectations are just different.
Oh. Okay. That comes close to 0. Mine runs 24/7, just because it would take too long to power down and up all the machines, VMs, switches etc 😁
More likely your system is more sophisticated; I have just joined the hobby, so to say. But I am sure you can go much cheaper than that with bare metal. If I really needed to host something, I’d rather buy a real server and invest in solar power instead of paying rent. I was a happy Digital Ocean customer before I realised I could do the same with a Raspberry Pi; what I paid them would have bought a couple of Pis a year. Right now, de facto one Pi can host everything I really need. I regret wasting about half a thousand on nothing. Could have bought a great NUC instead of paying for the cheapest VM for years.
Yeah, solar power is much more affordable these days. I live in a vehicle, so I have a 500W panel on the roof charging a 200Ah lithium battery. I only use a laptop and a Steam Deck, but could easily upscale. The whole system, including the Victron controllers & shunt and the 2kW inverter, came out around £700, and I’m pretty sure stuff has only got cheaper since I bought it. I have way more power than I need in summer, though there are maybe two months in winter when I have to charge everything in daylight. I could always add a small wind generator if I needed to. Renewables are totally feasible these days.
I think I’m at a family of four or five, but I’m alone with my dogs and my weed and my servers. Being able to legally self-host your own drug supply is great.
How does running a server, assuming it’s used some amount of internet bandwidth, handle residential internet speeds? If I’ve got a gig up and down, can I reasonably run like a jellyfin for my friends?
I was running it on a couple hundred Mbps up for a while, and gig up is fine
Most servers you rent are only going to have 1Gbps internet speeds too unless you’re paying extra, so if you’ve got symmetrical gigabit at home, you’re 100% good to go, except for maybe higher downtime than a datacenter. My fiber at home seems to go out for a bit overnight occasionally as they’re doing maintenance.
If I’ve got a gig up and down, can I reasonably run like a jellyfin for my friends?
Easily
I also self-host and I wouldn’t say the cost is zero. In the UK, energy costs alone mean that a 40W computer costs £8 per month to run (assuming a 28p/kWh price).
Of course, that’s assuming you run it 24/7 at that power draw, and I know my PCs draw more than that.
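The £8 figure checks out; as a quick sketch of the arithmetic (28p/kWh, 30-day month):

```shell
# 40 W drawn continuously: (40/1000) kW * 24 h * 30 days * £0.28/kWh
awk 'BEGIN { printf "£%.2f\n", 40/1000 * 24 * 30 * 0.28 }'   # prints £8.06
```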
28p per kWh? Holy smokes. I think it ranges from 5 to 8 cents per kWh here. There are a lot of fees tacked on, but those are there anyway.
Yeah just went to check and the price in my area is 22-28p/kWh, can confirm.
It’s very fucked over here.
I got solar panels; my energy bill is very low, it’s like 40€ for 3 months.
So jealous. My energy bill is like £130/m and £80 of that is electricity.
You might wanna do the math on solar panels; even if it’s a cloudy place like the UK, their lifespan and nearly zero upkeep make them great value.
How’s the math work on a shady place? I’ve got a big-ass tree above most of my roof.
No idea tbh, but from what I’ve heard it only pushes the payback point out a year or two; you’ll have to double-check me on that.
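As a back-of-the-envelope sketch using figures mentioned upthread (a ~£700 system and ~£80/month spent on electricity), and ignoring shade, winter shortfall and installation costs:

```shell
# Naive payback: system cost divided by the monthly electricity spend it offsets
awk 'BEGIN { printf "%.2f months\n", 700 / 80 }'   # prints 8.75 months
```

Even doubling that for shade and winter still lands well within the panels’ lifespan.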
I don’t own a house yet, like most people in the UK under 40, but when I do I will definitely consider it.
I have 50.000 dollars of server hardware and zero visitors. 😵😜
50 dollars and 0 cents isn’t a lot
It’s Dutch number notation.
Seems like it would have been a good moment to split the database from the many web servers and reduce the single point of failure.
And get some replication in there. Even if there’s not a single point of failure, if a DB instance ever goes tits-up, you’d better have a standby.
Source: I’ve cleaned up others’ messes where they didn’t.
I’m in the US and when I tried migrating from DO to Hetzner, I got asked to upload my passport to prove I’m not spam or something. Same experience with OVH.
Is this a thing for all European hosting companies? I ended up finding some Canadian hosting that would just let me sign up and pay like normal.
When I signed up at Hetzner, I had to go through the same anti-abuse check. However I could choose to not upload my ID and pre-pay 20€ instead. Did that and have been a happy customer since.
Lots of respectable EU hosting companies, and apparently also OVH, will ask for an ID if they think there’s a chance you’re taking the piss, so they can ban you. It’s not just anti-spam; it’s anti-abuse and for preventing non-payment. They think there was a risk involved in accepting your business (whatever that may be; obviously companies don’t divulge their criteria here), and if you go elsewhere they’re not upset about it, for that reason.
Good. I’m sick of all those DDoS and bot attacks from other cloud providers like Alibaba etc.
I never had that kind of experience with Hetzner or OVH as a European. I suppose there are extra hoops to jump through for US customers for some reason?
I’m a U.S. user and did not have this problem.
I’m European and had to do the same, so it’s based on something else.
I don’t like uploading IDs. But recently I’ve been blocking almost all datacenters across the world due to DDoS and other malicious attacks on my websites. So I don’t think it’s a bad idea for keeping the web better. It’s a mess today due to all those cheap cloud providers.
Is this a thing for all European hosting companies?
Absolutely not. At least not in Europe.
Have you tried netcup as well?
Netcup was the one I had the most problems with years ago over identity checks. Last year when I signed up again, they had put a simpler system in place: you just show your face for some photos along with your identity card, and it checks whether they match. So, an external identity provider. Simpler than figuring out how to upload copies by email with PGP (which they support and have documentation for).
It’s so weird. Where from? I never had any such requirements with any provider, even when I, from Europe, bought something from abroad.
I’m in Europe. If you enter a name that may seem a bit non-standard, they will fire off that verification immediately. I think I tried Hetzner before and got the same thing. On OVH there was no problem for the little time I tried it. Anyway, they all always want your full physical address, name and a bunch of personal details.
Let’s see! 🤞
Your order will be checked by one of our employees shortly. You will then receive further information on the status of your order by e-mail.
I love how they censored the title on the orange site; it has no price in it.