A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
A trick I realized a few years ago: Caddy has a module you can build it with that does WOL (Wake-on-LAN). So I was able to run a Caddy reverse proxy that woke up my higher powered server on demand, and let it go back to sleep when I wasn’t using it. Might be a bad idea for a database server, but for my uses it was pretty simple and effective.
Is it that your service/website is on both (the low powered server and the high powered one), or only on the high powered one? So, it’s like:
- the lower powered server knows it needs help (sounds a bit surreal to me, but perhaps it’s doable), or
- the lower powered server does not serve anything itself, but wakes up the high powered one when the thing is accessed?
I guess it’s the 2nd thing, but it’s very cool indeed! That way you can really have very convenient things practically for free, as it’s super cheap to run any hardware for a short while on demand. I don’t mind waiting a minute or even two when I need to access something very infrequently and don’t want to run my server 24/7. I do exactly that, but I wake it up via Wake-on-LAN manually.
The low powered server is the Caddy server; all it does is act as a reverse proxy for everything in my house, giving it an SSL cert and doing things like WOL. The Caddy config basically just says: “Here’s your reverse proxy target; if you don’t get a response within one second, send a WOL packet, wait a couple of seconds, then try again.”
The only requirement is to do a custom build of Caddy (this is done with a Dockerfile), and to have WOL enabled on the high power server.
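For reference, the custom build usually follows the standard two-stage xcaddy pattern. The exact module path below (github.com/dulli/caddy-wol) is an assumption — substitute whichever WOL module the original setup actually uses:

```dockerfile
# Build stage: compile Caddy with a Wake-on-LAN module.
# The module path is an assumption, not confirmed by the commenter.
FROM caddy:builder AS builder
RUN xcaddy build --with github.com/dulli/caddy-wol

# Runtime stage: drop the custom binary into the stock image.
FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```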
It means the first web request for services on the high power server might take a few seconds, but everything after that is smooth.
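The probe-then-wake behaviour described above can be sketched in plain Python. This is not the Caddy module’s actual code — the function names, timings, and the magic-packet helper are my own illustration — but the packet format itself is standard: 6 bytes of 0xFF followed by the target MAC repeated 16 times, broadcast over UDP.

```python
import socket
import time

def magic_packet(mac: str) -> bytes:
    """A WOL magic packet: 6 bytes of 0xFF, then the 6-byte MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    """Probe the upstream with a short TCP connect, like the proxy's 1-second check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def ensure_awake(host: str, port: int, mac: str,
                 wait: float = 3.0, retries: int = 10) -> bool:
    """Probe; if the upstream is down, send WOL, wait a couple of seconds, retry."""
    for _ in range(retries):
        if is_up(host, port):
            return True
        send_wol(mac)
        time.sleep(wait)
    return False
```

The retry loop is what makes the first request slow and every later one fast: once the box is awake, `is_up` succeeds on the first probe and no WOL packet is sent.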
Is it that the high power server takes a few seconds to boot? What’s the hardware you have there? I’m curious what the average boot time for an average high power server is. I use heavily obsolete devices for my personal servers (think DDR2-era devices with Intel Atom or sometimes Core 2 Duo, usually without even SSDs). With an SSD, my desktop devices (all DDR3-era with SATA-3 disks) boot within 20…30 seconds, which is good enough for me. I assume more modern devices would be quicker, but [single-digit, I assume] seconds sounds very good. To me, that makes this feature a no-brainer. I was wondering whether I could wait minutes for something I need only occasionally; seconds is more than fast enough. I think that delay is tolerable even for a commercial / production server, where the expectations are just different.
More likely your system is more sophisticated; I have only just joined the hobby, so to say. But I am sure you can go much cheaper than that with bare metal. If I really needed to host something, I’d rather buy a real server and invest in solar power instead of paying rent. I was a happy DigitalOcean customer before I realised I could do the same with a Raspberry Pi; what I paid them would have bought a couple of Pis a year. Right now, de facto, one Pi can host everything I really need. I regret wasting about half a thousand on nothing. I could have bought a great NUC instead of paying for the cheapest VM for years.
Yeah, solar power is much more affordable these days. I live in a vehicle, so I have a 500 W panel on the roof charging a 200 Ah lithium battery. I only use a laptop and a Steam Deck, but could easily scale up. The whole system, including the Victron controllers and shunt and the 2 kW inverter, came out at around £700, and I’m pretty sure the stuff has only got cheaper since I bought it. I have way more power than I need in summer, though there are maybe two months in winter when I have to charge everything in daylight. I could always add a small wind generator if I needed to. Renewables are totally feasible these days.
I realised I don’t need my servers being online 24/7, so for me that’s Raspberry Pi and equivalents, plus powering on computers on demand.
Get a load of this guy not using his high powered server 24/7/365.
Oh wow, that’s really cool! I do use Caddy too.
Oh. Okay. That comes close to 0. Mine runs 24/7, just because it would take too long to power down and up all the machines, VMs, switches etc. 😁