A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
Not a sysadmin, just a hobbyist: is it ok to have such a large install on bare metal and not containerized?
For example, the issue of MySQL 5 being unavailable would be a non-issue with a container.
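To make that concrete, here's a minimal sketch of what they likely mean; the image tag, data path, and password are placeholders, not anything from the article:

```
# pull and run the last MySQL 5.7 image, pinned, with data kept on the host
# (path and password here are placeholders)
docker run -d --name mysql57 \
  -e MYSQL_ROOT_PASSWORD=change_me \
  -v /srv/mysql57-data:/var/lib/mysql \
  -p 127.0.0.1:3306:3306 \
  mysql:5.7
```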
So people careless enough to “just container it” for old, possibly security-compromised software - you call that a “non-issue”? How about upgrading and configuring for compatibility?
Yes it’s ok, in general. It’s not the most modern or efficient way of managing infrastructure but it’s worked for decades now. It all depends on what you’re hosting, for who, and for how many people.
If you're hosting internal company infrastructure for a relatively static number of users in a single region or a small set of regions, bare metal monolithic stuff is absolutely fine. It's when you're an app or service company and your infrastructure is the back end for a public service that needs to scale dynamically, you're worried about high 24/7 uptime, and latency to end users is a global issue, that things like microservice architecture, containerization, and IaC start becoming important.
The whole containerization craze is important for microservices architecture, where you split your app into different pieces. This lets you scale different parts of your app as needed, prevents your entire app from failing just because one part of it failed, allows for lifecycle management like blue/green deployments with no downtime, and lets developers work on different parts of the app and ship updates at a faster cadence instead of one big release for the entire thing every time one small part changes, things like that.
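For instance, scaling just one piece of a composed app independently; a sketch, and the `api` service name is made up:

```
# scale only the hypothetical "api" service, leave the other services alone
docker compose up -d --scale api=3
docker compose ps   # shows three api containers alongside everything else
```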
You can just set up containers on your bare metal server. In fact, if you're going to install insecure services you definitely want to containerize them, though tbh you need to run really far away from whatever it is you're doing that requires MySQL 5, or at least don't let it be reachable on the internet; it should be network-isolated, which really limits its utility.
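A rough sketch of that kind of isolation with Docker; the network name, container name, and password are placeholders:

```
# create an internal-only network: no route in from or out to the internet
docker network create --internal legacy_net

# attach the legacy MySQL 5.7 container to it; only other containers on
# legacy_net can reach it
docker run -d --name legacy-mysql --network legacy_net \
  -e MYSQL_ROOT_PASSWORD=change_me mysql:5.7
```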
While this is true, if you’re running a platform that is root by default (looking at you, docker), you’re not shielding yourself as much as you might think you are.
If you’re running an insecure app as root, you better hope they don’t also have an exploit to get out of the container after the app is popped, otherwise you’re fucked.
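One way to reduce that exposure, assuming a reasonably current Docker install, is to avoid running the workload as root in the first place, or to run the daemon rootless; a sketch, with a placeholder image name:

```
# drop root inside the container and strip capabilities
# ("some/insecure-app" is a placeholder image)
docker run -d --user 1000:1000 --cap-drop ALL --read-only some/insecure-app

# or run the daemon itself rootless, so a container escape lands in an
# unprivileged user account instead of root on the host
dockerd-rootless-setuptool.sh install
```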
Don’t quote me, but as far as I know containers can’t fix the issue if the host kernel is too old.
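Containers do share the host's kernel, which is easy to check; a quick sketch:

```
# the kernel seen inside a container is the host's kernel
uname -r                           # host kernel version
docker run --rm alpine uname -r    # same version printed from inside the container
```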
Wha?
You do realize there are plenty of bare metal infrastructure deployments out in the world, yeah? Being in a container solves no problems in this scenario at all.
They may not. Hence hobbyist. Relax.