I’m a retired Unix admin. It was my job from the early '90s until the mid '10s, and I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home, even though I have a decent understanding of how it works: after I stopped being a sysadmin in the mid '10s I still worked for a technology company, and did plenty of “interesting” reading and training.
It seems that more and more of the stuff I want to run at home is delivered Docker-first, and I have to really go out of my way to find a non-Docker install.
I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?
Saves time, minimal compatibility issues, portability, and you can update with two commands. There’s really no reason not to use Docker.
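For example, for a service defined in a docker-compose file, those two commands are typically:

docker-compose pull
docker-compose up -d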
But I can’t really tinker IN the Docker image, right? It’s maintained elsewhere and I just get what I get, but with way less tinkering? Do I have control over the amount/percentage of resources a container uses? And could I just freeze a container, move it to another physical server and continue it there? Would learning everything about Docker be worth the time for “just” the 10 VMs I’d replace in the long run?
You can tinker inside a running container in a variety of ways, but make sure to preserve your state outside the container in some way:
docker exec -it containerName /bin/bash
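The usual way to keep state outside is a bind mount or named volume; a hypothetical example (image name and paths are placeholders):

# keep the app's data on the host so it survives container rebuilds
docker run -d --name myapp -v /srv/myapp-data:/data example/myapp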
Yes, you can set a variety of resource constraints, including but not limited to processor and memory utilization.
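For example, with real docker run flags and illustrative values:

# cap the container at 1.5 CPUs and 512 MB of RAM
docker run -d --cpus="1.5" --memory="512m" example/myapp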
There’s no reason to “freeze” a container, but if your state is in a bind mount or volume, you can destroy the container, migrate your data, and resume it with a run command or docker-compose file. Different terminology and concept, but same result.
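A rough sketch of that migration, reusing the hypothetical bind mount from above:

# on the old host: stop the container and copy its data over
docker stop myapp
rsync -a /srv/myapp-data/ newhost:/srv/myapp-data/
# on the new host: start a container from the same image on the copied data
docker run -d --name myapp -v /srv/myapp-data:/data example/myapp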
It may be worth it if you want to free up overhead used by virtual machines on your host, store your state more centrally, and/or represent your infrastructure as a docker-compose file or set of docker-compose files.
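A minimal docker-compose.yml along those lines might look like this (image and path are placeholders again):

version: "3"
services:
  myapp:
    image: example/myapp
    restart: unless-stopped
    volumes:
      - /srv/myapp-data:/data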
Hm. That doesn’t really sound bad. Thanks man, I guess I will take some time to read into it. Currently on Proxmox, but AFAIK it does containers too.
It’s really not! I migrated rapidly from orchestrating services with Vagrant and virtual machines to Docker just because of how much more efficient it is.
Granted, it’s a different tool to learn and takes time, but I feel like the tradeoff was well worth it in my case.
I also further orchestrate my containers using Ansible, but that’s not entirely necessary for everyone.
I only use like 10 VMs, so I guess there’s no need for overkill with additional stuff. Though I’d like a GUI; there probably is one for Docker? I once tested a complete OS built around Docker (forgot the name), but it seemed very unfriendly and overly convoluted.
There’s a container web UI called Portainer, but I’ve never used it. It may be what you’re looking for.
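I’ve never run it, but its docs suggest it runs as a container itself, something like this (untested; ports and tags may have changed):

docker run -d --name portainer -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce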
I also use a container called Watchtower to automatically update my services. Granted there’s some risk there, but I wrote a script for backup snapshots in case I need to revert, and Docker makes that easy with image tags.
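The snapshot part is just tagging the image you’re on before an update so you can roll back; a sketch (tag name is whatever your script generates):

# snapshot the current image under a backup tag
docker tag example/myapp:latest example/myapp:pre-update
# to revert, run the container from the backup tag
docker run -d --name myapp example/myapp:pre-update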
There’s another container called Autoheal that will restart containers with failed healthchecks. (Not every container has a built-in healthcheck, but they’re easy to add with a custom Dockerfile or a docker-compose file.)
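For example, a Dockerfile healthcheck might look like this (the endpoint is hypothetical, and it assumes curl exists in the image):

FROM example/myapp
# mark the container unhealthy if the app stops answering
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1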
Thanks for the tips! But did I get that right: a container can have access to other containers?
The Docker client talks to the Docker daemon over a UNIX socket. If you mount that socket into a container that has a Docker client, that container can communicate with the host’s Docker instance.
It’s entirely optional.
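Watchtower is an example of exactly that; a typical invocation (image name from its docs) is:

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower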