Many of the posts I read here are about Docker. Is anybody using Kubernetes to manage their self-hosted stuff? For those who’ve tried it and gone back to Docker, why?
I’m on my 3rd rebuild of a K8s cluster after learning from things I’d done wrong and wanting to start fresh. When I was enhancing my Docker setup and deciding between K8s and Docker Swarm, I went with K8s for the learning opportunities and because it could help me at work.
What’s your story?
I run a 2 node k3s cluster. There are a few small advantages over docker swarm, built-in network policies to lock down my VPN/Torrent pod being the main one.
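For the curious, a policy like that is roughly this shape. This is just a sketch with placeholder labels and a placeholder CIDR, not my actual config: it denies everything to/from the torrent pod and only allows egress to the VPN gateway.

```yaml
# Rough sketch, not my real config: default-deny for the torrent pod,
# with egress allowed only to a (placeholder) VPN gateway address.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: torrent-lockdown
  namespace: media
spec:
  podSelector:
    matchLabels:
      app: torrent
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.2.0.1/32   # VPN gateway only (placeholder)
```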
Other than that, writing Kubernetes YAML is a lot more verbose than docker-compose. Helm does make it bearable, though.
Due to real life, my migration to the cluster is going really slowly, but the goal is to move all my services over.
It’s not “better” than compose but I like it and it’s nice to have worked with it.
I manage something like 200 servers in Google Cloud k8s, but I don’t think I’d do that for home use. The core purpose is to manage multiple servers and assign processes between them: auto-scaling, a cluster-internal network, and so on. Running docker containers for single-instance apps for personal use doesn’t require this kind of complexity.
My NAS software has a docker thing just built into it. I can upload or specify a package and it just runs it on the local hardware. If you have a Linux shell, I guess all you really have to do is run
dockerd
to start the daemon, make sure your network config allows connections, and upload your docker containers to it for running.

My thinking is the same. I see lots of k8s mentions on here and from coworkers, but at home all I use is docker and VMs because I don’t want all that complexity I have to deal with at work.
I do AKS. I can’t say love is the right word for it. Lol
AKS is a shame. Most of azure, actually. I do my best to find ways around the insanity but it always seems to leak back in with something insane they chose to do for whatever Microsoft reason they have.
The Lemmy instance I’m speaking from right now is running in my k8s cluster.
Here’s a slightly different story: I run OpenBSD on 2 bare-metal machines in 2 different physical locations. I used k8s at work for a bit until I steered my career more towards programming. Having k8s knowledge handy doesn’t really help me so much now.
On OpenBSD there is no Kubernetes. Because I’ve got just two hosts, I’ve managed them with plain SSH and the default init system for 5+ years without any problems.
I am using Unraid to run docker, but I want to use k3s (again) to turn some old laptops I have lying around into a little commodity-hardware cluster.
I use k8s at work and have built a k8s cluster in my homelab… but I did not like it. I tore it down, and currently using podman, and don’t think I would go back to k8s (though I would definitely use docker as an alternative to podman and would probably even recommend it over podman for beginners even though I’ve settled on podman for myself).
- K8s itself is quite resource-hungry, especially on RAM. My homelab is built on old/junk hardware from retired workstations, and I don’t want the kubelet itself sucking up half my RAM. Things like k3s help with this considerably, but that’s not quite k8s either. If I’m going to start trimming off the parts of k8s I don’t need, I end up going all the way to single-node podman/docker, not the halfway point that is k3s.
- If you don’t use hostNetworking, the k8s model where traffic only routes within the cluster except at the egress is pure overhead. It’s totally necessary when you have a thousand engineers slinging services around your cluster, but there’s no benefit to this level of rigor in service management in a homelab. Here again, the networking in podman/docker is more straightforward and maps better to the stuff I want to do in my homelab.
- Podman accepts a subset of k8s resource YAML as a docker-compose-like config interface, so I can reuse my familiarity with k8s configs in my podman setup (rough sketch below).
Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraints k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively, but I don’t have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s’ solutions to them is annoying there.
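To illustrate that last bullet (a generic sketch, not my actual setup): podman can take a plain Kubernetes Pod manifest and run it directly with podman kube play (podman play kube on older versions).

```yaml
# pod.yaml - a plain Kubernetes Pod spec; run it with:
#   podman kube play pod.yaml
# The image and ports here are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: whoami
spec:
  containers:
    - name: whoami
      image: docker.io/traefik/whoami:latest
      ports:
        - containerPort: 80
          hostPort: 8080
```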
I’d suggest Podman over docker if someone is starting fresh. I like Podman running as rootless, but moving an existing docker to Podman was a pain. Since the initial docker setup was also a pain, I’d rather have only done it once :/
For me, K8s only makes sense at larger scale (in terms of volume of traffic and users). Docker / Podman is sufficient to self-host something small.
I like the concept, but I hate the configuration schema and the tooling, which is all needlessly obtuse (e.g. Helm).
Helm is one of the reasons I became interested in Kubernetes. I really like the idea of a package where all I have to do is provide my preferences in a values file. Before swarm was mature, I was managing my containers with complicated shell scripts to bring stuff up in the right order and it became fragile and unmaintainable.
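Just to show the shape of it (a made-up example; the chart and keys here are hypothetical), the values file really is just your preferences and nothing else:

```yaml
# values.yaml - hypothetical example; real keys depend on the chart you use
replicaCount: 1
image:
  tag: "1.2.3"
persistence:
  enabled: true
  size: 5Gi
ingress:
  enabled: true
  host: app.example.home
```

Then something like helm install myapp somerepo/somechart -f values.yaml and the chart renders all the verbose Kubernetes objects for you.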
I love kubernetes. At the start of the year I installed k3s on my VPS and moved over all my services. It was a great learning opportunity that also helped immensely with my job.
It works just as well as my old docker compose setup, and I love how everything is contained in one place in the manifests. I don’t need to log in to the server and issue docker commands anymore (or write scripts / CI stages that do so for me).
Are most of your services just a single pod? Or do you actually have them scaled? How do you then handle non-cloud-native software?
Nomad all the way. K8s is so bloated. Docker swarm can only do docker. Nomad can do basically anything.
It’s a damn shame it’s moving away from a free open source license. I just switched my lab over to Nomad and Consul last year and it has been incredibly smooth sailing.
Nomad is a breath of fresh air after working with k8s professionally.
Don’t get me wrong, love k8s, but it’s a bit much (until you need it)
I’ve been reading into k3s out of curiosity, which as I understand is supposed to be one of the simpler ones, and even as someone who works as a developer and maintains a small homelab, it just makes me feel utterly clueless lol. Which is to say, I’ll definitely be giving Nomad a good look.
Oh and if you do happen to have any other more newbie friendly suggestions, I’d love to hear about them!
There are dozens of us!
Seriously though I changed to nomad/consul/gluster and it’s been wonderful. I still have some other things running on my nas software like Jellyfin and audiobookshelf, but that’s just for performance and simplicity.
I was a bit put off by HashiCorp’s license change, but I don’t think I’m switching back to k3s anytime soon. Nomad is just so nice and easy.
I was looking into converting my docker services into a cluster to get high availability and to learn it for work, but while investigating it, I read that kubernetes is actually meant for scalability and just a single service per cluster.
Also read that docker swarm is actually what is recommended for my homelab use case. So I’m right now on my way to convert everything to docker stacks. What do you think?
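From what I’ve read so far, the conversion is mostly adding a deploy: section to the existing compose files and running docker stack deploy, roughly like this (the service and image are just placeholders):

```yaml
# stack.yaml - rough sketch of a swarm stack; deploy it with:
#   docker stack deploy -c stack.yaml mystack
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
```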
I’m not sure what you mean by that.
It provides high availability if you have multiple nodes and pods.
Also what do you mean by single service per cluster? Because that’s not the idea at all.
Of course high availability always requires multiple nodes.
It’s just that while choosing how to set up my cluster I looked at several options (proxmox, swarm, kubernetes…) and I noticed that kubernetes is generally meant for bigger deployments.
I only need a single replica for each of my containers and they can all run on a single node, so kubernetes is overkill just to get high availability for my use case.
Kubernetes is useful if you have gone full cattle over pets, and that is very uncommon in home setups. If you only own one or two small machines you cannot destroy infra easily in a “cattle” way, and the bloat that comes with Kubernetes doesn’t help you either.
In homelabs and home servers the pros of Kubernetes (high availability, auto-scaling, GitOps integrations, and so on) are not very useful. Why would you need autoscaling and HA for an SFTP server used only by you? Instead you write a docker-compose.yml and call it a day.
This mostly, I haven’t seen a compelling reason to leave my docker setup.
I think the biggest reasons for me have been growth and professional development. I started my home cluster 8 years ago as a single node of basically just running the hack/ scripts on my Linux desktop. I’ve been able to grow that same cluster to 6 hosts as I’ve replaced desktops and as I got a bit into the used enterprise server scene. I’ve replaced multiple routers and moved behind cloudflare, added a private CA a few times, added solid persistence with rook+ceph, and built my ideal telemetry stack, added velero backups into Backblaze b2, and probably a lot more I’m not thinking of.
That whole time, I’ve had to do almost zero maintenance or upgrades on the side projects I’ve built over the years, or on the self-hosted services I’ve run, if you ignore the day or so a year I’ve spent cursing my propensity to upgrade a tad too early and hit snags. Even then, I’ve just about always been able to resolve them pretty quickly and have learned even more from those times.
And on top of that, I get to take a lot of that expertise to work where it happens to pay quite well. And I’ve spent some time working towards building the knowledge into a side gig. Maybe someday that’ll pay the bills too.
One line from your comment struck a chord: the part about maintenance and upgrades. I feel like I get stuff set up and working, go about my life, and then a failure happens at the most inopportune moment. Mostly the failures come when I have a few hours free and decide to upgrade the OS, and then everything breaks, the dependencies fall apart, and some feature is no longer supported. That’s why I started looking at K8s, so I could just roll back until I have time to deal with it.
This right here
The one exception to this is if you’re using your homelab to learn kubernetes.
That was the only time I used K8s and k3s on my homelab.
And for anything that I do want to set up in a HA/cattle kind of way, I use Docker Swarm, as it feels like a more comfortable extension of docker compose.
While you’re probably right overall, there are many good reasons to use k8s. The API provides all sorts of benefits: kubectl, k9s, and other operational UIs; good deployment models and tools like Argo; loads of Helm charts that are (theoretically) ready to use.
No, those things aren’t free. There’s a lot of overhead to running k8s.
I run k3s and all my stuff runs in it; no need to deal with docker anymore.
How did you write your templates? Did you use Kompose to translate from Docker compose files, or did you write them from scratch?
Could you list some of your “stuffs” that you run on your k3s? I’m curious.
Oh, it’s not that much: I run AdGuard DNS with adblocking, SearXNG as my search engine, and Vaultwarden as my password manager. All combined with Argo CD as the GitOps engine, nginx ingress with cert-manager for Let’s Encrypt certificates, Longhorn as the storage layer, and MetalLB as the load balancer solution. I’m planning to completely replace my current setup (an old Sandy Bridge-powered HP MicroServer) with a Turing Pi 2 cluster board with 4 RPi4 compute modules as soon as they get cheaper.
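To give a flavour of how those pieces fit together (a generic example, not my exact manifests; the email is a placeholder): a cert-manager ClusterIssuer answers Let’s Encrypt HTTP-01 challenges through the nginx ingress, and any Ingress annotated with that issuer gets a certificate automatically.

```yaml
# Generic ClusterIssuer sketch (placeholder email). Ingresses annotated with
#   cert-manager.io/cluster-issuer: letsencrypt-prod
# then get their certificates issued and renewed automatically.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```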
Wow, you’re self-hosting a password manager! Aren’t you scared of something going wrong?
I’m also running Adguard as my DNS-level adblocker on my Pi 3. Feels way more content than Pihole.
I’m not very familiar with kubernetes or k3s but I thought it was a way to manage docker containers. Is that not the case? I’m considering deploying a k3s cluster in my proxmox environment to test it out.
You can use kubernetes with any OCI-compliant container runtime.
So if you don’t want/need to install the docker program, you can go with containerd.
You may be getting hung up on the wording. “Docker” containers are OCI containers, so k3s is running the containers you’re familiar with, but docker the app and the company are not involved.
Kubernetes is abbreviated K8s (because there are 8 letters between the “k” and the “s”). K3s is a “lite” version. Generally speaking, kubernetes manages your containers: you basically tell K8s what the state should be and it does what it needs to do to get the environment as you’ve declared. It’ll check and start or restart services, and start containers on a node that can run them (like ensuring enough RAM is available). There’s a lot more, but that’s the general idea.
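A tiny hypothetical example of that declare-the-state idea: you say you want two replicas of some image, and K8s keeps it that way, restarting or rescheduling pods onto nodes with enough free resources as needed.

```yaml
# Hypothetical Deployment: declare two replicas and k8s keeps them running,
# rescheduling onto a node with enough free RAM if one dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: docker.io/traefik/whoami:latest
          resources:
            requests:
              memory: "32Mi"
```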
I feel like it took me quite a while to get the hang of Docker, and Kubernetes at first glance seems that much more daunting! Hopefully one day I can break it down into smaller pieces so I can get started with it!
I’ve spent the last two weeks getting a k3s cluster working and I’ve had nothing but problems, but it has been a great catalyst for learning new tools like ansible and load balancers. I finally got the cluster working last night. If anyone else is having weird issues with the cluster timing out: etcd needs fast storage. Moving my VMs from my spinning rust to a cheap SSD fixed all my problems.