
Homelab. Part 5: K3S, ArgoCD, Helm.

I hold a CKA. The Linux Foundation will tell you to practice on kubeadm. They're not wrong — for the exam, that's the right call. For everything else, I'd argue differently.

The honest reason I passed is this homelab. Every job I've had came with constraints: locked-down hypervisors, underpowered workstations, clusters I could observe but not break. The kind of environment where you learn to be careful, not to be capable. At home, none of that. Full control, real workloads, and the freedom to make mistakes that actually teach something. K3S made that possible on hardware that would have buckled under a full kubeadm setup. Single binary, minimal footprint, production-grade Kubernetes under the hood. The simplifications it makes are well-documented and largely irrelevant for what a homelab needs to do.

The cluster runs as three nodes on Proxmox — one control plane, two workers, each in its own VM. Not bare metal, not cloud, but a fully functional multi-node setup on consumer hardware. It works well under normal conditions. Restarting all three VMs simultaneously is another story — the H2+ feels it, every other service on the host feels it, and the cluster takes its time coming back up. It's a known limitation. At some point the hardware needs to grow to match the ambition. That day will come.
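For reference, bootstrapping that topology with K3S really is a handful of commands. A rough sketch using the upstream install script — the server IP and token are placeholders, not values from this cluster:

```
# Control-plane VM: install the K3S server (single binary)
curl -sfL https://get.k3s.io | sh -

# The join token is generated on the server
sudo cat /var/lib/rancher/k3s/server/node-token

# Worker VMs: join as agents, pointing at the control plane
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 \
    K3S_TOKEN=<node-token> sh -
```

That's the whole bootstrap. Everything after this point is Git commits.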

GitOps with ArgoCD.

ArgoCD manages the cluster state. Applications are declared in a Git repository — Gitea, self-hosted — and ArgoCD reconciles the live state against it continuously. No manual kubectl apply in production, no configuration drift, no "what did I change last Tuesday" problems. The repo is the truth. That's the deal.

The structure is straightforward: one directory per application, each with its Helm values and any necessary overrides. ArgoCD picks it up, deploys it, watches it. New application means a new directory and a commit. That's the entire workflow.
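Concretely, each application is an ArgoCD Application resource pointing at its directory in the repo. A hypothetical example — the repo URL, names, and namespaces are illustrative, not the actual config:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wikijs
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitea.example.lan/homelab/apps.git
    targetRevision: HEAD
    path: wikijs            # one directory per application
  destination:
    server: https://kubernetes.default.svc
    namespace: apps
  syncPolicy:
    automated:
      prune: true           # delete resources removed from Git
      selfHeal: true        # revert manual drift back to Git state
```

With `automated` sync, `prune`, and `selfHeal` enabled, the repo is the truth in the strictest sense: anything changed by hand gets reverted on the next reconcile.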

Namespaces and organization.

Namespaces follow function, not whim. Monitoring in its own namespace, applications separated from infrastructure components, ArgoCD isolated. Nothing exotic — just enough separation that a broken deployment doesn't become a visibility problem. On this cluster, namespace isolation is logical rather than physical, but the discipline still matters. It's the same reasoning you'd apply on a production cluster, at a smaller scale.

Networking.

Ingress controllers are not part of this setup. Services are exposed either as LoadBalancer or NodePort — K3S handles the rest. The actual front-facing work is done by HAProxy, running in a dedicated VM on a separate subnet, built and maintained manually. No helper script here, no one-liner install — this one was done properly from the ground up: no sudo, fail2ban, firewalld, encrypted storage. It's the single controlled entry point for everything exposed to the outside. Clean separation of concerns: K3S manages workloads, HAProxy manages access. The two don't need to know much about each other, and that's by design.
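The pattern is simple to sketch in haproxy.cfg terms — this is an illustration of the shape, not the actual config; IPs and ports are placeholders:

```
# TLS passthrough from the edge to a NodePort service on the workers
frontend https_in
    bind *:443
    mode tcp
    default_backend k3s_nodeport

backend k3s_nodeport
    mode tcp
    balance roundrobin
    server worker1 192.168.1.11:30443 check
    server worker2 192.168.1.12:30443 check
```

HAProxy only needs to know node addresses and a port. Nothing about what runs behind them leaks into the edge, which is exactly the separation described above.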

Secrets.

Secrets management in a homelab is always a compromise. Sealed Secrets or external secret operators add complexity that doesn't pay off at this scale. Sensitive values live outside the Git repository and are applied manually when needed. Not elegant, not a problem either. The threat model doesn't justify the overhead.
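"Applied manually" means exactly what it sounds like: a plain Secret manifest kept outside the repo and fed to kubectl when needed. A hypothetical example — names and values are placeholders:

```yaml
# Lives outside Git; applied with: kubectl apply -f wikijs-db-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: wikijs-db
  namespace: apps
type: Opaque
stringData:
  DB_PASS: "not-in-git"   # stringData avoids manual base64 encoding
```

Helm values in the repo then reference the secret by name, so the chart config stays declarative while the sensitive value never touches Git.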

What runs on it.

Wiki.js for documentation and notes. Postiz for social media scheduling. Both installed via Helm charts found on ArtifactHub, values overridden where needed, managed entirely through ArgoCD. Adding a new application is a twenty-minute exercise at most.
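For charts pulled straight from a Helm repository, ArgoCD can source the chart directly and take the value overrides inline. A sketch of that pattern — the chart repo, version, and values here are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: postiz
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.org   # Helm repo found via ArtifactHub
    chart: postiz
    targetRevision: 1.2.3                 # pin the chart version
    helm:
      values: |
        ingress:
          enabled: false                  # HAProxy handles the edge
  destination:
    server: https://kubernetes.default.svc
    namespace: apps
  syncPolicy:
    automated: {}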

The whole stack runs on an H2+, and K3S is the reason it runs at all. Not a toy distribution — Kubernetes that respects the hardware it runs on. For anyone with limited resources and serious learning ambitions, that combination is hard to beat. The hardware will eventually need an upgrade. The approach won't.