Running Lightweight Kubernetes in Your Home Lab with k3s
Kubernetes has a reputation for being complex, resource-hungry, and overkill for anything smaller than a Fortune 500 company. That reputation is largely deserved if you're talking about a full upstream Kubernetes deployment — kubeadm, etcd clusters, control plane nodes, the works. For a home lab, that's like buying a semi-truck to pick up groceries.
k3s changes the equation. Built by Rancher (now part of SUSE), k3s is a fully conformant Kubernetes distribution that ships as a single binary under 100 MB. It swaps etcd for SQLite by default, bundles Traefik as an ingress controller, includes a built-in service load balancer, and strips out cloud-provider-specific bloat. You get real Kubernetes — passing the full CNCF conformance suite — running on hardware as modest as a Raspberry Pi.
If you've been curious about Kubernetes but didn't want to dedicate a rack of servers to learning it, k3s is your on-ramp.
Why k3s Over Full Kubernetes
Before diving into installation, it's worth understanding what k3s changes and what it doesn't.
What k3s removes or replaces:
- etcd replaced with SQLite (single-node) or embedded etcd/external databases (multi-node HA)
- Cloud controller manager removed (you're not on AWS/GCP)
- In-tree storage drivers stripped out (CSI is used instead)
- Alpha features removed
- Legacy/non-default admission controllers removed
What k3s adds:
- Traefik as the default ingress controller
- ServiceLB (formerly known as Klipper LB) as a basic load balancer
- Local-path-provisioner for simple persistent storage
- Flannel as the default CNI for pod networking
- CoreDNS for cluster DNS
What stays the same:
- Full Kubernetes API compatibility
- kubectl works identically
- Helm charts deploy without modification
- Every standard Kubernetes concept applies
The practical result: k3s uses about 512 MB of RAM for the server process on a single-node cluster. A full kubeadm deployment easily consumes 2-4 GB before you deploy anything. For home lab hardware, that difference matters.
Prerequisites
You need at least one Linux machine. k3s officially supports:
- Ubuntu 20.04+, Debian 11+, RHEL/CentOS 8+, Fedora, SLES, openSUSE
- x86_64, ARM64, or ARMv7
Minimum hardware for a single-node cluster:
- 1 CPU core (2+ recommended)
- 512 MB RAM (1 GB+ recommended)
- 10 GB disk (SSD preferred)
For this guide, we'll assume a single Ubuntu or Debian server. Multi-node clustering is covered later.
Make sure your system is updated and has curl available:
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl
Installing k3s
k3s installation is one command:
curl -sfL https://get.k3s.io | sh -
That's it. The script installs k3s as a systemd service, starts it, and sets up a bundled kubectl. After 30-60 seconds, you have a working Kubernetes cluster.
Verify it's running:
sudo k3s kubectl get nodes
You should see your node listed with status Ready.
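The output will look something like this (your hostname, age, and version will differ):
NAME      STATUS   ROLES                  AGE   VERSION
homelab   Ready    control-plane,master   45s   v1.30.4+k3s1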
Making kubectl Easier to Use
By default, the k3s kubeconfig lives at /etc/rancher/k3s/k3s.yaml and is readable only by root (hence the sudo above). Let's fix that:
# Copy the kubeconfig to your user's home directory
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
# Set the environment variable
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
source ~/.bashrc
If you want to use kubectl instead of k3s kubectl, install it separately or create an alias:
# Option 1: Alias
echo 'alias kubectl="k3s kubectl"' >> ~/.bashrc
source ~/.bashrc
# Option 2: Install kubectl standalone (URL targets x86_64; on ARM64, replace amd64 with arm64)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
rm kubectl
Now confirm everything works:
kubectl get nodes
kubectl get pods -A
The -A flag shows pods in all namespaces. You'll see the system components: Traefik, CoreDNS, metrics-server, and the local-path-provisioner.
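Expect output along these lines (the random suffixes on pod names will differ, and the one-shot helm-install jobs show Completed rather than Running):
NAMESPACE     NAME                           READY   STATUS      RESTARTS   AGE
kube-system   coredns-...                    1/1     Running     0          2m
kube-system   local-path-provisioner-...     1/1     Running     0          2m
kube-system   metrics-server-...             1/1     Running     0          2m
kube-system   helm-install-traefik-crd-...   0/1     Completed   0          2m
kube-system   helm-install-traefik-...       0/1     Completed   0          2m
kube-system   svclb-traefik-...              2/2     Running     0          2m
kube-system   traefik-...                    1/1     Running     0          2m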
Installation Options
The install script accepts environment variables to customize the deployment:
# Install without Traefik (if you prefer Nginx or another ingress)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
# Install without the bundled load balancer
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=servicelb" sh -
# Specify a different flannel backend
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=wireguard-native" sh -
# Set a specific version
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.30.4+k3s1" sh -
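If you'd rather keep these settings in a file than bake them into the install command, k3s also reads /etc/rancher/k3s/config.yaml at startup, where each CLI flag becomes a YAML key. A minimal sketch; the values shown are just examples, not recommendations:
# /etc/rancher/k3s/config.yaml
# Each key mirrors a CLI flag; list entries become repeated flags
disable:
  - traefik
  - servicelb
flannel-backend: wireguard-native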
Deploying Your First Workload
Let's deploy something useful: Uptime Kuma, a lightweight monitoring tool.
Create a file called uptime-kuma.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
  labels:
    app: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
spec:
  selector:
    app: uptime-kuma
  ports:
    - port: 80
      targetPort: 3001
  type: ClusterIP
Apply it:
kubectl apply -f uptime-kuma.yaml
Check the deployment status:
kubectl get pods -w
kubectl get pvc
kubectl get svc
The pod will transition from ContainerCreating to Running. Because the local-path storage class uses WaitForFirstConsumer volume binding, the PersistentVolumeClaim stays Pending until the pod is scheduled, then the local-path-provisioner binds it automatically, storing data under /var/lib/rancher/k3s/storage/ on the host.
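To see exactly where a volume landed on disk, read the host path off the PersistentVolume object (PV_NAME is a placeholder; take the real name from the first command's output):
kubectl get pv
kubectl get pv PV_NAME -o jsonpath='{.spec.hostPath.path}'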
Persistent Storage
The bundled local-path storage class is fine for single-node setups. It creates directories on the host filesystem and binds them to pods. Simple, fast, and adequate for most home lab workloads.
Check the default storage class:
kubectl get storageclass
You'll see local-path (default). Any PVC that doesn't specify a storageClassName will use this.
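Typical output looks like this (the age will differ):
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  5m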
Limitations of Local Path Storage
- Data lives on one node only — no replication
- If the node dies, the data dies with it, unless you have backups (a simple mitigation is sketched after this list)
- No dynamic resizing
- No snapshots
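A crude but effective mitigation is archiving the storage directory on a schedule. A minimal sketch, assuming a backup disk mounted at /mnt/backup (adjust the path and retention to taste, and quiesce write-heavy apps like databases first for a consistent copy):
# Archive all local-path volumes; run from cron on the node
sudo tar -czf /mnt/backup/k3s-storage-$(date +%F).tar.gz -C /var/lib/rancher/k3s/storage .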
Longhorn for Distributed Storage
If you have multiple nodes and want replicated storage, Longhorn is the natural choice for k3s. It's also a Rancher/SUSE project and integrates cleanly.
# Longhorn needs open-iscsi on every node
sudo apt install -y open-iscsi
# Install Longhorn with kubectl
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.7.2/deploy/longhorn.yaml
# Wait for all pods to come up
kubectl -n longhorn-system get pods -w
Once running, Longhorn provides:
- Block storage replicated across nodes
- Snapshots and backups to S3-compatible storage
- A web UI for managing volumes
- Automatic rebuilding when a replica is lost
Create PVCs using the longhorn storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi
For a home lab, Longhorn is worth the overhead if you have 2+ nodes. For a single node, stick with local-path and maintain good backups.
Helm Charts
Helm is the package manager for Kubernetes. Instead of writing YAML for every resource, Helm charts bundle an entire application's configuration into a single deployable package with customizable values.
Installing Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Verify:
helm version
Using Helm Charts
Let's install Grafana as an example:
# Add the Grafana Helm repository
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# Install Grafana with custom values
helm install grafana grafana/grafana \
  --namespace monitoring \
  --create-namespace \
  --set persistence.enabled=true \
  --set persistence.storageClassName=local-path \
  --set persistence.size=5Gi \
  --set adminPassword=your-secure-password
Check the status:
kubectl -n monitoring get pods
kubectl -n monitoring get svc
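Until you set up an ingress, the quickest way to reach the UI is a port-forward (the chart's service listens on port 80 by default):
kubectl -n monitoring port-forward svc/grafana 3000:80
# Then browse to http://localhost:3000 and log in as admin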
Custom Values Files
For more complex configurations, use a values file instead of --set flags:
# grafana-values.yaml
persistence:
  enabled: true
  storageClassName: local-path
  size: 5Gi
adminPassword: your-secure-password
ingress:
  enabled: true
  ingressClassName: traefik
  hosts:
    - grafana.homelab.local
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
helm install grafana grafana/grafana \
  --namespace monitoring \
  --create-namespace \
  -f grafana-values.yaml
Useful Helm Commands
# List installed releases
helm list -A
# Check release status
helm status grafana -n monitoring
# Upgrade with new values
helm upgrade grafana grafana/grafana -n monitoring -f grafana-values.yaml
# Roll back to previous version
helm rollback grafana 1 -n monitoring
# Uninstall
helm uninstall grafana -n monitoring
Ingress with Traefik
k3s ships with Traefik as the default ingress controller. Ingress resources tell Traefik how to route external HTTP/HTTPS traffic to your services.
Basic Ingress
Expose Uptime Kuma on a hostname:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: kuma.homelab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: uptime-kuma
                port:
                  number: 80
kubectl apply -f uptime-kuma-ingress.yaml
Point kuma.homelab.local to your server's IP (via /etc/hosts, Pi-hole, or your router's DNS), and you can access Uptime Kuma by hostname.
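For a quick test from your workstation, a one-line /etc/hosts entry is enough (192.168.1.50 here is a placeholder for your server's actual IP):
# /etc/hosts on the machine you browse from
192.168.1.50  kuma.homelab.local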
TLS with Let's Encrypt
For publicly trusted TLS certificates, configure Traefik's built-in ACME support or install cert-manager. cert-manager is the more Kubernetes-native approach:
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml
# Wait for pods
kubectl -n cert-manager get pods -w
Create a ClusterIssuer for Let's Encrypt:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik
Then update your ingress to use TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - kuma.yourdomain.com
      secretName: kuma-tls
  rules:
    - host: kuma.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: uptime-kuma
                port:
                  number: 80
This requires that kuma.yourdomain.com resolves to your server's public IP and port 80 is reachable from the internet for the HTTP-01 challenge. For internal-only services, use DNS-01 challenges or self-signed certificates.
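As a sketch of the DNS-01 route, here's what a Cloudflare solver looks like, assuming your domain's DNS is hosted on Cloudflare and you've created a Secret named cloudflare-api-token (in the cert-manager namespace) holding a scoped API token. The secret name and key below are example choices, not fixed values:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-dns
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token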
Adding Worker Nodes
Scaling k3s to multiple nodes is straightforward. On your server (control plane) node, retrieve the join token:
sudo cat /var/lib/rancher/k3s/server/node-token
On each worker node, run:
curl -sfL https://get.k3s.io | K3S_URL=https://SERVER_IP:6443 K3S_TOKEN=YOUR_TOKEN sh -
Replace SERVER_IP with your control plane's IP and YOUR_TOKEN with the token you retrieved. After a minute, the worker appears:
kubectl get nodes
k3s agents use about 256 MB of RAM, so even modest hardware can be a useful worker node. Old laptops, Raspberry Pis, mini PCs — they all work.
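One cosmetic quirk: new agents show <none> in the ROLES column. If that bothers you, give them a role label (NODE_NAME is a placeholder for the worker's name):
kubectl label node NODE_NAME node-role.kubernetes.io/worker=worker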
Useful Operational Commands
These are the commands you'll use daily when managing a k3s cluster:
# Check cluster status
kubectl get nodes -o wide
kubectl cluster-info
# View all resources in a namespace
kubectl get all -n monitoring
# Describe a pod (debugging)
kubectl describe pod POD_NAME -n NAMESPACE
# Pod logs
kubectl logs POD_NAME -n NAMESPACE
kubectl logs POD_NAME -n NAMESPACE --previous # logs from crashed container
# Execute into a running pod
kubectl exec -it POD_NAME -n NAMESPACE -- /bin/sh
# Resource usage (requires metrics-server, included in k3s)
kubectl top nodes
kubectl top pods -A
# Delete and recreate a deployment
kubectl rollout restart deployment/DEPLOYMENT_NAME -n NAMESPACE
# Check k3s service status
sudo systemctl status k3s
# View k3s logs
sudo journalctl -u k3s -f
Uninstalling k3s
If things go sideways or you want to start fresh:
# On the server node
/usr/local/bin/k3s-uninstall.sh
# On agent nodes
/usr/local/bin/k3s-agent-uninstall.sh
This removes k3s completely, including all cluster data, pods, and configurations. It's a clean slate.
Trade-offs: Should You Run Kubernetes at Home?
Kubernetes adds real complexity. Even k3s, which strips away as much overhead as possible, introduces concepts you need to understand: pods, services, deployments, ingresses, persistent volume claims, namespaces, RBAC. There's a reason "it's overkill for a home lab" is common advice.
Run k3s if:
- You want to learn Kubernetes for your career (this is the best reason)
- You're running 10+ services and want consistent deployment, scaling, and management
- You want rolling updates and self-healing workloads
- You plan to expand to multiple nodes
- You're comfortable with YAML and enjoy infrastructure-as-code
Stick with Docker Compose if:
- You're running fewer than 10 services
- You have a single server
- You want the simplest possible setup
- You don't need Kubernetes skills for work
- You value your weekend free time
There's no shame in Docker Compose. It's simpler, well-documented, and handles most home lab workloads perfectly well. k3s is for when you've outgrown Compose or when learning Kubernetes itself is the goal.
The good news: k3s makes the learning curve as gentle as Kubernetes can be. One binary, one command to install, real Kubernetes underneath. If you're going to learn orchestration, this is the way to start.