
Home Lab Security Hardening: A Practical Guide

Security · 2026-02-09 · security · hardening · firewall · ssh

Home labs have a security problem, and it's not sophisticated hackers. It's the combination of "I'll fix that later" and "it's just my home network." Every open port, every default password, every service running as root is a door that automated scanners probe thousands of times per day. Your home IP is getting scanned right now — not by someone targeting you specifically, but by bots sweeping entire IP ranges looking for low-hanging fruit.

The good news is that home lab security doesn't require enterprise-grade paranoia. A handful of practical measures block the vast majority of attacks. This guide covers the specific steps that actually matter, ordered by impact.

SSH Hardening

SSH is the front door to your servers. If an attacker gets SSH access, they own the machine. Default SSH configurations are permissive by design — tightening them is the single highest-impact security step you can take.

Disable Password Authentication

Password-based SSH is vulnerable to brute force. Key-based authentication is not. Make the switch:

# Generate an SSH key pair (if you don't have one)
ssh-keygen -t ed25519 -C "your-email@example.com"
# This creates ~/.ssh/id_ed25519 (private) and ~/.ssh/id_ed25519.pub (public)

# Copy your public key to the server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server

Once you've confirmed you can log in with your key, disable password authentication:

# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes

# Apply the change
sudo systemctl restart sshd

Test that password login is rejected before closing your current session. Open a new terminal and try:

ssh -o PubkeyAuthentication=no user@server
# Should be rejected: "Permission denied (publickey)."

Disable Root Login

Nobody should SSH in as root. Use a regular user and sudo instead:

# /etc/ssh/sshd_config
PermitRootLogin no

Change the Default Port (Optional, Not Security)

Changing SSH from port 22 to something like 2222 doesn't add real security — any scanner worth its salt checks all ports. But it does reduce log noise from automated brute-force bots that only target port 22:

# /etc/ssh/sshd_config
Port 2222

If you change the port, remember to update your firewall rules and SSH client config:

# ~/.ssh/config (on your workstation)
Host myserver
    HostName 192.168.1.50
    Port 2222
    User admin
    IdentityFile ~/.ssh/id_ed25519
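You can check what ssh will resolve for the alias without ever connecting. The sketch below writes the example entry to a throwaway file (`/tmp/demo_ssh_config` is just an illustrative path) and uses `-G`, which prints the effective options for a host, together with `-F`, which points ssh at a specific config file:

```shell
# Write the example Host entry to a temporary config file
cat > /tmp/demo_ssh_config <<'EOF'
Host myserver
    HostName 192.168.1.50
    Port 2222
    User admin
EOF

# -G prints the options ssh would use for this host, without connecting;
# the output includes "user admin", "hostname 192.168.1.50", "port 2222"
ssh -G -F /tmp/demo_ssh_config myserver | grep -E '^(hostname|port|user) '
```

The same check against your real `~/.ssh/config` is just `ssh -G myserver`.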

A Complete Hardened sshd_config

Here's a hardened configuration with explanations:

# /etc/ssh/sshd_config

# Protocol and authentication
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AuthenticationMethods publickey

# Limit users who can SSH (replace with your username)
AllowUsers admin

# Timeouts and limits
LoginGraceTime 30
MaxAuthTries 3
MaxSessions 5
ClientAliveInterval 300
ClientAliveCountMax 2

# Disable unused features
X11Forwarding no
PermitEmptyPasswords no
AllowAgentForwarding no
AllowTcpForwarding no  # re-enable if you rely on SSH tunnels

# Logging
LogLevel VERBOSE

# Validate config before restarting
sudo sshd -t
sudo systemctl restart sshd

Fail2ban

Fail2ban watches log files for repeated failed login attempts and temporarily bans the offending IP addresses. Even with key-only SSH, it's useful for reducing log noise and blocking scanners.

sudo apt install fail2ban

# Create a local override (don't edit jail.conf; it is overwritten on
# updates, and settings in jail.local take precedence)
sudo nano /etc/fail2ban/jail.local

# /etc/fail2ban/jail.local

[DEFAULT]
bantime = 1h
findtime = 10m
maxretry = 3
banaction = nftables

[sshd]
enabled = true
port = ssh
logpath = /var/log/auth.log
# On systems that log only to the systemd journal, add: backend = systemd

sudo systemctl enable --now fail2ban

# Check banned IPs
sudo fail2ban-client status sshd

# Unban an IP if you lock yourself out
sudo fail2ban-client set sshd unbanip 192.168.1.100

For a home lab, also consider adding jails for other services you expose — Nginx, Nextcloud, Vaultwarden, anything with a login page.
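As a sketch, an extra jail for Nginx basic-auth failures might look like the following. The nginx-http-auth filter ships with fail2ban, but the log path assumes a default Debian/Ubuntu Nginx install, so adjust it to your setup:

```ini
# /etc/fail2ban/jail.local (additional jail; adjust paths to your setup)
[nginx-http-auth]
enabled = true
port    = http,https
logpath = /var/log/nginx/error.log
maxretry = 5
```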

Firewall Configuration

A firewall controls what traffic is allowed in and out of your server. The principle is simple: deny everything by default, then allow only what's needed.

UFW (Uncomplicated Firewall)

UFW is the easiest way to manage firewall rules on Ubuntu/Debian:

sudo apt install ufw

# Set default policies
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH (do this BEFORE enabling the firewall!)
sudo ufw allow 22/tcp

# Allow specific services
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
sudo ufw allow 8096/tcp  # Jellyfin

# Allow from specific subnets only
sudo ufw allow from 192.168.1.0/24 to any port 9090  # Prometheus, LAN only
sudo ufw allow from 192.168.1.0/24 to any port 9000  # Portainer, LAN only

# Enable the firewall
sudo ufw enable

# Check status
sudo ufw status verbose
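A useful companion when deciding which ports to allow: list what is actually listening before you write rules. `ss` ships with iproute2 on modern distros:

```shell
# TCP listeners with the owning process (run as root to see every PID)
ss -tlnp

# UDP listeners (DNS, WireGuard, mDNS, ...)
ss -ulnp
```

Anything listening on 0.0.0.0 or [::] that isn't in your allow list is a candidate for either a firewall rule or a service you should stop.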

Nftables

For more advanced setups, nftables (the successor to iptables) gives you full control:

# /etc/nftables.conf
#!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        # Allow established connections
        ct state established,related accept

        # Allow loopback
        iif lo accept

        # Allow ICMP (ping)
        ip protocol icmp accept
        ip6 nexthdr icmpv6 accept

        # Allow SSH from LAN only
        ip saddr 192.168.1.0/24 tcp dport 22 accept

        # Allow HTTP/HTTPS from anywhere
        tcp dport { 80, 443 } accept

        # Allow Jellyfin from LAN
        ip saddr 192.168.1.0/24 tcp dport 8096 accept

        # Log and drop everything else
        log prefix "nftables dropped: " counter drop
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }

    chain output {
        type filter hook output priority 0; policy accept;
    }
}

# Check the syntax, then load the ruleset and enable it at boot
sudo nft -c -f /etc/nftables.conf
sudo nft -f /etc/nftables.conf
sudo systemctl enable nftables

Firewall Best Practices

Whichever tool you use, the same habits apply: default deny incoming, so every open port is a deliberate decision; restrict management interfaces to your LAN or a VPN; log dropped packets so you can see what's knocking; and test new rules from a second session before you close the one you're in.

Automatic Updates

Unpatched software is the most common vulnerability in home labs. Automatic security updates cover the worst risks without requiring daily attention.

Ubuntu/Debian

sudo apt install unattended-upgrades

# Enable automatic security updates
sudo dpkg-reconfigure -plow unattended-upgrades

# Configure what gets updated
sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
    "${distro_id}ESMApps:${distro_codename}-apps-security";
};

// Auto-reboot if needed (set a time when you're not using the server)
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";

// Email notification (optional)
Unattended-Upgrade::Mail "you@example.com";
Unattended-Upgrade::MailReport "on-change";

// Remove unused dependencies
Unattended-Upgrade::Remove-Unused-Dependencies "true";
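For reference, the dpkg-reconfigure step above writes a small companion file that turns the periodic timers on; you can also create it by hand:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```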

Fedora/RHEL

sudo dnf install dnf5-plugin-automatic

# Configure
sudo nano /etc/dnf/automatic.conf

# /etc/dnf/automatic.conf
[commands]
apply_updates = yes
upgrade_type = security

[emitters]
emit_via = stdio

sudo systemctl enable --now dnf5-automatic.timer

Docker Container Updates

Docker containers don't get updated by your OS package manager. You need a separate strategy:

# Manual update (per service)
cd ~/docker/jellyfin
docker compose pull && docker compose up -d

# Watchtower — automatic container updates (use with caution)
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --cleanup \
  --schedule "0 0 4 * * *"  # 4 AM daily

Watchtower automatically pulls new images and restarts containers. It's convenient but risky — a breaking change in an update can take down a service while you're asleep. A middle ground: run Watchtower in "monitor only" mode and update manually when notified:

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --monitor-only \
  --notification-url "discord://webhook-token@webhook-id"
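Since everything else in the lab lives in Compose, the monitor-only setup can too. A sketch; the notification URL is the same placeholder as above:

```yaml
# docker-compose.yml (sketch)
services:
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --monitor-only --schedule "0 0 4 * * *"
    environment:
      WATCHTOWER_NOTIFICATION_URL: "discord://webhook-token@webhook-id"
```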

Network Segmentation

Putting everything on one flat network means a compromised IoT device can reach your NAS, your password manager, and your lab servers. Network segmentation limits the blast radius.

The Quick Version (VLANs)

If you have a managed switch and a router that supports VLANs (pfSense, OPNsense), create separate VLANs:

VLAN           Purpose                  Can Reach
10 - Trusted   Your personal devices    Everything
20 - Lab       Servers and services     Internet, limited LAN
30 - IoT       Smart devices, cameras   Internet only
40 - Guest     Visitor WiFi             Internet only

See our dedicated VLAN segmentation guide for step-by-step setup.

The Minimal Version (Docker Networks)

Even without VLANs, you can isolate services using Docker networks:

# Create isolated networks for different service groups
services:
  # Database only accessible by the app, not the internet
  app-db:
    image: postgres:16
    networks:
      - backend

  app:
    image: myapp:latest
    networks:
      - backend    # Can reach the database
      - frontend   # Can be reached by the proxy

  proxy:
    image: traefik:v3.2
    ports:
      - "443:443"
    networks:
      - frontend   # Can reach the app, but NOT the database

networks:
  frontend:
  backend:
    internal: true  # No internet access for the database network

The internal: true flag prevents containers on that network from reaching the internet — useful for databases that should only be accessed by your applications.

Service Isolation

Don't Run Everything as Root

This is the most common mistake in home labs. Running containers or services as root means a vulnerability in that service gives the attacker root access to your system.

# Docker Compose — run as a non-root user
services:
  myapp:
    image: myapp:latest
    user: "1000:1000"  # Run as UID 1000

# On the host — create a dedicated service user
sudo useradd -r -s /usr/sbin/nologin myservice
sudo chown -R myservice:myservice /opt/myservice
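The "1000:1000" in the Compose file should match the host account that owns the bind-mounted data, otherwise the container can't write to its volumes. A quick way to check what to put there:

```shell
# UID and GID to use in the compose "user:" field
id -u
id -g

# Owner of a bind-mount source directory (GNU stat syntax);
# "." stands in for whatever directory you actually mount
stat -c '%u:%g' .
```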

Use Read-Only Filesystems Where Possible

services:
  myapp:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
      - /run
    volumes:
      - ./data:/data  # Only this directory is writable

Limit Container Capabilities

Docker containers get a set of Linux capabilities by default. Drop the ones you don't need:

services:
  myapp:
    image: myapp:latest
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE  # Only add back what's needed
    security_opt:
      - no-new-privileges:true

Don't Expose Ports You Don't Need

Every exposed port is an attack surface. If a service only needs to be reached by other containers on the same Docker network, don't publish the port:

# Bad — database accessible from the network
services:
  db:
    image: postgres:16
    ports:
      - "5432:5432"  # Don't do this unless you need external access

# Good — no published ports; reachable only by containers on the same Docker network
services:
  db:
    image: postgres:16
    # No ports section — only reachable via Docker network
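A middle ground between the two: if the host itself needs access (say, running psql for ad-hoc queries) but the rest of the network doesn't, bind the published port to loopback:

```yaml
# OK for host-only access — publish the port on 127.0.0.1
services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"  # reachable from this host only, not the LAN
```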

Secrets Management

The Problem

Passwords, API keys, and tokens end up in Docker Compose files, shell history, and environment variables. They get committed to Git, stored in plain text, and shared in config files with world-readable permissions.

Practical Solutions

Docker secrets (Compose):

services:
  myapp:
    image: myapp:latest
    environment:
      # The _FILE convention is honored by many official images
      # (e.g. postgres), but not all; check your image's docs
      DB_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt

# Create the secrets directory
mkdir -p secrets
chmod 700 secrets

# Create secret files (generate rather than type; echoing a literal
# password would leave it in your shell history)
openssl rand -base64 32 > secrets/db_password.txt
chmod 600 secrets/db_password.txt

# Add to .gitignore
echo "secrets/" >> .gitignore
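Trust but verify: `git check-ignore` tells you whether a path is covered by your ignore rules, and `-v` shows which rule matched:

```shell
# Exits 0 and prints the matching rule if the file is ignored;
# exits 1 if Git would track it
git check-ignore -v secrets/db_password.txt
```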

Environment files:

# .env file (gitignored)
POSTGRES_PASSWORD=change-this-to-something-secure
ADMIN_TOKEN=long-random-string-here

# docker-compose.yml
services:
  myapp:
    image: myapp:latest
    env_file:
      - .env

Password generation:

# Generate random passwords
openssl rand -base64 32

# Generate a random hex token
openssl rand -hex 32
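A related trick, as an aside rather than part of the secrets setup above: openssl can also hash a password for an htpasswd-style basic-auth file. The apr1 scheme is what Nginx and Apache expect:

```shell
# Hash a password for an htpasswd-style file (output starts with $apr1$)
openssl passwd -apr1 "my-secure-password"
```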

What NOT to Do

Don't commit .env or secrets/ to Git. Don't put real passwords inline in docker-compose.yml; compose files have a habit of ending up in repositories. And don't pass secrets as command-line arguments, where they linger in shell history and show up in ps output.

HTTPS for Internal Services

Even on your local network, running services over HTTPS prevents passive eavesdropping and credential sniffing. It's easier than you'd think.

Self-Signed Certificates with a Local CA

Create your own Certificate Authority and generate certificates for internal services:

# Generate a CA key and certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 \
  -out ca.crt -subj "/CN=Homelab CA"

# Generate a server certificate
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr \
  -subj "/CN=*.lab.home"

# Sign it with your CA. Modern browsers require a subjectAltName
# extension; a bare CN is no longer enough. (Process substitution
# below is bash syntax.)
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365 -sha256 \
  -extfile <(printf "subjectAltName=DNS:*.lab.home")

Install ca.crt as a trusted CA on your devices, and browsers will trust your internal certificates without warnings.
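It's also worth checking that the signed certificate actually chains back to your CA before deploying it anywhere:

```shell
# Uses the ca.crt and server.crt generated above;
# prints "server.crt: OK" when the chain is valid
openssl verify -CAfile ca.crt server.crt
```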

Let's Encrypt with a Reverse Proxy

If your services have public domain names, use Let's Encrypt for real certificates. Traefik and Caddy handle this automatically:

# Caddy (automatic HTTPS for everything)
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  caddy_data:

# Caddyfile
jellyfin.yourdomain.com {
    reverse_proxy 192.168.1.50:8096
}

nextcloud.yourdomain.com {
    reverse_proxy 192.168.1.50:8081
}

Caddy automatically obtains and renews Let's Encrypt certificates for each domain.

Common Security Mistakes

1. Exposing Management Interfaces to the Internet

Proxmox web UI (8006), Portainer (9000), database admin panels — these should never be accessible from outside your LAN. Use a VPN for remote management, not port forwarding.

2. Running Services Without Updates for Months

That Docker container you deployed last year? It probably has known CVEs. Set up a notification system (Watchtower in monitor mode, Diun, or just a weekly calendar reminder) to check for updates.

3. No Backups of Critical Data

Security isn't just about keeping attackers out — it's about recovering when things go wrong. Ransomware, accidental deletion, disk failure: backups are your safety net. Follow the 3-2-1 rule: 3 copies, 2 different media, 1 off-site.

4. Using the Same Credentials Everywhere

If your Proxmox password is the same as your Nextcloud password and your database password, compromising one service compromises everything. Use a password manager (Vaultwarden is a great self-hosted option) and generate unique passwords for every service.

5. Ignoring Logs

Your servers are telling you what's happening. Set up basic log monitoring:

# Check auth logs for failed login attempts
# (the unit is "ssh" rather than "sshd" on Debian/Ubuntu)
sudo journalctl -u sshd --since "1 hour ago" | grep "Failed"

# Check fail2ban bans
sudo fail2ban-client status sshd

# A quick ad-hoc watcher: stream auth failures to your terminal
# (this only prints; pipe into a mail or webhook script for real alerts)
sudo journalctl -f -u sshd | grep --line-buffered "Failed" &

For a more complete solution, ship logs to a Grafana Loki instance and set up alerts.

6. Port Forwarding Instead of VPN

Every port you forward is visible to the entire internet. Use Tailscale, WireGuard, or Cloudflare Tunnels instead. If you must port-forward, limit it to a reverse proxy (443) and nothing else.

A Practical Hardening Checklist

Run through this list for each server in your lab:

[ ] SSH: Key-only authentication enabled
[ ] SSH: Root login disabled
[ ] SSH: fail2ban installed and running
[ ] Firewall: Default deny incoming
[ ] Firewall: Only necessary ports open
[ ] Firewall: Management ports restricted to LAN
[ ] Updates: Automatic security updates enabled
[ ] Updates: Docker containers on a regular update schedule
[ ] Users: No services running as root unnecessarily
[ ] Passwords: Unique password for every service
[ ] Passwords: Default passwords changed
[ ] Secrets: No credentials in Git
[ ] Backups: Critical data backed up regularly
[ ] Network: Services isolated appropriately (VLANs or Docker networks)
[ ] Logging: Failed auth attempts being monitored

You don't need to do everything at once. Start with SSH hardening and a firewall — those two steps eliminate the most common attack vectors. Then work through the rest as you have time. Each item you check off makes your lab meaningfully more secure than the average home network.

Security isn't a state you reach; it's a practice you maintain. The goal isn't perfection — it's making your lab harder to compromise than the next target. Automated attackers move on quickly when they hit resistance. A locked-down SSH config and a properly configured firewall will stop the overwhelming majority of threats your home lab will ever face.