
LXC Containers on Proxmox: Lightweight Alternatives to VMs

Virtualization 2026-02-09 lxc proxmox containers virtualization linux

If you're running Proxmox in your homelab, you've probably been creating VMs for everything. A VM for Pi-hole, a VM for your reverse proxy, a VM for Nextcloud. Each one gets its own kernel, its own boot process, and its own chunk of RAM it holds onto whether it needs it or not.

LXC containers offer a different approach. They share the host's Linux kernel, boot in seconds, and use a fraction of the resources. For many homelab workloads, they're the better choice. But not always — knowing when to use LXC vs a VM is the real skill.

What LXC Actually Is

LXC (Linux Containers) is an OS-level virtualization technology. Unlike a full VM that emulates hardware and runs a complete operating system with its own kernel, an LXC container is just an isolated group of processes running on the host's kernel.

Think of it this way: a VM is a separate house with its own foundation and utilities, while an LXC container is an apartment in the host's building. Each apartment is isolated from the others, but they all share the same underlying structure (the kernel).

Docker containers are also based on similar kernel features (namespaces and cgroups), but LXC containers are "system containers" — they run a full init system and feel like a lightweight VM. Docker containers are "application containers" designed to run a single process.

When to Use LXC vs VMs

Use LXC containers when:

  - The service runs on Linux and doesn't need its own kernel
  - You want minimal RAM and disk overhead (DNS, reverse proxies, monitoring agents)
  - Fast boot times and easy resource adjustment matter

Use VMs when:

  - You need a non-Linux OS or a specific kernel version
  - The workload needs its own kernel modules or stronger isolation
  - You're running a Docker host or anything else that fights with container nesting

In practice, a good homelab uses both. Run your NAS, desktop VMs, and Docker hosts as VMs. Run your DNS servers, reverse proxies, monitoring tools, and lightweight services as LXC containers.

Setting Up LXC Templates in Proxmox

Proxmox makes LXC surprisingly easy. You don't download ISOs — you download pre-built templates.

Download a Template

In the Proxmox web UI:

  1. Click on your storage (usually local)
  2. Go to CT Templates
  3. Click Templates to browse available downloads

You'll see templates for Ubuntu, Debian, Alpine, Fedora, CentOS, Arch, and others. Download the one you want. For most purposes, Debian 12 standard is a safe default; Alpine works well when you want the smallest possible footprint.

Or from the command line:

pveam update
pveam available --section system
pveam download local debian-12-standard_12.7-1_amd64.tar.zst
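After the download finishes, you can confirm the template landed on the storage (pveam list takes the storage name as its argument):

```
# List templates stored on the 'local' storage
pveam list local
```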

Create a Container

In the web UI, click Create CT (top right). Walk through the wizard:

  1. General: Set a hostname and password. Check "Unprivileged container" (more secure — leave this on unless you have a specific reason not to).
  2. Template: Select the template you downloaded.
  3. Disks: Set the root disk size. 8 GB is fine for lightweight services. 20-32 GB for anything with substantial data.
  4. CPU: 1-2 cores for most services. Containers share CPU time efficiently, so you can overcommit here.
  5. Memory: 256-512 MB for lightweight services (DNS, monitoring agents). 1-2 GB for larger apps. Unlike VMs, unused memory in containers is available to other containers.
  6. Network: Assign a network interface. Use DHCP or set a static IP.

Or via command line:

pct create 100 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname my-service \
  --memory 512 \
  --cores 1 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1 \
  --unprivileged 1 \
  --start 1

That creates, configures, and starts a container in one command.

Managing Containers

Basic Operations

# Start, stop, restart
pct start 100
pct stop 100
pct restart 100

# Open a shell into the container
pct enter 100

# List all containers
pct list

# Show container config
pct config 100

# Resize disk
pct resize 100 rootfs +10G

Accessing the Container

From the Proxmox web UI, select your container and click Console for a terminal session.

From the Proxmox host:

pct enter 100

Or SSH into it directly if you've installed and configured SSH inside the container.
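If you want SSH access, a minimal setup inside a Debian-based container looks like the following (package and service names assume Debian; adjust for Alpine or others):

```
pct enter 100
apt update && apt install -y openssh-server
systemctl enable --now ssh
# Stock OpenSSH disallows root password login by default;
# add a regular user or adjust /etc/ssh/sshd_config to taste
```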

Resource Limits

One of the best things about LXC on Proxmox is how easy it is to adjust resources. Some settings apply live; others take effect on the next restart:

# Change memory (takes effect after restart)
pct set 100 --memory 1024

# Change CPU cores (takes effect after restart)
pct set 100 --cores 2

# CPU limit (fraction of a core, live)
pct set 100 --cpulimit 0.5

Practical Container Examples

Pi-hole in LXC

This is one of the most popular LXC use cases. Create a Debian container with 256 MB RAM and 1 core:

pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname pihole \
  --memory 256 \
  --cores 1 \
  --rootfs local-lvm:4 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.53/24,gw=192.168.1.1 \
  --unprivileged 1 \
  --start 1

Enter the container and install Pi-hole:

pct enter 101
apt update && apt install -y curl
curl -sSL https://install.pi-hole.net | bash

Total resource usage: about 80 MB RAM when running. A VM doing the same thing would use 400+ MB.
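A quick way to confirm the container is answering queries, run from another machine on the LAN (the IP comes from the create command above):

```
dig @192.168.1.53 example.com +short
```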

Nginx Proxy Manager in LXC

Create a Debian container with 512 MB RAM using the same pct create pattern as above (container ID 102 in this example), then enter it and install the prerequisites:

pct enter 102
apt update && apt install -y curl gnupg lsb-release

# Install Docker inside the container (requires nesting - see Common Pitfalls below)
# Or install Nginx Proxy Manager directly without Docker

WireGuard in LXC

WireGuard needs kernel module access. In an unprivileged container, you need to load the module on the host and make the device available:

On the Proxmox host:

modprobe wireguard

Add to the container config (/etc/pve/lxc/103.conf):

lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file

Then install WireGuard inside the container normally.
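Put together, a minimal /etc/pve/lxc/103.conf for this setup might look like the sketch below. The values are illustrative; the lxc.mount.entry line is the only WireGuard-specific part:

```
arch: amd64
cores: 1
hostname: wireguard
memory: 256
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,ip=192.168.1.54/24,type=veth
ostype: debian
rootfs: local-lvm:vm-103-disk-0,size=4G
unprivileged: 1
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
```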

Unprivileged vs Privileged Containers

This is an important security distinction:

Unprivileged containers (the default and recommended mode) map the container's root user to a non-root user on the host. Even if someone breaks out of the container, they have no privileges on the host system. Always use unprivileged unless you can't.

Privileged containers run with the container's root mapping to host root. This is needed for some operations: mounting NFS or CIFS shares from inside the container, certain hardware passthrough setups, and other tasks that require real root against the host kernel.

To create a privileged container, uncheck "Unprivileged container" during creation or add --unprivileged 0 to the pct create command.

Bind Mounts: Sharing Host Storage

You can mount directories from the Proxmox host (or an NFS/CIFS share mounted on the host) directly into a container:

pct set 100 --mp0 /mnt/nas/media,mp=/media

This mounts /mnt/nas/media from the host to /media inside the container. Great for giving a media server container access to your NAS storage without mounting it separately inside the container.
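Mount points accept additional options. For a media library the container should only read, ro=1 keeps the mount read-only (this reuses the mp0 example above):

```
pct set 100 --mp0 /mnt/nas/media,mp=/media,ro=1
```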

For unprivileged containers, you'll need to handle UID/GID mapping. The container's root (UID 0) maps to UID 100000 on the host by default. Either change ownership of the shared directory or configure ID mapping:

# On the host, make the directory accessible
chown -R 100000:100000 /mnt/nas/media

# Or add a UID/GID mapping in the container config
# /etc/pve/lxc/100.conf
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
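The arithmetic behind that mapping is simple: with the default map, container UID N shows up on the host as 100000 + N. A small standalone shell sketch (nothing Proxmox-specific):

```shell
# Default unprivileged map: container UIDs 0-65535 -> host UIDs 100000-165535
map_base=100000
container_uid=1000              # e.g. the first regular user in the container
host_uid=$((map_base + container_uid))
echo "container uid ${container_uid} -> host uid ${host_uid}"
# -> container uid 1000 -> host uid 101000
```

This is why files created by container user 1000 show up on the host as owned by UID 101000 when you run a plain ls -ln there.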

Backups and Snapshots

Snapshots

If your container is on ZFS or LVM-thin storage:

pct snapshot 100 before-upgrade --description "Before apt upgrade"

# Restore if something breaks
pct rollback 100 before-upgrade

Backups

Proxmox can back up containers just like VMs:

vzdump 100 --storage local --compress zstd --mode snapshot

Or schedule backups in the web UI under Datacenter > Backup. Container backups are much faster and smaller than VM backups because there's less data (no kernel, no virtual hardware state).
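Restoring goes through pct restore. Dump filenames follow the vzdump-lxc-<id>-<timestamp> pattern; the exact name below is illustrative:

```
# Restore a backup to a new container ID 200
pct restore 200 /var/lib/vz/dump/vzdump-lxc-100-2026_02_09-03_00_00.tar.zst --storage local-lvm
```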

Performance Comparison

Here's a rough comparison for a typical lightweight service (DNS server):

Metric                       LXC Container    Full VM
Boot time                    1-2 seconds      15-30 seconds
RAM usage (idle)             30-80 MB         300-500 MB
Disk footprint               200-500 MB       2-4 GB
CPU overhead                 ~0%              1-5%
Startup after host reboot    Near instant     Slow (sequential)

Multiply those savings by 10 containers and the difference is significant, especially on homelab hardware with 32-64 GB of RAM.

Common Pitfalls

Docker inside LXC: It works, but requires either a privileged container or specific nesting configuration. In the Proxmox UI, check "Nesting" under Options, or add features: nesting=1 to the container config. Even then, some Docker features (like overlay2 storage driver) may need additional tweaks.
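The config-file equivalent of those UI options is a commonly used recipe, not official guidance; keyctl=1 is often needed alongside nesting for Docker in unprivileged containers:

```
# /etc/pve/lxc/100.conf
features: nesting=1,keyctl=1
```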

Kernel limitations: Since containers share the host kernel, you can't run a different kernel version or load custom kernel modules from inside a container. If your application needs kernel 6.x features and Proxmox is running 5.x, you need a VM.

Filesystem choices matter: Use ext4 for container root disks for maximum compatibility. ZFS datasets work too and give you better snapshot support.

LXC containers on Proxmox hit a sweet spot for homelab use. They're lighter than VMs, more structured than Docker, and Proxmox's tooling makes them just as easy to manage. Start by converting your lightest workloads (DNS, monitoring, reverse proxy) from VMs to containers, and you'll immediately notice how much faster and more efficient everything feels.