Managing Your Home Lab with Terraform and Proxmox
There's a moment in every home lab journey where you realize you've spent the last three hours clicking through the Proxmox UI to rebuild the same set of VMs you accidentally destroyed. You had an Ubuntu VM, a Docker host, a Pi-hole container, a dev box — and now you're recreating each one from scratch, trying to remember the exact CPU, memory, and disk settings.
Terraform solves this by letting you define your infrastructure as code. Instead of clicking through a web UI, you write a configuration file that describes your VMs, and Terraform creates them for you. Blow everything away? Run terraform apply and it all comes back exactly as it was.
This guide covers using Terraform with Proxmox — the most popular hypervisor in the home lab community. We'll start with the basics, build up to a multi-VM setup, and talk honestly about when infrastructure as code actually makes sense at home lab scale.
Why Terraform for a Home Lab?
The Case For
- Reproducibility: Your entire lab is defined in files. Delete everything and rebuild it identically in minutes.
- Documentation: The Terraform files are living documentation of your infrastructure. No more "what settings did I use for that VM?"
- Version control: Put your .tf files in Git. Track every change to your infrastructure over time.
- Learning: Terraform is one of the most in-demand DevOps skills. Your home lab is the perfect place to learn it.
- Experimentation: Spin up a complex test environment, experiment, tear it down, and recreate a clean version instantly.
The Case Against
- Overhead: For 2-3 VMs that rarely change, Terraform is overkill. Just use the Proxmox UI.
- Learning curve: Terraform has its own language (HCL), state management quirks, and provider-specific gotchas.
- Proxmox API limitations: The Proxmox provider is community-maintained and doesn't support every feature. Some things still require manual steps.
- State drift: If you change something in the Proxmox UI after Terraform created it, Terraform's state gets out of sync.
The sweet spot: if you have more than 5 VMs, or if you frequently rebuild your environment, Terraform pays for itself quickly. If you have 2 VMs that you set up once and never touch, stick with the UI.
Prerequisites
You'll need:
- A Proxmox VE server (7.x or 8.x)
- Terraform installed on your workstation (or on a management VM)
- A Proxmox API token
- A VM template (cloud-init based) to clone from
Installing Terraform
# Ubuntu/Debian
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
# Fedora/RHEL
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager addrepo --from-repofile=https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf install terraform
# Verify
terraform version
Creating a Proxmox API Token
Don't use root credentials in Terraform. Create a dedicated API token:
# SSH into your Proxmox host
# Create a Terraform user
pveum user add terraform@pve
# Create a role with the necessary permissions
pveum role add TerraformRole -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify SDN.Use VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt"
# Assign the role to the user
pveum aclmod / -user terraform@pve -role TerraformRole
# Create an API token (save the output — you'll need the token value)
pveum user token add terraform@pve terraform-token --privsep=0
Save the token ID and secret. The token ID looks like terraform@pve!terraform-token and the secret is a UUID.
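Before wiring the token into Terraform, it can be worth sanity-checking the header format Proxmox expects. The values below are placeholders; the commented curl line sketches how you could hit the API version endpoint on your own host:

```shell
# Placeholder token — substitute your real secret.
TOKEN='terraform@pve!terraform-token=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
# Proxmox API tokens are sent in a 'PVEAPIToken=<user>@<realm>!<id>=<secret>' header:
echo "Authorization: PVEAPIToken=$TOKEN"

# Against a live host (-k skips verification of self-signed certs):
# curl -sk -H "Authorization: PVEAPIToken=$TOKEN" https://192.168.1.50:8006/api2/json/version
```

If the curl call returns a JSON blob with the Proxmox version, the token works.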
Creating a VM Template
Terraform clones VMs from templates. The easiest approach is a cloud-init enabled template:
# On your Proxmox host — download Ubuntu cloud image
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
# Create a VM to use as a template
qm create 9000 --name ubuntu-cloud --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
# Import the cloud image as a disk
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm
# Attach the disk
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
# Add cloud-init drive
qm set 9000 --ide2 local-lvm:cloudinit
# Set boot order
qm set 9000 --boot c --bootdisk scsi0
# Attach a serial console (Ubuntu cloud images direct their console output there)
qm set 9000 --serial0 socket --vga serial0
# Convert to template
qm template 9000
VM ID 9000 is now a template. Terraform will clone this template to create new VMs.
Basic Terraform Configuration
Project Structure
homelab-terraform/
├── main.tf # Provider configuration
├── variables.tf # Variable definitions
├── terraform.tfvars # Variable values (gitignored — contains secrets)
├── vms.tf # VM definitions
├── network.tf # Network configuration
├── outputs.tf # Output values
└── .gitignore
Your .gitignore:
*.tfstate
*.tfstate.backup
.terraform/
terraform.tfvars
*.auto.tfvars
Provider Configuration
# main.tf
terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "~> 0.69"
}
}
}
provider "proxmox" {
endpoint = var.proxmox_url
api_token = var.proxmox_api_token
insecure = true # Set to false if you have valid SSL certs
ssh {
agent = true
}
}
There are two popular Proxmox providers: bpg/proxmox and Telmate/proxmox. The bpg/proxmox provider is more actively maintained and has broader feature support. This guide uses bpg/proxmox.
# variables.tf
variable "proxmox_url" {
description = "Proxmox API URL"
type = string
}
variable "proxmox_api_token" {
description = "Proxmox API token"
type = string
sensitive = true
}
variable "ssh_public_key" {
description = "SSH public key for VM access"
type = string
}
# terraform.tfvars (DO NOT commit this file)
proxmox_url = "https://192.168.1.50:8006"
proxmox_api_token = "terraform@pve!terraform-token=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
ssh_public_key = "ssh-ed25519 AAAA... your-key-here"
Initialize and Validate
terraform init # Download the provider plugin
terraform validate # Check syntax
terraform plan # Preview what Terraform would do
Defining Your First VM
# vms.tf
resource "proxmox_virtual_environment_vm" "docker_host" {
name = "docker-01"
node_name = "pve"
vm_id = 101
clone {
vm_id = 9000 # Our Ubuntu cloud-init template
}
cpu {
cores = 4
type = "host"
}
memory {
dedicated = 8192 # 8 GB
}
disk {
datastore_id = "local-lvm"
interface = "scsi0"
size = 50 # GB
}
network_device {
bridge = "vmbr0"
}
initialization {
ip_config {
ipv4 {
address = "192.168.1.101/24"
gateway = "192.168.1.1"
}
}
dns {
servers = ["1.1.1.1", "8.8.8.8"]
}
user_account {
username = "admin"
keys = [var.ssh_public_key]
}
}
on_boot = true
}
Apply it:
terraform plan # Review the plan
terraform apply # Create the VM (type 'yes' to confirm)
Terraform will clone the template, configure cloud-init settings, resize the disk, and start the VM. Within a minute, you can SSH in:
ssh admin@192.168.1.101
Building a Multi-VM Lab
Now let's define a realistic home lab environment with multiple VMs:
# vms.tf
locals {
vms = {
"docker-01" = {
vm_id = 101
cores = 4
memory = 8192
disk = 50
ip = "192.168.1.101"
}
"docker-02" = {
vm_id = 102
cores = 4
memory = 8192
disk = 50
ip = "192.168.1.102"
}
"pihole" = {
vm_id = 110
cores = 1
memory = 1024
disk = 10
ip = "192.168.1.5"
}
"monitoring" = {
vm_id = 120
cores = 2
memory = 4096
disk = 30
ip = "192.168.1.120"
}
"dev-box" = {
vm_id = 130
cores = 8
memory = 16384
disk = 100
ip = "192.168.1.130"
}
}
}
resource "proxmox_virtual_environment_vm" "lab" {
for_each = local.vms
name = each.key
node_name = "pve"
vm_id = each.value.vm_id
clone {
vm_id = 9000
}
cpu {
cores = each.value.cores
type = "host"
}
memory {
dedicated = each.value.memory
}
disk {
datastore_id = "local-lvm"
interface = "scsi0"
size = each.value.disk
}
network_device {
bridge = "vmbr0"
}
initialization {
ip_config {
ipv4 {
address = "${each.value.ip}/24"
gateway = "192.168.1.1"
}
}
dns {
servers = ["1.1.1.1"]
}
user_account {
username = "admin"
keys = [var.ssh_public_key]
}
}
on_boot = true
}
# outputs.tf
output "vm_ips" {
value = {
for name, vm in proxmox_virtual_environment_vm.lab :
name => local.vms[name].ip
}
}
Run terraform apply and all five VMs spin up in parallel. Run terraform output to see the IP addresses:
$ terraform output vm_ips
{
"dev-box" = "192.168.1.130"
"docker-01" = "192.168.1.101"
"docker-02" = "192.168.1.102"
"monitoring" = "192.168.1.120"
"pihole" = "192.168.1.5"
}
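Since the local.vms map already holds the addresses, you can also render ready-to-paste SSH commands. This is a small optional convenience, not something the provider requires:

```hcl
# outputs.tf — derived from the same local.vms map
output "ssh_commands" {
  value = [for name, vm in local.vms : "ssh admin@${vm.ip}  # ${name}"]
}
```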
Working with LXC Containers
The bpg/proxmox provider also supports LXC containers, which are lighter than full VMs and perfect for single-service workloads:
resource "proxmox_virtual_environment_container" "nginx" {
node_name = "pve"
vm_id = 200
initialization {
hostname = "nginx-proxy"
ip_config {
ipv4 {
address = "192.168.1.200/24"
gateway = "192.168.1.1"
}
}
}
cpu {
cores = 2
}
memory {
dedicated = 1024
}
disk {
datastore_id = "local-lvm"
size = 8
}
network_interface {
name = "eth0"
bridge = "vmbr0"
}
operating_system {
template_file_id = "local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst"
type = "ubuntu"
}
unprivileged = true
start_on_boot = true
}
You'll need to download the LXC template first:
# On Proxmox host
pveam update
pveam download local ubuntu-24.04-standard_24.04-2_amd64.tar.zst
State Management
Terraform tracks the current state of your infrastructure in a state file (terraform.tfstate). This file is critical — Terraform uses it to know what exists, what needs to be created, and what needs to be destroyed.
Local State (Default)
By default, state is stored in a local file. This works fine for a personal home lab:
ls -la terraform.tfstate
# -rw-r--r-- 1 user user 12345 Jan 15 10:30 terraform.tfstate
Important: Back up your state file. If you lose it, Terraform doesn't know about your existing VMs and will try to create duplicates. You can import existing resources, but it's tedious.
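A backup can be as small as a timestamped copy taken before risky operations (or from cron). The sketch below creates a scratch directory with a stand-in state file so it is safe to run anywhere; in your real project you would just run the cp line:

```shell
# Demo in a scratch directory — only the cp line matters in a real project.
dir=$(mktemp -d)
cd "$dir"
echo '{"version": 4}' > terraform.tfstate        # stand-in for a real state file
cp terraform.tfstate "terraform.tfstate.$(date +%Y%m%d-%H%M%S).bak"
ls terraform.tfstate.*.bak
```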
Remote State (Optional)
For more robust state management, you can store state remotely. Even in a home lab, this protects against accidental deletion:
# Using a local MinIO/S3 backend
terraform {
backend "s3" {
bucket = "terraform-state"
key = "homelab/terraform.tfstate"
region = "us-east-1"
endpoint = "http://192.168.1.51:9000"
access_key = "minioadmin"
secret_key = "minioadmin"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_region_validation = true
use_path_style = true # Terraform >= 1.6; older versions use force_path_style instead
}
}
This stores your state in a MinIO instance running in your lab. Overkill? Maybe. But if you're using Terraform to learn, learning state management is part of the deal.
Dealing with State Drift
If you manually change a VM through the Proxmox UI (resize a disk, change memory), Terraform's state goes stale. On the next terraform plan, it will try to revert your manual changes.
Options:
# Update state to match reality (doesn't change infrastructure)
terraform apply -refresh-only # 'terraform refresh' is the deprecated shorthand
# Import a manually created resource into Terraform
terraform import proxmox_virtual_environment_vm.myvm pve/105 # format: <node>/<vmid>
# Remove a resource from state without destroying it
terraform state rm proxmox_virtual_environment_vm.myvm
The pragmatic approach: use Terraform for the things you want managed as code, and leave other things outside of Terraform entirely. Not everything needs to be in Terraform.
Using Modules for Reusable Components
As your configuration grows, modules help you avoid repetition. A module is just a directory with .tf files that you call from your main configuration:
homelab-terraform/
├── main.tf
├── modules/
│ └── vm/
│ ├── main.tf
│ ├── variables.tf
│ └── outputs.tf
└── vms.tf
# modules/vm/variables.tf
variable "name" { type = string }
variable "node_name" { type = string }
variable "vm_id" { type = number }
variable "template_id" { type = number }
variable "ip_address" { type = string }
variable "gateway" { type = string }
variable "ssh_key" { type = string }
variable "cores" {
type = number
default = 2
}
variable "memory" {
type = number
default = 2048
}
variable "disk_size" {
type = number
default = 20
}
variable "bridge" {
type = string
default = "vmbr0"
}
variable "datastore" {
type = string
default = "local-lvm"
}
# modules/vm/main.tf
resource "proxmox_virtual_environment_vm" "vm" {
name = var.name
node_name = var.node_name
vm_id = var.vm_id
clone {
vm_id = var.template_id
}
cpu {
cores = var.cores
type = "host"
}
memory {
dedicated = var.memory
}
disk {
datastore_id = var.datastore
interface = "scsi0"
size = var.disk_size
}
network_device {
bridge = var.bridge
}
initialization {
ip_config {
ipv4 {
address = "${var.ip_address}/24"
gateway = var.gateway
}
}
user_account {
username = "admin"
keys = [var.ssh_key]
}
}
on_boot = true
}
# modules/vm/outputs.tf
output "vm_id" {
value = proxmox_virtual_environment_vm.vm.vm_id
}
output "ip_address" {
value = var.ip_address
}
Now your main config becomes clean and readable:
# vms.tf
module "docker_host" {
source = "./modules/vm"
name = "docker-01"
node_name = "pve"
vm_id = 101
template_id = 9000
cores = 4
memory = 8192
disk_size = 50
ip_address = "192.168.1.101"
gateway = "192.168.1.1"
ssh_key = var.ssh_public_key
}
module "pihole" {
source = "./modules/vm"
name = "pihole"
node_name = "pve"
vm_id = 110
template_id = 9000
cores = 1
memory = 1024
disk_size = 10
ip_address = "192.168.1.5"
gateway = "192.168.1.1"
ssh_key = var.ssh_public_key
}
Terraform + Ansible: The Full Pipeline
Terraform creates the infrastructure. Ansible configures it. Together, they give you a fully automated pipeline:
- Terraform creates VMs with the right specs and IP addresses
- Ansible connects to those VMs and installs software, applies configurations
# Generate an Ansible inventory from Terraform outputs
# (uses the hashicorp/local provider; the module names below are illustrative)
resource "local_file" "ansible_inventory" {
content = templatefile("${path.module}/inventory.tftpl", {
docker_hosts = [
module.docker_host_1.ip_address,
module.docker_host_2.ip_address,
]
monitoring = [module.monitoring.ip_address]
})
filename = "${path.module}/../ansible/inventory.ini"
}
# inventory.tftpl
[docker]
%{ for ip in docker_hosts ~}
${ip}
%{ endfor ~}
[monitoring]
%{ for ip in monitoring ~}
${ip}
%{ endfor ~}
This way, adding a new VM in Terraform automatically updates the Ansible inventory. Run terraform apply then ansible-playbook site.yml and your new VM is fully configured.
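If you want Terraform to kick off the playbook itself, a null_resource with a local-exec provisioner is one common (if slightly hacky) pattern. This sketch assumes the hashicorp/null provider and an ansible-playbook binary on your PATH; the playbook path is illustrative:

```hcl
# Re-runs the playbook whenever the generated inventory changes.
resource "null_resource" "ansible" {
  triggers = {
    inventory = local_file.ansible_inventory.content
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i ${local_file.ansible_inventory.filename} ../ansible/site.yml"
  }
}
```

Provisioners are a last resort in Terraform's own docs, so many people prefer to keep the two tools decoupled and run them in sequence by hand or from a wrapper script.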
Practical Tips
Use terraform plan Before Every Apply
Always review the plan. Terraform will tell you exactly what it's going to create, modify, or destroy. A mistyped variable could destroy the wrong VM.
terraform plan -out=plan.tfplan # Save the plan
terraform apply plan.tfplan # Apply exactly that plan
Pin Provider Versions
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "~> 0.69" # Allow patch updates, not major
}
}
Provider updates can introduce breaking changes. Pin the version and update deliberately.
Use Workspaces for Environments
terraform workspace new dev
terraform workspace new prod
terraform workspace select dev
This lets you maintain separate state for different environments — for example, a dev cluster you blow away regularly and a prod cluster with your actual services.
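Inside your configuration, the built-in terraform.workspace value lets the same files produce distinct resources per workspace. The naming and VM ID scheme below is a hypothetical example, not a convention:

```hcl
locals {
  env       = terraform.workspace                     # "dev" or "prod"
  id_offset = terraform.workspace == "prod" ? 0 : 500 # keep VM IDs from colliding
}

# Then, inside the VM resource:
#   name  = "${each.key}-${local.env}"
#   vm_id = each.value.vm_id + local.id_offset
```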
Keep Secrets Out of Git
Use terraform.tfvars for secrets and add it to .gitignore. For extra safety, use environment variables:
export TF_VAR_proxmox_api_token="terraform@pve!token=xxx"
terraform apply
When IaC Doesn't Make Sense
Terraform is a tool, not a religion. There are cases where clicking through the Proxmox UI is the right choice:
- One-off experiments: If you're spinning up a VM to test something for an hour, just use the UI.
- Proxmox host configuration: Terraform manages guests (VMs, containers), not the host itself. Host-level config (storage, networking, clustering) is done in the Proxmox UI or CLI.
- Tiny labs: If your entire lab is 2 VMs and a container, the Terraform overhead isn't worth it.
- Rapidly changing setups: If you're changing VM configs multiple times a day during initial setup, the plan/apply cycle slows you down.
The pragmatic approach: start with the UI, learn what your lab looks like, then codify it in Terraform once you have a stable configuration you want to preserve. Terraform is best as a "record and reproduce" tool, not a "figure it out as you go" tool.
Your home lab is a learning environment. If Terraform makes it more fun and teaches you useful skills, use it. If it feels like unnecessary ceremony for three VMs, skip it. You can always add it later.