Self-Hosting Your Photos with Immich
Google Photos is convenient until it isn't. The free unlimited storage era is over, Google's AI trains on your photos, and your entire photo library lives on someone else's server under someone else's terms. If you've got a homelab, you can do better.
Immich is the most capable self-hosted Google Photos alternative available. It's fast, it looks good, and it has the features that actually matter: automatic phone backups, face recognition, map view, smart search, and a timeline that handles tens of thousands of photos without choking. It's not a finished product — the developers are upfront that it's under active development and not yet production-stable — but it's already good enough that many people have migrated their entire libraries to it.
This guide walks you through deploying Immich in your homelab, configuring mobile backups, understanding the ML pipeline, planning your storage, and migrating from Google Photos.
Hardware Requirements
Immich is more resource-hungry than most self-hosted apps because of its machine learning features. Here's what you need:
Minimum (functional, but ML will be slow):
- 4 CPU cores
- 4 GB RAM
- 50 GB for the application (plus however much your photo library needs)
Recommended (comfortable with ML processing):
- 4+ CPU cores
- 8 GB RAM
- SSD for the database, HDD is fine for photo storage
- GPU optional but helpful (see below)
Storage math: A typical smartphone photo is 3-6 MB, so a library of 50,000 photos needs roughly 150-300 GB. Videos are much larger: a single minute of 4K video can run 300-500 MB. Plan your storage accordingly.
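A quick way to sanity-check that estimate is to measure what you already have. The path below is a placeholder for wherever your current exports or photo folders live:
# Measure an existing photo directory to estimate how much space Immich will need
du -sh /path/to/existing/photos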
The database (PostgreSQL with pgvecto-rs) is the performance-sensitive component. Put it on an SSD. The actual photo files can live on spinning rust without any noticeable impact on browsing speed, since Immich generates thumbnails and serves those for the timeline view.
Docker Deployment
Immich provides an official Docker Compose file. Create a directory for the deployment:
mkdir -p /opt/immich
cd /opt/immich
Create a .env file with your configuration:
# /opt/immich/.env
# Database
DB_PASSWORD=change-this-to-a-strong-password
DB_USERNAME=immich
DB_DATABASE_NAME=immich
# Paths
UPLOAD_LOCATION=/opt/immich/upload
THUMB_LOCATION=/opt/immich/thumbs
# Timezone
TZ=America/New_York
# Immich version: "release" tracks the latest release; pin a specific tag (see "Updating Immich" below) for stability
IMMICH_VERSION=release
Create docker-compose.yml:
# /opt/immich/docker-compose.yml
services:
immich-server:
image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION}
container_name: immich-server
restart: unless-stopped
ports:
- "2283:2283"
volumes:
- ${UPLOAD_LOCATION}:/usr/src/app/upload
- /etc/localtime:/etc/localtime:ro
environment:
DB_HOSTNAME: immich-db
DB_USERNAME: ${DB_USERNAME}
DB_PASSWORD: ${DB_PASSWORD}
DB_DATABASE_NAME: ${DB_DATABASE_NAME}
REDIS_HOSTNAME: immich-redis
depends_on:
- immich-db
- immich-redis
immich-machine-learning:
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}
container_name: immich-ml
restart: unless-stopped
volumes:
- immich-ml-cache:/cache
environment:
DB_HOSTNAME: immich-db
DB_USERNAME: ${DB_USERNAME}
DB_PASSWORD: ${DB_PASSWORD}
DB_DATABASE_NAME: ${DB_DATABASE_NAME}
REDIS_HOSTNAME: immich-redis
immich-redis:
image: redis:7-alpine
container_name: immich-redis
restart: unless-stopped
healthcheck:
test: redis-cli ping || exit 1
immich-db:
image: tensorchord/pgvecto-rs:pg16-v0.2.0
container_name: immich-db
restart: unless-stopped
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_DB: ${DB_DATABASE_NAME}
POSTGRES_INITDB_ARGS: "--data-checksums"
volumes:
- immich-db-data:/var/lib/postgresql/data
healthcheck:
test: pg_isready --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' || exit 1
interval: 10s
timeout: 5s
retries: 5
volumes:
immich-ml-cache:
immich-db-data:
Start everything:
docker compose up -d
Open http://your-server:2283 and create your admin account. The first account created becomes the administrator.
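If the web UI doesn't come up, check that all four containers are running and watch the server logs for startup errors:
# Verify container status and follow the server logs
docker compose ps
docker compose logs -f immich-server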
Mobile App Configuration
The Immich mobile app is available on iOS (App Store) and Android (Play Store / F-Droid). This is the primary way most people interact with Immich day-to-day.
Initial Setup
- Install the app on your phone
- Enter your server URL: http://your-server-ip:2283 (or your domain if you've set up a reverse proxy with HTTPS)
- Log in with your account credentials
- Grant photo/video access when prompted
Background Backup Configuration
Go to the app's backup settings and configure:
- Auto backup: Enable this. Photos will upload automatically when new ones are taken.
- Background backup: Enable this. On iOS, this uses significant location changes to trigger uploads. On Android, it runs as a foreground service.
- Wi-Fi only: Enable this unless you have unlimited data. Photo uploads over cellular will drain your plan fast.
- Require charging: Optional but recommended if you have a large initial upload.
The Initial Upload
If you have thousands of photos on your phone, the first sync takes a while. Let it run overnight on Wi-Fi and power. Subsequent syncs are fast since only new photos are uploaded.
Tip: If you want to upload photos that are already on your phone but aren't in the camera roll (screenshots, downloads, WhatsApp images), the app lets you select additional albums to back up.
Remote Access
To access Immich outside your home network, you need one of:
- Reverse proxy with a domain (Traefik, Caddy, or Nginx Proxy Manager with Let's Encrypt)
- Tailscale/WireGuard VPN back to your home network
- Cloudflare Tunnel pointed at your Immich instance
The mobile app works with any of these. Just update the server URL to your external address.
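If you go the reverse proxy route, Caddy needs the least configuration. A minimal sketch, assuming Caddy runs directly on the same host as Immich and that photos.example.com is a placeholder domain already pointing at your public IP (Caddy obtains the Let's Encrypt certificate automatically):
# /etc/caddy/Caddyfile (photos.example.com is a placeholder; use your own domain)
photos.example.com {
    reverse_proxy 127.0.0.1:2283
}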
Machine Learning Features
Immich's ML pipeline is what separates it from simpler photo management tools. Here's what it does and what resources it uses.
Face Recognition
Immich detects and clusters faces across your library. After processing, you can name people and browse all photos of a specific person. The clustering is surprisingly good — it handles different angles, lighting, and ages reasonably well, though you'll need to manually merge some clusters and split others.
Face recognition runs automatically on new uploads. For an existing library, it processes in the background. A library of 30,000 photos takes several hours on a 4-core CPU.
Smart Search (CLIP)
Immich uses OpenAI's CLIP model to enable natural language photo search. You can search for "dog on beach" or "birthday cake" and it finds relevant photos without any manual tagging. This is the feature that makes people go "okay, this is actually better than Google Photos."
CLIP processing is CPU-intensive on the initial pass but very fast for searches once the embeddings are generated.
Object Detection
Automatic tagging of objects in photos (car, tree, food, building, etc.). This feeds into both search and the explore page.
GPU Acceleration
If you have an NVIDIA GPU, you can dramatically speed up ML processing:
immich-machine-learning:
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION}-cuda
container_name: immich-ml
restart: unless-stopped
volumes:
- immich-ml-cache:/cache
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
You'll need the NVIDIA Container Toolkit installed on your host:
# Ubuntu/Debian
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update && sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
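Before pointing Immich at the GPU, confirm that Docker can actually see it:
# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all ubuntu nvidia-smi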
With a GPU, face recognition and CLIP encoding are 5-10x faster. An RTX 3060 can process a 50,000-photo library in under an hour. Without a GPU, the same library takes 6-10 hours on a modern CPU.
Even a used GTX 1060 or 1070 makes a meaningful difference and can be found for cheap on the used market. If you're processing a large backlog, even temporarily adding a GPU is worth it.
Storage Backends and Planning
Default: Local Filesystem
By default, Immich stores everything in the upload directory you specify. This is the simplest option and works well for most homelabs.
Organize your storage:
/opt/immich/
upload/ # Original photos and videos
thumbs/ # Generated thumbnails (can be regenerated)
External Library Support
Immich can scan external directories without moving files. This is useful if you already have an organized photo library on your NAS:
- Mount your NAS directory to the Immich container
- In the Immich web UI, go to Administration > External Libraries
- Add the path and configure scan settings
# Add a volume mount for your existing library
volumes:
- ${UPLOAD_LOCATION}:/usr/src/app/upload
- /mnt/nas/photos:/mnt/nas/photos:ro # Read-only access to existing library
Immich will index and process the photos without copying them. The originals stay where they are.
Storage Tiering
For larger libraries, consider separating hot and cold storage:
- SSD: Database (the immich-db-data volume) and thumbnails. This is what affects browsing speed.
- HDD/NAS: Original photo and video files. These are only accessed when viewing full-resolution images or downloading.
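One way to express this split is to bind-mount the database to an SSD path instead of using the named volume. A sketch with example mount points you'd adjust to your own disks; DB_DATA_LOCATION is a variable name introduced here for the sketch, not something the guide's compose file already defines:
# .env (example paths, adjust to your mounts)
UPLOAD_LOCATION=/mnt/hdd/immich/upload        # originals on spinning disk
DB_DATA_LOCATION=/mnt/ssd/immich/postgres     # database on SSD

# docker-compose.yml, immich-db service: replace the named volume with the SSD path
volumes:
  - ${DB_DATA_LOCATION}:/var/lib/postgresql/data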
If your upload directory is on a network share (NFS/SMB), make sure it has reliable connectivity. Immich handles brief network interruptions gracefully, but extended NAS downtime will cause issues.
Migrating from Google Photos
Google Takeout
- Go to takeout.google.com
- Select only Google Photos
- Choose your export format (recommended: .zip, split into 10 GB files)
- Request the export and download when ready
Google Takeout can take hours to days depending on library size. The export includes both photos and metadata in JSON sidecar files.
The Metadata Problem
Google Takeout exports metadata (dates, descriptions, GPS coordinates) as separate JSON files alongside each photo. Immich doesn't natively read these sidecar files during a standard upload.
Use the immich-go CLI tool to handle the import with metadata:
# Install immich-go
go install github.com/simulot/immich-go@latest
# Or download a pre-built binary from the releases page
wget https://github.com/simulot/immich-go/releases/latest/download/immich-go_Linux_x86_64.tar.gz
tar xzf immich-go_Linux_x86_64.tar.gz
# Import from Google Takeout
./immich-go -server http://your-server:2283 \
-key YOUR_API_KEY \
upload \
-google-photos \
/path/to/takeout-*.zip
Get your API key from the Immich web UI: click your profile icon > Account Settings > API Keys > New API Key.
immich-go reads the JSON metadata, matches it to photos, and uploads everything with correct dates and locations preserved.
How Long Does Migration Take?
Depends on library size and upload speed:
| Library Size | Transfer Time (gigabit LAN, ~100 MB/s) |
|---|---|
| 10,000 photos (~50 GB) | ~10 minutes |
| 50,000 photos (~250 GB) | ~45 minutes |
| 100,000 photos (~500 GB) | ~90 minutes |
ML processing happens afterward and takes significantly longer. Plan for several hours of background processing after the upload completes.
Backup Strategy
Your photo library is probably the most irreplaceable data in your homelab. Back it up accordingly.
What to Back Up
- The upload directory — Your original photos and videos. This is the critical data.
- The database — Contains all metadata, face recognition data, album structure, and user accounts.
- The ML cache — Contains computed embeddings. Can be regenerated but takes hours.
Database Backup
# Dump the PostgreSQL database
docker exec immich-db pg_dump -U immich immich > /path/to/backups/immich-db-$(date +%Y%m%d).sql
# Or use pg_dump with compression
docker exec immich-db pg_dump -U immich -Fc immich > /path/to/backups/immich-db-$(date +%Y%m%d).dump
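A backup you can't restore is just wishful thinking, so test the other direction at least once. A sketch of restoring the custom-format dump into the database container, stopping the app containers first so nothing writes during the restore (the dump filename is a placeholder for one of your dated backups):
# Stop the app, restore the dump, then bring everything back up
docker compose stop immich-server immich-machine-learning
docker exec -i immich-db pg_restore -U immich -d immich --clean --if-exists < /path/to/backups/immich-db-YYYYMMDD.dump
docker compose up -d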
Automated Backup Script
#!/bin/bash
# /opt/immich/backup.sh
BACKUP_DIR="/mnt/nas/backups/immich/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"
# Database dump
docker exec immich-db pg_dump -U immich -Fc immich > "$BACKUP_DIR/database.dump"
# Photo files: rsync a full copy into the dated directory
# (each day is a full copy; rsync --link-dest or restic can deduplicate if space is tight)
rsync -av --delete /opt/immich/upload/ "$BACKUP_DIR/upload/"
# Keep the last 7 daily backups (skip the parent directory itself)
find /mnt/nas/backups/immich/ -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;
echo "Immich backup completed: $BACKUP_DIR"
chmod +x /opt/immich/backup.sh
Schedule it with a cron job or systemd timer:
# Run daily at 3 AM
echo "0 3 * * * /opt/immich/backup.sh" | sudo tee -a /etc/crontab
For offsite protection, sync the backup directory to cloud storage with rclone or include it in your existing restic/borg backup workflow.
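For example, a nightly rclone sync of the backup directory to any provider you've configured with rclone config (the remote name and bucket below are placeholders):
# Push the local backups to a configured rclone remote
rclone sync /mnt/nas/backups/immich remote:immich-backups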
Updating Immich
Immich is under active development with frequent releases. Updating is straightforward:
cd /opt/immich
# Pull latest images
docker compose pull
# Restart with new images
docker compose up -d
# Clean up old images
docker image prune -f
Important: Read the release notes before updating. Immich occasionally has breaking changes that require manual migration steps. The developers document these clearly, but you need to actually read them.
Pin to a specific version in your .env if you want more control:
IMMICH_VERSION=v1.118.0
Tips and Gotchas
Start with a small test library. Don't migrate your entire Google Photos library on day one. Upload a few hundred photos, test the mobile app, verify face recognition works, and make sure your backup strategy is solid. Then do the full migration.
Shared albums work. You can create multiple user accounts and share albums between them. Each user gets their own storage and timeline, but shared albums appear in everyone's library.
Duplicate detection exists but isn't perfect. Immich detects exact duplicates during upload. If you upload the same library twice, it won't double-count. But slightly different versions of the same photo (different compression, crop, or resolution) will be treated as separate files.
The web UI is not the mobile app. The web UI is great for browsing, organizing, and admin tasks. But the mobile app is the primary interface for most users — it's where automatic backup happens and where you'll browse your timeline day to day.
Memory usage spikes during ML processing. When Immich is chewing through a large backlog of face recognition and CLIP encoding, RAM usage can spike to 6-8 GB. If you're on a memory-constrained system, consider processing in batches or limiting the ML container's memory.
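Capping the container is a one-line change in the compose file. A sketch using the standard mem_limit option; 4g is an arbitrary cap, tune it to your host:
# docker-compose.yml, immich-machine-learning service
immich-machine-learning:
  # ... image, volumes, environment as before ...
  mem_limit: 4g   # cap ML RAM usage; jobs run slower but won't starve the host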
Immich isn't done yet. The developers are refreshingly honest about this — they don't claim production stability, and the API and database schema still change between versions. But for a homelab audience that's comfortable with containers and backups, it's already the best self-hosted photo solution available. The combination of a polished mobile app, fast timeline browsing, and genuinely useful ML features makes it worth the rough edges.