Docker vs VMs in the Homelab: Why Not Both?

People treat this like an either/or decision. It's not. Docker and VMs solve different problems and the right homelab runs both. Here's how I think about which one to reach for.

The Short Answer

Use Docker for services. Use VMs for things that need full OS isolation, a different kernel, or Windows. Run them side by side — they're complementary, not competing.

What Docker Is Good At

Docker containers are great for running services — apps, databases, reverse proxies, monitoring stacks. They start fast, they're easy to move, and Docker Compose makes multi-service setups manageable with a single file.

# docker-compose.yml — a simple example
services:
  app:
    image: myapp:latest
    ports:
      - "8080:8080"
    volumes:
      - ./data:/app/data
    restart: unless-stopped

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  pgdata:

The key advantages:

  • Density — you can run a lot of containers on modest hardware
  • Portability — move a service to another machine by copying the compose file and a data directory
  • Isolation from other services (but not from the host kernel)
  • Easy updates — pull a new image, recreate the container
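That update flow is two commands with Compose. A sketch, assuming the compose file above sits in the current directory:

```shell
# Pull newer images for every service in the compose file,
# then recreate only the containers whose images changed
docker compose pull
docker compose up -d

# Optional: clean up the superseded images
docker image prune -f
```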

What VMs Are Good At

VMs give you a completely separate kernel and OS. That matters when:

  • You need a Windows environment on Linux hardware (gaming server, specific Windows-only tools)
  • You're running something that needs a different kernel version than the host
  • You want stronger isolation — a compromised container can potentially escape to the host; escaping a compromised VM is much harder
  • You're testing a full OS install, distro setup, or something destructive
  • You need hardware passthrough — GPU, USB devices, etc.

On Linux, KVM + QEMU is the stack. virt-manager gives you a GUI if you want it.

# Install KVM stack on Fedora/RHEL
sudo dnf install qemu-kvm libvirt virt-install virt-manager

# Start and enable the libvirt daemon
sudo systemctl enable --now libvirtd

# Verify KVM is available
sudo virt-host-validate
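With libvirtd running, you can create a VM from the command line as well. A sketch — the VM name, resource sizes, and ISO path here are placeholders, not values from a real setup:

```shell
# Create a VM from an install ISO (adjust name, sizes, and ISO path;
# see `osinfo-query os` for valid --os-variant IDs on your system)
sudo virt-install \
  --name testvm \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/fedora.iso \
  --os-variant fedora40
```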

How I Actually Use Both

In practice:

  • Self-hosted services (Nextcloud, Gitea, Jellyfin, monitoring stacks) → Docker Compose
  • Gaming/Windows stuff that needs the real deal → VM with GPU passthrough
  • Testing a new distro or destructive experiments → VM, snapshot before anything risky
  • Network appliances (pfSense, OPNsense) → VM, not a container
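The snapshot-before-anything-risky habit is a one-liner with virsh (the VM name here is a placeholder):

```shell
# Take a named snapshot before doing anything destructive
sudo virsh snapshot-create-as testvm pre-experiment

# Roll back if the experiment goes sideways
sudo virsh snapshot-revert testvm pre-experiment

# See what snapshots exist for the VM
sudo virsh snapshot-list testvm
```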

The two coexist fine on the same host. Docker handles the service layer, KVM handles the heavier isolation needs.

Gotchas & Notes

  • Containers share the host kernel. That's a feature for performance and density, but it means a kernel exploit affects everything on the host. For sensitive workloads, VM isolation is worth the overhead.
  • Networking gets complicated when both are running. Docker creates its own bridge networks, KVM does the same. Know which traffic is going where. Naming your Docker networks explicitly helps.
  • Backups are different. Backing up a Docker service means backing up volumes + the compose file. Backing up a VM means snapshotting the QCOW2 disk file. Don't treat them the same.
  • Don't run Docker inside a VM on your homelab unless you have a real reason. It works, but you're layering virtualization overhead for no benefit in most cases.
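A sketch of the two backup paths side by side — the volume name, VM name, and disk path are illustrative:

```shell
# Docker: archive a named volume using a throwaway container
docker run --rm \
  -v pgdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/pgdata-backup.tgz -C /data .

# KVM: snapshot the VM's QCOW2 disk while the guest is shut down
sudo virsh shutdown testvm
sudo qemu-img snapshot -c pre-backup /var/lib/libvirt/images/testvm.qcow2
```

Either way, keep the compose file (or VM definition XML from `virsh dumpxml`) alongside the data — the config is half the backup.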

See Also

  • [[managing-linux-services-systemd-ansible]]
  • [[tuning-netdata-web-log-alerts]]