Debugging Broken Docker Containers¶
When something in Docker breaks, there's a sequence to work through. Check logs first, inspect the container second, deal with network and permissions third. Resist the urge to rebuild immediately — most failures tell you what's wrong if you look.
The Short Answer¶
# Step 1: check logs
docker logs containername
# Step 2: inspect the container
docker inspect containername
# Step 3: get a shell inside
docker exec -it containername /bin/bash
# or /bin/sh if bash isn't available
The Debugging Sequence¶
1. Check the logs first¶
Before anything else:
docker logs containername
# Follow live output
docker logs -f containername
# Last 100 lines
docker logs --tail 100 containername
# With timestamps
docker logs --timestamps containername
Most failures announce themselves in the logs. Crash loops, config errors, missing environment variables — it's usually right there.
2. Check if the container is actually running¶
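docker ps only lists running containers; add -a to include stopped ones:
# All containers, including stopped ones; the exit code shows up in the STATUS column
docker ps -a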
If the container shows as Exited (1) or any non-zero exit code, it crashed. The logs will usually tell you why.
3. Inspect the container¶
docker inspect dumps everything — environment variables, mounts, network config, the actual command being run:
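docker inspect containername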
Too verbose? Filter for the part you need:
# Just the mounts
docker inspect --format='{{json .Mounts}}' containername | jq
# Just the environment variables
docker inspect --format='{{json .Config.Env}}' containername | jq
# Exit code
docker inspect --format='{{.State.ExitCode}}' containername
4. Get a shell inside¶
If the container is running but misbehaving:
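docker exec -it containername /bin/bash
# or /bin/sh for minimal images without bash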
If it crashed and won't stay up, override the entrypoint to get in anyway:
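# Start a throwaway container from the same image with a shell instead of the normal entrypoint
# (imagename is a placeholder; --rm removes the container when you exit)
docker run -it --rm --entrypoint /bin/bash imagename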
5. Check port conflicts¶
Container starts but the service isn't reachable:
# See what ports are mapped
docker ps --format "table {{.Names}}\t{{.Ports}}"
# See what's using a port on the host
sudo ss -tlnp | grep :8080
Two containers can't publish the same host port. If another container or host process already holds the port, Docker will refuse to start the container with a bind error. If the mapping looks right and nothing else is on the port but the service still isn't reachable, check that the app inside the container is listening on 0.0.0.0 rather than only on 127.0.0.1.
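To check one specific container, docker port shows the same mappings scoped to that container:
docker port containername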
6. Check volume permissions¶
Permission errors inside containers are almost always a UID mismatch. The user inside the container doesn't own the files on the host volume.
# Check ownership of the mounted directory on the host
ls -la /path/to/host/volume
# Find out what UID the container runs as
docker inspect --format='{{.Config.User}}' containername
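# (empty output means the image's default user, which is usually root)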
Fix by chowning the host directory to match, or by explicitly setting the user in your compose file:
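# Match the host directory's owner to the container's UID
# (1000:1000 is an example; use whatever the inspect command above reported)
sudo chown -R 1000:1000 /path/to/host/volume
Or, in the compose file (service name is a placeholder):
services:
  myservice:
    user: "1000:1000"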
7. Recreate cleanly when needed¶
If you've made config changes and things are in a weird state:
docker compose down
docker compose up -d
# Nuclear option — also removes volumes (data gone)
docker compose down -v
Don't jump straight to the nuclear option. Only use -v if you want a completely clean slate and don't care about the data.
Common Failure Patterns¶
| Symptom | Likely cause | Where to look |
|---|---|---|
| Exits immediately | Config error, missing env var | docker logs |
| Keeps restarting | Crash loop — app failing to start | docker logs, exit code |
| Port not reachable | Port conflict or wrong binding | docker ps, ss -tlnp |
| Permission denied inside container | UID mismatch on volume | ls -la on host path |
| "No such file or directory" | Wrong mount path or missing file | docker inspect mounts |
| Container runs but service is broken | App config error, not Docker | shell in, check app logs |
Gotchas & Notes¶
- docker restart doesn't pick up compose file changes. Use docker compose up -d to apply changes.
- Logs persist after a container exits. You can still docker logs a stopped container — useful for post-mortem on crashes.
- If a container won't stay up long enough to exec into it, use the entrypoint override (--entrypoint /bin/bash) with docker run against the image directly.
- Watch out for cached layers on rebuild. If you're rebuilding an image and the behavior doesn't change, add --no-cache to docker build (example below).
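A minimal example of that last point, forcing a rebuild without the layer cache (image name and tag are placeholders):
docker build --no-cache -t imagename:latest .
# or, if the image is built through compose:
docker compose build --no-cache
docker compose up -d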
See Also¶
- [[docker-vs-vms-homelab]]
- [[tuning-netdata-web-log-alerts]]