Log Overflow
Docker stores container logs in JSON files on the host at /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log. Without log rotation configured, verbose containers (web servers, databases) can fill the host disk within days or weeks.
When the host disk hits 100%, writes fail silently across all containers. The symptoms are non-obvious: cron jobs appear to run but write nothing, application errors increase, containers restart unexpectedly. The root cause — a full disk — is often discovered late.
Docker Log Rotation Options
The json-file logging driver (default) supports max-size and max-file options. Setting max-size: "10m" and max-file: "3" keeps at most 30MB of logs per container. This can be set globally in /etc/docker/daemon.json or per-service in docker-compose.yml under the logging key.
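For the per-service case, the limits go under the service's logging key in docker-compose.yml. A minimal sketch (the web service name and image are illustrative):

```yaml
services:
  web:                    # hypothetical service name
    image: nginx
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate when a log file reaches 10MB
        max-file: "3"     # keep at most 3 files (30MB total per container)
```

Note that logging options apply when a container is created, so existing containers must be recreated to pick up the change.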
Checking Disk Usage
Run docker system df to see Docker's total disk usage. Run du -sh /var/lib/docker/containers/*/ to see per-container log sizes. A single misbehaving container can account for gigabytes of logs.
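The two checks above can be combined into a quick triage script. A minimal sketch assuming the default Docker data directory; the LOG_ROOT override is illustrative, for hosts with a relocated data root:

```shell
#!/bin/sh
# Show Docker's overall disk usage (skipped if docker is not installed).
command -v docker >/dev/null 2>&1 && docker system df || true

# Rank the ten largest container log files. LOG_ROOT can be overridden
# for hosts where /var/lib/docker has been moved.
LOG_ROOT="${LOG_ROOT:-/var/lib/docker/containers}"
find "$LOG_ROOT" -name '*-json.log' -exec du -h {} + 2>/dev/null \
  | sort -rh | head -n 10
```

Sorting with `sort -rh` orders human-readable sizes (e.g. 2.1G above 500M), so the worst offender appears first.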
Frequently Asked Questions
How do I configure log rotation globally?
Add the following to /etc/docker/daemon.json and restart the Docker daemon: {"log-driver": "json-file", "log-opts": {"max-size": "10m", "max-file": "3"}}. This applies to all containers created after the restart. To configure a single service instead, set the options in docker-compose.yml under its logging: key.

How do I clear logs that have already accumulated?
Truncate the file in place: sudo truncate -s 0 /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log. Alternatively, run docker system prune to remove stopped containers along with their logs. A running container's log file can only be truncated, not deleted.

Which logging driver should I use?
json-file with rotation limits is the simplest option. For centralized log collection, use loki (with Grafana Loki) or syslog; for AWS deployments, use awslogs. Avoid none: it discards all logs, making debugging impossible.