Cron is one of the oldest scheduling tools in Unix and one of the most reliably misused. Not because it's complicated — the five-field format is actually quite simple — but because the mistakes are invisible until they compound into something you can't ignore.
Mistake 1: Everything at midnight
This is the most common cron mistake on the planet. When you need to run something "once a day when nobody's around," you pick midnight. So does every other developer on the team. So did whoever set up the server three years ago. Before long, half your crontab fires simultaneously at 00:00 and your server resembles a traffic jam.
# The typical result:
0 0 * * * /usr/bin/db-backup.sh          # takes 15 minutes
0 0 * * * /usr/bin/cleanup-old-files.sh  # hammers disk
0 0 * * * /usr/bin/generate-reports.sh   # heavy queries
0 0 * * * /usr/bin/send-daily-digest.sh  # lots of DB reads
The fix is simple: stagger your jobs. Spread them across different minutes or even different hours. The exact time usually doesn't matter — almost nothing needs to run at exactly midnight.
0 0 * * * /usr/bin/db-backup.sh
15 0 * * * /usr/bin/cleanup-old-files.sh
30 0 * * * /usr/bin/generate-reports.sh
45 0 * * * /usr/bin/send-daily-digest.sh
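If you manage many servers or many jobs, picking offsets by hand stops scaling. One common trick (a sketch, not a standard tool) is to derive the minute from a stable hash of the job name, so each job lands on a consistent but effectively random minute:

```shell
# Derive a stable minute (0-59) from the job name using cksum,
# so the same job always gets the same offset without manual bookkeeping.
job="db-backup"
minute=$(( $(printf '%s' "$job" | cksum | cut -d' ' -f1) % 60 ))
echo "$minute 0 * * * /usr/bin/${job}.sh"
```

The same idea works per-host (hash the hostname) when the same crontab is deployed to a fleet.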
Mistake 2: No flock on jobs that run more than a few seconds
If a job runs every 5 minutes and occasionally takes 6 minutes, you now have two copies running simultaneously. Both hitting the same database. Both writing to the same files. Both consuming memory. This compounds — the extra load makes the next run take even longer.
# Vulnerable:
*/5 * * * * /usr/bin/process-orders.sh

# Protected:
*/5 * * * * flock -n /tmp/process-orders.lock /usr/bin/process-orders.sh
flock ships with util-linux, so it's present on virtually every Linux system. There's no reason not to use it on any job that does real work.
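The behaviour is easy to verify: with `-n`, flock exits immediately with a non-zero status when the lock is already held, so the overlapping cron invocation simply skips its run instead of piling on. A minimal demonstration (`/tmp/demo.lock` is just a scratch file):

```shell
# Hold the lock in the background for two seconds...
flock /tmp/demo.lock sleep 2 &
sleep 0.2
# ...then try to take it non-blockingly. flock -n fails fast.
if flock -n /tmp/demo.lock true; then
  echo "acquired"
else
  echo "lock busy, skipping this run"
fi
wait
# prints: lock busy, skipping this run
```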
Mistake 3: Silently discarding output
Cron mails job output to the crontab owner's local mailbox. Most servers either have no mail daemon configured or deliver that mail somewhere nobody reads. The result is that cron job failures are completely silent.
# Silent — you'll never know if this fails:
0 2 * * * /usr/bin/backup.sh

# Auditable — errors go to a log you can check:
0 2 * * * /usr/bin/backup.sh >> /var/log/backup.log 2>&1

# Even better — log with timestamps:
0 2 * * * echo "$(date): starting backup" >> /var/log/backup.log && /usr/bin/backup.sh >> /var/log/backup.log 2>&1
Check your crontab right now: how many jobs redirect output? If the answer is "none," you have no idea whether half your cron jobs are working.
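One way to make that check concrete (a sketch: it assumes simple one-command jobs and treats any `>` on the line as a redirection) is to list the jobs that send their output nowhere:

```shell
# List cron jobs with no output redirection.
# Skips comments, blank lines, and VAR=value settings.
crontab -l 2>/dev/null \
  | grep -vE '^[[:space:]]*(#|$)' \
  | grep -vE '^[[:space:]]*[A-Za-z_]+=' \
  | grep -v '>' \
  || echo "every job redirects its output"
```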
Mistake 4: Wrong timezone
Cron uses the system timezone. Your server is probably set to UTC. Your laptop is probably set to your local timezone. When you write a cron job thinking "I want this to run at 9 AM," you need to know which 9 AM you mean.
# Check your server timezone:
timedatectl | grep "Time zone"
date

# If your server is UTC and you want 9am New York time:
# New York is UTC-5 (winter) or UTC-4 (summer)
# So 9am New York = 14:00 UTC (winter) or 13:00 UTC (summer)
0 14 * * * /usr/bin/morning-report.sh   # winter
0 13 * * * /usr/bin/morning-report.sh   # summer
The daylight saving time transition is particularly painful: convert "9am Eastern" to UTC using the winter offset and the job runs an hour late all summer. Use UTC for everything and convert deliberately.
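Two things can help here. Some cron implementations support pinning a schedule to a named timezone (cronie, for example, documents a `CRON_TZ` variable in `man 5 crontab`; check whether yours does). And regardless of implementation, GNU date can verify the conversion for you, DST included:

```shell
# What is 09:00 America/New_York in UTC right now? (GNU date, DST-aware)
TZ=UTC date -d 'TZ="America/New_York" 09:00' '+%H:%M'
# prints 14:00 during EST (winter) or 13:00 during EDT (summer)
```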
Mistake 5: Jobs that grow over time
A backup job that takes 3 minutes today will take 30 minutes in two years as your data grows. Nobody goes back and adjusts the schedule. Eventually it's still running when the next day's backup starts and you have two backups running simultaneously, each taking twice as long as it should.
The fix is to monitor job duration and review it periodically — or use flock so overlapping instances are prevented regardless of how long the job takes.
# Check how long your jobs actually take:
time /usr/bin/backup.sh
# Add to cron with timing logged. Note: `time` is a bash keyword and
# cron's default shell is /bin/sh, so set SHELL explicitly:
SHELL=/bin/bash
0 2 * * * { time /usr/bin/backup.sh; } >> /var/log/backup.log 2>&1
Mistake 6: Using cron for things that should be monitored
Cron fires and forgets. If a job fails, nothing happens. If a job stops running entirely (because the crontab got wiped, or the server rebooted and something didn't start), nothing happens. No alert. No notification. No indication anything is wrong until someone notices the data hasn't been processed in three days.
For anything critical, add a dead man's switch: a service like Healthchecks.io or a simple curl to a monitoring endpoint at the end of your job. If the ping doesn't arrive, you get alerted.
# Ping a healthcheck endpoint on success:
0 2 * * * /usr/bin/backup.sh && curl -s https://hc-ping.com/your-uuid
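Healthchecks.io also accepts an explicit failure signal by appending `/fail` to the ping URL, which lets you distinguish "the job failed" from "the job never ran". A wrapper-script sketch (the UUID is a placeholder):

```shell
#!/bin/sh
# Run the job, then report the outcome to the healthcheck endpoint.
URL="https://hc-ping.com/your-uuid"   # placeholder UUID

if /usr/bin/backup.sh; then
  curl -fsS --retry 3 "$URL" > /dev/null        # success ping
else
  curl -fsS --retry 3 "$URL/fail" > /dev/null   # explicit failure signal
fi
```

Point the cron entry at this wrapper instead of the bare backup script.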
Mistake 7: Hardcoding paths that change
# Will break if Python moves or a virtualenv is activated differently:
0 * * * * python3 /app/scripts/sync.py

# More robust — use absolute paths:
0 * * * * /usr/bin/python3 /app/scripts/sync.py

# Or activate virtualenv explicitly:
0 * * * * /app/venv/bin/python /app/scripts/sync.py
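The underlying reason bare command names fail is that cron runs jobs with a minimal environment, typically something like `PATH=/usr/bin:/bin`, not your login shell's PATH. You can also set the environment once at the top of the crontab; variables apply to every job below them (a sketch of the layout):

```shell
# Crontab environment settings apply to all subsequent jobs:
PATH=/usr/local/bin:/usr/bin:/bin
SHELL=/bin/sh

0 * * * * python3 /app/scripts/sync.py
```

Absolute paths are still the safer default, since the crontab and the script can drift apart independently.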
The quick audit
Run crontab -l and check each job against this list:
- Is it scheduled at exactly 0 0 * * *? Can it be staggered?
- Does it run more than once per hour? Does it have flock protection?
- Does it redirect output to a log file?
- Do you know how long it takes? Does that fit inside its interval?
- Is the timezone correct for what you intended?
Paste your crontab to see every job on a timeline, spot midnight pile-ups, and identify jobs that need flock protection.
Open Cron Visualiser →