Fix: robots.txt Crawl-Delay Too High
The Crawl-delay directive in robots.txt asks crawlers to wait N seconds between requests. Setting it too high (10+ seconds) slows how quickly new content is indexed and can sharply reduce the number of pages crawled per day. Google does not support Crawl-delay at all — and the Search Console crawl rate limiter was retired in January 2024 — so Googlebot's crawl rate can only be influenced through your server's responses.
The Problem
Crawl-delay was added to robots.txt to protect servers from aggressive crawling. However, Google ignores the directive entirely — Googlebot adjusts its crawl rate automatically based on server response times and error rates. A high Crawl-delay therefore only throttles the crawlers that honour it (Bing, Yandex, some smaller bots) while doing nothing about Google's crawl load.
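Since robots.txt cannot slow Googlebot, the server-level alternative is to answer with 429 or 503 (plus a Retry-After header) when under load — Google treats sustained 429/503 responses as a signal to back off. A minimal sketch of that decision, with an assumed capacity threshold and helper name:

```python
from http import HTTPStatus

# Assumed capacity threshold — tune to your server.
MAX_CONCURRENT = 20

def throttle_response(active_requests: int) -> tuple[int, dict]:
    """Return (status, extra headers) for an incoming crawler request.

    Over capacity: 503 + Retry-After, which well-behaved crawlers
    (including Googlebot) interpret as "slow down and retry later".
    """
    if active_requests > MAX_CONCURRENT:
        return HTTPStatus.SERVICE_UNAVAILABLE, {"Retry-After": "120"}
    return HTTPStatus.OK, {}
```

This is a sketch, not a drop-in middleware; in practice the same logic lives in your web server or CDN rate limiter rather than application code.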
The Fix
```
# BEFORE — high crawl-delay slows indexing for crawlers that honour it:
# User-agent: *
# Crawl-delay: 30

# AFTER — remove crawl-delay (Google ignores it, others over-throttle):
User-agent: *
Allow: /

Sitemap: https://yourdomain.com/sitemap.xml

# If you need to throttle Bing specifically, use a low value:
User-agent: Bingbot
Crawl-delay: 2
```
Remove Crawl-delay from the User-agent: * group. Googlebot's crawl rate cannot be set in robots.txt or in Search Console (the crawl rate limiter was retired in January 2024); if Googlebot is overloading your server, return 429 or 503 responses and it will back off. For Bing specifically, a Crawl-delay of 1–2 seconds is reasonable. Values above 10 seconds significantly reduce indexing speed for the crawlers that do honour the directive.
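To confirm the fix took effect, you can check what a compliant crawler would actually see using Python's standard-library robots.txt parser, which reads Crawl-delay per user agent:

```python
from urllib.robotparser import RobotFileParser

# The corrected robots.txt from above, inlined for the check.
robots_txt = """\
User-agent: *
Allow: /

User-agent: Bingbot
Crawl-delay: 2
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Bingbot gets its low, explicit delay; all other agents get none.
print(rp.crawl_delay("Bingbot"))     # 2
print(rp.crawl_delay("Googlebot"))   # None — no delay in the matching group
```

In a live check, replace the inline string with `rp.set_url("https://yourdomain.com/robots.txt"); rp.read()` to fetch your deployed file.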
Validate your robots.txt live — fetch any URL and get a corrected file in one click.
Open robots.txt Validator →