Fix: robots.txt Blocking CSS and JavaScript Files

Blocking CSS and JavaScript files in robots.txt prevents Googlebot from fully rendering your pages. Google evaluates the rendered page, not just the raw HTML: if it cannot load your stylesheets and scripts, it sees a broken page and may rank it lower or index it incorrectly.

The Problem

Older SEO advice recommended blocking CSS and JS in robots.txt to save crawl budget. That advice is now wrong: Googlebot executes JavaScript and needs CSS to understand page structure and layout. Blocking /assets/, /static/, or /*.css$ patterns prevents rendering and causes Google Search Console to report 'Page could not be rendered' warnings.
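The effect is easy to demonstrate with Python's standard-library robots.txt parser. A minimal sketch, with a few caveats: urllib.robotparser matches only path prefixes with first-match ordering and ignores wildcard patterns like /*.css$, so this covers the directory rules only, and example.com is a placeholder domain.

```python
from urllib.robotparser import RobotFileParser

# The legacy "save crawl budget" rules (directory form only).
legacy_rules = """\
User-agent: Googlebot
Disallow: /assets/
Disallow: /static/
""".splitlines()

parser = RobotFileParser()
parser.parse(legacy_rules)

# The HTML page itself is crawlable...
print(parser.can_fetch("Googlebot", "https://example.com/pricing"))          # True
# ...but the stylesheet and script it needs to render are not.
print(parser.can_fetch("Googlebot", "https://example.com/assets/site.css"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/static/app.js"))    # False
```

That gap between "page fetchable" and "rendering resources fetchable" is exactly what produces the broken-page rendering described above.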

The Fix

CORRECTED robots.txt — Remove CSS/JS blocks
# BEFORE (incorrect — blocks rendering):
# User-agent: Googlebot
# Disallow: /assets/
# Disallow: /static/
# Disallow: /*.css$
# Disallow: /*.js$

# AFTER (correct — allow everything):
User-agent: *
Allow: /

# Only block paths you genuinely don't want indexed:
Disallow: /admin/
Disallow: /api/
Disallow: /.well-known/

Sitemap: https://yourdomain.com/sitemap.xml

Remove all Disallow rules targeting CSS, JS, image, or font files. Only disallow paths that should genuinely not appear in search results — admin panels, API endpoints, and internal tooling. Use the Google Search Console Coverage report to check for 'blocked by robots.txt' warnings after the fix.
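You can sanity-check the corrected policy the same way. One caveat in this sketch: Python's urllib.robotparser uses first-match ordering rather than Google's longest-match rule, so a blanket Allow: / line would shadow the Disallow lines in this parser; it is left out here because allowing is the default anyway. The URLs are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Corrected policy: assets implicitly allowed, only private paths blocked.
corrected_rules = """\
User-agent: *
Disallow: /admin/
Disallow: /api/
""".splitlines()

parser = RobotFileParser()
parser.parse(corrected_rules)

# Rendering resources are now fetchable...
print(parser.can_fetch("Googlebot", "https://example.com/assets/site.css"))  # True
print(parser.can_fetch("Googlebot", "https://example.com/static/app.js"))    # True
# ...while private paths stay uncrawled.
print(parser.can_fetch("Googlebot", "https://example.com/admin/login"))      # False
```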

Validate your robots.txt live — fetch any URL and get a corrected file in one click.

Open robots.txt Validator →

Frequently Asked Questions

Should I block /wp-admin/ in robots.txt?
Yes. /wp-admin/ should be disallowed: its login and dashboard pages have no value in search results, and WordPress's own default robots.txt already blocks it while allowing /wp-admin/admin-ajax.php, which front-end scripts rely on. Keep /wp-includes/ and /wp-content/ crawlable so Googlebot can load WordPress theme and plugin assets for rendering. (Note that robots.txt is not a security control; it only manages crawling.)
How do I check if Googlebot can render my pages?
Use Google Search Console's URL Inspection tool → 'Test Live URL' → View rendered page. GSC shows exactly what Googlebot sees when rendering your page and reports any blocked resources.
Does blocking JS in robots.txt save crawl budget?
No. Any crawl-budget savings from blocking assets are negligible, while the rendering penalty is significant: Google sees a broken page. Allow all rendering resources and control indexing with noindex meta tags or canonical tags instead.
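For pages you want crawled but kept out of results, the meta-tag approach mentioned above looks like this (illustrative markup; the canonical URL is a placeholder):

```html
<!-- Allow crawling, but ask search engines not to index this page -->
<meta name="robots" content="noindex">

<!-- Or, for duplicate pages, point search engines at the preferred URL -->
<link rel="canonical" href="https://yourdomain.com/preferred-page/">
```

A noindex tag only works if the page is crawlable: if robots.txt blocks the URL, Googlebot never fetches the page and never sees the tag.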

Related Guides