Fix: Conflicting robots.txt Allow and Disallow Rules

When a URL matches both an Allow and a Disallow rule in robots.txt, most crawlers apply the most-specific-rule-wins precedence — the rule with the longer matching path takes priority. Understanding this prevents accidental blocks and allows precise control over which URLs are crawled.

The Problem

A common pattern is to disallow a directory but allow specific files within it. For example, Disallow: /private/ with Allow: /private/public-page.html. The Allow rule is more specific (longer path match), so it takes precedence and the public page is crawlable. But when the two matching paths are the same length, behaviour varies by crawler.
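The longest-match precedence described above can be sketched in a few lines of Python. This is a simplified illustration only (it ignores wildcards and the $ anchor), not any crawler's actual implementation; the rule list mirrors the /private/ example.

```python
def is_allowed(path, rules):
    """Most-specific-rule-wins: among the rules whose path is a prefix of
    the URL path, the one with the longest path decides the outcome.
    A path with no matching rule is allowed by default."""
    matches = [(directive, rule_path) for directive, rule_path in rules
               if path.startswith(rule_path)]
    if not matches:
        return True
    directive, _ = max(matches, key=lambda m: len(m[1]))
    return directive == "Allow"

rules = [("Disallow", "/private/"),
         ("Allow", "/private/public-page.html")]

print(is_allowed("/private/public-page.html", rules))  # longer Allow wins -> True
print(is_allowed("/private/secret.html", rules))       # only Disallow matches -> False
```

Note that Python's built-in urllib.robotparser uses first-match semantics rather than longest-match, so it is not a reliable way to predict Googlebot's behaviour on conflicting rules.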

The Fix

robots.txt — Conflict resolution examples
# CORRECT — Allow takes precedence (more specific path):
User-agent: *
Disallow: /admin/
Allow: /admin/public-status/   # This page IS crawled (longer match)

# CORRECT — Disallow takes precedence:
User-agent: *
Allow: /products/
Disallow: /products/draft/      # These pages NOT crawled (longer match)

# AMBIGUOUS — equal length, crawler-dependent:
User-agent: *
Disallow: /page
Allow: /page   # Googlebot: Allow wins. Others: varies.
# FIX: make the intended winner strictly more specific. The $ anchors
# the Disallow to the exact URL /page, so it never conflicts with the
# longer Allow path:
Disallow: /page$
Allow: /page/public/

For Googlebot: when Allow and Disallow match a URL at the same path length, Allow wins. For other crawlers, behaviour is undefined. Eliminate ambiguity by making the intended-to-win rule more specific (longer). Use the URL Tester in ConfigClarity's robots.txt Validator to verify your rules behave as expected.
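Googlebot's tie-break can be modelled as "longest matching path wins; on a tie, Allow beats Disallow". The sketch below is a minimal self-contained illustration of that resolution order (wildcards and $ anchors omitted), not Google's actual code:

```python
def resolve(path, rules):
    """Return True if the path is crawlable under Google-style resolution:
    the longest matching rule path wins, and on an exact length tie the
    least restrictive directive (Allow) takes precedence."""
    best = None  # (match_length, directive)
    for directive, rule_path in rules:
        if path.startswith(rule_path):
            length = len(rule_path)
            if (best is None or length > best[0]
                    or (length == best[0] and directive == "Allow")):
                best = (length, directive)
    return best is None or best[1] == "Allow"

# The ambiguous example from above: equal-length paths, both match /page.
ambiguous = [("Disallow", "/page"), ("Allow", "/page")]
print(resolve("/page", ambiguous))  # Allow wins the tie for Googlebot -> True
```

Other crawlers may resolve the tie differently, which is why the article recommends removing the ambiguity in the file itself rather than relying on this tie-break.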

Validate your robots.txt live — fetch any URL and get a corrected file in one click.

Open robots.txt Validator →

Frequently Asked Questions

Does Allow override Disallow in robots.txt?
Not unconditionally. The most-specific rule (longest matching path) wins. If Allow and Disallow have the same path length and both match a URL, Googlebot gives precedence to Allow. Bing and other crawlers may handle this differently.
What is the correct order for Allow and Disallow rules?
Rule order does not affect precedence in Googlebot — specificity wins, not order. However, for clarity and compatibility with older crawlers that use first-match semantics, put more specific rules before general ones and put Allow rules before their corresponding Disallow rules.
How do I test which rule wins for a specific URL?
Paste your robots.txt into ConfigClarity's robots.txt Validator and use the URL Tester — enter any path and select a user agent. The tool shows BLOCKED or ALLOWED and which rule matched.

Related Guides