{"id":122674,"date":"2025-09-12T12:00:19","date_gmt":"2025-09-12T12:00:19","guid":{"rendered":"https:\/\/www.bluehost.com\/blog\/?p=122674"},"modified":"2026-02-16T03:59:44","modified_gmt":"2026-02-16T03:59:44","slug":"robots-txt-disallow-all","status":"publish","type":"post","link":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/","title":{"rendered":"Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best Practices"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\" id=\"h-key-highlights-nbsp\">Key highlights&nbsp;<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Understand what robots.txt Disallow, robots.txt Disallow all truly do and how they impact SEO, crawling and indexing.<\/li>\n\n\n\n<li>Learn how to safely configure your robots.txt file to block unwanted bots without accidentally deindexing your entire site.<\/li>\n\n\n\n<li>Explore real-world examples and wildcard patterns to target crawl restrictions efficiently and avoid common mistakes.<\/li>\n\n\n\n<li>Uncover the hidden risks of using Disallow: \/ on live environments and how to reverse crawl-blocking errors.<\/li>\n\n\n\n<li>Know when to use robots.txt vs. meta robots tags or HTTP headers for better privacy, crawl control and content protection.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>It\u2019s one of the smallest plain text files on your website, and one of the easiest to get wrong. The robots.txt file controls what search engines can and can\u2019t crawl on your site. When used right, it helps you protect sensitive folders, focus on your crawl budget and clean up low-value pages from search results.&nbsp;<\/p>\n\n\n\n<p>But when used carelessly, like adding robots.txt Disallow all directive, it can block Google&#8217;s crawler entirely and wipe your site off the map.&nbsp;<\/p>\n\n\n\n<p>If you\u2019ve launched a site and wondering why it\u2019s not showing up in search, your robots.txt file may be to blame. 
In this guide, we\u2019ll break down exactly what robots.txt Disallow means, when to use it and when not to. We&#8217;ll also explore how to configure a robots.txt file without risking visibility, rankings or control. &nbsp;&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-tl-dr-what-does-robots-txt-disallow-all-do-nbsp\">TL;DR: What does robots.txt Disallow All do?&nbsp;<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>robots.txt controls what search engine bots can crawl on your site.&nbsp;<\/li>\n\n\n\n<li>The Disallow: \/ directive blocks all bots from accessing any page.&nbsp;<\/li>\n\n\n\n<li>Useful for staging sites or testing, but dangerous on live sites. It can deindex your entire site.&nbsp;<\/li>\n\n\n\n<li>Use meta tags like noindex, password protection or authentication to truly hide content.<\/li>\n\n\n\n<li>Always test your robots.txt file using tools like Google Search Console or technicalseo.com.&nbsp;&nbsp;<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-is-a-robots-txt-file-and-what-does-disallow-do\">What is a robots.txt file and what does \u2018Disallow\u2019 do?<\/h2>\n\n\n\n<p>A <a href=\"https:\/\/www.bluehost.com\/help\/article\/robots-txt\">robots.txt file is a simple text file<\/a> placed in your website\u2019s root <a href=\"https:\/\/www.bluehost.com\/domains\">domain<\/a> that tells search engine bots which pages or sections of your website they can or cannot crawl. 
The robots.txt Disallow directive is used to block specific URLs from being accessed by search engine crawlers.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/Different-file-types-and-robots.txt-file.png.webp\" alt=\"Different file types and robots.txt file\"\/><\/figure>\n\n\n\n<p>A robots.txt file is located in your website\u2019s root domain directory and follows the <a href=\"https:\/\/datatracker.ietf.org\/doc\/rfc9309\/\" target=\"_blank\" rel=\"noreferrer noopener\">Robots Exclusion Protocol (REP)<\/a>. It\u2019s a set of guidelines that different search engines follow when crawling websites.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-role-of-disallow-directive\">Role of the Disallow directive<\/h3>\n\n\n\n<p>The Disallow directive in the robots.txt file instructs search engines on what not to crawl. When you use this command, you&#8217;re essentially placing a &#8216;Do Not Enter&#8217; sign on specific areas of your website.<\/p>\n\n\n\n<p>For example, adding &#8216;Disallow: \/private-folder\/&#8217; tells search engines to avoid crawling anything within that folder. This can help keep sensitive or irrelevant sections out of search results.<\/p>\n\n\n\n<p>But Disallow doesn\u2019t stop indexing if the content is linked elsewhere. That\u2019s why it\u2019s not a reliable method for protecting private data, only for managing crawler access.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/what-robots.txt-can-and-cant-do.png.webp\" alt=\"what robots.txt can and can't do\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-how-do-google-search-engines-interpret-robots-txt\">How does Google interpret robots.txt?<\/h3>\n\n\n\n<p>Without a well-configured robots.txt file, Google\u2019s bots can roam freely, indexing everything. 
This may include pages you don\u2019t want to appear in search results, such as admin pages, duplicate content or test environments.&nbsp;<\/p>\n\n\n\n<p>If you mistakenly use the robots.txt Disallow all directive (Disallow: \/ for all user agents), it blocks search engine crawlers from accessing any part of your website. This can wipe your entire site from Google search results and cause a critical SEO error.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/How-Google-interprets-robots.txt-file.png.webp\" alt=\"How Google interprets robots.txt file\"\/><\/figure>\n\n\n\n<p><strong>Note:<\/strong> Google enforces a 500 KiB size limit for robots.txt files. Any content exceeding the maximum file size is ignored. The robots.txt file is not a mechanism for keeping a web page out of Google. To prevent a page from appearing in search results, you need to use a noindex directive.&nbsp;<\/p>\n\n\n\n<p>You can create a robots.txt file manually through web server files or use tools like the&nbsp;<a href=\"https:\/\/www.bluehost.com\/blog\/yoast-seo-wordpress-review\/\" target=\"_blank\" rel=\"noreferrer noopener\">Yoast SEO<\/a>&nbsp;plugin. For example, publishers sometimes add a Googlebot-News disallow rule to prevent Google News from crawling certain sections of their site. 
Platforms like Google Search Console also let you test and troubleshoot your file to ensure everything works as expected.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Also read: <\/strong><a href=\"https:\/\/www.bluehost.com\/blog\/exclude-indexing-add-to-cart-wordpress-page\/\"><strong>How to Exclude Google from Indexing Add to Cart WordPress Page using Yoast SEO<\/strong><\/a>&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-robots-txt-syntax-syntax-and-examples-for-robots-txt-directives\">Robots.txt syntax: Rules and examples for robots.txt directives<\/h2>\n\n\n\n<p>Managing how search engines interact with your website starts with understanding the core rules in a robots.txt file. This quick robots.txt syntax guide explains the exact rules and examples you should follow. Let&#8217;s look at the six key syntax rules, including how the Disallow directive works:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-6-core-robots-txt-syntax\">6 core robots.txt syntax rules<\/h3>\n\n\n\n<p>Understanding robots.txt is easier when you are familiar with its basic rules. These simple rules help manage how search engine bots work with your website. Each directive should be written on a separate line, paying attention to case-sensitive elements like directory names and forward slash placement:&nbsp;&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>User-agent:<\/strong> This rule specifies which bot or crawler the following guidelines are for.<\/li>\n\n\n\n<li><strong>Disallow:<\/strong> This rule tells bots not to visit specific files, folders or pages on your site; paths can include wildcard patterns.<\/li>\n\n\n\n<li><strong>Allow:<\/strong> This rule allows bots to crawl specific files, folders or pages.<\/li>\n\n\n\n<li><strong>Sitemap: <\/strong>This rule directs search engines to your website\u2019s XML sitemap location.<\/li>\n\n\n\n<li><strong>Crawl-delay:<\/strong> This rule asks bots to crawl your site more slowly. 
However, not all search engines follow this rule.<\/li>\n\n\n\n<li><strong>Noindex:<\/strong> This rule requests bots not to index some pages or parts of your site. Yet, <a href=\"https:\/\/developers.google.com\/search\/docs\/crawling-indexing\/block-indexing#:~:text=Specifying%20the%20noindex%20rule%20in,other%20rules%20that%20control%20indexing.\" target=\"_blank\" rel=\"noreferrer noopener\">Google\u2019s support for the noindex rule in robots.txt<\/a> officially ended in 2019.<\/li>\n<\/ul>\n\n\n\n<p>To help you compare these core directives at a glance, here\u2019s a quick breakdown of how each one works:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><th><strong>Metric<\/strong><\/th><th><strong>User-agent<\/strong><\/th><th><strong>Disallow<\/strong><\/th><th><strong>Allow<\/strong><\/th><th><strong>Sitemap<\/strong><\/th><th><strong>Crawl-delay<\/strong><\/th><th><strong>Noindex<\/strong><\/th><\/tr><tr><td><strong>Function<\/strong><\/td><td>Targets specific bots<\/td><td>Blocks crawling of paths<\/td><td>Allows access to files<\/td><td>Points to XML sitemap<\/td><td>Sets crawl rate (in sec)<\/td><td>Prevents indexing (deprecated)<\/td><\/tr><tr><td><strong>Example<\/strong><\/td><td>User-agent: *<\/td><td>Disallow: \/admin\/<\/td><td>Allow: \/public\/logo.png<\/td><td>Sitemap: https:\/\/example.com\/sitemap.xml<\/td><td>Crawl-delay: 10<\/td><td>Noindex: \/temp\/<\/td><\/tr><tr><td><strong>Google support<\/strong><\/td><td>\u2705 Yes<\/td><td>\u2705 Yes<\/td><td>\u2705 Yes<\/td><td>\u2705 Yes<\/td><td>\u274c No<\/td><td>\u274c No (use meta or header)<\/td><\/tr><tr><td><strong>Use case<\/strong><\/td><td>Apply rules to all\/specific bots<\/td><td>Hide private or duplicate pages<\/td><td>Make exceptions in blocked folders<\/td><td>Help bots discover key pages<\/td><td>Reduce load from bots<\/td><td>Use safer alternatives<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Each directive plays a specific role in managing how bots 
interact with your site. Let\u2019s now look at them in more detail:<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-1-user-agent-directive-nbsp-nbsp\">1. User-agent directive&nbsp;&nbsp;<\/h4>\n\n\n\n<p>The \u2018User-agent\u2019 rule is important for your robots.txt file. It shows which bot or crawler the rules apply to. Each search engine has a specific user agent name. For example, Google\u2019s web crawler calls itself \u2018Googlebot\u2019.&nbsp;&nbsp;<\/p>\n\n\n\n<p>If you want to target a specific user agent, such as Googlebot only, write:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: Googlebot<\/code><\/pre>\n\n\n\n<p>You can list different user agents separately, each with their own rules. You can also use the wildcard \u2018*\u2019 to make the rules apply to all user agents.&nbsp;&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-2-disallow-robots-txt-directive-nbsp-nbsp-nbsp\">2. Disallow robots.txt directive&nbsp;&nbsp;&nbsp;<\/h4>\n\n\n\n<p>The robots.txt \u2018Disallow\u2019 rule is very important for deciding which parts of your website should be hidden from search engines. This rule instructs search engine bots not to access specific path components, such as folders, file types or individual URLs, on your site.<\/p>\n\n\n\n<p><strong>Blocking a directory&nbsp;&nbsp;<\/strong><\/p>\n\n\n\n<p>For example, you can use the \u2018Disallow\u2019 rule to stop bots from entering the admin area of your website:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow: \/admin\/<\/code><\/pre>\n\n\n\n<p>This will prevent all URLs starting with \u2018\/admin\/\u2019 from being crawled by search engine bots.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Using wildcards<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow: \/*.pdf$<\/code><\/pre>\n\n\n\n<p>With the wildcard \u2018*\u2019, you can block all PDF files on your website. 
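To see how this pattern is evaluated, here is a rough Python sketch of Google-style wildcard matching. It illustrates the matching logic only (it is not Google's actual implementation), and the function name and sample URLs are made up for the example:

```python
import re

def rule_to_regex(rule: str) -> re.Pattern:
    """Approximate Google-style matching: '*' matches any run of
    characters, and a trailing '$' anchors the rule to the URL's end."""
    anchored = rule.endswith("$")
    body = rule[:-1] if anchored else rule
    pattern = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile("^" + pattern + ("$" if anchored else ""))

pdf_rule = rule_to_regex("/*.pdf$")
print(bool(pdf_rule.match("/files/report.pdf")))      # True: URL ends in .pdf
print(bool(pdf_rule.match("/files/report.pdf?v=2")))  # False: '$' anchors the match
```

Note how the `$` anchor keeps a URL like `/files/report.pdf?v=2` crawlable, because it no longer ends in `.pdf`.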
Remember to check your robots.txt file after making changes to make sure you don\u2019t block any important parts of the site.&nbsp;&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-3-allow-directive-nbsp-nbsp-nbsp\">3. Allow directive&nbsp;&nbsp;&nbsp;<\/h4>\n\n\n\n<p>\u2018Disallow\u2019 blocks access to certain areas of a website, whereas the \u2018Allow\u2019 directive can make exceptions in these blocked areas. It works together with \u2018Disallow\u2019 to allow specific files or pages to be accessed even when a whole directory is blocked.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Think about a directory that has images. If you want Google Images to see one special image in that directory, here\u2019s how you can do it:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: Googlebot-Image\nAllow: \/images\/featured-image.jpg\nUser-agent: *\nDisallow: \/images\/<\/code><\/pre>\n\n\n\n<p>In this case, you are first letting Googlebot-Image access \u2018featured-image.jpg\u2019. Then, you block all other bots from seeing the \u2018\/images\/\u2019 directory.&nbsp;&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-4-sitemap-directive-nbsp-nbsp\">4. Sitemap directive&nbsp;&nbsp;<\/h4>\n\n\n\n<p>The \u2018Sitemap\u2019 directive instructs search engines on where to locate your XML sitemap. An XML sitemap is a file that shows all the key pages on your site. This makes it easier for search engines to crawl and index your content.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Adding your sitemap to your robots.txt file is easy:&nbsp;&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Sitemap: https:\/\/www.&#91;yourwebsitename].com\/sitemap.xml<\/code><\/pre>\n\n\n\n<p>Make sure to change \u2018https:\/\/www.[yourwebsitename].com\/sitemap.xml\u2019 to your real sitemap <a href=\"https:\/\/www.bluehost.com\/blog\/what-is-a-url\/\">URL<\/a>. You can submit your sitemap using Google Search Console. 
However, placing it in your robots.txt file ensures that all search engines can find it.<\/p>\n\n\n\n<p><strong>Also read: <\/strong><a href=\"https:\/\/www.bluehost.com\/in\/blog\/verify-website-ownership-google-search-console\/\"><strong>How to Verify Website Owners on Google Search Console<\/strong><\/a>&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-5-crawl-delay-directive-nbsp-nbsp-nbsp\">5. Crawl-delay directive&nbsp;&nbsp;&nbsp;<\/h4>\n\n\n\n<p>The \u2018Crawl-delay\u2019 directive controls how fast search engines crawl your website. Its main goal is to prevent your web server from becoming too busy when many bots attempt to access pages simultaneously.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The \u2018Crawl-delay\u2019 time is measured in seconds. For example, you can set a crawl delay for Bingbot like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: Bingbot\nCrawl-delay: 10<\/code><\/pre>\n\n\n\n<p>Be cautious when setting crawl delays. A prolonged delay can harm your website\u2019s indexing and ranking. This is especially true if your site has a large number of pages and is frequently updated.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Note:<\/strong> Google\u2019s crawler, <a target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/developers.google.com\/search\/docs\/crawling-indexing\/robots\/robots_txt#syntax\">Googlebot, doesn\u2019t follow this directive<\/a>. But you can adjust the crawl rate through Google Search Console to avoid web server overload.&nbsp;&nbsp;&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-6-noindex-directive-nbsp-nbsp\">6. Noindex directive&nbsp;&nbsp;<\/h4>\n\n\n\n<p>The \u2018noindex\u2019 command prevents search engines from indexing specific pages on your website. However, Google no longer officially supports this rule.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Some tests indicate that \u2018noindex\u2019 in robots.txt can still be effective. 
However, it isn\u2019t a good idea to rely solely on this method. Instead, you can use meta robots tags or the X-Robots-Tag HTTP header for better control over indexing.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-real-world-disallow-examples-nbsp\">Real-world disallow examples&nbsp;<\/h3>\n\n\n\n<p>Robots.txt has different rules depending on how much access you want to give search engine bots. Here are a few common examples:&nbsp;&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-example-1-allowing-all-bots-to-access-the-entire-website-nbsp-nbsp\">Example 1: Allowing all bots to access the entire website&nbsp;&nbsp;<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow:<\/code><\/pre>\n\n\n\n<p><strong>What it does:<\/strong>&nbsp;&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The \u2018User-agent: *\u2019 means all search engine bots (Googlebot, Bingbot, etc.) can access the site.<\/li>\n\n\n\n<li>The empty \u2018Disallow\u2019 field means no restrictions, and bots can crawl everything.&nbsp;&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong>When to use it:<\/strong> If you want full search engine visibility for your entire website.&nbsp;&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-example-2-disallowing-all-bots-from-accessing-specific-file-or-directory-nbsp-nbsp\">Example 2: Disallowing all bots from accessing a specific file or directory&nbsp;&nbsp;<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow: \/private-directory\/<\/code><\/pre>\n\n\n\n<p><strong>What it does: <\/strong>Blocks all search engine bots (such as Googlebot) from accessing anything inside \u2018\/private-directory\/\u2019.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>When to use it:<\/strong> This prevents search engines such as Googlebot and Bingbot from accessing confidential data. 
That may include a staging site, a backup folder or any other directory that shouldn\u2019t be crawled.&nbsp;&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-example-3-allowing-googlebot-while-disallowing-others-from-a-directory-nbsp\">Example 3: Applying different disallow rules to Googlebot and other bots&nbsp;<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: Googlebot\nDisallow: \/images\/\nUser-agent: *\nDisallow: \/private-directory\/<\/code><\/pre>\n\n\n\n<p><strong>What it does:<\/strong>&nbsp;&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Googlebot cannot access the \/images\/ directory.&nbsp;<\/li>\n\n\n\n<li>All other bots cannot access \/private-directory\/.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong>When to use it:<\/strong> If you want different rules for different bots, for example keeping Googlebot out of one directory while blocking all other bots from another.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-example-4-specifying-the-location-of-your-xml-sitemap-nbsp\">Example 4: Specifying the location of your XML Sitemap&nbsp;<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow:\nSitemap: https:\/\/www.&#91;yourwebsitename].com\/sitemap.xml<\/code><\/pre>\n\n\n\n<p>&nbsp;<strong>What it does:<\/strong>&nbsp;&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Allows full access to search engine bots.<\/li>\n\n\n\n<li>Tells search engines where to find the XML Sitemap, helping them index web pages efficiently.<\/li>\n<\/ul>\n\n\n\n<p><strong>When to use it:<\/strong> If you want search engines to easily find and crawl your sitemap.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Also read: <\/strong><a href=\"https:\/\/www.bluehost.com\/blog\/how-to-create-a-sitemap-and-why-you-should\/\"><strong>How to Create a WordPress sitemap<\/strong><\/a>&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-wildcard-pattern-matching-examples\">Wildcard pattern 
matching examples<\/h3>\n\n\n\n<p>Wildcards in the robots.txt file let you create flexible rules for blocking or allowing multiple URLs that follow a similar pattern. This is especially useful for filter parameters, file types or dynamically generated web pages that don\u2019t need to be crawled.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-example-1-block-urls-with-specific-query-parameters\">Example 1: Block URLs with specific query parameters<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>Disallow: \/*?filter=*<\/code><\/pre>\n\n\n\n<p>This tells bots (like Googlebot) to avoid crawling any URL that contains &#8216;?filter=&#8217;, no matter what value follows. It\u2019s helpful for eCommerce or blog filters that can create dozens of crawlable variations.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-example-2-block-all-pdf-files\">Example 2: Block all PDF files<\/h4>\n\n\n\n<pre class=\"wp-block-code\"><code>Disallow: \/*.pdf$<\/code><\/pre>\n\n\n\n<p>The \u2018*\u2019 wildcard matches any path, and the \u2018$\u2019 ensures only URLs that end with .pdf are blocked. This prevents search engines from wasting crawl budget on downloadable documents that don&#8217;t need to be indexed.&nbsp;<\/p>\n\n\n\n<p>Using wildcards to match URLs or entire directory names helps you manage multiple groups of pages at once, rather than blocking each group manually. It also keeps low-value or duplicate content out of Google search results. Always test your rules before deploying them to avoid over-blocking important URLs.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-difference-between-robots-txt-vs-meta-robots-tag-vs-x-robots-tag-nbsp\">Difference between robots.txt vs. meta robots tag vs. 
X-Robots-Tag&nbsp;<\/h2>\n\n\n\n<p>To truly control how search engines handle your website, you need to understand the difference between three tools: robots.txt, meta robots tags and X-Robots-Tag headers.<\/p>\n\n\n\n<p>Each one handles crawling and indexing differently, and choosing the right one depends on what you\u2019re trying to achieve.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Robots.txt: <\/strong>Lets you disallow search engines (such as Googlebot) from crawling content, though indexing may still occur via external links.<\/li>\n\n\n\n<li><strong>Meta robots tag: <\/strong>Directly influences indexing and crawling of individual web pages.<\/li>\n\n\n\n<li><strong>X-Robots-Tag:<\/strong> Controls indexing of non-HTML files like PDFs, images and videos.&nbsp;&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>Here\u2019s a quick side-by-side comparison to help you decide which directive works best for your use case:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Feature<\/strong>&nbsp;&nbsp;<\/td><td><strong>Robots.txt<\/strong>&nbsp;&nbsp;<\/td><td><strong>Meta robots tags<\/strong>&nbsp;&nbsp;<\/td><td><strong>X-Robots-Tag<\/strong>&nbsp;&nbsp;<\/td><\/tr><tr><td><strong>Location<\/strong>&nbsp;&nbsp;<\/td><td>Root directory (\/robots.txt)&nbsp;&nbsp;<\/td><td>&lt;head&gt; section of a webpage&nbsp;&nbsp;<\/td><td>HTTP header response&nbsp;&nbsp;<\/td><\/tr><tr><td><strong>Controls<\/strong>&nbsp;&nbsp;<\/td><td>Entire sections of a site&nbsp;&nbsp;<\/td><td>Indexing and crawling of specific pages&nbsp;&nbsp;<\/td><td>Indexing of non-HTML files&nbsp;&nbsp;<\/td><\/tr><tr><td><strong>Example<\/strong>&nbsp;&nbsp;<\/td><td>Disallow: \/private\/&nbsp;&nbsp;<\/td><td>&lt;meta name=&quot;robots&quot; content=&quot;noindex&quot;&gt;&nbsp;&nbsp;<\/td><td>X-Robots-Tag: noindex&nbsp;&nbsp;<\/td><\/tr><tr><td><strong>Impact on SEO<\/strong>&nbsp;&nbsp;<\/td><td>Stops bots from crawling, but does not prevent 
indexing if linked elsewhere&nbsp;&nbsp;<\/td><td>Prevents a page from being indexed and appearing in search results&nbsp;&nbsp;<\/td><td>Ensures non-HTML files are not indexed&nbsp;&nbsp;<\/td><\/tr><tr><td><strong>Best use case<\/strong>&nbsp;&nbsp;<\/td><td>Block search engines from entire directories&nbsp;&nbsp;<\/td><td>Prevent specific pages from appearing in search results&nbsp;&nbsp;<\/td><td>Control indexing of PDFs, images and other files &nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>As you can see, robots.txt is ideal for managing crawl access across sections of your site, but it doesn\u2019t guarantee those pages won\u2019t get indexed. If you need stricter control, especially for individual URLs or non-HTML files, meta robots tags and X-Robots-Tag headers offer more precision.<\/p>\n\n\n\n<p>In many cases, a combined strategy works best. Use robots.txt to manage crawl budget and server load, and pair it with meta or header-level tags to handle indexing control.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-why-is-robots-txt-important-for-seo-nbsp-nbsp\">Why is robots.txt important for SEO?&nbsp;&nbsp;<\/h2>\n\n\n\n<p>Robots.txt plays a crucial role in how search engines interact with your website. When used strategically, it helps improve SEO performance by guiding bots on what to crawl and what to skip. Here&#8217;s how it contributes:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Optimize crawl budget<\/li>\n\n\n\n<li>Block duplicate and non-public pages<\/li>\n\n\n\n<li>Hide resources (with caution)<\/li>\n<\/ol>\n\n\n\n<p>Now, let\u2019s dive deeper into each of these SEO benefits and learn how to use robots.txt settings more effectively.&nbsp;&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-1-optimize-crawl-budget-nbsp-nbsp-nbsp\">1. Optimize crawl budget&nbsp;&nbsp;&nbsp;<\/h3>\n\n\n\n<p>The crawl budget is the number of pages that Googlebot will crawl on your website within a specific time frame. 
If you optimize your crawl budget effectively, Google will prioritize your essential content.&nbsp;&nbsp;<\/p>\n\n\n\n<p>You can use robots.txt to block Google from visiting unnecessary pages and spend more time on your valuable content.&nbsp;&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-2-block-duplicate-and-non-public-pages-nbsp-nbsp\">2. Block duplicate and non-public pages&nbsp;&nbsp;<\/h3>\n\n\n\n<p>Duplicate content is a common issue that can negatively impact your SEO. It confuses search engines and weakens your website\u2019s authority.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Using robots.txt, you can block access to duplicate pages, like PDF versions or older content. This way, search engines can focus on the original and most important versions of your pages.&nbsp;<\/p>\n\n\n\n<p><strong>Also read: <\/strong><a href=\"https:\/\/www.bluehost.com\/blog\/what-is-duplicate-content\/\"><strong>What is Duplicate Content: How to Spot and Prevent It<\/strong><\/a>&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-3-hide-resources-nbsp\">3. Hide resources&nbsp;<\/h3>\n\n\n\n<p>Hiding CSS or <a href=\"https:\/\/www.bluehost.com\/blog\/what-is-javascript\/\">JavaScript<\/a> files from search engines may sound like a good idea for managing your website\u2019s crawl budget. But it\u2019s not.&nbsp;&nbsp;&nbsp;<\/p>\n\n\n\n<p>Search engines use these files to properly display your pages and understand how your website works. If you block these files, search engines may struggle to evaluate your website\u2019s user experience. This hurts your search rankings.&nbsp;<\/p>\n\n\n\n<p>Want to make sure your site is optimized for search? Try our <a href=\"https:\/\/www.bluehost.com\/seo-checker\">free SEO Checker tool<\/a>. We scan your website for common SEO issues like broken links, slow load times, missing meta tags and more. 
You\u2019ll get a detailed report along with actionable tips to boost your site\u2019s visibility at no cost.&nbsp;<\/p>\n\n\n\n<svg version=\"1.1\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" viewBox=\"0 0 1000 300\"> \n\n  <image width=\"1000\" height=\"300\" xlink:href=\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/04\/SEO.jpg\"><\/image> <a xlink:href=\"https:\/\/www.bluehost.com\/seo-checker\"> \n\n    <rect x=\"49\" y=\"146\" fill=\"#fff\" opacity=\"0\" width=\"222\" height=\"78\"><\/rect> \n\n  <\/a> \n\n<\/svg>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-to-configure-robots-txt-disallow-rules-to-safely-control-search-engine-access-nbsp\">How to configure robots.txt Disallow rules to safely control search engine access?&nbsp;<\/h2>\n\n\n\n<p>You can check your site&#8217;s robots.txt file by simply adding \u2018robots.txt&#8217; at the end of a URL. For example, https:\/\/www.bluehost.com\/robots.txt. Let&#8217;s check how you can configure the robots.txt file using Bluehost File Manager. To configure these rules, simply add the following code in your robots.txt file, making sure each directive appears on its own line.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-1-access-the-file-manager-nbsp-nbsp\">1. Access the File Manager&nbsp;&nbsp;<\/h3>\n\n\n\n<p>a. Log in to your <a href=\"https:\/\/www.bluehost.com\/my-account\/login\">Bluehost account manager<\/a>.<\/p>\n\n\n\n<p>b. Navigate to the \u2018Hosting\u2019 tab in the left-hand menu.<\/p>\n\n\n\n<p>c. Click on \u2018File Manager\u2019 under the \u2018Quick Links\u2019 section.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/02\/Access-Fill-Manager-from-Bluehost-account-manager-1024x406.png.webp\" alt=\"access File manager\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-2-locate-the-robots-txt-file-nbsp-nbsp\">2. 
Locate the robots.txt file&nbsp;&nbsp;<\/h3>\n\n\n\n<p>d. In the \u2018File Manager\u2019, open the \u2018public_html\u2019 directory, which contains your website\u2019s files.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/02\/access-public_html-in-file-manager-1024x372.png.webp\" alt=\"access public_html\"\/><\/figure>\n\n\n\n<p>e. Look for the \u2018robots.txt\u2019 filename in this directory.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/02\/locate-robots.txt-file-1024x368.png.webp\" alt=\"locate robots.txt\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-3-create-a-robots-txt-file-if-it-doesn-t-exist-nbsp\">3. Create a robots.txt file (if it doesn\u2019t exist)&nbsp;<\/h3>\n\n\n\n<p>If the robots.txt file is not present, you can create it. Here\u2019s how to create a robots.txt file:&nbsp;&nbsp;<\/p>\n\n\n\n<p>f. Click on the \u2018+ File\u2019 button at the top-left corner.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/02\/create-robots.txt-file.png.webp\" alt=\"\"\/><\/figure>\n\n\n\n<p>g. Name the new file \u2018robots.txt\u2019. Ensure it is placed in the \u2018\/public_html\u2019 directory.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/02\/name-new-file.png.webp\" alt=\"name robots.txt new file\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-4-edit-the-robots-txt-file-nbsp-nbsp\">4. Edit the robots.txt file&nbsp;&nbsp;<\/h3>\n\n\n\n<p>h. 
Right-click on the \u2018robots.txt\u2019 file and select \u2018Edit\u2019.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/02\/Edit-robots.txt-file-1.png.webp\" alt=\"robots.txt file text editor\"\/><\/figure>\n\n\n\n<p>i. A text file editor will open, allowing you to add or modify directives.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/02\/robots.txt-File-Editor-1024x263.png.webp\" alt=\"robots.txt file editor\"\/><\/figure>\n\n\n\n<p><strong>Also read: <\/strong><a href=\"https:\/\/www.bluehost.com\/blog\/website-seo-basics-how-to-optimize-your-content\/\"><strong>How to Optimize Content for SEO on WordPress<\/strong><\/a>&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-5-configure-robots-txt-to-disallow-search-engines-nbsp-nbsp\">5. Configure robots.txt to disallow search engines&nbsp;&nbsp;<\/h3>\n\n\n\n<p>To control how search engines interact with your site, you can add specific directives to the robots.txt file. Here are some common configurations:&nbsp;&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>\u2018Disallow all\u2019 search engines from accessing the entire site:<\/strong> To prevent all search engine bots from crawling any part of your site, add the following lines:&nbsp;&nbsp;<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow: \/<\/code><\/pre>\n\n\n\n<p>This tells all user agents (denoted by the asterisk *) not to access any pages on your site.&nbsp;&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Disallow specific search engines from a specific folder:<\/strong> If you want to disallow a specific search engine\u2019s bot from crawling a specific directory, specify that bot\u2019s user-agent and the directory. 
For example, Bingbot disallow:<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: Bingbot\nDisallow: \/example-subfolder\/<\/code><\/pre>\n\n\n\n<p>This example blocks Bing\u2019s bot from accessing the \/example-subfolder\/ directory.&nbsp;&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>\u2018Disallow all\u2019 bots from specific directories:<\/strong> To block all bots from certain directories, list them as follows:<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow: \/cgi-bin\/\nDisallow: \/tmp\/\nDisallow: \/junk\/<\/code><\/pre>\n\n\n\n<p>This configuration prevents all user agents from accessing the \/cgi-bin\/, \/tmp\/ and \/junk\/ directories.&nbsp;&nbsp;&nbsp;&nbsp;<\/p>\n\n\n\n<p>Ready to take control of your site\u2019s SEO? Start optimizing your site with Bluehost web hosting plans. We allow you to edit your robots.txt file, manage crawl rules and block unwanted bots using a single dashboard.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-robots-txt-disallow-all-what-it-means-and-why-to-avoid-it-on-live-sites-nbsp\">Robots.txt Disallow All: What it means and why to avoid it on live sites &nbsp;<\/h2>\n\n\n\n<p>The robots.txt disallow all directive is written as:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow: \/<\/code><\/pre>\n\n\n\n<p>This tells all bots not to crawl any part of your site. It\u2019s useful for staging environments, site migrations or development versions of your website where you don\u2019t want search engines indexing content prematurely.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-what-happens-when-you-disallow-all\">What happens when you Disallow All?<\/h3>\n\n\n\n<p>Imagine applying a robots.txt Disallow all rule on a staging version of your website. 
Within a few days, <a href=\"https:\/\/search.google.com\/search-console\/about\" target=\"_blank\" rel=\"noreferrer noopener\">Google Search Console<\/a> will show a noticeable change:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Crawled URLs drop to zero, which means bots are no longer allowed to fetch content.&nbsp;<\/li>\n\n\n\n<li>If your pages were already indexed, they may still appear in search results, often with outdated titles or meta descriptions.&nbsp;<\/li>\n\n\n\n<li>No new content is discovered, and existing content won\u2019t refresh.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>This proves that Disallow: \/ blocks crawling but not indexing, unless the content is removed or deindexed through other methods.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/what-happens-when-robots.txt-disallow-all-is-applied.png.webp\" alt=\"what happens when robots.txt disallow all is applied\"\/><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\" id=\"h-why-is-this-risky-on-live-sites\">Why is this risky on live sites?<\/h4>\n\n\n\n<p>If you accidentally apply robots.txt Disallow: \/ on your live site:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Google won\u2019t crawl new content or updates.<\/li>\n\n\n\n<li>Existing indexed pages may become stale.<\/li>\n\n\n\n<li>Over time, your visibility will drop.<\/li>\n\n\n\n<li>Search snippets may show outdated information or disappear altogether.&nbsp;&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong>Pro tip:<\/strong> Combine Disallow: \/ with noindex meta tags only on staging or test environments and always remove the block before going live.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-to-test-and-validate-your-robots-txt-file\">How to test and validate your robots.txt file?<\/h2>\n\n\n\n<p>Even small errors in your robots.txt file can block critical pages or mislead search engine bots. 
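<\/p>\n\n\n\n<p>One subtle example: Disallow values are matched as URL prefixes, so a missing trailing slash can block more than you intend (the paths below are illustrative):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\n# Blocks \/blog, \/blog\/ and even \/blogging\/:\nDisallow: \/blog\n\n# Blocks only URLs under the \/blog\/ directory:\nDisallow: \/blog\/<\/code><\/pre>\n\n\n\n<p>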
That\u2019s why it\u2019s essential to test and validate your file regularly, especially after making changes.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-tools-to-use-nbsp\">Tools to use&nbsp;<\/h3>\n\n\n\n<p>Several reliable tools can help you check if your robots.txt directives are working as expected:&nbsp;<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Google Search Console<\/strong>&nbsp;<\/li>\n<\/ol>\n\n\n\n<p>Use the \u2018robots.txt report\u2019 under Settings (it replaced the legacy robots.txt Tester). It shows whether your file is accessible and highlights any rules Google could not parse.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/robots.txt-Google-Search-Console-1024x463.png.webp\" alt=\"robots.txt Google Search Console\"\/><\/figure>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li><strong>technicalseo.com Robots.txt Tester<\/strong>&nbsp;<\/li>\n<\/ol>\n\n\n\n<p>A free online tool for testing various User-agent and Disallow combinations. Great for quick checks without logging in.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/robots.txt-tester-1024x456.png.webp\" alt=\"robots.txt tester\"\/><\/figure>\n\n\n\n<ol start=\"3\" class=\"wp-block-list\">\n<li><strong>TametheBot&#8217;s robots.txt testing tool<\/strong>&nbsp;<\/li>\n<\/ol>\n\n\n\n<p>This tool offers an interactive way to simulate how different bots interpret your robots.txt file. 
You can test multiple user-agents, preview how rules are applied and validate syntax with real-time feedback.&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/TametheBot-robots.txt-testing-tool-1024x419.png.webp\" alt=\"TametheBot robots.txt testing tool\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-advanced-seo-audit-tools-for-robots-txt-analysis-nbsp\">Advanced SEO audit tools for robots.txt analysis&nbsp;<\/h3>\n\n\n\n<p>If you want to go deeper into technical audits, tools like Screaming Frog, Ahrefs and Sitebulb offer advanced insights:&nbsp;<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Screaming Frog SEO Spider<\/strong>: Helps identify blocked URLs, crawl errors and how your robots.txt is affecting indexability.&nbsp;<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/SCreaming-frog-seo-tool-1024x307.png.webp\" alt=\"Screaming Frog SEO tool\"\/><\/figure>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li><strong>Ahrefs Site Audit<\/strong>: Detects crawl issues, reports on disallowed pages and flags affected internal links.&nbsp;<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/ahrefs-site-auditos-1024x379.png.webp\" alt=\"Ahrefs Site Audit\"\/><\/figure>\n\n\n\n<ol start=\"3\" class=\"wp-block-list\">\n<li><strong>Sitebulb<\/strong>: Visualizes crawl paths and highlights robots.txt conflicts with dynamic JavaScript or server logic.&nbsp;<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/smush-webp\/2025\/06\/sitebulb-1024x418.png.webp\" alt=\"sitebulb\"\/><\/figure>\n\n\n\n<p>These tools are especially useful after large-scale site changes, WordPress 
migrations, or if you&#8217;re troubleshooting crawl budget issues.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-common-errors-to-watch-out-for-nbsp\">Common errors to watch out for&nbsp;<\/h3>\n\n\n\n<p>When testing your robots.txt file, be on the lookout for these common mistakes:&nbsp;<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>File not found: <\/strong>If your robots.txt file isn\u2019t located in the root directory (for example, [yourwebsite].com\/robots.txt), search engines won\u2019t see it and may assume full access.<\/li>\n\n\n\n<li><strong>Syntax errors: <\/strong>Even a missing colon or slash can break a directive. Use validators or testing tools to catch typos before they go live.<\/li>\n\n\n\n<li><strong>Misplaced directives: <\/strong>Rules must be placed under the correct User-agent line. A Disallow without an associated User-agent will be ignored.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-top-robots-txt-disallow-best-practices-to-avoid-seo-mistakes-nbsp-nbsp\">Top robots.txt Disallow best practices to avoid SEO mistakes&nbsp;&nbsp;<\/h2>\n\n\n\n<p>A small mistake in your robots.txt file can lead to big SEO consequences. Follow these robots.txt best practices to prevent accidental deindexing, make your crawl budget work for your most important pages and manage crawler behavior safely:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-1-place-robots-txt-in-the-correct-file-location\">1. Place robots.txt in the correct file location<\/h3>\n\n\n\n<p>Your robots.txt file must be in the top level directory of your website (for example, https:\/\/www.[example].com\/robots.txt). If placed elsewhere, search engines won\u2019t find it and may assume full crawl access.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-2-avoid-blocking-important-pages\">2. Avoid blocking important pages<\/h3>\n\n\n\n<p>Never block high-value URLs like \/blog\/, \/services\/ or product categories unless absolutely necessary. 
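<\/p>\n\n\n\n<p>For instance, a broad rule like this (using an illustrative path) would remove an entire section from crawling:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\nDisallow: \/services\/<\/code><\/pre>\n\n\n\n<p>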
Doing so can prevent them from being indexed, leading to traffic loss.<\/p>\n\n\n\n<p>Instead of blocking entire sections, use specific Disallow rules:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Disallow: \/category\/private-subpage\/<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-3-use-wildcards-for-efficient-targeting\">3. Use wildcards for efficient targeting<\/h3>\n\n\n\n<p>Wildcards help you block patterns like filtered URLs or file types:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Disallow: \/*?filter=*\nDisallow: \/*.pdf$<\/code><\/pre>\n\n\n\n<p>This improves crawl efficiency and prevents duplicate or low-value content from being indexed.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-4-don-t-block-css-or-js-files-nbsp\">4. Don\u2019t block CSS or JS files&nbsp;<\/h3>\n\n\n\n<p>Blocking CSS or JavaScript can stop Google from rendering your pages correctly. This affects how your site appears in search results and may reduce rankings.&nbsp;<\/p>\n\n\n\n<p>Allow bots to access essential assets to maintain good <a href=\"https:\/\/developers.google.com\/search\/docs\/appearance\/core-web-vitals\" target=\"_blank\" rel=\"noreferrer noopener\">Core Web Vitals<\/a>.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-5-link-your-xml-sitemap-nbsp\">5. Link your XML sitemap&nbsp;<\/h3>\n\n\n\n<p>Add a Sitemap: directive in your robots.txt file to help search engine crawlers find all your key pages:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Sitemap: https:\/\/www.example.com\/sitemap.xml<\/code><\/pre>\n\n\n\n<p>This boosts indexing efficiency, especially on larger sites.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-6-use-robots-txt-for-crawl-budget-optimization-nbsp\">6. 
Use robots.txt for crawl budget optimization&nbsp;<\/h3>\n\n\n\n<p>If your site has thousands of low-priority pages (for example, tag pages and filtered archives), blocking them allows bots to focus on your most important content, like product or service pages.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-7-don-t-use-robots-txt-for-sensitive-content-nbsp\">7. Don\u2019t use robots.txt for sensitive content&nbsp;<\/h3>\n\n\n\n<p>Robots.txt only blocks crawling, not indexing. Sensitive pages may still appear in searches if other sites link to them. Use password protection or authentication when restricting access to sensitive content, not just robots.txt.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-8-validate-your-file-regularly-nbsp\">8. Validate your file regularly&nbsp;<\/h3>\n\n\n\n<p>Use tools like Google Search Console, <a href=\"http:\/\/technicalseo.com\" target=\"_blank\" rel=\"noreferrer noopener\">technicalseo.com<\/a> or TametheBot\u2019s robots.txt testing tool to catch errors like:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Missing file or wrong location<\/li>\n\n\n\n<li>Syntax issues (for example, misplaced colons, wildcards)<\/li>\n\n\n\n<li>Broken Disallow rules<\/li>\n<\/ul>\n\n\n\n<p>Also check whether your robots.txt file is hosted on the same host as your main site; otherwise, search engines may ignore it. A mismatch can lead to bots crawling other pages or even the whole site unintentionally.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-5-costly-robots-txt-disallow-errors-and-how-to-fix-them-nbsp\">5 costly robots.txt Disallow errors (and how to fix them)&nbsp;<\/h2>\n\n\n\n<p>Creating a robots.txt file is simple, but one wrong directive can damage your SEO or accidentally expose your site. 
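<\/p>\n\n\n\n<p>Sometimes that wrong directive is a single character. An empty Disallow value permits everything, while a lone slash blocks everything:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *\n# Empty value: all crawling is allowed\nDisallow:\n\nUser-agent: *\n# Lone slash: the entire site is blocked\nDisallow: \/<\/code><\/pre>\n\n\n\n<p>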
Here are the most frequent errors to watch out for:&nbsp;<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Blocking everything unintentionally<\/li>\n\n\n\n<li>Disallowing important content<\/li>\n\n\n\n<li>Using robots.txt for private content<\/li>\n\n\n\n<li>Skipping testing and validation&nbsp;<\/li>\n\n\n\n<li>Failing to update the file as your site evolves&nbsp;<\/li>\n<\/ol>\n\n\n\n<p>Let&#8217;s break down how to use robots.txt more effectively without making these common SEO-damaging mistakes.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-1-blocking-everything-unintentionally-nbsp\">1. Blocking everything unintentionally&nbsp;<\/h3>\n\n\n\n<p>Using this directive \u2018User-agent: * Disallow: \/&#8217; will stop all bots from crawling your entire site. This mistake is often described as \u201crobots.txt disallow everything\u201d or \u201crobots.txt deny all\u201d; both mean `User-agent: *` + `Disallow: \/`. Unless you&#8217;re working on a staging or test environment, this can wipe out your entire site from search engine results.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-2-disallowing-important-content-nbsp\">2. Disallowing important content&nbsp;<\/h3>\n\n\n\n<p>Blocking directories like \/blog\/, \/products\/ or \/services\/ can prevent your best content from being indexed. Always check your Disallow rules carefully, especially during migrations or redesigns, to avoid hiding high-value pages.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-3-using-robots-txt-for-private-content-nbsp\">3. Using robots.txt for private content&nbsp;<\/h3>\n\n\n\n<p>Robots.txt only tells bots not to crawl; it doesn&#8217;t stop them from indexing pages if those URLs are linked elsewhere. For private or sensitive content, use authentication, noindex meta tags or HTTP headers instead.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-4-skipping-testing-and-validation-nbsp\">4. 
Skipping testing and validation&nbsp;<\/h3>\n\n\n\n<p>Even a small typo (like a missing slash or colon) can break your file. Use tools like <a href=\"https:\/\/search.google.com\/search-console\/settings\/robots-txt\" target=\"_blank\" rel=\"noreferrer noopener\">Google\u2019s robots.txt report<\/a>, technicalseo.com or TametheBot\u2019s robots.txt testing tool to catch errors before they go live.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-5-failing-to-update-the-file-as-your-site-evolves-nbsp\">5. Failing to update the file as your site evolves&nbsp;<\/h3>\n\n\n\n<p>During a WordPress migration, forgetting to update your robots.txt file is a common and critical oversight. If the Disallow: \/ directive was used to block bots during development and not removed after launch, your entire site can remain invisible to search engines.&nbsp;<\/p>\n\n\n\n<p>That\u2019s why WordPress migration and robots.txt settings must be reviewed together, especially before relaunching or submitting your XML sitemap.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/common-robots.txt-mistakes-to-avoid.png\" alt=\"common robots.txt mistakes to avoid\"\/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-to-block-ai-bots-and-bad-web-crawlers-using-robots-txt\">How to block AI bots and bad web crawlers using robots.txt?<\/h2>\n\n\n\n<p>As AI tools become more widespread, AI crawlers like GPTBot, CCBot and ClaudeBot are actively scanning websites for content to use in training data. If you want to control whether your content is accessed by these bots, robots.txt gives you a starting point.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-blocking-ai-bots-using-robots-txt\">Blocking AI bots using robots.txt<\/h3>\n\n\n\n<p>Just like with search engine bots, you can target AI bots by their unique user agent names. 
Here&#8217;s how to block them:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: GPTBot\nDisallow: \/\n\nUser-agent: CCBot\nDisallow: \/\n\nUser-agent: ClaudeBot\nDisallow: \/\n\nUser-agent: Applebot-Extended\nDisallow: \/<\/code><\/pre>\n\n\n\n<p><strong>Pro Tip:<\/strong> Keep track of new AI crawler user agents (for example, PerplexityBot, MistralAI-User). Many publishers update robots.txt regularly to manage AI access in response to evolving bot behavior.&nbsp;<\/p>\n\n\n\n<p>If you only want to block specific folders (like your blog or learning resources), use path-based Disallow rules:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: GPTBot\nDisallow: \/blog\/\nDisallow: \/learn\/<\/code><\/pre>\n\n\n\n<p>You can also try pattern matching to catch AI-related user agents, but wildcards in User-agent lines are not part of the robots.txt standard, so most crawlers will ignore a rule like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>User-agent: *AI*\nDisallow: \/<\/code><\/pre>\n\n\n\n<p><strong>Note:<\/strong> There\u2019s no guarantee that all AI bots will respect your robots.txt file. Remember, disallowed pages can still be indexed if they are publicly accessible and linked by external sites.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-should-you-block-ai-bots\">Should you block AI bots?<\/h3>\n\n\n\n<p>Whether you block AI crawlers depends on your content strategy. Many news publishers and original content creators are taking a stance to protect their intellectual property. 
Others prefer allowing AI tools for broader content reach and experimentation.&nbsp;<\/p>\n\n\n\n<p>If you&#8217;re unsure, consider blocking only high-value or sensitive sections while leaving general resources accessible.&nbsp;<\/p>\n\n\n\n<p><strong>For full control:<\/strong> Combine Disallow rules with noindex meta tags or authentication to keep pages out of both training sets and search result pages.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-final-thoughts-nbsp-nbsp\">Final thoughts&nbsp;&nbsp;<\/h2>\n\n\n\n<p>A well-structured robots.txt file gives you control over how search engines interact with your website. But with great power comes great responsibility.&nbsp;<\/p>\n\n\n\n<p>Using directives like robots.txt Disallow all can be useful for staging sites or temporary blocks. But applying them incorrectly on a live site can wipe your visibility from Google in a matter of hours. That\u2019s why testing, validating and auditing your file regularly is important.&nbsp;<\/p>\n\n\n\n<p>Whether you&#8217;re blocking duplicate pages, guiding AI bots or fine-tuning your crawl budget, treat your robots.txt file as a critical part of your <a href=\"https:\/\/www.bluehost.com\/blog\/seo-basics\/\">SEO strategy<\/a>. You can also use the built-in SEO tools available in Bluehost\u2019s dashboard to optimize crawl behavior.&nbsp;<\/p>\n\n\n\n<p>Not sure if your site has SEO issues? Use our <a href=\"https:\/\/www.bluehost.com\/seo-checker\">free SEO Checker tool<\/a> to scan your website and get a full SEO audit report with actionable tips to improve your ranking.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-faqs\">FAQs<\/h2>\n\n\n\n<div class=\"schema-faq wp-block-yoast-faq-block\"><div class=\"schema-faq-section\" id=\"faq-question-1740044726265\"><strong class=\"schema-faq-question\"><strong>How important is robots.txt for SEO?<\/strong><\/strong> <p class=\"schema-faq-answer\">Robots.txt helps web crawlers understand which pages they may crawl. 
This affects your visibility on Google Search and your rankings.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1740044786219\"><strong class=\"schema-faq-question\"><strong>What are the risks of using robots.txt Disallow all?<\/strong><\/strong> <p class=\"schema-faq-answer\">Using robots.txt Disallow all can remove your pages from search results, causing traffic loss and SEO damage that takes time to recover from.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1749812112088\"><strong class=\"schema-faq-question\"><strong>Can I block specific bots?\u00a0<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">Yes, you can block specific bots by targeting their user-agent names in your robots.txt file. For example, if you want to block Bing&#8217;s crawler, add a Bingbot Disallow rule like:\u00a0User-agent: Bingbot Disallow: \/private-directory\/. This rule tells Bing\u2019s bot to stay out of that directory. Just remember: not all bots will obey the rules.\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1749812324898\"><strong class=\"schema-faq-question\"><strong>Is robots.txt legally enforceable?\u00a0<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">No, robots.txt is not legally enforceable; it follows the Robots Exclusion Protocol, which is a voluntary standard. While most reputable bots adhere to it, malicious or unauthorized bots can completely disregard it.\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1749812394712\"><strong class=\"schema-faq-question\"><strong>Will Disallow prevent indexing?\u00a0<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">Disallow does not equal noindex. Pages blocked via robots.txt can still be indexed if externally linked. 
Use both \u2018Disallow\u2019 and \u2018noindex\u2019 when you want control over both crawling and indexing.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1740044696012\"><strong class=\"schema-faq-question\">What does &#8216;Disallow all&#8217; do in robots.txt?<\/strong> <p class=\"schema-faq-answer\">&#8220;Disallow all&#8221; in robots.txt blocks all search engine bots from crawling any part of your site.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1740044893120\"><strong class=\"schema-faq-question\"><strong><strong>Can \u2018Disallow all\u2019 affect my site\u2019s SEO negatively?<\/strong>\u00a0<\/strong><\/strong> <p class=\"schema-faq-answer\">Yes, using &#8216;Disallow all&#8217; can hurt your SEO. It can make your site hard to find on Google and affect your visibility in Google Search Console.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1740045027389\"><strong class=\"schema-faq-question\"><strong>How do I reverse the effects of &#8216;Disallow all&#8217; on my website?<\/strong><\/strong> <p class=\"schema-faq-answer\">To reverse the &#8216;Disallow all&#8217; directive:\u00a0\u00a0<br\/>1. Remove \u2018Disallow: \/\u2019 from the robots.txt file.\u00a0\u00a0<br\/>2. Submit the updated robots.txt file in Google Search Console.\u00a0\u00a0<br\/>3. Resubmit the XML sitemap to help search engines rediscover your pages more quickly.<br\/>4. Monitor Google Search Console for crawl errors.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1740045126570\"><strong class=\"schema-faq-question\"><strong>Is &#8216;Disallow all&#8217; the best way to protect private content from search engines?\u00a0<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">No, robots.txt Disallow all is not a good way to keep private content safe. 
It is advisable to use robust security measures, such as passwords, for sensitive information.\u00a0\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1740045167445\"><strong class=\"schema-faq-question\"><strong>How frequently should I update my robots.txt file?<\/strong><\/strong> <p class=\"schema-faq-answer\">Check and update your robots.txt file after redesigning your website, moving content or making significant changes to your site layout. Ensure it aligns with your current SEO strategy and that your XML sitemap is linked correctly.\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1749812633591\"><strong class=\"schema-faq-question\"><strong>What is the difference between Disallow and Noindex?<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">Disallow prevents crawling; noindex prevents indexing. Use both for complete control over visibility.\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1749812677126\"><strong class=\"schema-faq-question\"><strong>Does Disallow: \/ mean my pages are invisible to Google?<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">No. If those pages are linked elsewhere, they might still appear in search results\u2014just not crawled.\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1749812756183\"><strong class=\"schema-faq-question\"><strong>How do I test if my robots.txt is working correctly?<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">Use Google Search Console\u2019s robots.txt tester or external tools like TametheBot or technicalseo.com.\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1749812857416\"><strong class=\"schema-faq-question\"><strong>How often should I update my robots.txt file?<\/strong>\u00a0<\/strong> <p class=\"schema-faq-answer\">After redesigns, content migrations, or structural SEO changes. 
Regular reviews help catch mistakes early.\u00a0<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1757678220920\"><strong class=\"schema-faq-question\"><strong>Do robots.txt rules need to be on a separate line?<\/strong><\/strong> <p class=\"schema-faq-answer\">Yes. Each directive (such as Disallow or Allow) must be written on a separate line. Otherwise, crawlers may misinterpret your instructions.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1757678245356\"><strong class=\"schema-faq-question\"><strong>Is robots.txt case sensitive?<\/strong><\/strong> <p class=\"schema-faq-answer\">Yes. Robots.txt is case sensitive, which means \/Images\/ and \/images\/ are treated as different directory names.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1757678270583\"><strong class=\"schema-faq-question\"><strong>Can I block only one group of pages instead of the whole site?<\/strong><\/strong> <p class=\"schema-faq-answer\">Yes. You can target only one group of pages by specifying their directory name or pattern. For example, blocking \/private\/ while allowing other pages to be crawled.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1757678303627\"><strong class=\"schema-faq-question\"><strong>Does robots.txt need to be on the same host as my site?<\/strong><\/strong> <p class=\"schema-faq-answer\">Yes. The robots.txt file must live on the same host as your website\u2019s home page. If it\u2019s missing, crawlers may assume the whole site is open to indexing.<\/p> <\/div> <div class=\"schema-faq-section\" id=\"faq-question-1757678333340\"><strong class=\"schema-faq-question\"><strong>Is \u201crobots.txt deny all\u201d the same as Disallow: \/?<\/strong><\/strong> <p class=\"schema-faq-answer\">The phrase robots.txt deny all is a plain-language way to describe the Disallow: \/ rule (for example User-agent: * + Disallow: \/). 
It blocks crawlers from crawling the entire site and should only be used on staging or private sites.<\/p> <\/div> <\/div>\n","protected":false},"excerpt":{"rendered":"<p>Confused about robots.txt disallow? Let&#8217;s break down syntax, examples, AI bot blocking and SEO practices with real use. <\/p>\n","protected":false},"author":110,"featured_media":233297,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_yoast_wpseo_title":"Robots.txt Disallow Explained: Syntax, Use Cases & SEO Best Practices","_yoast_wpseo_metadesc":"Learn how to use robots.txt disallow to block bots, control crawling and improve SEO. Includes examples, syntax, AI bot blocking and best practices.","inline_featured_image":false,"footnotes":""},"categories":[3046,1345],"tags":[3330,3340],"ppma_author":[662],"class_list":["post-122674","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-development","category-website","tag-how-to-guides","tag-tips-tricks"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.1 (Yoast SEO v27.1.1) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best Practices<\/title>\n<meta name=\"description\" content=\"Learn how to use robots.txt disallow to block bots, control crawling and improve SEO. 
Includes examples, syntax, AI bot blocking and best practices.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/posts\/122674\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best Practices\" \/>\n<meta property=\"og:description\" content=\"Learn how to use robots.txt disallow to block bots, control crawling and improve SEO. Includes examples, syntax, AI bot blocking and best practices.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/\" \/>\n<meta property=\"og:site_name\" content=\"Bluehost Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/bluehost\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-12T12:00:19+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-16T03:59:44+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1100\" \/>\n\t<meta property=\"og:image:height\" content=\"620\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Jyoti Saxena\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@bluehost\" \/>\n<meta name=\"twitter:site\" content=\"@bluehost\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Jyoti Saxena\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"28 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/\"},\"author\":{\"name\":\"Jyoti Saxena\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/#\/schema\/person\/6d68c86eff8903098d5714c6064007d1\"},\"headline\":\"Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best Practices\",\"datePublished\":\"2025-09-12T12:00:19+00:00\",\"dateModified\":\"2026-02-16T03:59:44+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/\"},\"wordCount\":5798,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png\",\"keywords\":[\"How-To Guides\",\"Tips &amp; Tricks\"],\"articleSection\":[\"Development\",\"Website\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#respond\"]}]},{\"@type\":[\"WebPage\",\"FAQPage\"],\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/\",\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/\",\"name\":\"Robots.txt Disallow Explained: Syntax, Use Cases & SEO Best 
Practices\",\"isPartOf\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png\",\"datePublished\":\"2025-09-12T12:00:19+00:00\",\"dateModified\":\"2026-02-16T03:59:44+00:00\",\"description\":\"Learn how to use robots.txt disallow to block bots, control crawling and improve SEO. Includes examples, syntax, AI bot blocking and best practices.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#breadcrumb\"},\"mainEntity\":[{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044726265\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044786219\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812112088\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812324898\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812394712\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044696012\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044893120\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045027389\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045126570\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045167445\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812633591\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#f
aq-question-1749812677126\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812756183\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812857416\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678220920\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678245356\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678270583\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678303627\"},{\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678333340\"}],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage\",\"url\":\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png\",\"contentUrl\":\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png\",\"width\":1100,\"height\":620,\"caption\":\"Robots.txt Disallow All_ What It Means and When to Use It_header\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.bluehost.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Website\",\"item\":\"https:\/\/www.bluehost.com\/blog\/category\/website\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best 
Practices\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/#website\",\"url\":\"https:\/\/www.bluehost.com\/blog\/\",\"name\":\"Bluehost\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.bluehost.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/#organization\",\"name\":\"Bluehost\",\"url\":\"https:\/\/www.bluehost.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2023\/08\/bluehost-logo.svg\",\"contentUrl\":\"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2023\/08\/bluehost-logo.svg\",\"width\":136,\"height\":24,\"caption\":\"Bluehost\"},\"image\":{\"@id\":\"https:\/\/www.bluehost.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/bluehost\/\",\"https:\/\/x.com\/bluehost\",\"https:\/\/www.linkedin.com\/company\/bluehost-com\/\",\"https:\/\/www.youtube.com\/user\/bluehost\",\"https:\/\/en.wikipedia.org\/wiki\/Bluehost\"],\"description\":\"Bluehost is a leading web hosting provider empowering millions of websites worldwide. 
\\u2028Discover how Bluehost's expertise, reliability, and innovation can help you achieve your online goals.\",\"telephone\":\"+1-888-401-4678\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/#\/schema\/person\/6d68c86eff8903098d5714c6064007d1\",\"name\":\"Jyoti Saxena\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/#\/schema\/person\/image\/83702bc8c658b2e029089fde0e4a14d1\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/c74ba415c88dbae52eb00fea7fb0b33b08ec4b4fc22607e55bfe585e3304671c?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/c74ba415c88dbae52eb00fea7fb0b33b08ec4b4fc22607e55bfe585e3304671c?s=96&d=mm&r=g\",\"caption\":\"Jyoti Saxena\"},\"description\":\"Jyoti is a storyteller at heart, weaving words that make tech and eCommerce feel less like a maze and more like an adventure. With a cup of chai in one hand and curiosity in the other, Jyoti turns complex ideas into conversations you actually want to have.\",\"url\":\"https:\/\/www.bluehost.com\/blog\/author\/jyoti\/\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044726265\",\"position\":1,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044726265\",\"name\":\"How important is robots.txt for SEO?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Robots.txt helps web crawlers understand which pages to index. 
This affects your visibility on Google Search and your rankings.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044786219\",\"position\":2,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044786219\",\"name\":\"What are the risks of using robots.txt Disallow all?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Using robots.txt Disallow all can remove your pages from search results, causing traffic loss and SEO damage that takes time to recover from.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812112088\",\"position\":3,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812112088\",\"name\":\"Can I block specific bots?\u00a0\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Yes, you can block specific bots by targeting their user-agent names in your robots.txt file. For example, if you want to block Bing's crawler, add a Bingbot Disallow rule like:\u00a0User-agent: Bingbot Disallow: \/private-directory\/ Bingbot disallow tells that bot not to crawl your site. Just remember: not all bots will obey the rules.\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812324898\",\"position\":4,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812324898\",\"name\":\"Is robots.txt legally enforceable?\u00a0\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"No, robots.txt is not legally enforceable and follows Robots Exclusion Protocol. 
While most reputable bots adhere to it, malicious or unauthorized bots can completely disregard it.\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812394712\",\"position\":5,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812394712\",\"name\":\"Will Disallow prevent indexing?\u00a0\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Disallow does not equal noindex. Pages blocked via robots.txt can still be indexed if externally linked. Use both \u2018Disallow\u2019 and \u2018noindex\u2019 when you want control over both crawling and indexing.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044696012\",\"position\":6,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044696012\",\"name\":\"What does 'Disallow all' do in robots.txt?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"\\\"Disallow all\\\" in robots.txt blocks all search engine bots from crawling any part of your site.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044893120\",\"position\":7,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044893120\",\"name\":\"Can \u2018Disallow all\u2019 affect my site\u2019s SEO negatively?\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Yes, using 'Disallow all' can hurt your SEO. 
It can make your site hard to find on Google and affect your visibility in Google Search Console.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045027389\",\"position\":8,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045027389\",\"name\":\"How do I reverse the effects of 'Disallow all' on my website?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"To reverse the 'Disallow all' directive:\u00a0\u00a0<br\/>1. Remove \u2018Disallow: \/\u2019 from the robots.txt file.\u00a0\u00a0<br\/>2. Submit the updated robots.txt file in Google Search Console.\u00a0\u00a0<br\/>3. Resubmit the XML sitemap to help search engines rediscover your pages more quickly.<br\/>4. Monitor Google Search Console for crawl errors.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045126570\",\"position\":9,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045126570\",\"name\":\"Is 'Disallow all' the best way to protect private content from search engines?\u00a0\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"No, robots.txt Disallow all is not a good way to keep private content safe. 
It is advisable to use robust security measures, such as passwords, for sensitive information.\u00a0\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045167445\",\"position\":10,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045167445\",\"name\":\"How frequently should I update my robots.txt file?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Check and update your robots.txt file after redesigning your website, moving content or making significant changes to your site layout. Ensure it aligns with your current SEO strategy and that your XML sitemap is linked correctly.\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812633591\",\"position\":11,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812633591\",\"name\":\"What is the difference between Disallow and Noindex?\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Disallow prevents crawling; noindex prevents indexing. Use both for complete control over visibility.\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812677126\",\"position\":12,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812677126\",\"name\":\"Does Disallow: \/ mean my pages are invisible to Google?\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"No. 
If those pages are linked elsewhere, they might still appear in search results\u2014just not crawled.\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812756183\",\"position\":13,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812756183\",\"name\":\"How do I test if my robots.txt is working correctly?\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Use Google Search Console\u2019s robots.txt tester or external tools like TametheBot or technicalseo.com.\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812857416\",\"position\":14,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812857416\",\"name\":\"How often should I update my robots.txt file?\u00a0\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"After redesigns, content migrations, or structural SEO changes. Regular reviews help catch mistakes early.\u00a0\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678220920\",\"position\":15,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678220920\",\"name\":\"Do robots.txt rules need to be on a separate line?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Yes. Each directive (such as Disallow or Allow) must be written on a separate line. 
Otherwise, crawlers may misinterpret your instructions.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678245356\",\"position\":16,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678245356\",\"name\":\"Is robots.txt case sensitive?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Yes. Robots.txt is case sensitive, which means \/Images\/ and \/images\/ are treated as different directory names.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678270583\",\"position\":17,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678270583\",\"name\":\"Can I block only one group of pages instead of the whole site?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Yes. You can target only one group of pages by specifying their directory name or pattern. For example, blocking \/private\/ while allowing other pages to be crawled.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678303627\",\"position\":18,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678303627\",\"name\":\"Does robots.txt need to be on the same host as my site?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"Yes. The robots.txt file must live on the same host as your website\u2019s home page. 
If it\u2019s missing, crawlers may assume the whole site is open to indexing.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"},{\"@type\":\"Question\",\"@id\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678333340\",\"position\":19,\"url\":\"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678333340\",\"name\":\"Is \u201crobots.txt deny all\u201d the same as Disallow: \/?\",\"answerCount\":1,\"acceptedAnswer\":{\"@type\":\"Answer\",\"text\":\"The phrase robots.txt deny all is a plain-language way to describe the Disallow: \/ rule (for example User-agent: * + Disallow: \/). It blocks crawlers from crawling the entire site and should only be used on staging or private sites.\",\"inLanguage\":\"en-US\"},\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Robots.txt Disallow Explained: Syntax, Use Cases & SEO Best Practices","description":"Learn how to use robots.txt disallow to block bots, control crawling and improve SEO. Includes examples, syntax, AI bot blocking and best practices.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/posts\/122674\/","og_locale":"en_US","og_type":"article","og_title":"Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best Practices","og_description":"Learn how to use robots.txt disallow to block bots, control crawling and improve SEO. 
Includes examples, syntax, AI bot blocking and best practices.","og_url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/","og_site_name":"Bluehost Blog","article_publisher":"https:\/\/www.facebook.com\/bluehost\/","article_published_time":"2025-09-12T12:00:19+00:00","article_modified_time":"2026-02-16T03:59:44+00:00","og_image":[{"width":1100,"height":620,"url":"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png","type":"image\/png"}],"author":"Jyoti Saxena","twitter_card":"summary_large_image","twitter_creator":"@bluehost","twitter_site":"@bluehost","twitter_misc":{"Written by":"Jyoti Saxena","Est. reading time":"28 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#article","isPartOf":{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/"},"author":{"name":"Jyoti Saxena","@id":"https:\/\/www.bluehost.com\/blog\/#\/schema\/person\/6d68c86eff8903098d5714c6064007d1"},"headline":"Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best Practices","datePublished":"2025-09-12T12:00:19+00:00","dateModified":"2026-02-16T03:59:44+00:00","mainEntityOfPage":{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/"},"wordCount":5798,"commentCount":0,"publisher":{"@id":"https:\/\/www.bluehost.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage"},"thumbnailUrl":"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png","keywords":["How-To Guides","Tips &amp; 
Tricks"],"articleSection":["Development","Website"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#respond"]}]},{"@type":["WebPage","FAQPage"],"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/","url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/","name":"Robots.txt Disallow Explained: Syntax, Use Cases & SEO Best Practices","isPartOf":{"@id":"https:\/\/www.bluehost.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage"},"image":{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage"},"thumbnailUrl":"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png","datePublished":"2025-09-12T12:00:19+00:00","dateModified":"2026-02-16T03:59:44+00:00","description":"Learn how to use robots.txt disallow to block bots, control crawling and improve SEO. 
Includes examples, syntax, AI bot blocking and best practices.","breadcrumb":{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#breadcrumb"},"mainEntity":[{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044726265"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044786219"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812112088"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812324898"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812394712"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044696012"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044893120"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045027389"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045126570"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045167445"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812633591"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812677126"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812756183"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812857416"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678220920"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678245356"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678270583"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678303627"},{"@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1
757678333340"}],"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#primaryimage","url":"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png","contentUrl":"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2025\/06\/Robots.txt-Disallow-All_-What-It-Means-and-When-to-Use-It.png","width":1100,"height":620,"caption":"Robots.txt Disallow All_ What It Means and When to Use It_header"},{"@type":"BreadcrumbList","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.bluehost.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Website","item":"https:\/\/www.bluehost.com\/blog\/category\/website\/"},{"@type":"ListItem","position":3,"name":"Robots.txt Disallow Explained: Syntax, Use Cases &amp; SEO Best 
Practices"}]},{"@type":"WebSite","@id":"https:\/\/www.bluehost.com\/blog\/#website","url":"https:\/\/www.bluehost.com\/blog\/","name":"Bluehost","description":"","publisher":{"@id":"https:\/\/www.bluehost.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.bluehost.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.bluehost.com\/blog\/#organization","name":"Bluehost","url":"https:\/\/www.bluehost.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.bluehost.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2023\/08\/bluehost-logo.svg","contentUrl":"https:\/\/www.bluehost.com\/blog\/wp-content\/uploads\/2023\/08\/bluehost-logo.svg","width":136,"height":24,"caption":"Bluehost"},"image":{"@id":"https:\/\/www.bluehost.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/bluehost\/","https:\/\/x.com\/bluehost","https:\/\/www.linkedin.com\/company\/bluehost-com\/","https:\/\/www.youtube.com\/user\/bluehost","https:\/\/en.wikipedia.org\/wiki\/Bluehost"],"description":"Bluehost is a leading web hosting provider empowering millions of websites worldwide. 
\u2028Discover how Bluehost's expertise, reliability, and innovation can help you achieve your online goals.","telephone":"+1-888-401-4678"},{"@type":"Person","@id":"https:\/\/www.bluehost.com\/blog\/#\/schema\/person\/6d68c86eff8903098d5714c6064007d1","name":"Jyoti Saxena","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.bluehost.com\/blog\/#\/schema\/person\/image\/83702bc8c658b2e029089fde0e4a14d1","url":"https:\/\/secure.gravatar.com\/avatar\/c74ba415c88dbae52eb00fea7fb0b33b08ec4b4fc22607e55bfe585e3304671c?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/c74ba415c88dbae52eb00fea7fb0b33b08ec4b4fc22607e55bfe585e3304671c?s=96&d=mm&r=g","caption":"Jyoti Saxena"},"description":"Jyoti is a storyteller at heart, weaving words that make tech and eCommerce feel less like a maze and more like an adventure. With a cup of chai in one hand and curiosity in the other, Jyoti turns complex ideas into conversations you actually want to have.","url":"https:\/\/www.bluehost.com\/blog\/author\/jyoti\/"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044726265","position":1,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044726265","name":"How important is robots.txt for SEO?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Robots.txt helps web crawlers understand which pages to index. 
This affects your visibility on Google Search and your rankings.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044786219","position":2,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044786219","name":"What are the risks of using robots.txt Disallow all?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Using robots.txt Disallow all can remove your pages from search results, causing traffic loss and SEO damage that takes time to recover from.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812112088","position":3,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812112088","name":"Can I block specific bots?\u00a0\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Yes, you can block specific bots by targeting their user-agent names in your robots.txt file. For example, if you want to block Bing's crawler, add a Bingbot Disallow rule like:\u00a0User-agent: Bingbot Disallow: \/private-directory\/ Bingbot disallow tells that bot not to crawl your site. Just remember: not all bots will obey the rules.\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812324898","position":4,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812324898","name":"Is robots.txt legally enforceable?\u00a0\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"No, robots.txt is not legally enforceable and follows Robots Exclusion Protocol. 
While most reputable bots adhere to it, malicious or unauthorized bots can completely disregard it.\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812394712","position":5,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812394712","name":"Will Disallow prevent indexing?\u00a0\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Disallow does not equal noindex. Pages blocked via robots.txt can still be indexed if externally linked. Use both \u2018Disallow\u2019 and \u2018noindex\u2019 when you want control over both crawling and indexing.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044696012","position":6,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044696012","name":"What does 'Disallow all' do in robots.txt?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"\"Disallow all\" in robots.txt blocks all search engine bots from crawling any part of your site.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044893120","position":7,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740044893120","name":"Can \u2018Disallow all\u2019 affect my site\u2019s SEO negatively?\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Yes, using 'Disallow all' can hurt your SEO. 
It can make your site hard to find on Google and affect your visibility in Google Search Console.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045027389","position":8,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045027389","name":"How do I reverse the effects of 'Disallow all' on my website?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"To reverse the 'Disallow all' directive:\u00a0\u00a0<br\/>1. Remove \u2018Disallow: \/\u2019 from the robots.txt file.\u00a0\u00a0<br\/>2. Submit the updated robots.txt file in Google Search Console.\u00a0\u00a0<br\/>3. Resubmit the XML sitemap to help search engines rediscover your pages more quickly.<br\/>4. Monitor Google Search Console for crawl errors.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045126570","position":9,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045126570","name":"Is 'Disallow all' the best way to protect private content from search engines?\u00a0\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"No, robots.txt Disallow all is not a good way to keep private content safe. It is advisable to use robust security measures, such as passwords, for sensitive information.\u00a0\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045167445","position":10,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1740045167445","name":"How frequently should I update my robots.txt file?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Check and update your robots.txt file after redesigning your website, moving content or making significant changes to your site layout. 
Ensure it aligns with your current SEO strategy and that your XML sitemap is linked correctly.\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812633591","position":11,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812633591","name":"What is the difference between Disallow and Noindex?\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Disallow prevents crawling; noindex prevents indexing. A noindex tag only works if crawlers can fetch the page, so avoid disallowing a URL you want removed from the index.\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812677126","position":12,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812677126","name":"Does Disallow: \/ mean my pages are invisible to Google?\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"No. 
If those pages are linked from elsewhere, they can still appear in search results, usually without a description, because Google cannot crawl their content.\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812756183","position":13,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812756183","name":"How do I test if my robots.txt is working correctly?\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Use the robots.txt report in Google Search Console or external tools like TametheBot or technicalseo.com.\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812857416","position":14,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1749812857416","name":"How often should I update my robots.txt file?\u00a0","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"After redesigns, content migrations, or structural SEO changes. Regular reviews help catch mistakes early.\u00a0","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678220920","position":15,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678220920","name":"Do robots.txt rules need to be on a separate line?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Yes. Each directive (such as Disallow or Allow) must be written on a separate line. 
Otherwise, crawlers may misinterpret your instructions.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678245356","position":16,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678245356","name":"Is robots.txt case sensitive?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Yes. Robots.txt is case sensitive, which means \/Images\/ and \/images\/ are treated as different directory names.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678270583","position":17,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678270583","name":"Can I block only one group of pages instead of the whole site?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Yes. You can target only one group of pages by specifying their directory name or pattern. For example, blocking \/private\/ while allowing other pages to be crawled.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678303627","position":18,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678303627","name":"Does robots.txt need to be on the same host as my site?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"Yes. The robots.txt file must live on the same host as your website\u2019s home page. 
If it\u2019s missing, crawlers assume the whole site is open to crawling.","inLanguage":"en-US"},"inLanguage":"en-US"},{"@type":"Question","@id":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678333340","position":19,"url":"https:\/\/www.bluehost.com\/blog\/robots-txt-disallow-all\/#faq-question-1757678333340","name":"Is \u201crobots.txt deny all\u201d the same as Disallow: \/?","answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"The phrase robots.txt deny all is a plain-language way to describe the Disallow: \/ rule (for example User-agent: * + Disallow: \/). It blocks bots from crawling the entire site and should only be used on staging or private sites.","inLanguage":"en-US"},"inLanguage":"en-US"}]}},"authors":[{"term_id":662,"user_id":110,"is_guest":0,"slug":"jyoti","display_name":"Jyoti Saxena","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/c74ba415c88dbae52eb00fea7fb0b33b08ec4b4fc22607e55bfe585e3304671c?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":"","9":"","10":"","11":"","12":"","13":"","14":"","15":""}],"_links":{"self":[{"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/posts\/122674","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/users\/110"}],"replies":[{"embeddable":true,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/comments?post=122674"}],"version-history":[{"count":3,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/posts\/122674\/revisions"}],"predecessor-version":[{"id":265694,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/posts\/122674\/revisions\/265694"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/media\/233297"}],"wp:attachment":
[{"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/media?parent=122674"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/categories?post=122674"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/tags?post=122674"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.bluehost.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=122674"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}