SEO, in its most fundamental sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
But nearly every website has pages that you don’t want included in this exploration.
In the best case, these pages do nothing to actively drive traffic to your website; in the worst, they divert traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should certainly read.
But in high-level terms, it’s a plain text file that lives in your site’s root and follows the Robots Exclusion Protocol (REP).
Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags contain directions for specific pages.
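As a quick illustration, a minimal robots.txt might look like this (the paths and sitemap URL here are placeholders):

```
User-agent: *
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml
```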
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
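These directives live in a meta tag in the page’s head. For example, to keep a page out of the index while still letting crawlers follow its links:

```html
<meta name="robots" content="noindex, follow">
```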
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way to control how your web pages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it can control indexing for an entire page, as well as for specific elements on that page.
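To make that concrete, the directive travels as a header in the HTTP response rather than in the page’s HTML. A response for a PDF might look something like this (the values shown are illustrative):

```http
HTTP/1.1 200 OK
Date: Tue, 25 Jan 2022 21:42:43 GMT
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```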
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complex.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can set indexing directives with both the meta robots tag and the X-Robots-Tag, there are specific situations where you would want to use the X-Robots-Tag – the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to use a comma-separated list of directives.
Perhaps you don’t want a certain page to be cached and also want it to be unavailable after a particular date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
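Served as a single header, that combination might look like this (the date is a placeholder):

```http
X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST
```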
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using the X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as to apply directives on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here’s a handy cheat sheet:
|Crawler Directives|Indexer Directives|
|---|---|
|Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and where they are not.|Meta robots tag – allows you to specify which pages search engines should show (or not show) in search results.|
||Nofollow – allows you to specify links that should not pass on authority or PageRank.|
||X-Robots-Tag – allows you to control how specified file types are indexed.|
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or an .htaccess file.
The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we want search engines not to index .pdf file types. This setup on Apache servers would look something like the below:
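A minimal sketch in an Apache configuration or .htaccess file, assuming the mod_headers module is enabled:

```apache
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```

The pattern matches any URL ending in .pdf, and the header is attached to every matching response.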
In Nginx, it would look like the below:
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
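One way to sketch this, again in an Apache configuration with mod_headers enabled (adjust the extension list as needed):

```apache
<FilesMatch "\.(jpg|jpeg|gif|png)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```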
Please note that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens if crawler bots find both an X-Robots-Tag and a meta robots tag when they discover a URL?
If that URL is blocked from crawling via robots.txt, then any indexing and serving directives on the page or in its headers cannot be discovered and will not be followed.
So if directives are to be followed, the URLs containing them cannot be disallowed from crawling.
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest is to install a browser extension that shows you X-Robots-Tag information for the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is in use is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
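If you would rather check headers outside the browser, a short script can fetch them directly. Below is a minimal sketch using Python’s standard library; the helper names and the URL are illustrative, and note that a response may include more than one X-Robots-Tag header, so values are merged:

```python
from urllib.request import Request, urlopen

def x_robots_directives(headers):
    """Collect X-Robots-Tag directives from (name, value) header pairs.

    The header may appear more than once in a response, so all
    occurrences are merged into a single list of directives.
    """
    directives = []
    for name, value in headers:
        if name.lower() == "x-robots-tag":
            directives.extend(part.strip() for part in value.split(","))
    return directives

def fetch_directives(url):
    """Issue a HEAD request and return any X-Robots-Tag directives."""
    with urlopen(Request(url, method="HEAD")) as response:
        return x_robots_directives(response.getheaders())
```

Calling fetch_directives("https://www.example.com/file.pdf") against a server configured as above would return a list such as ['noindex', 'nofollow']; an empty list means the header is not being sent.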
Another method, one that scales well enough to identify issues on websites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022
Using X-Robots-Tags On Your Website
Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.
Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.
That said, if you’re reading this piece, you’re probably not an SEO beginner. As long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.
More Resources:
Featured Image: Song_about_summer/