
Google says Goodbye to undocumented and unsupported rules in Robots.txt

Google Webmasters made an announcement earlier today: Google is retiring all code that handles unsupported and unpublished rules in robots.txt. Googlebot will no longer obey the noindex directive in a robots.txt file, because it was never an official directive. Publishers who rely on the robots.txt noindex directive have until September 1, 2019 to move to a supported alternative.

The Google Webmasters announcement

Gary Illyes, a webmaster trends analyst at Google, ran the analysis of noindex usage in robots.txt that he had promised.

Googlebot will no longer support the noindex directive in robots.txt.

In answer to a question about why there is no code handler for rules like crawl-delay, he said:

While open-sourcing their robots.txt parser library, Google analyzed the usage of robots.txt rules. They focused on rules unsupported by the internet draft, including crawl-delay, nofollow, and noindex. Because these rules were never documented by Google, their usage in relation to Googlebot is very low, and where they do appear they tend to hurt a website's presence in the SERPs in ways the site owner probably did not intend.
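To illustrate what is going away, a robots.txt file using these unsupported rules might have looked like the lines below (a hypothetical example; the paths are placeholders). After the September 1, 2019 deadline, Googlebot will simply ignore such lines:

    User-agent: *
    Crawl-delay: 10
    Noindex: /private/
    Nofollow: /archive/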

Alternative options for publishers who relied on the noindex directive in the robots.txt file are suggested in the Google Webmaster Central Blog.
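For reference, the supported ways to keep a page out of Google's index include a robots meta tag in the page's HTML or an X-Robots-Tag HTTP response header, alongside options such as Disallow rules, 404/410 status codes, password protection, and the Search Console Remove URL tool. The snippets below are generic sketches, not taken from the announcement:

    <meta name="robots" content="noindex">

    X-Robots-Tag: noindex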


About the Author:

Sakshi Singh
Sakshi Singh is a content marketing specialist at Justgoweb Digital. She has a background in content marketing, storytelling and journalism. Sakshi's editorial focus blends digital marketing and creative strategy with topics like SEO, SEM, and display advertising.