Google Webmasters made an announcement earlier today: Google is retiring all code that handles unsupported and unpublished rules in robots.txt. Googlebot will no longer obey the noindex directive in robots.txt, because it was never an official directive. Publishers who rely on the robots.txt noindex directive have until September 1, 2019 to switch to an alternative.
Gary Illyes, a Webmaster Trends Analyst at Google, ran the promised analysis of noindex usage in robots.txt.
In answer to a question about why there is no code handler for rules like crawl-delay, he explained that while open-sourcing its robots.txt parser library, Google analyzed how robots.txt rules are actually used. The team focused on rules unsupported by the Internet Draft, including crawl-delay, nofollow, and noindex. They found that because these rules were never documented by Google, their usage in relation to Googlebot is minimal, and where they are used, the mistakes hurt websites' presence in Google's search results.
Alternative options for publishers who relied on the noindex directive in robots.txt are suggested on the Google Webmaster Central Blog, such as the robots meta tag and the X-Robots-Tag HTTP header.
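As a rough sketch of the migration, the deprecated robots.txt line and the two supported noindex mechanisms look like this (the `/private/` path is illustrative, not from the announcement):

```
# robots.txt — DEPRECATED: Googlebot ignores this rule after September 1, 2019
User-agent: *
Noindex: /private/

# Supported alternative 1: robots meta tag in the page's <head>
<meta name="robots" content="noindex">

# Supported alternative 2: X-Robots-Tag HTTP response header
#   (useful for non-HTML resources such as PDFs)
X-Robots-Tag: noindex
```

Note that for either alternative to work, the page must remain crawlable: if robots.txt disallows the URL, Googlebot never fetches it and never sees the noindex signal.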