Google to Webmasters: Remove Noindex From Robots.txt Files

The search engine giant wants website owners and webmasters to clean up all “noindex” rules in their robots.txt files.

If you handle an SEO campaign, you have probably already heard from Google. Digital marketers, SEO specialists and virtually everyone with access to Google Search Console have been notified by Google to start removing noindex directives from their robots.txt files.

Effective September 1, 2019, all noindex directives must be gone from robots.txt files. Google announced this change back in July. If you have not heard from them yet, let this article serve as a heads-up.

Should you care about this?

Of course you should, if you care about your SERP rankings. A noindex rule in robots.txt told Google not to index a particular page on your website. If you have pages you want kept out of search results, that worked to your advantage.

But come September 1, noindex statements left in your robots.txt file will simply be ignored. The updated crawling system will treat every page on your website as eligible for indexing, including those you hid through robots.txt noindex. Because this is a new change, there is no knowing how much it will affect your rankings if you fail to clean up those statements.
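
For reference, an unsupported noindex rule in a robots.txt file typically looked like the snippet below (the path here is a hypothetical example). These are the lines Google wants removed:

    User-agent: *
    Noindex: /private-page/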

That is why it is important to implement the necessary changes early on. Note that crawl-delay and nofollow rules in robots.txt are being retired in the same update, so they are not safe substitutes. Explore the supported methods below so you can keep hiding pages without depending on noindex statements.

Alternatives to noindex

Rest assured, there are plenty of alternatives out there. Granted, robots.txt noindex was a convenient way of hiding URLs from Google's crawlers, but a page can be hidden in many other ways.

Below is a list of options for your consideration:

(1) Hide your page behind a login. Google does not crawl or index content that requires a username and password.

(2) Remove a URL through the Remove URLs tool in Search Console. It lets you temporarily block the URL from appearing on search engine results pages.

(3) Use Disallow rules instead. This directive prevents Google from crawling a page. If the URL in question is linked from pages outside your domain, Google may still index it without crawling it, but its overall visibility will greatly decline. (A sample rule appears after this list.)

(4) HTTP status codes 404 and 410 also work. They tell Google that a particular page no longer exists. Once the search engine processes this information, it drops the page from its index and stops crawling it. (A server configuration sketch appears after this list.)

(5) Add the noindex directive as a robots meta tag in the page's HTML, or as an X-Robots-Tag HTTP header, instead of in the robots.txt file. (Both forms are shown after this list.)
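
As a sketch of option (3), a Disallow rule lives in the same robots.txt file and blocks crawling of whatever path you specify (the path below is a hypothetical example):

    User-agent: *
    Disallow: /private-page/

Keep in mind that Disallow only blocks crawling; a URL that is linked from other sites can still be indexed without its content.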
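
For option (4), the change happens at the server rather than in robots.txt. As a rough sketch, assuming an nginx server and a hypothetical path, the configuration could look like this (Apache, IIS and other servers have their own equivalents):

    location /retired-page/ {
        return 410;  # "Gone" - tells crawlers the page was removed deliberately
    }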
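
For option (5), the noindex directive moves out of robots.txt and into the page itself as a robots meta tag, or into the server's response as an X-Robots-Tag HTTP header. Both forms remain fully supported by Google:

    <!-- in the page's HTML head -->
    <meta name="robots" content="noindex">

    # or sent as an HTTP response header
    X-Robots-Tag: noindex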

How soon should you remove noindex directives?

The sooner, the better. This is also a chance to streamline your website by weeding out pages you no longer need. Redundant pages that only take up space or could put your rankings in jeopardy may need to be removed ASAP.

If you think such pages can still be of use in the future, then you can use any of the noindex alternatives listed above.


Do you need a new website design or a boost in your search rankings? We can help you! 

You can talk to our in-house SEO specialists or visit the following pages to learn more about our SEO services in Australia: (1) Click here if you're looking for web design and development services. (2) Click here to start a comprehensive digital marketing campaign. (3) Click here for more information about SEO strategies.

Article originally posted on Search Engine Land. 

AUTHOR:

Arthur Choi

SEO Strategist

70% technical SEO, 30% offsite optimisation. Loves to strategise and build organic SEO campaigns for plumbers, excavators and e-commerce clients.
