Google Search Console: What Does it Mean When Your Pages Are Crawled - Currently Not Indexed?

Brandon Lazovic | May 1, 2021

Google Search Console is a great tool for any business owner. It provides valuable information about how your website performs in Google's search results and tells you when and why pages are not indexed. This article will explore what it means when your web pages are crawled but not indexed by Google, as well as some ways to get them re-crawled and back into the index so they can start ranking.

What Does It Mean When A Page Is Crawled But Not Indexed By Google?

The indexing status in Google Search Console tells you which of your pages Google has crawled but hasn't indexed. It doesn't tell you whether those pages eventually will be indexed, or how to fix the problems that are keeping them out of the index as part of your search engine optimization efforts.

However, we can infer that:

  1. Google was able to crawl the page and took the time to do so
  2. After crawling the page, Google decided not to index it in the search results

Reasons Why A Page Might Not Be Indexed

There are many reasons why a page may not be indexed in the search results by Google. Below are a few reasons why this may be happening to your site pages:

Bad Internal Linking

When a page is crawled but not indexed by Google, it often has too few internal links pointing to it. Add more internal links from pages higher in your website's architecture; these will help drive more link equity and traffic to those important pages and signal to Google that they're worth indexing.
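As a rough illustration (not from the original article), here's a minimal Python sketch that tallies inbound internal links per URL from a crawl export such as Screaming Frog's "All Inlinks" report. The file name and column headers are assumptions, so adjust them to match your export; pages near the top of the output are good candidates for additional internal links.

```python
import csv
from collections import Counter

# Count inbound internal links per destination URL from a crawl export.
# "all_inlinks.csv" and the "Source"/"Destination" column names are
# assumptions -- rename them to match whatever your crawler exports.
inbound = Counter()
with open("all_inlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        inbound[row["Destination"]] += 1

# Surface the pages with the fewest inbound internal links first.
for url, count in sorted(inbound.items(), key=lambda item: item[1])[:20]:
    print(f"{count:>4}  {url}")
```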

Poor Quality Content

Pages that aren't being indexed may have been put on the back burner because Google doesn't see them as very valuable. Make sure you've covered all of your customers' needs and are providing content that's more useful and higher quality than competing pages in order to get these indexed.

Duplicate Content

These pages may be near-duplicates in terms of content compared to other site pages (this often happens with e-commerce product pages). To rectify this, add canonical tags to your preferred version of the web page.
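To make the canonical fix easier to audit, here's a small Python sketch (my own illustration, not part of the original article) that reports which canonical URL each page declares, using only the standard library; the page URLs are placeholders.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    """Collects the href of any <link rel="canonical"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = attrs.get("href")

# Placeholder URLs -- swap in the near-duplicate pages you want to audit.
pages = [
    "https://example.com/product-red",
    "https://example.com/product-red?ref=homepage",
]

for url in pages:
    parser = CanonicalFinder()
    with urlopen(url) as response:
        parser.feed(response.read().decode("utf-8", errors="replace"))
    print(f"{url} -> canonical: {parser.canonical}")
```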

Click-depth Is Too Long

If essential pages sit too many clicks away from the homepage, they remain relatively unknown and waste crawl budget. Keeping pages within 4 clicks of the homepage is a good rule of thumb to ensure Google sees them as important; it also helps user navigation, so visitors can find and discover these pages easily on your website.
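If you'd rather measure click depth than eyeball it, the sketch below (my own illustration, not the article's tooling) runs a small breadth-first crawl from the homepage and flags pages deeper than four clicks. The domain is a placeholder, and the page cap keeps the crawl polite.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def click_depths(homepage, max_pages=200):
    """Breadth-first crawl from the homepage, recording each internal
    page's click depth. max_pages keeps this sketch from crawling forever."""
    site = urlparse(homepage).netloc
    depths = {homepage: 0}
    queue = deque([homepage])
    while queue and len(depths) < max_pages:
        url = queue.popleft()
        try:
            with urlopen(url) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that time out or error
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href).split("#")[0]
            if urlparse(absolute).netloc == site and absolute not in depths:
                depths[absolute] = depths[url] + 1
                queue.append(absolute)
    return depths

# Placeholder domain -- pages deeper than 4 clicks are worth reviewing.
for page, depth in click_depths("https://example.com/").items():
    if depth > 4:
        print(f"{depth} clicks: {page}")
```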

How To Get Google To Crawl And Index Your Pages

Now that we've explored common reasons that your web pages are being crawled, but not indexed by Google, let's walk through a few ways to get Google to index those pages that are being crawled.

Check The Page Using Google Search Operators

First, you'll want to check whether the web pages being flagged in Search Console are actually indexed.

Use a site operator on Google like "site:example.com/post" to see if that page is indexed - if it is, then no action is needed! It'll just take some time for Search Console to update its database to reflect that the URL is both crawled and indexed.

RSS Feed URLs

This is a common example that we see in Search Console for this issue. Most RSS feed URLs will have something like /feed/ appended to the end of the URL. Because an RSS feed serves an XML document rather than a normal web page, there's no need for Google to include it in its index; serving an XML document to users would provide a poor content experience.

Paginated URLs

Paginated URLs are another reason that the "crawled - currently not indexed" status will appear in Search Console. While Google will use pagination as a way to crawl and discover content on a website, it doesn't necessarily need to display paginated URLs in its search index.

Outdated Inventory Pages

Outdated inventory pages can also lead to this status being shown in Google Search Console. When Google checks the availability of a product and finds that it's either out of stock or expired, the page isn't useful to users, and Google will opt not to index those types of pages.

The easiest solution for this issue is to take stock of your inventory and make sure that products aren't being incorrectly listed. A good way to automate this is to check your web pages using a Screaming Frog crawl, especially if you're using the OutOfStock schema markup type.
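As a hedged example of what that automated check might look like without Screaming Frog, this Python sketch pulls each page's JSON-LD structured data and flags any product marked OutOfStock; the product URLs are placeholders.

```python
import json
import re
from urllib.request import urlopen

# Pull <script type="application/ld+json"> blocks and flag any page whose
# structured data marks the product as out of stock.
JSON_LD = re.compile(
    r'<script[^>]+application/ld\+json[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def is_out_of_stock(url):
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    for block in JSON_LD.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        # schema.org availability values look like "https://schema.org/OutOfStock"
        if "OutOfStock" in json.dumps(data):
            return True
    return False

# Placeholder URLs -- swap in your product pages.
for url in ["https://example.com/products/blue-widget"]:
    print(url, "-> out of stock" if is_out_of_stock(url) else "-> in stock / no flag")
```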

301 Redirect Issues

A 301 redirect issue is also a likely culprit for pages being crawled but not indexed.

The first thing to do when you see this on Google Search Console is confirm that there's no error message shown in the console, which would indicate an intermittent problem with your 301 redirects.

If all seems well, check your crawler logs and make sure they show a 200 server response code for the destination pages encountered during the crawl. If these don't match up, you'll need to fix the crawling or indexing issues before continuing with any other steps.
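One way to run that check yourself is a short Python sketch like the one below, which uses the third-party requests library to follow each redirect chain and print the final status code; the URLs are placeholders. Ideally you want a single 301 hop that lands on a 200.

```python
import requests  # third-party: pip install requests

# For each redirecting URL, show the redirect chain and the final status.
urls = ["https://example.com/old-page"]  # placeholder URLs

for url in urls:
    response = requests.get(url, allow_redirects=True, timeout=10)
    hops = [f"{r.status_code} {r.url}" for r in response.history]
    hops.append(f"{response.status_code} {response.url}")
    print(url)
    for hop in hops:
        print("   ->", hop)
```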

In many cases, Google continues to index the old URL that's being 301 redirected rather than your new destination page.

If this is an issue on a large number of URLs, a solution is to create a temporary sitemap using all the URLs listed in the crawled - currently not indexed report in Search Console.

This will help Google crawl your new destination URLs more frequently than it otherwise would without the temporary sitemap.xml file in place.
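Here's a minimal sketch of what generating that temporary sitemap might look like in Python, using the standard library's XML tools; the URL list is a placeholder for the export from your Search Console report.

```python
from xml.etree.ElementTree import Element, SubElement, ElementTree

# Placeholder URLs exported from the "crawled - currently not indexed" report.
urls = [
    "https://example.com/new-page-1",
    "https://example.com/new-page-2",
]

# Build a standard <urlset> sitemap with one <url><loc> entry per page.
urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in urls:
    loc = SubElement(SubElement(urlset, "url"), "loc")
    loc.text = url

# Write temp-sitemap.xml, then upload it and submit it in Search Console.
ElementTree(urlset).write("temp-sitemap.xml", encoding="utf-8", xml_declaration=True)
```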

Pages With Thin Content

If your pages suffer from thin content (less than 300 words on the page), Google may choose not to index them because they aren't beneficial to the user experience.

This often happens with product listings on ecommerce websites. The solution to this problem is to simply add more content to those web pages to promote a better user experience.
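If you want a quick way to spot thin pages at scale, a rough word-count check like the Python sketch below can help; the 300-word threshold mirrors the rule of thumb above, and the URL is a placeholder.

```python
import re
from html.parser import HTMLParser
from urllib.request import urlopen

class TextCollector(HTMLParser):
    """Collects visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip:
            self.chunks.append(data)

# Placeholder product URL -- flag anything under a rough 300-word threshold.
for url in ["https://example.com/products/thin-page"]:
    parser = TextCollector()
    with urlopen(url) as response:
        parser.feed(response.read().decode("utf-8", errors="replace"))
    words = len(re.findall(r"\w+", " ".join(parser.chunks)))
    print(f"{words:>5} words  {url}")
```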

Pages With Duplicate Content

If your pages are duplicating their content from other pages on your website, then this may cause Google to not index them.

The solution is to rework each page so it contains unique, valuable information that isn't repeated elsewhere on your site. Differentiating the content keeps Googlebot from treating those pages as duplicates of one another and dropping some of them from the index.

A few methods for adding content to these types of pages are to create dynamic tags that insert more unique content for each individual page (like ecommerce product listings), and to remove as much boilerplate copy as possible to better differentiate these types of pages from one another.

Summary

When Google Search Console reports a page as "crawled - currently not indexed," it means Google was able to crawl the page but decided not to add it to its index. Common causes include weak internal linking, thin or duplicate content, excessive click depth, redirect issues, and low-value URLs like RSS feeds, paginated pages, and outdated inventory. Confirm whether the page is actually indexed, fix the underlying issue, and then prompt Google to re-crawl the page.
