Removing a URL or an entire website from Google's search results can be difficult. Many factors determine whether a link appears in the search results, and there is no single action that removes everything at once. In this article, we will walk through how to remove URLs from Google's search results.
Before removing a URL from the Google Search Results, you need to have a plan in place that adheres to SEO best practices to ensure you aren't harming the organic health of your website.
Sometimes you need your web page to be accessible to users, but don't necessarily want it appearing on search engine results pages.
A good example of this is duplicate content, as in the case of e-commerce websites with thousands of near-identical product pages. In these cases, best practice is to use a noindex or canonical tag to either preserve crawl budget or consolidate page equity into a more appropriate page to improve keyword rankings.
If your URL is receiving organic visits, and you would rather have a different variant of the web page appearing for a keyword instead of this URL, then you would include a canonical tag to signal to Google that the other web page is the preferred version that it should be assigning SEO value to.
If the URL isn't ranking on Google or driving organic traffic, then you would want to assign a noindex meta tag to signal to search engine spiders that it shouldn't be indexed or appear in the search results. In this case, no SEO value would be passed from this URL to a different / better URL.
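As a sketch, the two signals described above look like this in a page's HTML head (the URLs here are hypothetical placeholders):

```html
<!-- Canonical: consolidate ranking signals into the preferred variant -->
<link rel="canonical" href="https://example.com/products/blue-widget" />

<!-- Noindex: keep this page out of Google's search results entirely -->
<meta name="robots" content="noindex" />
```

Use one or the other on a given page, depending on whether you want to consolidate value into another URL or drop the page from the index altogether.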
If the content shouldn't be visible to users, you should delete or 301 redirect the web page to not only prevent it from appearing in the search results, but to also prevent users from discovering the content.
A 301 redirect is appropriate if you have another web page that has similar content / discusses a similar topic when compared to the original page.
An HTTP 410 status code is more appropriate if there isn't another page on the site that you can redirect users to.
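If your site runs on Apache, both responses can be set up with simple mod_alias rules along these lines (the paths are hypothetical examples):

```apacheconf
# 301: permanently redirect a retired page to its closest replacement
Redirect 301 /old-guide /new-guide

# 410: tell crawlers the page is gone and has no replacement
Redirect gone /discontinued-product
```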
To check whether the content is indexed, run a site: search on Google for your domain (e.g., "site:sitename.com"), or inspect the URL with the URL Inspection tool in Google Search Console.
If the URL appears in the SERPs, it has been crawled and indexed by Googlebot, and it can continue to show up in a user's search results until it is removed.
There are some exceptions - one example is image URLs that have been uploaded to your site but aren't linked anywhere else: these can still appear in the results even though the pages are technically unreachable.
You can also do a manual search using site operators on Google, like "site:example.com/post" to see if that page is actually indexed.
If you're looking to remove outdated or duplicate URLs from the search results, you can explore one of the following methods:
If your website has been hacked and you want to remove these malware URLs as quickly as possible, you'll want to use Google Search Console's URL removal tool to temporarily hide them until you can resolve the hacking issue.
Google will keep these URLs hidden for up to 180 days. However, they will still remain in Google's index, so you need to fix the root of the problem within that six-month window.
You can also remove these URLs from your website and serve a 410 HTTP status code to signal to Google that these pages have been permanently deleted to get them out of its index.
Some websites use a staging environment on the same domain as their production site, which can cause problems when you try to remove URLs from Google. If this applies to your website and you want the staging URLs removed from Google's search results, you'll also need to tell Google about the removal by submitting those URLs through Search Console's indexing section.
You also want to serve staging URLs behind a firewall so Google's crawlers and website users aren't able to access the content in your staging environment.
If your staging URLs are outranking your live production URLs, first assign 301 redirects from the staging URLs to their live counterparts, wait a few months so Google has the chance to pass SEO value to your preferred pages, and only then lock the staging environment behind a firewall.
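One common way to keep both Googlebot and users out of a staging environment is HTTP basic authentication. A minimal Apache sketch, assuming a hypothetical staging docroot at /var/www/staging and a credentials file already created with the htpasswd utility:

```apacheconf
<Directory "/var/www/staging">
    AuthType Basic
    AuthName "Staging environment"
    # Password file created with the htpasswd utility
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>
```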
In most cases, Google doesn't recommend removing URLs with robots.txt disallow directives, but it does recommend this method for removing indexed images on your site. This is because you can't place a noindex meta tag on an image file the way you can in the HTML of a web page.
First use the URL removal tool in GSC to get rid of the images you don't want indexed. Then add a disallow directive to your robots.txt file; the next time Google crawls the file, it will see the directive and keep your target images out of the search results.
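A minimal robots.txt sketch for this step, assuming the images you want removed live under a hypothetical /images/private/ directory:

```
User-agent: Googlebot-Image
Disallow: /images/private/
```

You can also disallow a single file (e.g., Disallow: /images/private/photo.jpg) if you only need one image removed.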
If you're looking to remove content that isn't on your website, it can be a little trickier because you don't have as much control to make changes.
First, you want to reach out to the website owner and have them either add a cross-domain canonical that points to an appropriate page on your site; implement a 301 redirect; or ask them to delete that content.
If the website owner isn't responding or refuses to take action, you can escalate to Google directly through one of its removal request forms, such as the Remove Outdated Content tool or its legal content removal forms.
If you have URLs being indexed that contain sensitive customer information like credit card numbers, job applications, or home addresses, you want to ensure that these types of pages never get crawled or indexed by a search engine.
You can temporarily remove these URLs using Google Search Console's Removals Tool, and then remove that content / serve a 410 HTTP status code to inform Google that the content has been permanently removed from your website.
After this, ensure that sensitive content is never indexed again by implementing security measures or placing that information behind a sign-in wall, such as an account-creation flow that forces users to log in with unique credentials.
Below are a few common mistakes that you should avoid when you attempt to remove URLs from search engine results pages on Google and Bing.
While this was a common practice years ago, Google stopped supporting the noindex directive in robots.txt files in 2019, so its crawlers will simply ignore it.
Crawling is different from indexing. Google can't know what's on a page if it can't crawl it, but it may still know the page exists and can display it in the search results using contextual hints, such as internal links or the anchor text of backlinks pointing to that page.
A nofollow attribute hints to Google that its crawlers shouldn't follow that link to the destination page. However, this isn't a hard-and-fast rule - it's simply a hint that Google can choose to ignore, which means it may still discover and index your page.
Assigning both of these tags can confuse Google because they are directing its search crawlers to do two different things.
Canonical tags tell Google to pass SEO value to a destination page and ignore all other variants when ranking. A noindex tag tells Google that the page shouldn't be indexed at all. When both are used, Google may follow either directive (typically it chooses the canonical over the noindex), which can still allow the page to rank or show up in the search results.
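For illustration, this is the conflicting combination to avoid - both directives on the same page, pulling Google in two directions (the URL is a hypothetical placeholder):

```html
<!-- Don't do this: canonical says "consolidate into the preferred page"... -->
<link rel="canonical" href="https://example.com/preferred-page" />
<!-- ...while noindex says "keep this page out of the index entirely" -->
<meta name="robots" content="noindex" />
```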