Guide to Troubleshooting Technical SEO Problems

Technical SEO (Search Engine Optimization) sounds rather technical, and it is, but it is not overwhelming. It is important to fix the problems that can impact your site.

If your site is struggling in the search engine results pages (SERPs), hiring an expert to help you correct the problem is often your best option. But you don’t necessarily need to be an expert to correct every problem.

Here are some of the most common technical problems and how to fix them. This is not a step-by-step guide, so if you are not sure exactly how to implement these solutions, consult an expert.

Verify if you are indexed

If your pages are not indexed by search engines, you cannot rank for anything. To check, type site:yourwebsite.com into Google, without any spaces. This will show you which of your pages are indexed. If you don’t see some of your pages, check Google Search Console, which will give you some insight into why certain pages are not being indexed.
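For example, assuming your domain is example.com, either of these searches will list what Google has indexed, for the whole site or for a single section:

site:example.com
site:example.com/blog/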

Check your robots.txt file

Nearly all websites and web platforms have a robots.txt file that sits in the top-level directory of your web server. This small text file tells web crawlers what to do when they encounter specific directories or files. The standard content looks something like this:

User-agent: *
Disallow:

When formatted as above, this file tells the web bots to crawl every page and resource in your site directory.


This may not always be ideal. Unimportant pages that aren’t shown to the public can eat into the limited time crawlers spend on your site, technically called your crawl budget, causing them to miss other important elements.

To fix this problem, web developers change the robots.txt file to specify which elements should be ignored, making it look something like this:

User-agent: *
Disallow: /mail/

The “disallowed” elements here are simply server directories that shouldn’t be crawled because they don’t contain any useful content.

A problem occurs when developers forget to finish off the Disallow entries after the forward slash. Listing only “Disallow: /” tells crawlers not to crawl anything in your web directory, including your entire website, meaning you are effectively de-indexing your site.

In this case, the problem is easily fixed by removing the bare forward slash and properly listing your disallowed entries, as in the example below.
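For comparison, the first block below accidentally blocks the entire site, while the second blocks only the /mail/ directory from the earlier example:

User-agent: *
Disallow: /

User-agent: *
Disallow: /mail/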

For more information on how to properly set up your robots.txt file, see Google’s support documentation on robots.txt.

Track Rel=Canonical tags

Rel=canonical exists to tell web crawlers that two pages, either on or off your site, are duplicates of each other, and identifies which of the two pages crawlers should treat as the original when indexing your site. This prevents search engines like Google and Bing from labeling a page as duplicate content, which can potentially get the page de-indexed.

Failing to use rel=canonical tags is just as much of an issue in technical SEO troubleshooting as using them incorrectly. They need to be in place, on the right pages, pointing to the right URLs, in order to positively impact SEO.

Using an SEO crawler or analytics tool, search your site for instances of duplicate content. Then repeat the search for off-site versions. Finally, add rel=canonical link tags to point crawlers to the right page.
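For example, assuming https://www.example.com/widgets/ is the version you want indexed, each duplicate page would include a link tag like this in its <head> section:

<link rel="canonical" href="https://www.example.com/widgets/" />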

For more information, see Google’s documentation on canonical URLs.

Plagiarism

According to Wikipedia, “Plagiarism is the ‘wrongful appropriation’ and ‘stealing and publication’ of another author’s ‘language, thoughts, ideas, or expressions’ and the representation of them as one’s own original work.”

We see this happen many times when site owners come to us complaining of poor-performing websites. One of the first things we do is check for duplicate content, which we find quite often.

Many web design companies that market to a certain niche, such as insurance, offer cookie-cutter websites to their clients. You end up with dozens of sites with the exact same content.

When search engines discover two or more websites with the same content, it negatively impacts your ranking in search results.

To find out if you have duplicate content on your website, use Copyscape or Siteliner.

Improve security with HTTPS

Still operating on HTTP? Google has been pushing site owners to switch to the more secure HTTPS for some time now. In popular browsers such as Chrome and Firefox, a non-secure website triggers a “Not Secure” warning in the address bar, which may drive potential visitors away from your site.

Last year, Google announced a preference for secure sites with current SSL certificates, meaning HTTPS has more SEO weight than HTTP alone. If you use secure site technology to keep your viewers safe, Google will reward you with a slight ranking boost.
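Once an SSL certificate is installed, every HTTP URL should also redirect permanently to its HTTPS equivalent so you don’t end up with duplicate secure and non-secure versions of every page. As a minimal sketch, assuming an Apache server with mod_rewrite enabled, the rules in an .htaccess file could look like this:

# Send all HTTP traffic to the HTTPS version with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Your hosting provider or web developer can confirm the equivalent setting if your site runs on Nginx or a managed platform.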

I hope this article was helpful, but if you need further help, contact Blue Lacy SEO today.

Blue Lacy SEO
El Paso, Texas
[email protected]
915-494-2382 or 915-471-9796
