The Essential SEO Audit Checklist with Step-by-Step Guide

10.05.2024
by Gav Stevens
Head of SEO

This article provides everything needed to boost the SEO of your website. It includes: our professional SEO audit checklist, instructions on how to use it, and a step-by-step guide explaining exactly what to do.

If you want Google to crawl, index and rank your website, you’ll love this post. Just make sure you read it carefully and work through every step one by one.

Download your SEO audit checklist:

The centrepiece of any SEO audit should be a comprehensive checklist so that nothing is missed or forgotten.

Here’s an SEO audit checklist we use to great effect – click here to download!

Instructions:

  1. Download the SEO checklist spreadsheet.
  2. The checklist assumes you’re auditing a whole website.
  3. It’s split into numbered sections which you need to work through in order.
  4. Each potential issue is rated High, Medium, or Low Importance.
  5. Mark each issue Pass, Fail, or N/A as you work through them.
  6. Add any comments under ‘NOTES’ and next steps under ‘REMEDIAL ACTION’.
  7. The ‘Explanation’ column links to this guide & explains how to audit that specific item.

 

1.    Essential Tools for an SEO Audit

Like most things in digital marketing, there are a lot of ways to do this.

The good news is that everything in this audit is possible with just two pieces of software.

Screaming Frog

This is by far the best web crawler for SEO – powerful, uncluttered, reliable and easy to use.

Click here to install Screaming Frog.

*An annual licence for the paid version is comparatively cheap and a no-brainer for anyone who’s serious about SEO. It’s also the only paid SEO auditing software you’ll ever need.

Google Search Console (GSC)

This is where you monitor, maintain and troubleshoot a website’s presence in Google Search results.

You’ll need a Google account to sign up, and your domain needs to be added as a property. If it hasn’t been, simply sign in, go to ‘Add property’ and enter your domain under ‘Domain’.

Run a site crawl

Open Screaming Frog and add the website address in the bar at the top where it says ‘Enter URL to spider’.

Change to ‘Subdomain’ (unless you want to crawl an ‘Exact URL’ or ‘All Subdomains’, which are not the focus of this article).

Then, click ‘Start’.

 

2.    Crawlability and Indexing

Website is indexed

Can the website pages appear in Google Search results?

The most effective way to check this is a simple ‘site:’ search. Open Google Search in a web browser (such as Chrome) and enter:

site:yourdomain (including the www. if that is being used)

Check that the main pages are showing in the results, e.g. homepage, about, service/product pages, blog, contact, etc.

Also, pay attention to how many results there are – does the number make sense?

If not, compare the number of URLs in Screaming Frog (Crawl Data > Internal > HTML) with the number of results in the SERPs.

If pages are missing, use the URL inspection tool in GSC to ‘Test Live URL’ and learn what the issue is:

URL inspection on Google Search Console

This is the preferred option if you want to investigate a particular URL.

Simply click on ‘URL inspection’ within GSC and enter the details. You can check if it’s indexed, as well as ‘request indexing’ if need be.

Page content is Google friendly

We don’t just want Google to index page URLs – it needs to index our actual content.

Some of the most widely used web development languages, like JavaScript, need to be tuned carefully so that they’re Google friendly.

Check a web page by adding its URL to the following address in a web browser address bar:

https://google.com/search?q=cache:https://entertheURL/

Make sure all the main pages on the website are indexable and that the cached version of the page that Google serves is as expected.

Check HTTP response status codes

A page needs to return a 200 status code (from the web server) for search engine indexing. In other words, we don’t want 4xx and 5xx error codes.

Use Screaming Frog to compile a list of all the pages with error codes by clicking the ‘Response Codes’ tab. Sort in descending order to see the 5xx and 4xx codes easily, then highlight and export to Excel.

Clean up the list and take remedial action.

For example, a 404 error code means the page is no longer available and requires redirecting accordingly.

Redirects serve two main purposes:

  • They forward the page authority of the old page to the new one, i.e. if an old page ranked well for something, the new page gets that benefit (provided the content is on the same topic and optimized).
  • Getting a 404 ‘page no longer exists’ error is a very poor user experience so adding a redirect ensures this is avoided.
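
If you’d rather script this check, here’s a minimal sketch in Python using the requests library (an assumption – any HTTP client works). It takes a hypothetical list of URLs, e.g. pasted from a Screaming Frog export, and flags anything that doesn’t return a 200:

import requests

# Hypothetical URL list, e.g. pasted from a Screaming Frog export
urls = [
    "https://yourdomain/",
    "https://yourdomain/about",
    "https://yourdomain/old-page",
]

for url in urls:
    # HEAD keeps requests light; use GET if a server rejects HEAD
    response = requests.head(url, allow_redirects=False, timeout=10)
    if response.status_code != 200:
        print(f"{url} returned {response.status_code}")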

Check the robots.txt file

It’s a good idea to do a quick check of the robots.txt file, just to make sure nothing untoward is living in there. This is especially important if the site seems to be having indexation issues.

A robots.txt file controls which files crawlers can access on your site. Add ‘robots.txt’ to the end of your domain to view yours:

https://yourdomain/robots.txt

If all crawlers can access all files, it should look something like this:

User-agent: *
Disallow:
Sitemap: https://www.paladinmarketing.co.uk/sitemap.xml

Anything else and it might be blocking some access, i.e. restricting crawling and indexing.

If you’re unsure whether there’s an issue, or want to double check something, there is a robots.txt file checker within GSC.

Go to GSC > Settings > Robots.txt > Open Report
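
You can also test a robots.txt file from the command line. Python’s standard library includes urllib.robotparser, which reads the file and reports whether a given crawler may fetch a given URL. A minimal sketch (the domain is a placeholder):

from urllib.robotparser import RobotFileParser

# Placeholder domain – swap in your own
parser = RobotFileParser("https://yourdomain/robots.txt")
parser.read()

# Can Googlebot fetch the homepage?
print(parser.can_fetch("Googlebot", "https://yourdomain/"))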

URLs listed in the XML Sitemap

It’s best practice to have all your indexable URLs (i.e. those you want to rank in Google Search) listed in an XML sitemap file. It’s not a technical requirement but makes discovering the pages for crawling as easy as possible.

Add sitemap.xml onto your domain to take a look:

https://yourdomain/sitemap.xml

Did you notice the sitemap file location in the robots.txt file example above?

Once a sitemap lists all the URLs for indexing, it makes sense to ensure that search engine crawlers can find it. Reference it in your robots.txt file – just add the Sitemap line with your domain (as in the example above) and it’s good to go.
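
To sanity-check the sitemap itself, you can pull it down and count the URLs it lists, then compare that number against your Screaming Frog crawl. A minimal Python sketch using requests and the standard library’s XML parser (the domain is a placeholder):

import requests
import xml.etree.ElementTree as ET

# Placeholder domain – swap in your own
xml_text = requests.get("https://yourdomain/sitemap.xml", timeout=10).text
root = ET.fromstring(xml_text)

# URL entries live in the standard sitemap namespace
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
locs = [loc.text for loc in root.findall(".//sm:loc", ns)]

print(f"{len(locs)} URLs listed in the sitemap")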

Sitemap submitted in Google Search Console

Finally, submit your sitemap to Google so it can be used for crawling and indexing.

Would Google find your sitemap without manual submission?

Yes, but nowhere near as quickly. Also, Google provides an Index Coverage Report which can be handy if problems are encountered.

Internal links are efficient

The most valuable pages on your website should be accessible in three clicks (i.e. links) or fewer.

This applies to all types of pages but obviously excludes things like product configurations or other dedicated user journeys. Therefore, internal links (including navigation) between pages need to be checked.

Choose an end point that a user could be interested in.

Start on the website homepage before counting how many clicks it takes to arrive at the chosen destination.

If you can get there via a few links, so can Google’s crawler.
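
Screaming Frog also reports crawl depth for every URL. If you want to reason about it yourself, the calculation is just a breadth-first search over your internal links. Here’s a minimal Python sketch with a hypothetical link map:

from collections import deque

# Hypothetical internal link map: page -> pages it links to
links = {
    "/": ["/about", "/services"],
    "/about": ["/contact"],
    "/services": ["/services/seo"],
    "/services/seo": ["/case-studies"],
}

# Breadth-first search from the homepage gives the minimum clicks to each page
depth = {"/": 0}
queue = deque(["/"])
while queue:
    page = queue.popleft()
    for target in links.get(page, []):
        if target not in depth:
            depth[target] = depth[page] + 1
            queue.append(target)

for page, clicks in sorted(depth.items(), key=lambda item: item[1]):
    print(f"{clicks} clicks: {page}")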

Canonical tags point to the right pages

A canonical tag specifies a single preferred version of a page when it’s available via multiple URLs. It tells a search engine which one to focus on and index, thereby preventing duplicate content from wasting ‘crawl budget’.

SEO best practice is to have canonical tags on every URL based on two simple rules:

  • Different URLs with the same content should point to the same canonical URL.
  • URL parameters that don’t alter a page’s content should have the same canonical tag.

Note – these also apply to domain patterns i.e. with or without the ‘www.’

Screaming Frog is brilliant for auditing canonicalization:

It enables a bunch of different checks, but the most important for an SEO audit is finding ‘missing’ canonical tags. Once you know which ones are missing, you can add them and direct crawlers to the correct pages.
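
To spot-check individual pages by hand, you can fetch a page and read its canonical tag directly. A minimal Python sketch using requests and BeautifulSoup (both assumed to be installed; the URL is a placeholder):

import requests
from bs4 import BeautifulSoup

# Placeholder URL – swap in a page you want to check
url = "https://yourdomain/some-page"
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

canonical = soup.find("link", rel="canonical")
if canonical is None:
    print(f"{url}: missing canonical tag")
else:
    print(f"{url}: canonical points to {canonical['href']}")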

Google agrees with you

As usual, Google gets the final say with canonicals too.

Even if your user-defined canonicals are clearly marked on every page, Google won’t necessarily respect them. It uses more than just your tags as canonicalization signals – redirects, URL patterns, links, etc.

The easiest way to see if Google respects your canonicals is in GSC > Indexing > Pages:

Click on the ‘Duplicate, Google chose different canonical than user’, then the magnifying glass on a URL to inspect further.

The Golden Rule of Canonical Tags

Be careful when applying canonical tags as it’s easy to send mixed signals to Google without realizing it.

If a page is not to be indexed (e.g. pagination pages that have the ‘noindex’ tag), you can’t then canonicalize it to a page that is to be indexed.

Thus, the golden rule goes: do not canonicalize ‘noindexed’ pages to a URL that is indexable.

Simple URLs

Whilst users of a website never really take much notice of the URL in the address bar, it’s good practice to ensure URLs are as simple as possible.

This is because:

  • They’re easier to manage in analytics (for analysis and reporting)
  • It avoids broken relative links that can become never-ending
  • Google might use a URL to create a breadcrumb for a snippet in the SERPs

Google provides guidance on what it regards as good URL structures if you’re struggling.

Make sure any JavaScript is Google friendly

To check if JavaScript (JS) is in use on a website or web page, simply turn it off in your browser settings, then refresh and browse. A page will look the same if it doesn’t use any – if it does, the JS content will have disappeared.
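
You can script the same on/off comparison. The sketch below uses Playwright (an assumption – any headless browser automation would do) to load a placeholder page with and without JavaScript and compare how much visible text each version has:

from playwright.sync_api import sync_playwright

# Placeholder URL – swap in a page you want to check
url = "https://yourdomain/"

with sync_playwright() as p:
    browser = p.chromium.launch()

    # Load the page normally...
    page_js = browser.new_page()
    page_js.goto(url)
    text_js = page_js.inner_text("body")

    # ...then again with JavaScript disabled
    page_nojs = browser.new_page(java_script_enabled=False)
    page_nojs.goto(url)
    text_nojs = page_nojs.inner_text("body")

    browser.close()

# A big gap suggests content that only appears after JS rendering
print(f"With JS: {len(text_js)} characters; without: {len(text_nojs)}")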

There are three tools to check if Google can render a page with JavaScript present:

  1. The ‘Test Live URL’ function in the URL inspection tool in GSC – open ‘View Tested Page’ and check the ‘Screenshot’ tab.
  2. Google’s mobile-friendly test
  3. Screaming Frog

Screaming Frog is by far the most powerful and comprehensive, not to mention the easiest to carry out. Go to the JavaScript tab, then filter by ‘Pages with Blocked Resources’.

Export a list of issues and pass to whoever developed the website.

 

3.    Meta data

Meta data is not seen by users on the front end of the website. Basically, it’s information about the information on a page, which helps search engines understand the content better.

After crawlability, it’s the most important aspect of SEO.

Indexable pages have a meta title

All indexable pages should have a meta title (a.k.a. page title) that fits certain criteria, as follows:

  • Contains the page’s focus keyword (in the first 60 characters ideally)
  • Describes what the page is about clearly
  • Doesn’t exceed 60 characters in total (so all of it displays on mobile and desktop)
  • Is unique, i.e. not used anywhere else on the website

Go to the ‘Page Titles’ tab in Screaming Frog, which provides lots of insight, including the most important issues: Missing, Duplicate, and Over 60 Characters.

Ensure there is a meta title for each indexable page that is: unique, under 60 characters and contains the page’s focus keyword.
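
If you want to spot-check titles outside of Screaming Frog, here’s a minimal Python sketch using requests and BeautifulSoup that flags missing, duplicate and over-length titles across a hypothetical list of URLs:

import requests
from bs4 import BeautifulSoup

# Hypothetical URL list – e.g. your site's main pages
urls = ["https://yourdomain/", "https://yourdomain/about"]

seen = {}
for url in urls:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    if not title:
        print(f"{url}: missing meta title")
        continue
    if len(title) > 60:
        print(f"{url}: title over 60 characters ({len(title)})")
    if title in seen:
        print(f"{url}: duplicate title, also used on {seen[title]}")
    else:
        seen[title] = url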

Good meta descriptions

These are not as important as they used to be because nowadays Google pulls content dynamically from a page for SERP descriptions.

But it will use yours if it deems it better than what it can generate (for a given search query). Therefore, optimizing them makes sense. Make sure they include:

  • The page’s focus keyword
  • What the page is about and a clear call to action
  • No more than 105 characters (so it displays in full on mobile)

Also, make sure they’re unique otherwise it’s likely they’ll be ignored by Google.
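
The same kind of spot check works for descriptions. A minimal Python sketch (the URL is a placeholder) that flags a missing or over-length meta description:

import requests
from bs4 import BeautifulSoup

# Placeholder URL – swap in a page you want to check
url = "https://yourdomain/"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

tag = soup.find("meta", attrs={"name": "description"})
if tag is None or not tag.get("content", "").strip():
    print("Missing meta description")
elif len(tag["content"]) > 105:
    print(f"Meta description over 105 characters ({len(tag['content'])})")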

Favicon in place

A favicon is the little icon of a company’s logo that sits in the web browser tab or next to the page title on mobile. It’s a tiny graphic that is purposely put in place by web developers.

Whilst these have no technical impact on SEO, they can play a part by attracting more click-throughs in the search results. This is especially true if you’ve invested in raising your brand awareness.

A quality favicon can improve organic CTR, which is good for SEO – especially if your content offers an awesome UX and is conversion optimized.

Structured markup

Structured markup is not a ranking factor in its own right.

That said, it helps Google understand the content of a page and is also used to generate rich snippets (which can skyrocket click-throughs in the SERPs). Google lists the various types it supports here.

To pass this item in the audit, you need to validate the structured data on all major website pages (e.g. home, sales pages, blog, product pages, etc.) using the appropriate Google tools.
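
For a quick local check before reaching for Google’s validators, you can pull out any JSON-LD blocks on a page and make sure they at least parse. A minimal Python sketch (assumes the structured data is JSON-LD, the most common format; the URL is a placeholder):

import json
import requests
from bs4 import BeautifulSoup

# Placeholder URL – swap in a page you want to check
url = "https://yourdomain/"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# JSON-LD structured data lives in script tags of this type
for script in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(script.string or "")
    except json.JSONDecodeError:
        print("Invalid JSON-LD block found")
        continue
    kind = data.get("@type", "unknown") if isinstance(data, dict) else "multiple items"
    print(f"Valid JSON-LD, @type: {kind}")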

 

And that’s it! That’s everything you need to be sure that your website is following SEO best practices and, in doing so, performing at its best for you!

We hope you found this post helpful. Happy SEO-ing (and you can thank us later!)
