
Technical SEO ensures that search engines can crawl and index your website without problems. Google Search Console and Bing Webmaster Tools are free tools for analysing your website and spotting technical SEO issues. Search engines have technical limitations and may understand your content only partly, or not at all. Technical SEO matters because you need to address these limitations by understanding how search engines work. Unique titles, images, and text tell a search engine what a post is about, and search engines also process the JavaScript running on your website. At times you may block your content without realising it, simply because you have not opened up your website to search engines. Connect SEO with your other business activities, and improve it further with visuals such as videos and with engagement through local content and news.
Non-Technical v/s Technical SEO
SEO can be divided into three areas. The first is On-page SEO [optimizing the content itself]. The second is Off-page SEO [links and promotion of content]. The third is Technical SEO, which focuses on how well spiders can crawl your site and index your content. The Screaming Frog SEO Spider crawls website URLs and collects the key elements you need to break down and audit technical and On-page SEO. Track results using Google Analytics.

Related Content * Screaming Frog * 7 Steps To Getting Started With Google Analytics * Links in Google Search Results
Site Indexing for Technical SEO
Website indexing is the process by which search engines download data from websites and store it in their databases so that they can serve relevant results for the queries users submit. You can control how your website is indexed through XML sitemaps, the robots.txt file, the meta robots tag, etc.
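For illustration only (which directives you actually use depends on what you want indexed), a meta robots tag placed in a page's <head> could look like this:
<!-- Hypothetical example: asks crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">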

Google Search Console
Go to Coverage under Index (fig 2) in the left-hand menu to see how many pages Google has crawled. To check the status of any particular URL, enter it in the search box at the top. Within the Coverage section, open the report to view a breakdown of errors (if any exist). Mobile Usability covers the mobile issues you should fix to optimize your website.
Related Content * Google Search Console
Sitemap
Sitemaps point search engines to the pages on your website and help ensure those pages get crawled. Both XML and HTML sitemaps assist search engines in crawling your site.
HTML Sitemaps
An HTML sitemap helps users get a broad overview of your website. It is easy to read and understand: a simple page containing links to the pages on the site, where visitors can scan the overview and find the pages that interest them. On a large website, the HTML sitemap is split into categories so that it stays organized and easy to follow.
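As a rough sketch (the page names and URLs are placeholders), an HTML sitemap can be as simple as a list of links:
<!-- Hypothetical HTML sitemap fragment -->
<ul>
  <li><a href="/about/">About</a></li>
  <li><a href="/services/">Services</a></li>
  <li><a href="/blog/">Blog</a></li>
  <li><a href="/contact/">Contact</a></li>
</ul>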
XML Sitemaps
XML sitemaps are created specifically for search engines and are meant to be read by their robots. They describe the behind-the-scenes details of a webpage and provide information about each URL, giving search engines access to additional data: when the page was last updated, how often it changes, and how important it is relative to the other pages on your site. This helps search engines analyze the content more logically, which is especially useful for new, undiscovered sites.
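A single entry in an XML sitemap typically looks like the sketch below; the URL, date, and values are placeholders:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sitemap with a single placeholder URL -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/sample-post/</loc>
    <lastmod>2021-01-15</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>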

A sitemap can be created with the Yoast SEO plugin on WordPress websites. Another option is xml-sitemaps.com. Here we will create an XML sitemap using the Screaming Frog tool (fig 3) and upload it to our website.
Related Content * Screaming Frog
Screaming Frog
In the Screaming Frog interface, enter your website URL and click Start to begin the crawl. The tool works through the list of pages within the site, and the progress of the crawl is shown on the right as it moves through the website. The results show each page's address, its content type (an HTML file, an image, or code such as JavaScript), status and error codes, title tags, and more. After the crawl is completed you can create an XML sitemap by going to Sitemaps in the top menu bar and choosing to create an XML sitemap. Screaming Frog offers several options, which are best left at their defaults; additional options are available if you need something specific.

The Pages tab (fig 4) lets you include page types such as noindex pages, canonicalized pages, paginated URLs (i.e. pages in a series such as page 1, 2, 3…), and PDFs.

The Last Modified tab (fig 5) tells search engines when the page was last modified; you can base this on the server response or set a custom date.

The Priority tab (fig 6) sets the priority for specific pages within your site.

The Change Frequency tab (fig 7) indicates how often pages are updated, such as daily, weekly, or monthly.

Choose whether or not to include images (fig 8).

The Hreflang tab (fig 9) ties together the URLs of different language versions of a page.
XML File download
After selecting the options you want, create the sitemap by clicking 'Next'. Screaming Frog will then ask where you want to save the file, and once you have chosen a location you will be asked to name it. To get an idea of what the sitemap file looks like, open it via the XML File Map link and view the PDF. At the top you will see the additional information provided for search engines, and each page entry carries extra details such as its URL, how often it is updated, and its priority. Depending on the settings you chose, you will also see when each page was last modified.
Related Content * XML Site Map > As generated by 'Screaming Frog SEO Spider'
Schema
Schema, or schema.org, is structured data markup (microdata) that, when added to your HTML, improves the way search engines read and present your webpages in the SERPs. It is the markup vocabulary preferred by search engines such as Google and Bing.
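As a small, hypothetical example (the organization name and URLs are placeholders), schema markup is often added as a JSON-LD script in the page's <head>:
<!-- Hypothetical Organization markup; JSON-LD itself cannot carry comments -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png"
}
</script>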

In Google's Structured Data Testing Tool (fig 10), paste your website's URL or the code snippet itself. The tool checks the structured data and flags any errors it detects, showing whether the code is in the right format. Google recommends checking your pages with this tool during development and again before deploying your website.
Technical SEO: Site Accessibility
Crawling tools such as Screaming Frog, SEMrush Site Audit, and DeepCrawl help us understand how search engines work and how they crawl your website. Among other things, they check the crawl budget.
The crawl budget is the number of pages Googlebot crawls and indexes on a website within a given time period. The factors that affect it are the domain's age and size, the link profile, and the amount of new content. For crawling, all links should be reachable and readable. Crawling efficiency depends on using correct HTTP status codes, avoiding duplicate and thin content, and page speed.

The Crawl Stats report in Google Search Console (fig 11), found under 'Legacy tools and reports', shows how many pages are being crawled per day.

With the 'Fetch as Google' tool, keep an eye on daily crawling trends; use this information (fig 12) as a baseline and follow up on how the trend develops.
User Agent
A robots.txt file tells specific user agents (crawling software) whether they can or cannot crawl parts of a website. These crawl instructions are expressed by 'allowing' or 'disallowing' the behaviour of user agents.
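A minimal robots.txt could look like the sketch below; the directory name and sitemap URL are placeholders:
# Hypothetical example: block all crawlers from the /admin/ directory
User-agent: *
Disallow: /admin/

# Point crawlers at the XML sitemap
Sitemap: https://example.com/sitemap.xml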
JavaScript SEO
A technical SEO professional must know and understand JavaScript. JavaScript SEO is about making JavaScript-driven websites crawlable, indexable, and rankable by search engines. Google can crawl JavaScript, but avoid relying heavily on client-side rendering.
Google indexes JavaScript in two waves, and the process can take about a week. The key to JavaScript SEO is server-side rendering or hybrid rendering: with server-side rendering, the significant content on the website is crawled and indexed in one wave rather than two. The server-side render must expose indexable URLs, and internal links must use the <a href> element rather than JavaScript-only navigation.
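For example, the difference between a crawlable link and a JavaScript-only link can be sketched as follows (the URLs are placeholders):
<!-- Crawlable: search engines can follow the href -->
<a href="https://example.com/services/">Our services</a>
<!-- Not reliably crawlable: there is no href for the crawler to follow -->
<span onclick="window.location='/services/'">Our services</span>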
Canonical tag
A canonical tag tells search engines that a particular URL is the main copy of a page and is the URL that should appear in search results. Using it prevents the issues caused by the same content being visible at multiple URLs. A canonical tag looks like this – <link rel="canonical" href="https://sapcanvas.com">
Site Architecture & Linking in Technical SEO
Website architecture is all about how webpages are structured and linked. Good architecture helps users find what they are looking for and helps search engine crawlers find and index pages. From an SEO point of view, a flat architecture is best: users and crawlers alike can reach any page on the website within about four clicks, so Google's spiders find all the pages and your crawl budget is used well. A deep architecture, on the other hand, means certain pages take many more clicks (4-10 or more) to reach; it is complicated and bad for both SEO and UX.
Page Depth
All content should be within 3-4 clicks of the home page; this keeps the website optimized for both users and crawlers. As a rule, the more important a page is, the closer it should be to the home page.
H & alt tags
Heading (H) tags are an on-page SEO factor that tells search engines what the page is about. Make sure the h1 tag contains the target keywords relevant to your content. The h2 tag is a subheading and should carry keywords similar to those in the h1 tag; the h3 tag then becomes a subheading under the h2 tag, and so on. Keep the target keywords in your content within sensible keyword density limits.
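A heading hierarchy and a descriptive alt attribute might look like the sketch below (the heading text, filename, and alt text are placeholders):
<!-- Hypothetical heading hierarchy -->
<h1>Technical SEO Guide</h1>
<h2>Site Indexing</h2>
<h3>XML Sitemaps</h3>
<!-- Descriptive alt text on an image -->
<img src="sitemap-diagram.png" alt="Diagram of an XML sitemap structure">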
Internal Links
Internal links make all of your documents reachable and group content around what a page is supposed to rank for. Link types include image links, text links, navigational links, content links, etc. The elements associated with links are the <a href> element, the nofollow attribute, and the anchor text for the destination link. Breadcrumbs are a good example of navigational links: they show where you are within the domain, and Google can display breadcrumbs in search results if you have marked them up properly with schema.org. Manage crawlers with the robots.txt file so that significant URLs are not blocked, and build links with the <a href> element rather than JavaScript.
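As a rough sketch (the page names and URLs are placeholders), breadcrumbs can be marked up with schema.org's BreadcrumbList vocabulary as JSON-LD:
<!-- Hypothetical breadcrumb markup -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Blog", "item": "https://example.com/blog/" },
    { "@type": "ListItem", "position": 3, "name": "Technical SEO", "item": "https://example.com/blog/technical-seo/" }
  ]
}
</script>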
Hreflang
The goal of using hreflang is to serve the correct content to each user. It is a tag attribute that tells search engines how the different language versions of a page on your site relate to each other. If your website is multilingual, you want search engines to send users to the content in their own language.
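For instance, a page available in English and Hindi could declare its language alternates roughly like this (the URLs and language codes are placeholders):
<!-- Hypothetical hreflang annotations placed in the page's <head> -->
<link rel="alternate" hreflang="en" href="https://example.com/en/page/">
<link rel="alternate" hreflang="hi" href="https://example.com/hi/page/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">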
Link Juice
In SEO, link juice describes the equity that is passed when one page or site links to another through hyperlinks. Search engines view such links as votes from other sites that your page is important and worth promoting. A hyperlink can point to a complete document or to a particular element within that document, e.g. <a href="https://www.sapcanvas.com">. The link itself can be text, a graphic, an icon, or another object in the document that is linked to another file. Hreflang tags and nofollow links do not pass any link juice.
No Follow Links
Nofollow links are hyperlinks with a rel="nofollow" attribute, which tells search engines not to pass link equity (link juice) to the linked page.
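A nofollow link might look like this (the URL and anchor text are placeholders):
<a href="https://example.com/sponsored-page/" rel="nofollow">Sponsored link</a>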
Broken Links | Redirects
Orphaned pages are created when you publish a page but never link to it. These pages get no views, and users may never reach the content because of the missing or incorrect linking, so identify the orphaned pages on your website that are no longer linked from your content. Broken links, on the other hand, occur when pages are deleted, URLs are misspelled, sites are renamed, and so on. Test your links regularly and repair broken ones to provide a good user experience. When you move content to a new URL, use a redirect to forward users and search engines from the old URL to the new one; redirects are also used when you delete a page, change domain names, or merge websites.
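As one illustration (assuming an Apache server; the paths are placeholders), a permanent 301 redirect can be declared in the site's .htaccess file:
# Hypothetical example: permanently redirect the old URL to its new location
Redirect 301 /old-page/ https://example.com/new-page/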
Technical SEO: Conclusion
Measure your SEO metrics using Google Analytics. For WordPress websites, use plugins such as Yoast SEO and WP Super Cache to optimize website speed and performance. Ensure your website is mobile friendly and provides a great UX across all devices. Check how search engines view your website in Google Search Console.
Develop buyer personas and build content around a set of keywords. Research keywords using free tools such as Google Autocomplete, Ubersuggest, and Keyword Shitter. Choose target keywords that you can realistically rank for. Write a unique title tag with the target keyword and include the keyword in the URL. Write meta descriptions that include the target keywords and are compelling enough to increase CTR. Include the target keyword in image filenames and alt text. The target keyword should appear in the first 100 words of your post, in the h1 title header, and in the h2, h3, … subheadings.
Add at least two inbound links and two outbound links (to other blog posts and to .edu and .gov resources) to your article. Content should be a minimum of 300 words. Make use of the social sharing buttons on your website. Follow a comprehensive link building strategy: use tools such as Moz, Ahrefs, and Majestic SEO to find your competitors' authoritative links, and try reaching out to those sources. Publish content on high domain authority sites such as YouTube, SlideShare, and Quora, and do guest posting, with the aim of driving traffic to your website. For content ideas, use the BuzzSumo tool. Set up a Google My Business page if your business has a physical location.