JavaScript, which powers roughly 98% of all websites today, is a programming language used to create complex, interactive websites by manipulating HTML (which is not a programming language but a markup language) and CSS.

Without JavaScript, the web we know today would be an entirely different place. Websites would not accept user input, and we would only be able to read content that is created and served statically. Online shopping wouldn’t be the same, since most of the APIs (connecting back-end and front-end data) and interactive applications are built using JavaScript. Likewise, data collection and other third-party applications would have needed different methods and techniques.

Although JavaScript is an irreplaceable (at least for now) part of the web industry, there are some downsides to using JavaScript in terms of SEO. Let’s look at why and how JavaScript can impact SEO performance negatively by understanding how search engines work.

Crawling, Rendering & Indexing

For web pages to appear in search engine results for relevant user queries, they must first be discovered. Once a page is discovered, the search engine crawler downloads static elements such as text, images and videos – this is called crawling.

Once crawling is complete, search engine algorithms analyze the page against dozens of criteria, define its context and store it in the index in order to provide the best possible results for related user queries.

Second Wave Indexing

In the first wave, your page is found, its static HTML file is downloaded along with images and videos, and the content is analyzed by the algorithms; if it is deemed worthy of being shown to searchers, it is added to the index. At this stage, your page is listed in search results based on what is inside the plain HTML file. To better understand pages whose dynamic content is inserted by CSS and JavaScript, some search engines then download those files and run them in headless browsers – this step is called rendering.

Once rendering completes, the page is analyzed and indexed once again for searchers around the world, unless anything violates the quality guidelines – and this is what second wave indexing simply means.

How JavaScript Can Impact Your SEO Game

Most modern browsers can parse JavaScript files and give you access to dynamic content on web pages as smoothly as your connection and hardware allow. However, search engines’ capability to render JavaScript is still limited.

As far as we know, Bing can generally render JavaScript. Yet, Bing also states that “it is difficult for bingbot to process JavaScript at scale on every page of every website, while minimizing the number of HTTP requests at the same time”.

Yandex, another major search engine, states in its webmaster support documentation that any content generated by JavaScript will not be included in the index.

Last but not least, Google, the search engine with the largest user base, can render most of the JavaScript content on web pages. However, this takes more time than indexing plain HTML content, mostly because of file size and because JavaScript code can be much more complicated than HTML.

Let’s take Amazon as an example. The raw HTML file on the homepage takes 113kB of space, whereas the JavaScript files on the page take more than 1MB – roughly a tenfold difference in size. Add in the complexity of the code structure, and parsing and executing JavaScript becomes a time-consuming and costly process for Google. Google Search advocates don’t explicitly say how long rendering takes; the exact time is not publicly disclosed and can vary depending on factors such as the complexity of the JavaScript code and the performance of the server. Thus, content that is generated by JavaScript will take some time to be indexed and/or updated in search engine indexes.

Types of JavaScript Rendering

Client-Side Rendering

Client-side rendering is the process of sending only the raw HTML file, with links to the resources that dynamically change the page, such as JavaScript and CSS files. As soon as the browser starts parsing the HTML, it requests the dynamic resources and executes the CSS and JavaScript files according to their fetching priority.

This allows users to see the static version of the page within milliseconds, with the dynamic content injected into the page as soon as it arrives. However, execution time can be more problematic for crawlers, since they need to analyze billions of pages across the web.

On the other hand, client-side rendering is relatively cost-effective for site owners, as the website’s servers do not need to render each page but only serve the files.
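
As a rough sketch of what client-side rendering looks like in practice (the /api/products endpoint and the markup are hypothetical), the server ships an almost empty HTML shell and the browser builds the content itself:

```js
// The raw HTML only contains an empty container such as <div id="app"></div>.
// The browser fetches the data and injects the markup after the page loads,
// so a crawler that does not execute JavaScript sees none of this content.
fetch('/api/products') // hypothetical endpoint
  .then((response) => response.json())
  .then((products) => {
    document.getElementById('app').innerHTML = products
      .map((product) => `<h2>${product.name}</h2>`)
      .join('');
  });
```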

Server-Side Rendering

Server-side rendering, on the contrary, is the process of rendering all the files on the server and delivering the fully rendered HTML once a user or a crawler requests the page.

Server-side rendering is very advantageous for search engine crawlers, because they can access the freshest version of the page within seconds without needing to execute any other resources.
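
A minimal sketch of server-side rendering with Node.js and Express (the getProducts() data source is a hypothetical placeholder) could look like this:

```js
const express = require('express');
const app = express();

app.get('/products', async (req, res) => {
  // The server fetches the data and builds the complete HTML itself,
  // so crawlers and users both receive fully rendered markup.
  const products = await getProducts(); // hypothetical data source
  const items = products.map((product) => `<h2>${product.name}</h2>`).join('');
  res.send(`<!doctype html>
<html>
  <head><title>Products</title></head>
  <body>${items}</body>
</html>`);
});

app.listen(3000);
```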

For users, however, server-side rendering may not be as efficient. Once a user makes a request, the server executes the dynamic files and sends back the fully rendered HTML, which leaves the user staring at a blank screen for seconds. This translates into poor Time to First Byte (TTFB) results, which drag down all speed-related Lighthouse and CrUX metrics. Beyond the metrics, users will be frustrated by having to wait again every time they open another page on the website, and this will leave many visitors unsatisfied.

Another downside of server-side rendering is cost, since it requires much more processing power depending on the number of pages and their file sizes.

Dynamic Rendering

Dynamic rendering, in my opinion, is the ideal mix for serving content to both visitors and crawlers, especially for large websites. Dynamic rendering is the method of serving a pre-rendered version of a page when the request comes from a search engine crawler, and serving the raw HTML together with the JavaScript and CSS resources when the request comes from a real user’s browser.

This way, search engines can analyze the freshest version of the page without spending their own rendering resources, and real visitors do not have to wait for seconds every time they load a new page on the website.
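
Continuing the Express sketch above, a simplified form of dynamic rendering is user-agent detection on the server (the bot pattern and the renderSnapshot() and serveClientApp() helpers are hypothetical placeholders):

```js
// Serve a pre-rendered snapshot to known crawlers and the normal
// client-side application to everyone else.
const BOT_PATTERN = /googlebot|bingbot|yandex/i;

app.get('*', async (req, res) => {
  const userAgent = req.get('User-Agent') || '';
  if (BOT_PATTERN.test(userAgent)) {
    // Crawler: return HTML that has already been rendered
    // (e.g. by a headless browser or a prerendering service).
    res.send(await renderSnapshot(req.path));
  } else {
    // Real user: return the raw HTML plus JavaScript and CSS resources.
    serveClientApp(res);
  }
});
```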

How to Optimize JS-Heavy Websites

Keep Everything Essential in the Raw HTML

The first rule for dealing with JavaScript-based websites is to keep everything you want search engines to see in the raw HTML. This includes a descriptive page title, a descriptive meta description, the canonical URL, other meta tags, essential visuals and, of course, the textual content.
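
As an illustrative (entirely hypothetical) example, the raw HTML response could already contain these elements before any script runs:

```html
<head>
  <title>Descriptive Page Title</title>
  <meta name="description" content="A descriptive meta description for this page.">
  <link rel="canonical" href="https://www.example.com/sample-page/">
</head>
<body>
  <h1>Main heading of the page</h1>
  <p>The essential textual content, present in the raw HTML rather than injected by JavaScript.</p>
  <img src="/images/essential-visual.jpg" alt="Essential visual">
</body>
```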

Always Use href Links

Relying on #-based, JavaScript-powered links for the internal link network may result in crawlers being unable to discover the linked pages properly.

Compare a hash-based routing link with a standard href routing link:
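
The snippet below is only a rough illustration; the router.navigate() call stands in for a hypothetical client-side router.

```html
<!-- Hash-based routing link: crawlers generally ignore the fragment
     and may never discover the target page. -->
<a href="#/products" onclick="router.navigate('/products')">Products</a>

<!-- href routing link: a real, crawlable URL in the href attribute. -->
<a href="/products">Products</a>
```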

Manage Robots.txt Carefully

It may be tempting to block JavaScript and CSS files in the robots.txt file because they are not shown to visitors directly. Yet, blocking these resources means bots can never see the rendered versions of the pages.
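
As an illustrative robots.txt sketch (the directory paths are hypothetical), keep the script and style directories crawlable:

```
User-agent: *
# Do NOT add rules like "Disallow: /assets/js/" or "Disallow: /assets/css/";
# blocking them prevents crawlers from rendering the pages properly.
Allow: /assets/js/
Allow: /assets/css/
Disallow: /admin/
```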

Don’t Try to Fool Google

The idea of showing different pages to search engines and real users (in other words, cloaking) may sound like an SEO strategy (a black-hat one!), but Google can detect it, and the website will face serious manual actions that can go as far as full removal of the website from the index. See the resource here.

Use Structured Data

Structured data is the best way to tell search engines what a page is about and what it provides to visitors. Even when there is no chance of gaining rich results, use the most suitable structured data markup on every page possible to give search engines information beyond the limits of page titles and meta descriptions.
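
For example, a hypothetical product page could describe itself with a JSON-LD snippet like this (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "description": "What the page offers to visitors.",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
</script>
```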

How to See Rendered Version of a Web Page

Browser’s Developer Tools

Whether you use Google Chrome, Safari or another browser, right-clicking on an empty space anywhere on the page and choosing “View Page Source” will show the raw HTML of the page, while “Inspect” will show the rendered version.

Google’s Tools

Google’s Mobile-Friendly Test tool shows how Google’s crawlers see the page once “View Tested Page” is clicked. The same menu also exists in Google Search Console as well as in the Rich Results Test tool.

Browser Extensions

Some browser extensions, such as “View Rendered Source”, display the raw and rendered HTML versions side by side on a single page and highlight what changed after rendering.

Screaming Frog SEO Spider

As one of the most powerful SEO tools, Screaming Frog SEO Spider can also show the difference between the raw and rendered versions of pages in bulk. For that, the rendering method in the configuration settings should be set to JavaScript, and what is changed by JavaScript will be shown in the JavaScript section of the Overview tab.