Crawling, Rendering & Indexing
For web pages to appear in search engine results for relevant user queries, they first have to be discovered. Once a page is discovered, the search engine crawler downloads its static elements such as text, images and videos – this is called crawling.
Once crawling is complete, search engine algorithms analyze the page against dozens of criteria, determine its context and store it in the index in order to provide the best possible search results for related user queries.
Second Wave Indexing
Once rendering completes, the page is analyzed and indexed once again for searchers around the world, unless anything on it goes against the quality guidelines – and this is what second wave indexing simply means.
Client-side rendering allows users to see the static version of the page within milliseconds, with the dynamic content injected into the page as soon as it is ready. However, execution time is more problematic for crawlers, since they need to analyze billions of pages around the web.
On the other hand, client-side rendering is relatively cost-effective, as the website’s servers do not need to render each page, but only store the files.
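As a rough sketch of this client-side pattern (the /api/products endpoint and the markup are placeholder assumptions, not taken from the article), the raw HTML ships almost empty and the script fills it in after the fetch completes:

```html
<!DOCTYPE html>
<html>
  <body>
    <h1>Products</h1>
    <!-- empty in the raw HTML; crawlers only see the content after rendering -->
    <ul id="product-list"></ul>
    <script>
      fetch("/api/products")                  // hypothetical JSON endpoint
        .then((response) => response.json())
        .then((products) => {
          const list = document.getElementById("product-list");
          products.forEach((product) => {
            const item = document.createElement("li");
            item.textContent = product.name;  // content appears only after JS runs
            list.appendChild(item);
          });
        });
    </script>
  </body>
</html>
```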
Server-side rendering, on the contrary, is the process of rendering all the files on the server and delivering the fully rendered HTML once a user or a crawler requests the page.
Server-side rendering is very advantageous for search engine crawlers because they can access the freshest version of the page within seconds without needing to execute any other resources.
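A minimal sketch of server-side rendering with plain Node.js, using hard-coded placeholder data instead of a real database or CMS: the server builds the complete HTML itself, so crawlers and users receive the finished markup without executing any JavaScript.

```js
const http = require("http");

// placeholder data; a real site would fetch this from a database or CMS
const products = [{ name: "Example product" }, { name: "Another product" }];

http
  .createServer((req, res) => {
    // build the full markup on the server for every request
    const items = products.map((p) => `<li>${p.name}</li>`).join("");
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(
      `<!DOCTYPE html><html><body><h1>Products</h1><ul>${items}</ul></body></html>`
    );
  })
  .listen(3000);
```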
While server-side rendering works well for search engines, it may not be as efficient for users. Once a user makes a request, the server executes the dynamic files and sends back the fully rendered HTML as mentioned above, which leaves the user waiting for seconds in front of a blank screen. This simply means very bad TTFB scores in Lighthouse, which is the basis of all speed-related CrUX metrics. Beyond the CrUX metrics, users will get frustrated every time they need to access another page on the website, and this process will leave most visitors unsatisfied.
Another downside to server-side rendering is the cost, since it requires much more processing power depending on the number of pages and the file sizes.
A middle ground is dynamic rendering: serving a pre-rendered HTML version of each page to search engine crawlers while real visitors receive the client-side rendered version. This way, search engines can analyze the freshest version of the page without needing to use their own resources, and real visitors do not have to wait for seconds every time they load a new page on the website.
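A rough sketch of that hybrid setup, assuming a hypothetical prerenderPage() helper (for instance a headless browser or a prerendering service) that returns the fully rendered HTML for a URL:

```js
const http = require("http");
const fs = require("fs");

// user agents of the crawlers that should get the pre-rendered HTML
const BOT_PATTERN = /googlebot|bingbot|duckduckbot/i;

http
  .createServer(async (req, res) => {
    const userAgent = req.headers["user-agent"] || "";
    res.writeHead(200, { "Content-Type": "text/html" });
    if (BOT_PATTERN.test(userAgent)) {
      // hypothetical helper: returns fully rendered HTML for crawlers
      res.end(await prerenderPage(req.url));
    } else {
      // regular visitors get the lightweight client-side rendered shell
      res.end(fs.readFileSync("index.html", "utf8"));
    }
  })
  .listen(3000);
```

As long as crawlers and real visitors end up with equivalent content, this kind of setup is a rendering workaround rather than the cloaking warned against below.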
How to Optimize JS-Heavy Websites
Keep Everything Essential in the Raw HTML
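As an illustration of this point (the values are placeholders), the essential elements below live in the initial HTML response instead of being injected later by JavaScript, so crawlers can read them during the first wave of indexing:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Product name – Example Store</title>
    <meta name="description" content="Short summary of the page." />
    <link rel="canonical" href="https://www.example.com/product-name" />
  </head>
  <body>
    <h1>Product name</h1>
    <p>Main descriptive content, readable without executing any JavaScript.</p>
    <!-- non-essential widgets (reviews, recommendations) can still load via JS -->
  </body>
</html>
```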
Always Use href Links
For example, a hash-based routing link relies on JavaScript to change the view, so crawlers may never discover the target URL, whereas an href routing link points to a real URL that crawlers can follow – a sketch of both is shown below.
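Both examples use a placeholder /products URL:

```html
<!-- Hash-based routing link: the fragment after # is handled only by
     JavaScript, so /products may never be discovered as its own page. -->
<a href="#/products">Products</a>

<!-- href routing link: a real, crawlable URL that search engines can
     follow and index as a separate page. -->
<a href="/products">Products</a>
```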
Manage Robots.txt Carefully
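As a sketch of what “carefully” can mean in practice (all paths are placeholder assumptions), the rules below keep crawlers out of a private section while leaving the JavaScript and CSS files needed for rendering accessible:

```
# robots.txt – example only; paths are placeholders
User-agent: *
Disallow: /account/        # private area crawlers do not need
Allow: /assets/js/         # keep rendering resources crawlable
Allow: /assets/css/

Sitemap: https://www.example.com/sitemap.xml
```

Blocking script or style directories here would prevent crawlers from rendering the page properly, even if the HTML itself stays accessible.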
Don’t Try to Fool Google
The idea of showing different pages to search engines and real users (in other words, cloaking) may be thought of as an SEO strategy (a black-hat one!), but Google can detect it, and the website will have to face serious manual actions that can go as far as full removal of the website from the index. See the resource here.
Use Structured Data
Structured data is the best method to tell search engines what the page is about and what it provides to visitors. Even when a page has no chance of gaining rich results, use the most suitable structured data markup on every page possible to give search engines more information than page titles and meta descriptions alone can provide.
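For instance, a minimal JSON-LD snippet for a product page might look like the sketch below (the type and values are placeholder assumptions; pick whichever schema.org type actually matches the page):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example product",
  "description": "Short description of the product.",
  "offers": {
    "@type": "Offer",
    "price": "10.00",
    "priceCurrency": "USD"
  }
}
</script>
```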
How to See Rendered Version of a Web Page
Browser’s Developer Tools
It can be Google Chrome, Safari or another browser. Right-clicking on an empty space anywhere on the page, “View Page Source” will show the raw HTML of the page and “Inspect” will show the rendered version of the page.
Some browser extensions such as “View Rendered Source” show the raw and rendered HTML versions on a single page and also show what is changed after rendering.
Screaming Frog SEO Spider