Client-side or server-side rendering
Different rendering methods are suitable for different purposes. Elimelech made the case for dynamic rendering as a means of satisfying search engine crawlers and users, but first it is necessary to understand how client-side and server-side rendering work.
Client-side rendering
When a user clicks on a link, their browser sends requests to the server where the site is hosted.
“It’s kind of like assembling your own furniture because the server says to the browser, ‘Hey, these are all the parts, these are the instructions, build the page. I trust you.’ And that means all the hard work is moved to the browser instead of the server,” Elimelech said.
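To make that furniture analogy concrete, here is a minimal sketch of client-side rendering in TypeScript (browser code). The /api/products endpoint, the data shape and the #app container are hypothetical; the point is that the server ships an almost-empty HTML shell and the browser does the assembly.

```typescript
// Minimal client-side rendering sketch (hypothetical endpoint and markup).
// The initial HTML contains little more than <div id="app"></div>; this
// script runs in the browser and builds the visible page itself.
interface Product {
  name: string;
  price: number;
}

async function renderApp(): Promise<void> {
  const root = document.getElementById("app");
  if (!root) return;

  // The "parts": raw data fetched only after the initial HTML arrives.
  const response = await fetch("/api/products");
  const products: Product[] = await response.json();

  // The "instructions": the browser assembles the markup, so a crawler
  // that does not execute JavaScript sees only the empty shell.
  root.innerHTML = products
    .map((p) => `<article><h2>${p.name}</h2><p>$${p.price}</p></article>`)
    .join("");
}

renderApp();
```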
With server-side rendering, by contrast, the server does that assembly work itself and sends the browser finished HTML. Dynamic rendering represents “the best of both worlds,” Elimelech said. Dynamic rendering means “switching between client-side rendered content and pre-rendered content for specific user agents,” according to Google.
Below is a simplified diagram explaining how dynamic rendering works for different user agents (users and bots).
“So there is a request to a URL, but this time we check: Do we know this user agent? Is it a known robot? Is it Google? Is it Bing? Is it Semrush? Is it something we know about? If it’s not, we assume it’s a user and then we render on the client side,” Elimelech said.
On the other hand, if the client is a bot, server-side rendering is used to serve fully rendered HTML. “So he sees everything that needs to be seen,” Elimelech said.
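As a rough sketch of that decision flow, here is how it might look as an Express middleware in TypeScript. The bot list, the renderForBot() helper and the public/index.html shell are illustrative assumptions, not Google’s or Elimelech’s implementation.

```typescript
// Dynamic rendering sketch: known bots get server-rendered HTML,
// everyone else gets the client-side-rendered shell.
import express, { Request, Response } from "express";

// The site owner has to maintain this list of known crawler user agents.
const KNOWN_BOTS = [/Googlebot/i, /Bingbot/i, /SemrushBot/i];

function isKnownBot(userAgent: string | undefined): boolean {
  return userAgent !== undefined && KNOWN_BOTS.some((bot) => bot.test(userAgent));
}

// Hypothetical helper: returns fully rendered HTML for the requested URL,
// e.g. from a prerendering service or a server-side render pass.
async function renderForBot(url: string): Promise<string> {
  return `<html><body><h1>Fully rendered content for ${url}</h1></body></html>`;
}

const app = express();

app.get("*", async (req: Request, res: Response) => {
  if (isKnownBot(req.get("user-agent"))) {
    // Known bot: serve pre-rendered HTML so it "sees everything
    // that needs to be seen."
    res.send(await renderForBot(req.originalUrl));
  } else {
    // Unknown user agent: assume a user and serve the CSR shell.
    res.sendFile("index.html", { root: "public" });
  }
});

app.listen(3000);
```

Note that the KNOWN_BOTS list is exactly the kind of extra moving part Elimelech warns about below: it has to be kept current, or new crawlers will silently fall into the client-side path.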
But dynamic rendering is not perfect
There are, however, complications associated with dynamic rendering. “We have two streams to maintain, two sets of logic, caching, other complex systems; so it’s more complex when you have two systems instead of one,” Elimelech said, noting that site owners also need to maintain a list of user agents in order to identify bots.
Some might worry that serving search engine crawlers something different from what you show users could be considered cloaking.
“Dynamic rendering is actually a preferred and recommended solution by Google because what matters to Google is if the important things are the same [between the two versions],” said Elimelech. The “important things,” he added, are the things that interest us as SEOs: the content, the headers, the meta tags, the internal links, the navigation links, the robots directives, the title, the canonical and the structured data markup, the images, anything to do with how a bot would react to the page. “It’s important to keep them the same, and when you keep them the same, especially the content and especially the meta tags, Google has no problem with that.”
Since it is necessary to maintain parity between what you serve to bots and what you serve to users, it is also necessary to audit for issues that could break that parity.
To audit potential issues, Elimelech recommends Screaming Frog or a similar tool that lets you compare two crawls. “So what we like to do is crawl a website as a Googlebot (or some other search engine user agent) and crawl it as a user and make sure there are no differences,” he said. Comparing the relevant items between the two crawls can help you identify potential issues.
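As a rough, scripted illustration of that comparison (not a substitute for a full crawl), the TypeScript sketch below fetches one URL twice, once with a bot user agent and once with a browser user agent, and diffs a few of the elements listed above. It assumes Node 18+ for the built-in fetch, and the regex extraction is deliberately naive.

```typescript
// Naive parity check: fetch a URL as a bot and as a user, then compare
// a few SEO-critical elements between the two responses.
const BOT_UA =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";
const USER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";

function extract(html: string, pattern: RegExp): string {
  return html.match(pattern)?.[1]?.trim() ?? "(missing)";
}

async function checkParity(url: string): Promise<void> {
  const [botHtml, userHtml] = await Promise.all(
    [BOT_UA, USER_UA].map(async (ua) => {
      const res = await fetch(url, { headers: { "User-Agent": ua } });
      return res.text();
    })
  );

  // Elements that should be identical between the two versions.
  const checks: [string, RegExp][] = [
    ["title", /<title[^>]*>([^<]*)<\/title>/i],
    ["meta description", /<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i],
    ["canonical", /<link\s+rel=["']canonical["']\s+href=["']([^"']*)["']/i],
  ];

  for (const [name, pattern] of checks) {
    const botValue = extract(botHtml, pattern);
    const userValue = extract(userHtml, pattern);
    console.log(
      `${botValue === userValue ? "OK" : "MISMATCH"}: ${name} | bot="${botValue}" user="${userValue}"`
    );
  }
}

checkParity("https://example.com/").catch(console.error);
```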
Elimelech also mentioned other methods for detecting these parity problems.