Improve Googlebot Rendering by Dynamically Omitting Non-Vital Scripts
Help bot clients render your pages completely by omitting bloated third-party scripts.
In the talk, John Mueller describes Dynamic Rendering, where the server detects the User-Agent and dynamically renders a more digestible variant of the content if the User-Agent indicates the request is coming from a bot.
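A minimal sketch of that approach, assuming an Express server (the `renderStaticSnapshot` pre-renderer and the bot regex are simplified placeholders, not a production setup):

```ts
import express from "express";

// Hypothetical pre-renderer (e.g. a headless-browser snapshot service), stubbed here.
async function renderStaticSnapshot(path: string): Promise<string> {
  return `<html><body><h1>Pre-rendered content for ${path}</h1></body></html>`;
}

// Simplified pattern; a real setup would use a maintained list of bot user agents.
const BOT_UA = /googlebot|bingbot|duckduckbot|baiduspider/i;

const app = express();

app.get("*", async (req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (BOT_UA.test(userAgent)) {
    // Bots get a static, fully rendered snapshot of the page.
    res.send(await renderStaticSnapshot(req.path));
  } else {
    // Humans get the normal client-rendered app shell.
    res.sendFile("index.html", { root: "public" });
  }
});

app.listen(3000);
```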
A few months after the talk, Google published documentation about Dynamic Rendering, which indicates there are exceptions to Google's previous cloaking policy (namely, that there was no such thing as white-hat cloaking).
Is it Wise?
Managing two different versions of content, one for bots and another for humans, doesn't seem maintainable, and Google themselves say it's only a temporary solution.
At work I've been looking into a specific page that gets terrible results in Google's simulated bot-rendering tools. The page loads and renders very slowly, which causes the Googlebot renderer to stop prematurely, resulting in a partial render that dings our SEO score.
We are already rendering our pages on the server, so Dynamic Rendering wouldn't be useful, but the concept made me think of another solution: serve bots the same server-rendered content, but omit the non-vital third-party scripts that slow the render down. Call it Dynamic Omission.
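A minimal sketch of the idea (the script URLs are placeholders for whatever chat widgets, A/B testing, and analytics tags a real page carries):

```ts
// Dynamic Omission: one version of the content for everyone, but non-vital
// third-party scripts are skipped when the requester is a bot.

// Placeholder URLs; substitute your real chat widget, A/B testing snippet, etc.
const NON_VITAL_SCRIPTS = [
  "https://example.com/chat-widget.js",
  "https://example.com/ab-testing.js",
  "https://example.com/session-recorder.js",
];

function renderPage(content: string, isBot: boolean): string {
  // Bots get identical markup, minus the scripts that only benefit humans.
  const scriptTags = isBot
    ? ""
    : NON_VITAL_SCRIPTS
        .map((src) => `<script async src="${src}"></script>`)
        .join("\n    ");

  return `<!doctype html>
<html>
  <head><title>Example page</title></head>
  <body>
    ${content}
    ${scriptTags}
  </body>
</html>`;
}
```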
To do this, the list of bot user agents needs to be centralized; nobody should maintain a separate list of bots for each page. There is a pretty comprehensive list of bot user agents available as an NPM module.
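The centralized check might look something like this (the patterns below are a tiny illustrative subset; in practice the full, maintained list would come from a module like the one mentioned above):

```ts
// botDetection.ts — the single source of truth for "is this a bot?".
// Illustrative subset only; pull the real list from a maintained NPM module.
const BOT_PATTERNS: RegExp[] = [
  /googlebot/i,
  /bingbot/i,
  /duckduckbot/i,
  /baiduspider/i,
  /yandexbot/i,
];

export function isBot(userAgent: string | undefined): boolean {
  if (!userAgent) return false;
  return BOT_PATTERNS.some((pattern) => pattern.test(userAgent));
}
```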
But for maintainability's sake, should you check against the list of user agents in the code for each page, or should the tag injection be decoupled and live entirely inside some kind of tag manager, like Google Tag Manager? I'm leaning toward the latter.
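For comparison, even the in-code route doesn't have to mean a user-agent check in every page: a single middleware can tag each request, and pages only read the flag (a sketch assuming Express and the `isBot` helper above):

```ts
import express from "express";
import { isBot } from "./botDetection"; // centralized helper sketched above

const app = express();

// Detection happens exactly once; no page ever sees the user-agent list.
app.use((req, res, next) => {
  res.locals.isBot = isBot(req.headers["user-agent"]);
  next();
});

app.get("/pricing", (_req, res) => {
  // Pages just read the flag and render accordingly.
  const scripts = res.locals.isBot
    ? "" // non-vital tags omitted for bots
    : '<script async src="https://example.com/chat-widget.js"></script>';
  res.send(`<!doctype html><html><body><h1>Pricing</h1>${scripts}</body></html>`);
});

app.listen(3000);
```

The tag-manager route moves the same conditional out of application code entirely, which is what makes it appealing for maintainability.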
I'll follow up with the results of Dynamic Omission in a later post.