Your Inline CSS Is Making Pages Look Identical to Googlebot

TL;DR

Critical CSS inlining combined with lazy-loaded content can make your pages appear nearly identical to Google during initial render. When all pages share the same above-the-fold styles and your unique content loads only after JavaScript executes, Google may flag pages as duplicates based on the HTML it receives before rendering. The performance optimization that boosted your PageSpeed score can end up causing content canonicalization problems.


The Conversation

Tina, frontend lead at a content publisher with 10,000+ articles running Next.js with critical CSS extraction, noticed duplicate content flags in Search Console: We’re seeing weird canonicalization issues. Search Console shows some of our articles pointing to the wrong canonicals, ones we didn’t set. Different articles are being grouped together as duplicates when they’re clearly different content.

SDC SEO Brain: Duplicate content detection looks at what Google sees when it crawls, not what users eventually see after all JavaScript executes. Walk me through your page loading sequence. What renders first?

Tina: We inline critical CSS for above-the-fold content. Then the main content lazy loads as you scroll. It’s a performance optimization that got us good Core Web Vitals scores.

SDC SEO Brain: What does “above the fold” look like across your articles? Is it the same layout?

Tina: Pretty similar. Header, navigation, a hero section with the article title and featured image placeholder, then the content area starts. The actual article text is below the fold.

SDC SEO Brain: If I grabbed the initial HTML before any JavaScript executed, how much would differ between two random articles?

Tina: The title in the header, the page title tag, maybe some meta content. But most of the visible HTML structure would be identical. The article body loads via JavaScript.

SDC SEO Brain: That’s your duplicate content problem. Google’s initial crawl sees pages whose HTML is 90% identical. Even though the full content differs, the pre-render state looks like duplicate pages with minor variations.

Tina: But Google renders JavaScript. They’ve been doing it for years. They should see the full article.

SDC SEO Brain: Google renders JavaScript, but the initial HTML still matters for understanding page structure. More importantly, Google’s duplicate detection can trigger before full rendering completes. If the raw HTML is too similar across pages, those pages might be grouped as duplicates early in the processing pipeline.

Tina: We do server-side rendering. Next.js generates the HTML.

SDC SEO Brain: What exactly does Next.js server-render? Is the full article content in the initial HTML response, or just the shell?

Tina: The shell. Article content fetches client-side from our CMS API. We did this to keep build times manageable with 10,000 articles.

SDC SEO Brain: So your server-rendered HTML is essentially a template with placeholders. Every article serves the same template structure. The unique content comes from JavaScript API calls after the page loads. Google might be evaluating the template as the page content.

Tina: That seems like a Google problem, not ours.

SDC SEO Brain: It’s a technical SEO problem regardless of whose limitation causes it. Your pages aren’t communicating their uniqueness effectively. The HTML structure that gets crawled doesn’t distinguish articles from each other until JavaScript runs.

Tina: What’s the fix? We can’t inline all 10,000 articles into HTML at build time.

SDC SEO Brain: Incremental Static Regeneration in Next.js lets you statically generate pages on demand after the initial build. Or you can use getServerSideProps to fetch article content at request time and include it in the initial HTML response. Either approach puts unique content in the HTML that Google receives.
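As a sketch, the ISR approach in the Next.js pages router could look like this. The CMS endpoint, response shape, and revalidation interval are illustrative assumptions, not Tina’s actual setup:

```javascript
// pages/articles/[slug].js -- ISR sketch for a large article catalog.
// The CMS URL and article fields are hypothetical; adapt to your API.

export async function getStaticPaths() {
  // Don't enumerate all 10,000 slugs at build time; with fallback: 'blocking',
  // each article is generated server-side on its first request, then cached.
  return { paths: [], fallback: 'blocking' };
}

export async function getStaticProps({ params }) {
  // Fetch the full article server-side so the unique body text appears in the
  // initial HTML response instead of loading client-side after hydration.
  const res = await fetch(`https://cms.example.com/api/articles/${params.slug}`);
  if (!res.ok) return { notFound: true, revalidate: 60 };
  const article = await res.json();
  return {
    props: { article },
    revalidate: 3600, // re-generate at most once per hour
  };
}

// The page component (omitted) renders article.title and article.body in JSX,
// so the unique text is present in the server-rendered HTML.
```

Because fallback is 'blocking', build time stays flat no matter how many articles exist, while every crawled URL still returns full, unique HTML.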

Tina: SSR for every article request seems expensive.

SDC SEO Brain: It’s a tradeoff. You optimized for performance metrics by deferring content loading. Now you’re paying the SEO cost. You could also explore caching strategies where SSR responses get cached at your CDN edge, reducing server load while still providing unique HTML per URL.
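A minimal sketch of that edge-caching idea, using the Cache-Control pattern Next.js supports in getServerSideProps. The fetchArticle stub and the specific TTL values are assumptions, not recommendations:

```javascript
// pages/articles/[slug].js -- SSR variant with CDN edge caching.
// fetchArticle is a hypothetical stand-in for a real CMS call.
async function fetchArticle(slug) {
  return { slug, title: `Article ${slug}`, body: 'Full article text…' };
}

export async function getServerSideProps({ params, res }) {
  // Let the CDN serve a cached copy for 5 minutes, then serve stale while
  // revalidating in the background, so most requests never hit the origin.
  res.setHeader(
    'Cache-Control',
    'public, s-maxage=300, stale-while-revalidate=3600'
  );
  const article = await fetchArticle(params.slug);
  return { props: { article } };
}
```

Each URL still returns its own unique HTML; the CDN just absorbs the repeat traffic.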

Tina: What about our critical CSS approach? Is that part of the problem?

SDC SEO Brain: Critical CSS itself isn’t bad, but it contributes to the similarity problem. If every page inlines the same critical CSS block, that’s a large chunk of identical content in every HTML response. Combined with the identical template structure, you’re maximizing HTML similarity across pages.

Tina: The CSS is extracted automatically based on above-the-fold content. Different pages might have different critical CSS.

SDC SEO Brain: In theory, but if your above-the-fold content is the same template across all articles, the critical CSS will be nearly identical too. The tool extracts what’s needed for the rendered structure. Same structure means same extracted CSS.

Tina: This is frustrating. We did everything right for performance. Now we have SEO problems.

SDC SEO Brain: Performance optimization and SEO can conflict when optimizations defer or exclude content from initial HTML. The solution is ensuring your server response includes enough unique content for Google to distinguish pages, even if additional content loads client-side.

Tina: How do I verify what Google actually sees?

SDC SEO Brain: Use URL Inspection in Search Console and look at the rendered HTML. Compare two articles that are being grouped as duplicates. If their rendered HTML clearly differs, the duplicate grouping is happening on the pre-render HTML. If even the rendered HTML looks similar, Google’s rendering isn’t getting your full content.

Tina: Can I see the pre-render HTML specifically?

SDC SEO Brain: View your page source directly, or use curl to fetch the raw HTML. Compare that across articles. If the view-source HTML is too similar, that’s what Google’s initial processing sees. The rendered HTML in Search Console shows what Google sees after JavaScript.

Tina: What’s “too similar”?

SDC SEO Brain: There’s no exact threshold, but think about it qualitatively. If a human looked at the raw HTML of two articles and couldn’t tell them apart without examining metadata closely, Google’s systems might struggle too. Unique body content should be obvious in the HTML structure.
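One rough way to put a number on that intuition is to compare the raw responses of two articles line by line. This is a crude heuristic for spotting template-dominated HTML, not Google’s actual duplicate detection:

```javascript
// Crude similarity check: fraction of identical lines between two raw HTML
// responses (e.g. saved via curl or view-source). Not Google's algorithm.
function htmlSimilarity(htmlA, htmlB) {
  const norm = (s) =>
    s.split('\n').map((l) => l.trim()).filter((l) => l.length > 0);
  const a = norm(htmlA);
  const b = new Set(norm(htmlB));
  const shared = a.filter((line) => b.has(line)).length;
  return shared / Math.max(a.length, 1);
}

// Two "articles" sharing a template shell, differing only in the title:
const pageA = '<header>Site</header>\n<h1>Article A</h1>\n<div id="root"></div>';
const pageB = '<header>Site</header>\n<h1>Article B</h1>\n<div id="root"></div>';
console.log(htmlSimilarity(pageA, pageB).toFixed(2)); // → "0.67"
```

On a real template-shell site, scores from a check like this tend toward 0.9+, because only the title line and metadata differ between articles.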

Tina: If I add article content to server rendering, should I remove lazy loading entirely?

SDC SEO Brain: You can keep lazy loading for images and supplementary content. The key is having your main article text in the initial HTML. The headline, body paragraphs, and key content that makes the page unique should render server-side. Secondary elements like comments, related posts, and images below the fold can still lazy load.
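A sketch of that split, with the unique article text emitted server-side while images use native lazy loading and comments stay client-hydrated. The field names and render function are illustrative, not part of Tina’s stack:

```javascript
// Sketch: unique article text goes in the server HTML; media and secondary
// modules stay lazy. Article fields are hypothetical.
function renderArticleHtml(article) {
  const paragraphs = article.paragraphs.map((p) => `<p>${p}</p>`).join('\n');
  const images = article.images
    .map((src) => `<img src="${src}" loading="lazy" alt="">`)
    .join('\n');
  return `
<article>
  <h1>${article.title}</h1>
  ${paragraphs} <!-- unique body text: server-rendered -->
  ${images} <!-- below-the-fold media: native lazy loading -->
  <div id="comments" data-lazy="true"></div> <!-- hydrated client-side -->
</article>`;
}

const html = renderArticleHtml({
  title: 'How Penguins Navigate',
  paragraphs: ['Penguins use the sun to orient.', 'Magnetic cues also help.'],
  images: ['/img/penguin-1.jpg'],
});
console.log(html.includes('loading="lazy"')); // → true
```

The crawl-relevant text is all in the initial response; only the parts that don’t distinguish one article from another are deferred.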

Tina: What about our critical CSS extraction? Should we change that approach?

SDC SEO Brain: Keep it, but verify that pages with different content structures get different critical CSS. If articles with photos get different above-fold layouts than text-only articles, they should have different critical CSS extracted. The uniformity problem is worse if everything is forced into identical templates.

Tina: How long until Search Console stops showing these duplicate issues after we fix this?

SDC SEO Brain: Once you deploy changes, Google needs to recrawl your pages. With 10,000 articles, a full recrawl takes time, probably weeks. Prioritize your highest-traffic pages by ensuring they’re in your sitemap and internally linked well. Monitor the duplicate content report in Search Console for improvement.


FAQ

Q: Why does critical CSS cause duplicate content issues?
A: Critical CSS alone doesn’t cause duplicates. The problem is when identical critical CSS plus identical template structure plus deferred content loading creates HTML responses that are nearly identical across different pages. Google may group these as duplicates before JavaScript rendering shows the differences.

Q: Should I stop using critical CSS inlining?
A: No. Keep critical CSS for performance, but ensure your HTML response includes enough unique content to distinguish pages. Server-render at least your main content, not just the template shell.

Q: What should be in the initial HTML for SEO?
A: At minimum: unique title, unique headings, unique body content paragraphs, and unique metadata. The main content that differentiates the page should be in the server response, not loaded via JavaScript.

Q: How do I verify what Google sees?
A: Use Search Console’s URL Inspection tool to see the rendered HTML. Compare view-source of different pages to see raw HTML similarity. If your unique content only appears in the rendered view, not the raw source, you have a JavaScript dependency problem.

Q: Can I use getStaticProps or ISR to fix this?
A: Yes. Incremental Static Regeneration can generate article pages with full content without building all 10,000 at once. Pages are generated on first request and cached. This gives you static performance with dynamic content inclusion.


Summary

Critical CSS inlining combined with lazy-loaded content creates near-identical HTML across different pages. When your unique content loads via JavaScript after the initial template renders, Google may flag pages as duplicates based on the similar server response.

Server-side rendering of main content is essential for SEO. The HTML response Google receives should include enough unique text to clearly distinguish pages from each other. Template shells with JavaScript-loaded content create duplicate detection problems.

Performance and SEO can conflict. Deferring content loading for PageSpeed scores can hurt search visibility. Balance both by server-rendering unique content while lazy loading secondary elements.

Use URL Inspection and view-source comparison to verify page uniqueness. If raw HTML is too similar across pages, Google’s initial processing may group them as duplicates before JavaScript rendering completes.

Incremental Static Regeneration offers a middle ground for large sites. Generate pages with full content on-demand without massive build times, giving you both performance and SEO-friendly HTML.


Sources

  • Google Search Central: Duplicate content
  • Google Search Central: JavaScript SEO
  • Next.js Documentation: Incremental Static Regeneration
  • web.dev: Extract critical CSS