TL;DR
Schema testing tools validate syntax, not accuracy. Your markup can pass every validator while misrepresenting page content in ways Google’s algorithms detect. When rich results disappear site-wide without a manual action, you’re facing algorithmic demotion for schema trust issues, not a technical error you can debug in a testing tool.
The Conversation
Kevin, SEO specialist at a recipe site, WordPress with Yoast, 8 years' experience, advanced SEO knowledge: We lost all our rich results three weeks ago. Recipe cards, star ratings, cooking times, everything gone from search results. We’ve had rich results for six years. No manual action in Search Console, no obvious errors in our structured data. I’ve tested dozens of pages in the Rich Results Test and they all pass.
SDC SEO Brain: All rich results disappearing simultaneously without a manual action usually indicates algorithmic action, not a technical problem. Google’s systems decided to stop trusting your structured data. What changed on your site around three weeks ago?
Kevin: Nothing major. We added a new rating display feature, but that was about two months ago. The ratings were always there, we just made them more prominent in the UI.
SDC SEO Brain: Tell me more about this rating feature. Where do the ratings come from?
Kevin: Users can rate recipes after trying them. The star ratings have always been part of our recipe schema.
SDC SEO Brain: How do you calculate the aggregate rating in your schema? Is it a direct average of user ratings?
Kevin: Not exactly. We use a weighted formula. New recipes start with a baseline score so they don’t show zero stars. We also factor in engagement metrics like saves and prints to give a more complete picture of recipe quality.
SDC SEO Brain: There’s your problem. Google’s rating schema guidelines require that ratings represent actual user reviews, not calculated quality scores. A recipe with zero user reviews showing 4.2 stars because of high engagement violates schema accuracy requirements.
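As a hypothetical illustration of the pattern Kevin describes (the property names are standard schema.org; the recipe name and values are invented), markup built from an engagement formula might look like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Weeknight Pad Thai",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.2",
    "ratingCount": "47"
  }
}
```

Nothing here fails validation; the markup is syntactically perfect. The violation is invisible to testing tools: if only a dozen of those 47 "ratings" are genuine user reviews and the 4.2 came from a weighted engagement formula, the values misrepresent the page.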
Kevin: But the testing tool shows valid markup. No errors, no warnings.
SDC SEO Brain: Testing tools check syntax and required properties. They don’t verify that your 4.2 star rating comes from real user ratings. Google’s quality algorithms do that verification by comparing your schema claims against visible page content and user behavior patterns.
Kevin: So Google figured out we were inflating ratings?
SDC SEO Brain: More precisely, Google’s systems detected that your rating values don’t correspond to genuine user reviews on the page. If someone visits your recipe and sees “47 ratings” but only 12 actual reviews exist, or sees 4.8 stars when the visible reviews average 3.9, that mismatch triggers trust signals.
Kevin: We display the rating prominently. It matches what’s in the schema.
SDC SEO Brain: The displayed rating matching the schema rating doesn’t matter if both are calculated rather than aggregated from real reviews. The schema specification for AggregateRating says it should be based on multiple ratings or reviews. If you’re generating that number from engagement data, you’re misusing the property.
Kevin: This has worked for six years though. Why would Google suddenly care?
SDC SEO Brain: Google’s schema trust systems have evolved. They now cross-reference structured data claims against multiple signals: visible page content, user interaction patterns, and comparisons across your domain. A site-wide pattern of inflated or inaccurate ratings eventually triggers algorithmic suppression. Six years of historical rich results doesn’t protect you once the algorithm flags the pattern.
Kevin: What about our competitors? Some of them have obviously fake ratings. Five stars on every single recipe.
SDC SEO Brain: Algorithmic detection isn’t instant or uniform. Some sites get caught immediately, others take years. Your competitors might lose rich results next month or next year. The fact that others are violating guidelines doesn’t protect you once Google’s systems flag your domain.
Kevin: Fine. If I fix the ratings to show only actual user reviews, how long until rich results come back?
SDC SEO Brain: First, let’s make sure ratings are the only issue. Are there other schema properties that might not accurately represent page content?
Kevin: Like what?
SDC SEO Brain: Cooking times that are optimistically short. Calorie counts that don’t match the actual recipe. Serving sizes that seem off. Author information for recipes that don’t have real authors. Any schema property where the structured data claims something the page content doesn’t clearly support.
Kevin: Our cooking times come from testing in our kitchen. They might be faster than average home cooks achieve.
SDC SEO Brain: If users consistently find your cooking times unrealistic, and Google sees return-to-SERP behavior after people visit your recipes, that’s another accuracy signal. Schema should represent what users actually experience, not best-case scenarios.
Kevin: This is overwhelming. I thought schema was about technical implementation.
SDC SEO Brain: Schema implementation has two layers: technical validity and content accuracy. Testing tools handle technical validity. Google’s algorithms evaluate accuracy by comparing your claims against page content, user behavior, and patterns across your entire site. A technically perfect schema that makes inaccurate claims is worse than no schema at all.
Kevin: Worse how?
SDC SEO Brain: Sites without schema don’t get rich results but also don’t accumulate schema trust violations. Sites with inaccurate schema can lose rich result eligibility for extended periods because Google has to rebuild trust in your structured data. It’s the difference between never having a credit score and having a bad credit score.
Kevin: How do I rebuild trust?
SDC SEO Brain: Audit every schema property for accuracy. Ratings should come only from actual user reviews. Cooking times should match what users experience. Remove schema from pages where you can’t guarantee accuracy. Then wait for Google to recrawl and algorithmically reevaluate. There’s no reconsideration request to file because there’s no manual action to appeal; what declined is algorithmic confidence, and it only recovers as accurate data accumulates.
Kevin: How long does algorithmic reevaluation take?
SDC SEO Brain: Weeks to months. Google needs to recrawl your pages, process the updated schema, and rebuild trust signals. The timeline depends on crawl frequency, how severe the original accuracy issues were, and how thoroughly you fixed them. Unlike manual actions, there’s no reconsideration request or confirmation when you’re cleared.
Kevin: Is there any way to speed this up?
SDC SEO Brain: Ensure high-priority pages get recrawled by requesting indexing in Search Console. But that only affects crawling, not reevaluation speed. The algorithmic trust rebuild happens on Google’s timeline. Your job is to fix the accuracy issues completely so when reevaluation happens, your schema passes.
Kevin: Should I remove all schema temporarily and add it back after fixing?
SDC SEO Brain: Removing schema entirely would guarantee no rich results. Keeping corrected schema gives you a chance at recovery as Google recrawls. The exception would be if your schema is so fundamentally broken that you need to redesign the entire implementation. In that case, clean removal and careful reintroduction might make sense.
Kevin: One more thing. We use Yoast for schema generation. Could the plugin be doing something wrong?
SDC SEO Brain: Yoast generates technically valid schema based on what you input. If you put inflated ratings into the rating field, Yoast outputs valid schema with inflated ratings. The plugin isn’t the problem; the data you’re feeding it is. Fix the underlying data, and Yoast will output accurate schema.
FAQ
Q: Why do pages pass Rich Results Test but lose rich results?
A: Testing tools validate syntax and required properties, not accuracy. Google’s algorithms separately evaluate whether your schema claims accurately represent page content. A page can be technically valid but semantically misleading.
Q: How does Google detect inaccurate ratings?
A: Google compares schema values against visible page content, user behavior signals, and patterns across your domain. If users see 47 ratings displayed but only 12 reviews exist, or if star ratings don’t match visible review averages, the mismatch signals accuracy problems.
Q: Can I use engagement metrics in aggregate ratings?
A: No. AggregateRating schema should only include actual user ratings or reviews. Using saves, prints, or calculated quality scores instead of genuine user ratings violates schema accuracy guidelines.
Q: How long does schema trust recovery take?
A: Weeks to months. There’s no fixed timeline or confirmation process. Recovery depends on Google recrawling your pages, processing updated schema, and algorithmically rebuilding trust in your structured data.
Summary
Schema testing tools validate syntax, not accuracy. Markup can pass every validator while misrepresenting page content in ways Google’s algorithms detect.
Rating schema requires actual user reviews, not calculated engagement scores or quality algorithms. Inflated or fabricated ratings are a common trigger for site-wide rich result removal.
Sudden site-wide rich result loss without manual action indicates algorithmic demotion based on detected patterns. Manual actions appear in Search Console; algorithmic decisions don’t.
Recovery requires fixing accuracy issues and waiting for Google’s systems to recrawl and reevaluate. There’s no appeal process for algorithmic trust loss, only correction and patience.
Schema implementation has two layers: technical validity and content accuracy. A technically perfect schema that makes inaccurate claims damages your site more than having no schema at all.
Sources
- Google Search Central: Structured data guidelines
- Google Search Central: Review snippet structured data
- Google Search Central: Rich result status reports