Google Scales Back AI Overviews on Health Searches After Questions Over Accuracy and Clinical Risk

Google appears to have quietly rolled back its AI-generated “Overviews” for certain health-related search queries following scrutiny over misleading medical information.

The move, which highlights the growing tension between rapid AI deployment and patient safety, follows an investigation by the Guardian that found Google’s AI Overviews were producing oversimplified and potentially misleading responses to sensitive medical questions. In one example, users searching for “what is the normal range for liver blood tests” were shown numerical reference ranges that failed to account for key variables such as age, sex, ethnicity, nationality, or underlying medical conditions.

Medical experts warned that such omissions could give users a false sense of reassurance, particularly in cases where liver enzyme levels may fall within one population’s “normal” range but signal disease risk in another. Liver blood tests are commonly used to detect conditions such as hepatitis, fatty liver disease, and cirrhosis, where delayed diagnosis can have serious consequences.


After the Guardian published its findings, the outlet reported that AI Overviews no longer appeared for searches including “what is the normal range for liver blood tests” and “what is the normal range for liver function tests.” The removal appeared uneven, however. Variations on those queries, such as “lft reference range” or “lft test reference range,” still triggered AI-generated summaries, suggesting that Google’s safeguards were applied selectively rather than comprehensively.

Subsequent checks later in the day indicated further tightening. Several similar health-related queries no longer produced AI Overviews at all, though Google continued to prompt users to submit the same questions through its separate “AI Mode,” which remains available across Search. In multiple instances, the Guardian’s investigation itself surfaced as a top-ranked result, replacing the AI-generated summary with traditional reporting.

Google declined to comment on the specific removals. A spokesperson told the Guardian that the company does not “comment on individual removals within Search,” emphasizing instead that it works to “make broad improvements” to its systems. The spokesperson added that Google had asked an internal team of clinicians to review the queries cited in the investigation and concluded that “in many instances, the information was not inaccurate and was also supported by high quality websites.”

That response points to a central issue facing AI-generated health summaries: even when underlying sources are credible, the act of compressing complex medical guidance into a short, generalized overview can strip away essential context. Unlike traditional search results, which present multiple sources and viewpoints, AI Overviews synthesize information into a single authoritative-sounding answer placed prominently at the top of the page.

Google has spent the past year expanding AI Overviews as part of a broader effort to reimagine Search around generative AI. In 2024, the company unveiled health-focused AI models and pledged improvements aimed at making medical searches more reliable, stressing that its tools are not intended to replace professional advice. Still, critics argue that the format itself encourages users to treat AI summaries as definitive guidance.

Patient advocacy groups say the episode exposes a deeper structural problem. Vanessa Hebditch, director of communications and policy at the British Liver Trust, welcomed the apparent removal of AI Overviews for liver test queries but said the change does not address the underlying risk.

“This is excellent news,” Hebditch told the Guardian. “Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it’s not tackling the bigger issue of AI Overviews for health.”

Her comments echo broader concerns among clinicians and regulators that platform-level fixes triggered by media attention are insufficient. Health information is one of the most heavily regulated areas of communication, and mistakes can carry real-world consequences, yet generative AI tools are often deployed with fewer safeguards than traditional medical publications.

The episode comes as governments worldwide intensify scrutiny of AI systems used in sensitive domains. In Europe, regulators have signaled that health-related AI applications will face higher compliance standards under the EU’s AI framework, while in the United Kingdom, policymakers have stressed that platforms must demonstrate a duty of care when distributing medical information.

For Google, the partial withdrawal of AI Overviews appears to reflect a balancing act rather than a retreat. The company continues to promote AI-powered search experiences while making quiet adjustments to avoid reputational and regulatory fallout. It remains unclear whether those adjustments will prompt a more systematic rethink of how the tech giant uses AI for health searches.
