Google AI Search Feature Under Fire for Spreading Harmful Medical Misinformation

A new AI-powered feature integrated into the world's most popular search engine is raising alarms after being caught generating dangerously inaccurate health advice. The tool, which automatically creates summaries at the top of search results, has provided recommendations that medical experts warn could lead to serious injury or death.

The system recently advised a user to consume at least one small rock per day for necessary minerals and vitamins, falsely presenting this as a legitimate suggestion from a geologist at the University of Chicago. In another instance, it recommended using glue to make cheese stick more effectively to pizza. These bizarre and hazardous outputs demonstrate a critical failure in the AI's ability to distinguish safe information from satirical or blatantly wrong sources.

More disturbingly, the AI has ventured into giving specific and perilous medical guidance. When queried about how many charcoal briquettes to eat, the system reportedly repeated an old, debunked internet meme, suggesting a user could consume them safely. In reality, eating charcoal briquettes can cause severe internal blockages and poisoning, and can be fatal. The AI has also been documented blending unverified, user-generated forum content with established medical facts, creating a confusing and dangerous amalgamation that users might trust because of the platform's authority.

This presents a profound risk. Individuals increasingly turn to search engines for immediate health questions. If the information they receive is inaccurate or presented without critical context, it can lead to serious harm. People may forgo necessary professional medical treatment in favor of these AI-generated summaries, which carry an implied credibility by virtue of their placement on a trusted search platform.

The core issue appears to be the AI's design.
These large language models generate responses by predicting sequences of words based on patterns in their vast training data, which includes both reputable and poor-quality internet sources. They lack true understanding, a fact-checking mechanism, or the ability to apply medical judgment. Their goal is to produce a coherent-sounding answer, not a clinically vetted one. When the model encounters strange or malicious content online, it can inadvertently repackage it as fact.

The company behind the tool has stated that these examples are rare and not representative of most user experiences. It acknowledges that the system can sometimes generate odd or inaccurate responses, attributing this to what it terms "data voids," or information gaps where high-quality content is scarce. It also notes that many of the problematic examples seen online involve uncommon or deliberately manipulated queries.

However, critics argue that for a feature affecting billions of users, any rate of failure is unacceptable when public health is at stake. The automated nature of the system means dangerous advice can be generated at scale, instantly. While the company has implemented some guardrails and says it is taking swift action to remove policy-violating responses, the reactive nature of these fixes means harmful information can circulate until it is specifically flagged.

This situation highlights a broader tension in the rapid deployment of AI into essential information services. The race to integrate generative AI comes with significant ethical responsibilities, especially in high-stakes areas like health. Ensuring the safety and accuracy of automated systems before they launch widely remains a formidable challenge. For now, experts strongly caution the public against relying on AI search summaries for any health-related decisions, emphasizing that no AI should replace consultation with a qualified healthcare professional.
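The word-prediction behavior described above can be illustrated with a toy model. This is a minimal sketch, not Google's actual system: the tiny "corpus," the bigram approach, and the generate function are all illustrative assumptions, chosen only to show how a model that chains statistically plausible words can reproduce unsafe text from its training data without any notion of truth.

```python
import random

# Toy "training data": a reliable source and a satirical one,
# mixed together exactly as scraped web text would be.
corpus = (
    "doctors recommend rest and fluids . "
    "geologists recommend eating one rock daily ."
).split()

# Build a bigram table: for each word, which words followed it.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, length=6, seed=0):
    """Chain plausible next words from the bigram table.

    The model only predicts likely continuations; nothing in it
    can recognize that the satirical advice is dangerous.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        choices = transitions.get(words[-1])
        if not choices:
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("geologists"))
```

Prompted with "geologists", the sketch dutifully continues with "recommend ..." because that is the only pattern it has seen, mirroring how a real model can surface rock-eating advice as a fluent, confident-sounding answer.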
The incident serves as a stark reminder that when it comes to medical advice, the source of the information is as important as the information itself.

