Wikimedia Raises Alarm on AI Threat to Internet Knowledge and Its Own Survival

The Wikimedia Foundation is issuing a stark warning about how artificial intelligence is undermining the ecosystem of reliable information online. In a recent blog post, the organization's senior director of product, Marshall Miller, detailed a significant drop in page views that the foundation attributes directly to the proliferation of large language model chatbots and AI-generated summaries in search engine results.

Miller stated that the foundation believes these declines reflect the effect of generative AI and social media on how people find information. He pointed specifically to search engines that now answer queries directly, frequently using content sourced from Wikipedia itself, which removes the need to visit the site.

Compounding the traffic problem, Wikimedia has been grappling with a massive influx of sophisticated AI bot crawlers. These automated systems have become so advanced that distinguishing them from human visitors is a major challenge. After implementing improved bot detection to get a clearer picture of genuine human traffic, the foundation found an 8 percent year-over-year decrease in page views.

Miller frames this as a problem far greater than simple website metrics, however. He describes an existential risk to what he calls the only online platform of its scale powered by strict principles of verifiability, neutrality, and transparency. A continued downward trend in traffic, he warns, could create a dangerous cycle: fewer visitors would likely lead to a decline in the volunteer editors who maintain the site, reduced funding from donations, and ultimately a degradation in the quality and reliability of the content that so many other platforms depend on.

Miller's proposed solution is for the companies behind LLMs and search engines to be more deliberate about directing users back to the original sources of information.
He argues that for people to trust the information they find online, platforms must clearly state where it comes from and offer easy ways for users to visit, and even participate in, those sources, such as Wikipedia.

The situation carries some irony: Wikipedia itself recently considered placing AI-generated summaries at the top of its own articles. That proposal met immediate and intense criticism from the community of volunteer editors and was canceled before any testing began.

The core of Wikimedia's argument is that the current trajectory of AI information delivery, while convenient, risks starving the very sources that make reliable knowledge possible in the first place. The foundation is calling for a new model of collaboration in which AI tools support and elevate human-curated knowledge platforms rather than inadvertently bypassing them.


