The Wikimedia Foundation, the umbrella organization of Wikipedia and a dozen or so other crowdsourced knowledge projects, said on Wednesday that bandwidth consumption for multimedia downloads from Wikimedia Commons has surged by 50% since January 2024.
The reason, the outfit wrote in a blog post Tuesday, isn’t due to growing demand from knowledge-thirsty humans, but from automated, data-hungry scrapers looking to train AI models.
“Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs,” the post reads.
Wikimedia Commons is a freely accessible repository of images, videos, and audio files that are available under open licenses or are otherwise in the public domain.
Digging down, Wikimedia says that almost two-thirds (65%) of the most “expensive” traffic — that is, the most resource-intensive in terms of the kind of content consumed — was from bots. However, just 35% of overall pageviews come from these bots. The reason for this disparity, according to Wikimedia, is that frequently accessed content stays closer to the user in its cache, while less frequently accessed content is stored farther away in the “core data center,” which is more expensive to serve from. That less popular content is exactly the kind bots typically go looking for.
“While human readers tend to focus on specific – often similar – topics, crawler bots tend to ‘bulk read’ larger numbers of pages and visit also the less popular pages,” Wikimedia writes. “This means these types of requests are more likely to get forwarded to the core datacenter, which makes it much more expensive in terms of consumption of our resources.”
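The economics here are easy to see in miniature. The sketch below is not Wikimedia's actual architecture; it simply assumes a hypothetical edge cache holding the 1,000 most popular of 100,000 pages, humans who mostly request popular pages, and a bot that bulk reads everything uniformly. The bot's cache-hit rate collapses, so nearly all of its requests fall through to the expensive backend:

```python
# Illustrative model only: an "edge cache" holding the hot pages,
# with every cache miss falling through to the core data center.
CACHED_PAGES = set(range(1_000))   # hypothetical top 1,000 "hot" pages
TOTAL_PAGES = 100_000

def cache_hit_rate(requests):
    """Fraction of requests served from the edge cache."""
    hits = sum(1 for page in requests if page in CACHED_PAGES)
    return hits / len(requests)

# Humans concentrate on popular topics: here, 90% of requests
# land on the cached top 1,000 pages.
human_requests = [i % 1_000 for i in range(9_000)] + \
                 [1_000 + (i % 99_000) for i in range(1_000)]

# A crawler "bulk reads" the whole corpus, popular or not.
bot_requests = list(range(TOTAL_PAGES))

print(f"human cache-hit rate: {cache_hit_rate(human_requests):.0%}")  # 90%
print(f"bot cache-hit rate:   {cache_hit_rate(bot_requests):.0%}")    # 1%
```

Under these toy numbers, 99% of the bot's traffic bypasses the cache, which is the dynamic Wikimedia describes: bots are a minority of pageviews but a majority of backend cost.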
The long and short of all this is that the Wikimedia Foundation’s site reliability team has to spend significant time and resources blocking crawlers to avert disruption for regular users. And that is before factoring in the cloud costs the foundation faces.
In truth, this represents part of a fast-growing trend that is threatening the very existence of the open internet. Last month, software engineer and open source advocate Drew DeVault bemoaned the fact that AI crawlers ignore “robots.txt” files that are designed to ward off automated traffic. And “pragmatic engineer” Gergely Orosz also complained last week that AI scrapers from companies such as Meta have driven up bandwidth demands for his own projects.
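For context, robots.txt (standardized as the Robots Exclusion Protocol in RFC 9309) is simply a plain-text file served at a site’s root that asks crawlers to stay away from some or all pages. A minimal example, using a made-up crawler name for illustration, looks like this:

```
# Served at https://example.com/robots.txt
User-agent: ExampleAIBot    # hypothetical AI crawler
Disallow: /                 # ask it to skip the entire site

User-agent: *               # all other crawlers
Allow: /
```

The catch, as DeVault’s complaint highlights, is that compliance is entirely voluntary: nothing in the protocol technically prevents a scraper from ignoring the file and fetching pages anyway.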
While open source infrastructure, in particular, is in the firing line, developers are fighting back with “cleverness and vengeance,” as TechCrunch wrote last week. Some tech companies are doing their bit to address the issue, too — Cloudflare, for example, recently launched AI Labyrinth, which uses AI-generated content to slow crawlers down.
However, it’s very much a cat-and-mouse game that could ultimately force many publishers to duck for cover behind logins and paywalls — to the detriment of everyone who uses the web today.