Here’s How ‘AI Poisoning’ Tools Are Sabotaging Data-Hungry Bots

The internet has evolved from a platform used mainly by people for social sharing into one dominated by automated bots, especially those powered by AI. Bots now generate most web traffic, and more than half of that bot traffic comes from malicious actors harvesting unprotected personal data. Many bots, however, are operated by major AI companies: OpenAI's ChatGPT bot accounts for 6% of total web traffic, while Anthropic's ClaudeBot makes up 13%.

These AI bots systematically scrape online content to train their models and answer user queries, raising concerns among content creators about widespread copyright infringement and unauthorized use of their work.

Legal battles with AI companies are out of reach for most creators because of the costs involved, prompting some to turn to technical countermeasures instead: tools designed to make it harder for AI bots to access or make use of online content.
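The most basic of these measures, which the purpose-built tools described below go beyond, is simply asking crawlers to stay away. OpenAI's GPTBot and Anthropic's ClaudeBot both identify themselves with published user-agent strings, so a site's robots.txt file can disallow them. A minimal sketch follows; note that compliance with robots.txt is voluntary, so this only deters well-behaved crawlers:

```
# robots.txt at the site root: ask AI crawlers not to scrape anything
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```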


Some specifically aim to “poison” the data: deliberately introducing subtle or hidden modifications so that AI models misinterpret the material. For example, the University of Chicago’s Glaze tool makes imperceptible changes to digital artwork that fool models into misreading an artist’s style. Nightshade, another free tool from the same team, goes a step further by convincing AI models that terms like “cat” refer to something else entirely, so that a model trained on poisoned images learns the wrong associations.
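To make the idea concrete, here is a minimal, hypothetical sketch of the underlying technique such tools build on, adversarial perturbation, and not Glaze's or Nightshade's actual algorithms. It uses a standard PyTorch image classifier and an FGSM-style targeted step to nudge an image toward an unrelated class while keeping pixel changes small; the model, target class, and step size are all illustrative assumptions:

```python
# Conceptual sketch only: NOT Glaze or Nightshade's real method.
# Illustrates the general idea they build on: tiny pixel changes
# that shift how a model interprets an image.
import torch
import torchvision.models as models

# Illustrative stand-in model; the tools target generative models instead.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def perturb(image: torch.Tensor, target_class: int, epsilon: float = 0.02) -> torch.Tensor:
    """Nudge `image` toward `target_class` with an FGSM-style step,
    keeping the change small enough to be hard to see."""
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    # Loss shrinks as the model grows more confident in the *wrong* class.
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient: push the image toward the target class.
    poisoned = image - epsilon * image.grad.sign()
    return poisoned.clamp(0, 1).detach()

# Example: perturb a random stand-in "artwork" toward an unrelated class.
artwork = torch.rand(3, 224, 224)
poisoned = perturb(artwork, target_class=207)  # 207 = "golden retriever" in ImageNet
print((poisoned - artwork).abs().max())        # perturbation stays tiny
```

In practice, Glaze and Nightshade compute far more sophisticated, style-aware perturbations designed to survive the training process, but the core trade-off is the same: changes small enough to be invisible to humans yet large enough to mislead a model.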

[…]