A hacker has carried out one of the most advanced AI-driven cybercrime operations ever documented, using Anthropic’s Claude chatbot to identify targets, steal sensitive data, and even draft extortion emails, according to a new report from the company.
Anthropic disclosed that the attacker leveraged Claude Code, a version of its AI model designed for generating computer code, to assist in nearly every stage of the operation. The campaign targeted at least 17 organizations across industries including defense, finance, and healthcare, making it the most comprehensive example yet of artificial intelligence being exploited for cyber extortion.
Cyber extortion typically involves hackers stealing confidential data and demanding payment to prevent its release. AI has already played a role in such crimes, with chatbots being used to write phishing emails. However, Anthropic’s findings mark the first publicly confirmed case in which a mainstream AI model automated nearly the entire lifecycle of a cyberattack.
The hacker reportedly prompted Claude to scan for vulnerable companies, generate malicious code to infiltrate systems, and extract confidential files. The AI system then organized the stolen data, analyzed which documents carried the highest value, and suggested ransom amounts based on victims’ financial information. It also drafted extortion notes demanding bitcoin payments, which ranged from $75,000 to more
[…]