Cybercriminals Weaponize AI for Large-Scale Extortion and Ransomware Attacks

Anthropic has uncovered alarming evidence that cybercriminals are weaponizing its artificial intelligence tools for sophisticated criminal operations. The company's recent investigation revealed three particularly concerning abuses of its Claude AI: large-scale extortion campaigns, fraudulent recruitment schemes linked to North Korea, and AI-assisted ransomware development.

Criminal AI applications emerge 

In what Anthropic describes as an "unprecedented" case, hackers used Claude to conduct comprehensive reconnaissance across 17 organizations, systematically gathering usernames and passwords to infiltrate the targeted networks.

The AI tool autonomously carried out multiple stages of the attacks, including identifying valuable data for exfiltration, calculating ransom demands based on each victim's financial capacity, and drafting threatening messages to coerce the targeted companies into paying.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents