New LegalPwn Attack Exploits Gemini, ChatGPT and other AI Tools into Executing Malicious Code via Disclaimers

A sophisticated new attack method that exploits AI models’ tendency to comply with legal-sounding text, successfully bypassing safety measures in popular development tools. A study by Pangea AI Security has revealed a novel prompt injection technique dubbed “LegalPwn” that weaponizes legal disclaimers, copyright notices, and terms of service to manipulate large language models (LLMs) into […]

The post New LegalPwn Attack Exploits Gemini, ChatGPT and other AI Tools into Executing Malicious Code via Disclaimers appeared first on Cyber Security News.

This article has been indexed from Cyber Security News

Read the original article: