LLM chatbots trivial to weaponise for data theft, say boffins

System prompt engineering recasts benign AI assistants in ‘investigator’ and ‘detective’ roles that bypass privacy guardrails

A team of boffins is warning that AI chatbots built on large language models (LLMs) can be tuned into malicious agents that autonomously harvest users’ personal data, even by attackers with “minimal technical expertise”, thanks to “system prompt” customization tools from OpenAI and others…
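The mechanism being described is ordinary developer plumbing: a “system prompt” is simply the first message in a chat API call, so reshaping an assistant’s persona takes a few lines of code. Below is a minimal sketch using OpenAI’s Python client with a deliberately benign persona; the model name and prompt text are illustrative assumptions, not the researchers’ actual attack prompts.

```python
# Minimal sketch: a system prompt is just the first message in a chat
# completion request. The model name and persona text are illustrative
# assumptions, not the prompts used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful customer-support assistant."
    # The attack described swaps this persona for an 'investigator'
    # role that steers the conversation toward personal details.
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model accepts a system message
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Hi, I need help with my account."},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is that nothing here requires expertise: the same parameter that makes an assistant polite can, with different text, make it extractive.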
