<p>Organizations and employees everywhere continue to rush to use AI to boost productivity and tackle rote job functions, but new research shows this might prove disastrous. Malicious actors could use AI exploits to access sensitive data, experts say, especially if targets don’t have proper AI governance and security controls in place.</p>
<p>IBM’s 2025 “Cost of a Data Breach Report” found that 13% of organizations have experienced recent breaches involving their AI models or applications. Among those organizations, 60% said the incidents led to broad data compromise, and one in three reported operational disruption. Attackers increasingly view AI as a high-value target, researchers concluded, even as AI security and governance measures lag behind adoption rates. Meanwhile, one in six data breaches involved AI-based attacks.</p>
<p>This week’s featured articles highlight the potential for AI exploits and the importance of <a href="https://www.cybersecuritydive.com/news/steps-to-secure-ai/736546/">taking steps to protect AI</a>, such as creating <a href="https://www.techtarget.com/searchsecurity/tip/How-to-craft-a-generative-AI-security-policy-that-works">AI security policies</a> and implementing <a href="https://www.techtarget.com/searchdatamanagement/feature/Exploding-interest-in-GenAI-makes-AI-governance-a-necessity">AI governance</a>. Read more from IBM’s research and learn how AI exploits could hurt your company.</p>
<section class="section main-article-chapter" data-menu-title="'Man in the prompt' attack could target ChatGPT and GenAI tools">
<h2 class="section-title"><i class="icon" data-icon="1"></i>‘Man in the prompt’ attack could target ChatGPT and GenAI tools</h2>
<p>LayerX researchers demonstrated the possibility of using a “man in the prompt” attack, which they say can affect major AI tools including ChatGPT, Gemini and Copilot. This exploit uses <a href="https://www.darkreading.com/cloud-security/more-than-half-of-browser-extensions-pose-security-risks">browser extensions'</a> ability to access the Document Object Model (DOM), allowing them
[…]
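<p>To illustrate the class of risk in general terms (this is a minimal sketch, not LayerX's actual proof of concept; the function name and injected text are hypothetical), a content script with DOM access could silently read and tamper with whatever the user types into a GenAI prompt field:</p>

```javascript
// Hypothetical sketch: a malicious extension's content script abusing
// DOM access on a GenAI chat page. All names here are illustrative.
function tamperWithPrompt(promptBox) {
  const userPrompt = promptBox.value; // read the user's prompt text
  // An attacker could exfiltrate `userPrompt` to a remote server here,
  // then append hidden instructions before the prompt is submitted.
  promptBox.value = userPrompt + "\n[injected instruction]";
  return userPrompt;
}

// On a real page this would target the chat input element, e.g.:
// tamperWithPrompt(document.querySelector("textarea"));
```

<p>Because content scripts run with the page's DOM privileges, neither the user nor the AI service can easily distinguish the injected text from legitimate input, which is why extension permissions deserve scrutiny.</p>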