Sensitive AI Key Leak: A Wave of Security Concerns in U.S. Government Circles


A concerning security mistake involving a U.S. government employee has raised alarms over how powerful artificial intelligence tools are being handled. A developer working for the federal Department of Government Efficiency (DOGE) reportedly made a critical error by accidentally sharing a private access key connected to xAI, an artificial intelligence company linked to Elon Musk.

The leak was first reported after a programming script uploaded to GitHub, a public code-sharing platform, was found to contain login credentials tied to xAI's systems. The credentials reportedly unlocked access to at least 52 of the company's internal AI models, including Grok-4, one of xAI's most advanced tools, comparable in capability to OpenAI's GPT-4.
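The report describes credentials embedded directly in a script that was pushed to a public repository. As a purely illustrative sketch (not the actual leaked code; the environment-variable name XAI_API_KEY is hypothetical), the hardcoding anti-pattern and a safer alternative look roughly like this in Python:

    import os
    import sys

    # Anti-pattern: a credential written directly into source code. Anyone who
    # can read the file, for example on a public GitHub repository, can read
    # the key.
    # API_KEY = "xai-EXAMPLE-DO-NOT-HARDCODE"

    # Safer pattern: load the key from the environment at runtime, so the
    # secret never appears in the committed code.
    API_KEY = os.environ.get("XAI_API_KEY")  # variable name is illustrative

    if not API_KEY:
        sys.exit("XAI_API_KEY is not set; refusing to run without a credential.")

    # The script would then send API_KEY in an Authorization header when
    # calling the model provider's API.

Pre-commit secret-scanning tools are commonly used to catch exactly this class of mistake before code reaches a public repository.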

The employee, identified in reports as 25-year-old Marko Elez, had top-level access to various government platforms and databases. These included systems used by sensitive agencies such as the Department of Homeland Security, the Justice Department, and the Social Security Administration.

The key remained active and publicly visible for some time before it was taken down, sparking concerns that others may have accessed or copied the credentials while they were exposed.

Why It Matters

Security experts say this isn't just a one-off mistake; it's a sign that powerful AI systems may be handled too carelessly, even by insiders with government clearance. If the leaked key had been misused before its removal, bad actors could

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
