March 01, 2026 Source: SecurityWeek 3 min read · 592 words

Hackers Weaponize Claude Code in Mexican Government Cyberattack

150 ГБ втрачено: як хакери використали Claude AI проти мексиканського уряду

150GB Gone: How Hackers Weaponized Claude AI Against Mexican Government

One hundred and fifty gigabytes. That's the amount of data stolen from the Mexican government in a cyberattack where hackers d Claude AI to write exploits and build malicious tools. According to SecurityWeek, this isn't some theoretical vulnerability—this is a documented, active breach against a specific government target with actual data theft.

The scale here is what hits you first.

But the method? That's what should keep security teams up at night.

Breaking It Down

SecurityWeek's reporting reveals a sophisticated attack where threat actors didn't just use Claude as a research tool. They weaponized it. The attackers d Claude AI's code generation capabilities to write exploits tailored to Mexican government systems, create custom malicious tools, and—most critically—orchestrate the exfiltration of sensitive data at scale. The attack wasn't some one-off breach. It was methodical.

So why does this matter beyond the obvious national security implications?

Because this represents a fundamental shift in how cyberattacks are being conducted. Instead of threat actors relying on their own coding skills or purchasing pre-built exploit kits on underground forums, they're now outsourcing the heavy lifting to large language models. Claude AI, in this case, became an unwitting accomplice—generating code that its creators almost certainly never intended for malicious purposes.

And then it got worse. The attackers didn't just grab random files. The data exfiltrated suggests they knew exactly what they were after, which points to either prior reconnaissance or insider knowledge.

The Technical Side

Here's how this likely played out. Threat actors would've initially compromised entry points through traditional methods—phishing, credential stuffing, unpatched vulnerabilities. Nothing groundbreaking there. But once they had a foothold, they used Claude AI to generate exploit code specific to the target environment, create data exfiltration scripts optimized for the Mexican government's network topology, and develop evasion techniques to avoid detection. The AI handled the coding so the attackers could focus on operational security and data movement.

It's like hiring a really smart contractor you've never met and giving them the blueprint to your house. Except in this case, Claude didn't know it was building weapons.

What makes this particularly nasty is that claude ai cyber security defenses aren't designed to stop this kind of abuse. The model has safeguards, sure, but a determined attacker can work around them through prompt manipulation, incremental requests, or by framing malicious code as legitimate security research. The system doesn't inherently distinguish between a penetration tester and a state-sponsored actor.

Who's Affected

Directly? Mexican government agencies whose data was compromised. The 150GB figure suggests this wasn't a surgical strike targeting one department—this was broad, systematic collection.

Indirectly? Every organization relying on government services, contractors with government databases, citizens whose personal information may have been stored in those systems.

And honestly, all of us should care because this sets a precedent. If it works against a government target, threat actors will replicate it against corporations, financial institutions, critical infrastructure.

What To Do Now

For security teams: assume your AI-generated code might be monitored or manipulated. Audit any Claude AI outputs used in production environments. If you're using LLMs for security work, implement additional human review and testing protocols. Don't rely on the model's warnings—they're not foolproof.

For government agencies: conduct asset inventories immediately. Assume the attackers have detailed knowledge of your systems now. Change credentials for any accounts that might've been exposed. Work with threat intelligence partners to understand attribution.

For everyone else: this should accelerate conversations about AI governance in your organization. The genie's out of the bottle—large language models are powerful tools, and they're going to be weaponized. The only question is whether you're prepared for it.

Read original article →

// FAQ

Can Claude AI be used to write malware and exploits?

Yes, according to this incident and previous research. While Claude has safety measures, determined attackers can work around them through prompt engineering or incremental requests. The model can generate functional exploit code and malicious tools.

Was this attack confirmed to be state-sponsored or a specific threat group?

SecurityWeek's reporting documents the attack occurred, but attribution details about the specific threat actor or state involvement weren't specified in available reports. The sophistication suggests organized, well-resourced attackers.

How can organizations prevent Claude or LLMs from being used in cyberattacks against them?

Implement standard security controls: patch systems promptly, monitor for unusual data exfiltration, enforce strong authentication, conduct regular security audits, and assume attackers have access to LLM-generated exploits when designing defenses.

Concerned about your project's security? Run an automated pentest with AISEC — AI-powered scanner with expert verification. Go to dashboard →