February 20, 2026 · Source: SecurityWeek

PromptSpy Android Malware Abuses Gemini AI at Runtime for Persistence


When This Started

It's February 2026. SecurityWeek just broke the story. But PromptSpy's footprint on infected devices? That goes back further—researchers are still piecing together the timeline. What we know: the malware's been actively operating, surviving reboots, and doing it through a method that's frankly alarming in its sophistication.

Google's Gemini AI at runtime. That's the vector.

The Discovery

SecurityWeek's investigation revealed PromptSpy wasn't spotted through traditional malware signatures or behavioral analysis alone. Instead, researchers noticed something unusual about how certain Android devices were behaving post-reboot. The malware kept coming back. But more importantly, it was adapting.

The discovery highlighted a gap in runtime vulnerability scanning tools that most organizations rely on. Traditional endpoint protection? It wasn't catching this.

What makes this particularly nasty: the malware doesn't just hide in code. It analyzes on-screen elements in real time and uses Gemini AI to determine its next moves. It's thinking. Or at least, it's making decisions based on AI analysis of what it sees.

Technical Analysis

Here's what's actually happening under the hood. PromptSpy operates by intercepting runtime data and feeding it to Google's Gemini AI model. The AI analyzes the device's current state—what apps are open, what's displayed, what the user's doing—and generates commands that keep the malware persistent across reboots.

Think of it like this: instead of hardcoding instructions, the malware outsources its decision-making to an AI system. That's evasion on a different level.
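That control pattern can be sketched as a toy, deliberately offline illustration. This is not PromptSpy's code, and no real AI endpoint is contacted; the function names and the stubbed decision rule are assumptions made purely to show the shape of AI-outsourced control flow.

```python
# Toy illustration of the pattern described above: instead of a hardcoded
# playbook, each step is chosen by querying an external model with a
# summary of the observed device state. Everything here is hypothetical.

def build_prompt(screen_state: dict) -> str:
    """Summarize observed device state into a model prompt."""
    return f"Foreground app: {screen_state['app']}; visible: {screen_state['elements']}"

def query_model(prompt: str) -> str:
    """Stand-in for a cloud AI call; a real sample would hit an API here.

    Stubbed locally so the sketch runs offline.
    """
    if "antivirus" in prompt.lower():
        return "GO_DORMANT"
    return "CONTINUE"

def next_action(screen_state: dict) -> str:
    """State in, command out: the decision itself lives outside the code."""
    return query_model(build_prompt(screen_state))

if __name__ == "__main__":
    print(next_action({"app": "Settings", "elements": ["Battery", "Apps"]}))
    print(next_action({"app": "Antivirus Scanner", "elements": ["Scan now"]}))
```

The point of the sketch is the structure: because the "playbook" lives server-side in a model, static analysis of the binary reveals almost nothing about what the malware will actually do.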

The runtime vulnerability itself isn't in Gemini (Google's AI isn't compromised). It's in how PromptSpy exploits the runtime environment to make those API calls in the first place. Runtime vulnerability management solutions, including platforms like Dynatrace's runtime analytics, aren't designed to catch this pattern. They're built for memory corruption, buffer overflows, and traditional exploits. Not for "malware talking to a cloud AI service to decide if it should hide."

This also raises uncomfortable questions about container runtime vulnerability scanning and similar tools. If your security stack can't see what's happening at the AI integration layer, you've got blind spots. Serious ones.

Damage Assessment

How many devices are infected? SecurityWeek hasn't released a specific number, and that's concerning in itself. The malware's ability to survive reboots means infected users might not even realize what's running on their phone.

The impact potential is massive. Device persistence means the malware can capture credentials, intercept communications, or deploy additional payloads whenever it wants. And it's doing this with AI-driven decision-making that adapts to different device configurations.

This isn't a flash-in-the-pan threat.

Mitigation

For most users: update your device immediately and run a full security scan with your mobile security vendor. Yes, even if your vendor claims they're already protecting you. Because PromptSpy operates at the runtime level, some tools might miss it on the first pass.

For enterprises: this is where it gets harder. Runtime vulnerability scanning tools need to evolve. You need visibility into AI model API calls made by processes you don't recognize. You need runtime vulnerability analytics that flag suspicious communication patterns to external AI services.
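A minimal sketch of the kind of check that guidance implies: flag any process whose outbound connections hit known AI-service hostnames without being explicitly approved. The hostname list and connection records are illustrative assumptions, not output from any specific telemetry product.

```python
# Flag processes that talk to AI-service endpoints without approval.
# Hostnames below are public API hosts; the record format is assumed.

AI_SERVICE_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

def flag_suspicious(connections, approved_processes):
    """connections: iterable of (process_name, remote_host) pairs.

    Returns the pairs that reach an AI endpoint from an unapproved process.
    """
    return [
        (proc, host)
        for proc, host in connections
        if host in AI_SERVICE_HOSTS and proc not in approved_processes
    ]

conns = [
    ("chat_app", "api.openai.com"),                            # expected
    ("com.example.unknown", "generativelanguage.googleapis.com"),  # suspicious
]
print(flag_suspicious(conns, approved_processes={"chat_app"}))
```

In practice the connection pairs would come from your EDR or network telemetry; the allowlist is the part most organizations don't have today.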

And frankly? This should have been caught sooner. The fact that malware could abuse a major AI system for persistence without setting off wider alarms is a systemic failure.

If you're using Gemini or similar APIs in production, audit which processes can reach them. Restrict access. Treat AI service calls like the security boundary they apparently are.
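The audit step above amounts to a deny-by-default egress policy: an AI API call is only permitted when the calling workload is explicitly allowlisted for that host. This is a sketch of the policy logic; the policy shape and workload names are assumptions, not a specific product's API.

```python
# Deny-by-default egress check for AI service hosts: unknown hosts and
# unlisted workloads are both refused. Names here are hypothetical.

POLICY = {
    # host -> workloads permitted to reach it
    "generativelanguage.googleapis.com": {"assistant-service"},
}

def egress_allowed(workload: str, host: str) -> bool:
    """Allow only explicitly policy-listed (workload, host) pairs."""
    return workload in POLICY.get(host, set())

print(egress_allowed("assistant-service", "generativelanguage.googleapis.com"))  # True
print(egress_allowed("batch-worker", "generativelanguage.googleapis.com"))       # False
```

The same deny-by-default idea can be enforced at the network layer (egress firewall rules, Kubernetes NetworkPolicy) rather than in application code; the logic is what matters.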

PromptSpy won't be the last malware to weaponize AI. But it could be the one that finally forces the industry to take runtime AI security seriously.

