TL;DR
Google’s Threat Intelligence Group reports a shift from using AI for productivity to embedding LLMs inside malware, including dropper and data‑stealing families that call models at runtime. Some tools are experimental, but several AI-augmented threats have been observed in active operations and Google says it has disabled associated assets and strengthened model safeguards.
What happened
Google’s Threat Intelligence Group (GTIG) says adversaries are moving beyond using AI as a helper and have begun integrating large language models (LLMs) directly into malware. The report identifies the first observed instances of “just‑in‑time” AI, in which malware calls a model during execution: families such as PROMPTFLUX and PROMPTSTEAL request code or commands at runtime, either to regenerate their own source or to perform targeted data collection. Other detections include FRUITSHELL (a PowerShell reverse shell with hard‑coded prompts intended to evade LLM‑based analysis) and QUIETVAULT (a JavaScript credential stealer that uses on‑host AI tooling to hunt for secrets and exfiltrate them via GitHub). Some samples, notably PROMPTFLUX and PROMPTLOCK, are labeled experimental and appear to be in development; PROMPTFLUX contains modules that query the Gemini API for freshly obfuscated copies of its own code, a rudimentary self‑modification loop. GTIG reports state‑backed and criminal actors using Gemini and other models across the attack lifecycle, and says it has disabled the associated assets and applied the intelligence to harden its models and classifiers.
Why it matters
- Malware that queries LLMs at runtime can mutate or generate payloads on demand, complicating detection that relies on static signatures (a simple string‑hunting heuristic is sketched after this list).
- Availability of multifunctional AI tooling in underground markets lowers the technical barrier for less experienced actors.
- State‑linked actors are using AI across reconnaissance, phishing, C2 development and exfiltration, potentially increasing scale and speed of campaigns.
- Even though several samples are experimental, the documented techniques are early indicators of a trend that defenders must prepare for.
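The static‑signature concern in the first bullet is concrete: when payload code is regenerated on each run, byte‑level signatures churn, but the prompts and API endpoints baked into the loader often do not. As a minimal illustration, the sketch below hunts script files for strings commonly associated with runtime LLM calls; the endpoint watchlist and prompt heuristics are illustrative assumptions, not indicators published by GTIG.

```python
"""Minimal hunting heuristic: flag scripts that embed LLM API endpoints
or prompt-like strings. The endpoint and keyword lists are illustrative
assumptions, not GTIG-published indicators."""
import re
import sys
from pathlib import Path

# Hypothetical watchlist: hosts of popular hosted-model APIs.
LLM_API_HOSTS = [
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face Inference API
]

# Crude prompt heuristics: imperative phrases rarely seen in benign scripts.
PROMPT_PATTERNS = [
    re.compile(r"you are an? (expert|assistant)", re.I),
    re.compile(r"respond (only )?with (code|a single|one line)", re.I),
    re.compile(r"do not (include|add) (any )?(explanation|comments)", re.I),
]

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one script file."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings  # unreadable file: skip rather than fail the sweep
    for host in LLM_API_HOSTS:
        if host in text:
            findings.append(f"LLM API endpoint string: {host}")
    for pat in PROMPT_PATTERNS:
        if pat.search(text):
            findings.append(f"prompt-like string matching /{pat.pattern}/")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for ext in ("*.vbs", "*.ps1", "*.py", "*.js"):
        for path in root.rglob(ext):
            for finding in scan_file(path):
                print(f"{path}: {finding}")
```

In practice this would feed triage rather than blocking, since plenty of legitimate automation calls the same APIs.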
Key facts
- GTIG identified the first use of “just‑in‑time” AI in malware, in families such as PROMPTFLUX and PROMPTSTEAL.
- PROMPTFLUX is a VBScript dropper that can call the Gemini API to rewrite and obfuscate its own source; it is described as experimental.
- PROMPTSTEAL is a Python data miner that uses the Hugging Face API to query Qwen2.5‑Coder‑32B‑Instruct for one‑line Windows commands, then exfiltrates the collected data; its runtime API traffic is exactly the kind of egress the sketch after this list flags.
- QUIETVAULT is a JavaScript credential stealer targeting GitHub and NPM tokens; it uses AI prompts and on‑host AI CLI tools to locate additional secrets and exfiltrate them via a public GitHub repository.
- FRUITSHELL is a PowerShell reverse shell observed in operations and contains hard‑coded prompts intended to bypass LLM‑powered security analysis.
- PROMPTLOCK is proof‑of‑concept cross‑platform ransomware, written in Go, that generates and executes Lua scripts via an LLM; it is labeled experimental.
- GTIG observed actors using social‑engineering‑style pretexts in prompts (e.g., posing as students or researchers) to evade AI safety guardrails.
- The underground marketplace for illicit AI tooling has matured in 2025 with multifunctional offerings for phishing, malware development and vulnerability research.
- GTIG reports misuse of Gemini by state‑sponsored actors from North Korea, Iran and the People’s Republic of China across multiple stages of intrusions.
- Google says it disabled assets tied to some of this activity and has used findings to strengthen classifiers and model safeguards, including protections for Gemini.
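Because families like PROMPTFLUX and PROMPTSTEAL call hosted model APIs at runtime, their traffic can also surface in egress telemetry: an LLM endpoint contacted by a script host or a one‑off process is a very different signal than the same endpoint contacted by a browser or IDE. The sketch below applies that idea to a generic connection log; the CSV layout, host watchlist, and process allow‑list are all assumptions for illustration.

```python
"""Toy egress-triage pass: flag processes talking to hosted-LLM API
endpoints that are not on a known-good allow-list. The CSV layout,
watchlist, and allow-list are illustrative assumptions."""
import csv
import sys

# Hypothetical watchlist of hosted-model API hosts.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",
    "api-inference.huggingface.co",
}

# Processes expected to reach these hosts in this (assumed) environment.
ALLOWED_PROCESSES = {"chrome.exe", "firefox.exe", "code.exe"}

def triage(log_path: str) -> None:
    """Read rows of (timestamp, process, dest_host) and print outliers."""
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            host = row["dest_host"].lower()
            proc = row["process"].lower()
            if host in LLM_API_HOSTS and proc not in ALLOWED_PROCESSES:
                print(f"[!] {row['timestamp']} {proc} -> {host}")

if __name__ == "__main__":
    triage(sys.argv[1])
```

Script hosts such as wscript.exe (PROMPTFLUX is VBScript) or powershell.exe turning up in the flagged set would be the interesting hits.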
What to watch next
- Potential increase in LLM‑driven self‑modifying malware that evades static detection; GTIG expects broader adoption (a hash‑churn heuristic is sketched after this list).
- Expansion of multifunctional AI tool offerings in underground markets that simplify malware development and phishing campaigns.
- Effectiveness of model‑level safeguards and classifier improvements in preventing prompt‑based evasion and abuse.
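One practical counter to the first item is to treat rewrite churn itself as the signal: a script at a stable path whose content hash changes between scans is behaving unusually for benign software. A minimal sketch of that bookkeeping follows, with the state‑file location and scan scope as assumptions.

```python
"""Track content hashes of scripts across scans and report churn.
Self-rewriting malware (the PROMPTFLUX pattern) shows up as the same
path carrying a new hash on every pass. The state-file location and
scan scope are illustrative assumptions."""
import hashlib
import json
from pathlib import Path

STATE_FILE = Path("script_hashes.json")  # assumed state location
SCAN_GLOBS = ("*.vbs", "*.ps1", "*.js")  # assumed scope

def scan(root: Path) -> None:
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current = {}
    for glob in SCAN_GLOBS:
        for path in root.rglob(glob):
            try:
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
            except OSError:
                continue  # unreadable file: skip rather than fail the scan
            key = str(path)
            current[key] = digest
            if key in previous and previous[key] != digest:
                print(f"[!] rewritten since last scan: {key}")
    STATE_FILE.write_text(json.dumps(current, indent=2))

if __name__ == "__main__":
    scan(Path("."))
```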
Quick glossary
- Large Language Model (LLM): A machine learning model trained on large text corpora to generate or transform text and code in response to prompts.
- Dropper: Malware designed to install or deliver additional malicious code onto a target system.
- Obfuscation: Techniques used to make code harder to analyze or detect, often by changing structure or encoding.
- Command and control (C2): A channel used by threat actors to send commands to and receive data from compromised systems.
- Exfiltration: The unauthorized transfer of data from a target system to an attacker‑controlled location.
Reader FAQ
Did GTIG find LLMs being used directly by malware?
Yes. The report documents malware families that call LLMs during execution to generate or modify code, with PROMPTFLUX and PROMPTSTEAL given as examples.
Are these AI‑enabled attacks widespread and actively compromising networks?
Partially. Some families, such as PROMPTSTEAL and QUIETVAULT, were observed in active operations; others, notably PROMPTFLUX and PROMPTLOCK, are described as experimental or in development and have not been tied to widespread compromise.
What mitigations has Google taken?
Google says it disabled assets tied to the activity and used the intelligence to strengthen classifiers and model safeguards, including protections for Gemini.
Which models and APIs were mentioned in the report?
The report references Google Gemini (including use of the gemini‑1.5‑flash‑latest model tag) and Qwen2.5‑Coder‑32B‑Instruct, accessed via the Hugging Face API.

Sources
- GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools
- Threat actors misuse AI to enhance operations
- AI-Augmented Malware Threats Emerge in Real Attacks
- Google warns that a new era of self-evolving, AI-driven …