TL;DR

Sanaz Yashar, a former Unit 8200 operative and current CEO of Zafran Security, warns that AI is accelerating the pace at which vulnerabilities are weaponized and predicts a large-scale AI-enabled attack is likely. She points to industry analysis showing attackers are exploiting bugs faster than vendors can patch them and says defenders must use AI-based tools — with humans remaining in the loop — to manage risk.

What happened

Sanaz Yashar, who spent 15 years in Israel’s Unit 8200 and later led threat-intelligence teams at Cybereason and at Google’s incident-response and threat-intel business, Mandiant, told reporters that AI has dramatically changed the speed of exploit development. She recalled that producing a high-quality zero-day once took roughly a year; today, AI tools let adversaries discover and weaponize flaws far faster. Yashar cited industry analysis indicating the average time-to-exploit (TTE) reached minus one day in 2024, meaning many bugs were exploited before vendors issued fixes, and said roughly 78% of observed weaponizations involved LLMs or other AI. She warned that widespread enterprise use of AI enlarges the attack surface, from prompt-injection abuse to vulnerabilities in AI frameworks themselves, and expressed concern that less skilled actors could trigger disproportionate, hard-to-mitigate collateral damage. Her company, Zafran, builds AI-driven threat-exposure management tooling and recommends pairing AI agents with human oversight for triage and remediation.

Why it matters

  • A negative average TTE means many vulnerabilities are exploited before patches reach users, reducing defenders' reaction time.
  • AI tooling is lowering the technical barrier and increasing the speed of weaponizing vulnerabilities, changing attacker economics.
  • Wider deployment of AI in products and workflows expands exploitable attack surfaces, creating new avenues like prompt injection and agent manipulation.
  • Inexperienced or loosely coordinated actors abusing AI-driven exploits could produce unpredictable, large-scale collateral damage.
  • Defenders may need to adopt AI-enabled detection and remediation workflows, while keeping humans involved for risk decisions.

Key facts

  • Sanaz Yashar spent 15 years in Israel’s Unit 8200 before entering the commercial security sector.
  • Yashar co-founded Zafran Security in 2022; the company develops AI-based threat-exposure management tools.
  • She previously led threat intelligence at Cybereason and worked at Google’s incident response and threat-intel business, Mandiant.
  • Yashar says zero-day development used to take about 360 days; AI has drastically reduced that timeline.
  • Industry analysis cited by Yashar found the average time-to-exploit (TTE) in 2024 reached -1 day.
  • Yashar states that approximately 78% of observed vulnerability weaponizations involved large language models or other AI tooling.
  • She warns of prompt-injection and other attacks that can manipulate AI agents or bypass guardrails.
  • Zafran’s approach uses AI agents to investigate, triage and build mitigation actions, with human approval for risk choices.
  • Yashar predicts a large-scale AI-enabled cyber incident comparable in impact to WannaCry is likely to occur.

What to watch next

  • Trends in time-to-exploit (TTE) metrics from industry telemetry and research groups (monitor for further negative or shortening values).
  • Reports of AI-driven weaponization techniques, including prompt injection, agent manipulation, and exploits targeting AI frameworks.
  • Not confirmed in the source: the timing, origin, or specific vector of any large-scale 'WannaCry of AI' event.

Quick glossary

  • Zero-day: A previously unknown software vulnerability that has no available patch at the time it is discovered by attackers or defenders.
  • Time-to-exploit (TTE): A metric that measures how many days elapse between a vendor patch release and when attackers exploit the vulnerability; negative values indicate exploitation before a patch.
  • Prompt injection: A technique that manipulates inputs given to an AI model or agent to override safety checks or cause unintended behavior.
  • AI agent: An automated software entity that uses AI models to perform tasks, make decisions, or pursue goals with varying degrees of autonomy.
  • Threat-exposure management: Processes and tools that identify, prioritize, and remediate vulnerabilities and other security exposures in an organization.
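The TTE metric above is simple date arithmetic: days elapsed from patch release to first observed exploitation, with negative values indicating pre-patch exploitation. A minimal sketch, using made-up observation dates (not real CVE data) purely to illustrate how a negative average like the cited -1 day can arise:

```python
from datetime import date

def time_to_exploit(patch_released: date, first_exploited: date) -> int:
    """Days from patch release to first observed exploitation.

    A negative result means the bug was exploited before a patch existed.
    """
    return (first_exploited - patch_released).days

# Hypothetical observations: (patch release date, first exploitation date)
observations = [
    (date(2024, 3, 10), date(2024, 3, 7)),   # exploited 3 days before the patch
    (date(2024, 5, 1),  date(2024, 5, 2)),   # exploited 1 day after the patch
    (date(2024, 8, 20), date(2024, 8, 19)),  # exploited 1 day before the patch
]

ttes = [time_to_exploit(patched, exploited) for patched, exploited in observations]
avg_tte = sum(ttes) / len(ttes)
print(ttes, avg_tte)  # [-3, 1, -1] -1.0
```

A fleet-wide average of -1 day, as in the analysis Yashar cites, means pre-patch exploitation outweighs post-patch exploitation across the sample, not that every bug was exploited early.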

Reader FAQ

Did Yashar say a major AI-driven global attack has already happened?
No. She said the “WannaCry of AI” has not yet occurred but warned it is likely to happen in the future.

What does a negative TTE mean?
It means attackers are exploiting vulnerabilities before the vendor’s patch is publicly available — Yashar referenced a -1 day average for 2024.

Can AI be used to defend against AI-enabled attacks?
According to Yashar, defenders should use AI-based tools and agents to find and mitigate exposures, while keeping humans in the decision loop.

Is the source sure who will launch such an AI-driven attack?
Not confirmed in the source.

Sources

  • “Spy turned startup CEO: ‘The WannaCry of AI will happen’” (subhead: “Ah, the good old days when 0-day development took a year”), Jessica Lyons, Mon 22 Dec 2025, 19:39 UTC.
