TL;DR

A World Economic Forum survey shows a sharp rise in organizations checking AI tools for security, with 64% now assessing risks before deployment (up from 37%). Respondents view AI as the top driver of cybersecurity change in 2026, while geopolitical concerns and data-leak fears shape risk plans.

What happened

The World Economic Forum's Global Cybersecurity Outlook 2026 found that a substantially larger share of organizations now run security assessments on AI systems prior to deployment. Sixty-four percent of surveyed business leaders said they evaluate AI tools’ security risks, a steep increase from 37 percent in the prior year. Most respondents expect AI to be the main force reshaping cybersecurity this year, and a large majority believe AI-related vulnerabilities have grown. Leaders singled out data leaks and growing adversarial capabilities as leading worries, and many said geopolitics heavily influences their cyber strategies. The WEF report also highlighted divergence between executive views: CEOs listed cyber-enabled fraud as their top concern, while CISOs still cite ransomware and supply-chain attacks as their chief threats. On resilience, 64 percent said they meet minimum cyber-resilience requirements and only 19 percent feel they exceed those baselines. The findings were released ahead of the WEF’s annual Davos meeting.

Why it matters

  • Broader pre-deployment security checks suggest organizations are moving from experimentation toward operationalizing AI with an eye on risk management.
  • Widespread belief that AI will drive cybersecurity change could shift investment and staffing priorities toward AI-specific defenses and monitoring.
  • Geopolitical considerations — especially for very large firms — are increasingly shaping security plans, which can affect vendor choice and cross-border data handling.
  • Persistent gaps in perceived cyber resilience indicate many organizations remain exposed to business-disrupting incidents despite growing awareness.

Key facts

  • 64% of business leaders surveyed by the WEF said they assess AI tools’ security risks before deployment.
  • That 64% figure rose from 37% in the previous year, according to the WEF comparison.
  • 94% of respondents said AI will be the most significant driver of cybersecurity change in 2026.
  • 87% of respondents believe vulnerabilities associated with AI have increased.
  • 64% of surveyed organizations reported meeting minimum cyber-resilience requirements; 19% said they exceed those baselines.
  • Geopolitics shaped cyber risk strategies for 64% of organizations overall; among firms with more than 100,000 employees, that share was 91% versus 59% for organizations with fewer than 1,000 staff.
  • CEOs ranked cyber-enabled fraud (phishing, social engineering) first in their list of concerns, followed by AI vulnerabilities and software-exploit risks; CISOs continued to rank ransomware and supply-chain attacks as top threats.
  • Reporting over the prior year highlighted common AI-related problems such as prompt-injection attacks and instances where vendors needed to patch model issues.

What to watch next

  • Whether the rise in pre-deployment AI security checks is sustained in future WEF surveys or reverses as deployments scale — not confirmed in the source.
  • If more organizations shift to local cloud providers or other architecture changes to address data sovereignty and geopolitical risk, as some industry surveys have suggested — not confirmed in the source.
  • Potential increases in politically motivated attacks around major global events (for example, sporting events) and how firms prepare for those — not confirmed in the source.

Quick glossary

  • Cyber resilience: An organization’s ability to maintain business operations and limit damage during and after a cyberattack.
  • Prompt injection: A class of AI vulnerability where crafted inputs manipulate a model into performing unintended actions or revealing sensitive data.
  • Adversarial capabilities: Techniques attackers use to deceive or bypass machine-learning systems, including manipulated inputs or model-targeted attacks.
  • Supply-chain attack: An intrusion that targets software or service providers to compromise downstream customers or partners.
  • Data sovereignty: The principle that data is subject to the laws and governance structures of the country where it is stored.

Reader FAQ

How many organizations now assess AI tools for security before deployment?
According to the WEF survey, 64% of business leaders said they assess AI tools’ security risks prior to deployment.

Do most leaders think AI is increasing cybersecurity risk?
Yes — 87% of WEF respondents said AI-related vulnerabilities have increased, and 94% expect AI to be the leading driver of cybersecurity change in 2026.

Are executives confident in their organizations’ AI security?
The WEF data shows rising adoption of security checks, but at a separate NCSC conference security professionals reported they did not have a strong grasp of their organizations’ AI system security, indicating uneven confidence.

Is ransomware still a top worry?
Ransomware remains the primary concern for CISOs, while CEOs placed cyber-enabled fraud above ransomware in their rankings.

AI + ML Businesses are finally starting to ask whether their AI is secure Survey finds security checks nearly doubled in a year as leaders wise up Connor Jones Mon 12 Jan 2026…

Sources

Related posts

By

Leave a Reply

Your email address will not be published. Required fields are marked *