TL;DR
Witness AI raised $58 million to tackle risks created by enterprise chatbots, copilots and AI agents by building what it calls a “confidence layer for enterprise AI.” The raise and the company's mission were discussed on TechCrunch’s Equity podcast with Ballistic Ventures’ Barmak Meftah and Witness AI CEO Rick Caccia. Experts on the episode flagged data leakage, compliance exposure and prompt-injection attacks as core concerns, and projected AI security as a multi-hundred-billion-dollar market by 2031.
What happened
TechCrunch reported that Witness AI has secured $58 million in funding to address new security and compliance risks introduced by enterprise deployments of AI-powered chatbots, agents and copilots. The company frames its offering as a “confidence layer for enterprise AI” designed to prevent inadvertent disclosure of sensitive information, enforce compliance policies and guard against prompt-based attacks. The fundraise and the problem set were discussed on TechCrunch’s Equity podcast, where host Rebecca Bellan spoke with Barmak Meftah of Ballistic Ventures and Witness AI CEO Rick Caccia. On the episode they described enterprise concerns about data leakage and regulatory exposure as AI systems are embedded across workflows, and raised the specter of agent-to-agent communications occurring without human oversight. TechCrunch cited an industry projection that AI security could become an $800 billion to $1.2 trillion market by 2031. Details such as specific investors beyond guests, valuation, and product road map were not confirmed in the source.
Why it matters
- Enterprise AI introduces practical risks — accidental data leaks, regulatory noncompliance and prompt-injection attacks — that can cause operational and legal harm.
- A dedicated 'confidence layer' aims to give security and compliance teams controls and visibility when employees and automated agents use powerful models.
- Analyst and industry projections point to a very large addressable market for AI security, underscoring widespread enterprise demand.
- Unsupervised interactions between AI agents raise new control and governance challenges that standard security tooling may not address.
Key facts
- Witness AI raised $58 million, according to TechCrunch coverage.
- The company describes its product approach as a “confidence layer for enterprise AI.”
- The funding and the company’s mission were discussed on TechCrunch’s Equity podcast hosted by Rebecca Bellan.
- Guests on the episode included Barmak Meftah, co-founder and partner at Ballistic Ventures, and Rick Caccia, CEO of Witness AI.
- TechCrunch cited a projection that AI security could be an $800 billion to $1.2 trillion market by 2031.
- Primary enterprise worries highlighted in the conversation: accidental data leakage, compliance violations, and prompt-based injections.
- The podcast also raised concerns about AI agents communicating with other AI agents without human oversight.
- Specific details about the round’s investors, valuation, planned use of funds and customer names were not confirmed in the source.
- The story was published by TechCrunch on January 14, 2026.
What to watch next
- How Witness AI defines and implements its 'confidence layer' in product releases — not confirmed in the source.
- Announcements of enterprise pilot customers or deployments that demonstrate the product’s effectiveness — not confirmed in the source.
- Regulatory and compliance developments that affect how companies must secure AI-driven workflows — not confirmed in the source.
- The evolution of controls for agent-to-agent interactions and whether new standards or tools emerge to govern them.
Quick glossary
- Confidence layer: A security and governance layer placed around AI systems to enforce policies, prevent data leakage and provide visibility into model-driven actions.
- Prompt injection: A class of attacks where malicious or manipulative input to a model causes it to reveal information or perform unintended actions.
- AI agent: An autonomous or semi-autonomous software component that uses AI to perform tasks, make decisions or interact with other systems.
- Copilot: An AI assistant integrated into workflows or applications to help users complete tasks, often by generating suggestions or automating steps.
- Compliance: Adherence to laws, regulations and internal policies that govern data handling, privacy and industry-specific requirements.
Reader FAQ
How much money did Witness AI raise?
$58 million, as reported by TechCrunch.
What is Witness AI building?
The company says it is building a 'confidence layer for enterprise AI' to reduce data leakage, enforce compliance and guard against prompt-based attacks.
Who invested in the round?
Not confirmed in the source.
When was this reported?
TechCrunch published the coverage on January 14, 2026.
Will this solve all enterprise AI risks?
Not confirmed in the source.

TechCrunch Mobile Logo Site Search Toggle Mega Menu Toggle Latest Startups Venture Apple Security AI Apps Events Podcasts Newsletters Topics Latest AI Amazon Apps Biotech & Health Climate Cloud Computing…
Sources
- How WitnessAI raised $58M to solve enterprise AI’s biggest risk
- WitnessAI Raises $58 Million for Global Expansion and …
- Exposing Overlooked & Interconnected Risks in AI, Energy, …
- AI strategy and quality thought leadership – why it's crucial …
Related posts
- Global app downloads fall again in 2025 as consumer spending nears $156B
- Harmony launches AI notetaker for Discord to record, transcribe, summarize
- Netflix debuts its first original podcasts with Pete Davidson and Michael Irvin