TL;DR
OpenAI has posted a job listing for a Head of Preparedness, a role Sam Altman announced on X that will focus on emerging AI harms. The position will cover mental-health impacts, AI-enabled cybersecurity threats, and risks from self-improving or biologically capable models.
What happened
OpenAI published a job opening for a Head of Preparedness, a post CEO Sam Altman highlighted on X as a response to the accelerating capabilities of its models. The role is framed around anticipating and preparing for frontier AI capabilities that could cause severe harm. According to the listing and Altman’s post, the hire will lead development of capability assessments, threat models, and mitigation strategies that together form an operational safety pipeline. The job description also assigns responsibility for implementing the company’s preparedness framework, hardening models against potential biological risks, and putting controls in place for systems that might self-improve. Altman described the position as a "stressful job." The Verge’s reporting notes the hire comes amid public concern over chatbots’ mental-health effects and other harms, including incidents in which conversational systems were linked to suicides, as well as growing worries about AI-fueled delusions and conspiratorial reinforcement.
Why it matters
- Designating a senior role signals OpenAI is treating harm anticipation and response as an organizational priority.
- The remit spans mental-health harms, cyberthreats, and biological-risk preparedness, indicating a broad scope of potential AI impacts.
- Centralizing capability evaluations and threat modeling could change how OpenAI assesses safety before model releases.
- Public scrutiny over past chatbot-related harms makes the timing of the hire notable for accountability and trust.
Key facts
- OpenAI posted a job for a Head of Preparedness; Sam Altman announced it on X.
- The position is charged with tracking frontier capabilities that could create severe harms.
- Responsibilities include building capability evaluations, threat models, and mitigation processes into a scalable safety pipeline.
- The role would execute the company’s preparedness framework and work on securing models against biological capabilities.
- The job also involves setting guardrails for self-improving systems, per Altman’s description.
- Altman described the position as a "stressful job," according to the reporting.
- The Verge’s report cites concerns about chatbots’ effects on mental health, including cases where conversational agents were implicated in teen suicides.
- The article was published by The Verge on Dec. 27, 2025, and written by Terrence O’Brien.
What to watch next
- Who OpenAI ultimately hires for the Head of Preparedness role and their background.
- The specific preparedness framework, evaluation processes, and mitigation techniques the new hire implements.
- Not confirmed in the source: whether regulators, independent auditors, or external experts will be formally involved in the role’s assessments.
Quick glossary
- Capability evaluation: A structured assessment of what an AI model can do, used to identify potential risks and limitations before deployment.
- Threat model: A framework for identifying, categorizing and prioritizing possible harms and attack vectors against a system or users.
- Preparedness framework: An organizational set of policies and procedures designed to anticipate, mitigate and respond to emergent risks.
- Self-improving system: An AI that can modify or extend its own code or behavior in ways that change its capabilities after deployment.
- AI psychosis: A colloquial term used to describe situations where AI-driven interactions appear to reinforce or generate delusional or harmful beliefs in users.
Reader FAQ
Who announced the job posting?
Sam Altman announced the Head of Preparedness position on X, according to the report.
What will the Head of Preparedness be responsible for?
The role is described as leading capability assessments, threat modeling, and mitigation development; executing a preparedness framework; and addressing risks including mental-health impacts, cybersecurity threats, biological capabilities, and self-improving systems.
Has OpenAI filled the position?
Not confirmed in the source.
Is this hire a response to specific incidents?
The reporting frames the hire amid growing concern over chatbots’ mental-health impacts and mentions past high-profile cases linking conversational agents to suicides, but it does not state that the hire is a direct response to any specific incident.
Sources
- Sam Altman is hiring someone to worry about the dangers of AI
- OpenAI seeks new "Head of Preparedness" for AI risks like …
- What OpenAI's Sam Altman thinks of AI disaster scenarios
- OpenAI hiring Head of Preparedness Job in San Francisco, …