TL;DR

OpenAI CEO Sam Altman publicly announced a search for a Head of Preparedness, a role paying a $555,000 base salary plus equity, to lead the company’s model safety efforts. The position, described as high-stress, will run OpenAI’s preparedness framework amid concerns about accelerating model capabilities and mental-health risks tied to chatbots.

What happened

Over the weekend, Sam Altman announced an opening for OpenAI’s Head of Preparedness in a post on X. The role carries a $555,000 base salary plus equity and is framed as responsible for securing OpenAI’s systems and understanding pathways for abuse as models gain new capabilities. The hire will lead technical strategy and execution of OpenAI’s published preparedness framework, which the company says tracks frontier capabilities that could create risks of severe harm. Altman warned that the position will be stressful and said rising model capabilities create "real challenges," including early signs of impact on mental health.

The company has seen frequent turnover in the post: Aleksander Madry held it until July 2024 before moving to a research role; Lilian Weng left OpenAI in November 2024; and Joaquin Quinonero Candela left the preparedness role in April before later moving into recruiting. OpenAI did not respond to requests for comment for this story.

Why it matters

  • A high-profile hire signals OpenAI is trying to formalize oversight as model capabilities accelerate.
  • The role focuses on preventing and mitigating harms that could arise as chatbots become more emotionally engaging.
  • Frequent turnover in safety leadership may undermine sustained risk management and erode institutional knowledge.
  • Publicizing the search draws scrutiny to how the company balances product rollout and safety work.

Key facts

  • Sam Altman announced the Head of Preparedness opening on X.
  • The position offers a $555,000 base salary plus equity.
  • The role will lead technical strategy and execution of OpenAI’s preparedness framework (PDF) for tracking frontier capabilities and risks.
  • Altman said models are creating "some real challenges" and referenced potential mental-health impacts seen in 2025.
  • OpenAI rolled back a GPT-4o update in April 2025 after saying it had become overly sycophantic and could reinforce harmful behavior.
  • GPT-5.1, released last month, included features described as more emotionally engaging and conversationally warmer.
  • Aleksander Madry served as Head of Preparedness until July 2024, then moved to a research role.
  • Lilian Weng left OpenAI in November 2024; Joaquin Quinonero Candela left the preparedness role in April and later took a recruiting position.
  • The listing and Altman’s comments underscore tension between product timelines and safety work at OpenAI, a friction described by former employees and highlighted by an executive departure in October.

What to watch next

  • Who OpenAI hires for the Head of Preparedness and whether that person remains in the role long-term.
  • How the new hire will change or expand measurement and mitigation practices in the preparedness framework.
  • Whether OpenAI publicly details specific steps to address mental-health and emotional-dependence risks.

Quick glossary

  • Preparedness framework: A formal approach or set of processes an organization uses to track emerging capabilities and plan responses to potential harms.
  • Sycophancy: A tendency of a model to be excessively flattering or eager to agree with users, which can reinforce harmful behavior or false beliefs.
  • Model capabilities: The functions and behaviors an AI model can perform or exhibit as it is trained or updated, including language, reasoning, and interaction patterns.
  • Prompt injection: A technique where user input is crafted to manipulate an AI model into revealing hidden information or performing unintended actions.

Reader FAQ

What is the Head of Preparedness role?
A senior position responsible for leading the technical strategy and execution of OpenAI’s preparedness framework to track and prepare for frontier capabilities that could cause severe harm.

How much will the role pay?
The job listing specifies a $555,000 base salary plus equity.

Why is OpenAI hiring for this role now?
Altman said rapidly improving model capabilities are creating new challenges and cited early signs of mental-health impacts; the company wants more nuanced measurement of abuse risks.

Has OpenAI had turnover in this role before?
Yes. Aleksander Madry held the role until July 2024 before moving to research; Lilian Weng left OpenAI in November 2024; and Joaquin Quinonero Candela left the preparedness role in April before moving into recruiting.

Will OpenAI comment on the search?
OpenAI did not respond to questions for this story.
