OpenAI’s Safety Leadership Shifts as Frontier Risks Grow
-

The Head of Preparedness role comes amid notable changes to OpenAI’s safety leadership. The company formed its preparedness team in 2023 to study threats ranging from phishing attacks to speculative catastrophic risks, including biosecurity and nuclear scenarios.
Less than a year later, the original preparedness lead was reassigned, and several safety-focused executives have since exited or shifted roles. OpenAI recently updated its Preparedness Framework, noting it may adjust safety requirements if rival labs release high-risk models without similar protections.
The hiring push underscores the tension between rapid AI progress and the challenge of governing increasingly powerful systems.
-
nuclear risks sound crazy
-
Sounds crazy until models scale fast.
-
Preparedness is about low-probability, high-impact risks.