OpenAI, the developer of ChatGPT, said Tuesday that it has created a new safety and security committee headed by senior executives, including CEO Sam Altman, after dissolving its prior oversight team earlier this month. The news comes on the heels of the announcement that OpenAI has begun training its “next frontier model.”
The committee, which also includes board members Bret Taylor, Adam D’Angelo and Nicole Seligman, will make recommendations to the full board about “critical safety and security decisions for OpenAI projects and operations,” according to the company. Over the next three months, the new committee will review the company’s processes and safeguards, then share its recommendations with the board.
OpenAI’s prior oversight team focused on AI’s long-term risks. Before the team was disbanded, co-founder Ilya Sutskever and key researcher Jan Leike resigned from the company, with Leike writing that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”