OpenAI’s board has established a safety and security committee to assess its operations as it trains its next artificial intelligence model. The committee, announced in a statement on Tuesday, will evaluate and enhance OpenAI’s processes and safeguards over the next 90 days.
Addressing Global Safety Concerns
The formation of this committee comes amid rising global concerns about the potential dangers of increasingly powerful AI models capable of generating text and images. The committee will be led by OpenAI board members Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). They will make critical safety and security recommendations to the full board regarding the company’s projects and operations.
New Model Training
OpenAI has recently begun training a new AI model that may surpass the capabilities of its current models, GPT-4 and GPT-4o. This development has heightened the need for rigorous safety evaluations. The company stated, “OpenAI has recently begun training its next frontier model, and we anticipate the resulting systems will bring us to the next level of capabilities on our path to AGI. While we are proud to build and release industry-leading models in capabilities and safety, we welcome a robust debate at this important moment.”
Committee’s Role and Reporting
After 90 days, the safety and security committee will present its findings and recommendations to the full board. OpenAI will then publicly share an update on the adopted recommendations in a manner consistent with safety and security standards. The committee includes technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist). OpenAI will also retain and consult with additional safety, security, and technical experts, including former cybersecurity officials Rob Joyce and John Carlin.
The creation of the new safety committee follows the recent dissolution of a team focused on ensuring the safety of future ultra-capable AI systems. This team was disbanded after the departure of its leaders, including OpenAI co-founder and chief scientist Ilya Sutskever. The superalignment team, led by Sutskever and Jan Leike, was formed less than a year ago to address long-term threats from superhuman AI. Leike, who resigned, noted that his division was “struggling” for computing resources within OpenAI.
The new committee aims to ensure the continued safe development and deployment of AI technologies, responding to growing global concerns and advancing OpenAI’s commitment to safety and innovation.