OpenAI forms safety committee as it starts training latest artificial intelligence model

OpenAI says it's setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.
FILE - The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. OpenAI says it's setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot. The San Francisco startup said in a blog post Tuesday, May 28, 2024, that the committee will advise the full board on “critical safety and security decisions” for its projects and operations. (AP Photo/Michael Dwyer, File)

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on “critical safety and security decisions” for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and criticized OpenAI for letting safety “take a backseat to shiny products.” OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the “superalignment” team focused on AI risks that they jointly led.

Leike said Tuesday he's joining rival AI company Anthropic, founded by ex-OpenAI leaders, to “continue the superalignment mission” there.

OpenAI said it has “recently begun training its next frontier model” and that its AI models lead the industry on capability and safety, though it made no mention of the controversy. “We welcome a robust debate at this important moment,” the company said.

AI models are prediction systems that are trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D’Angelo, who’s the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee's first job will be to evaluate and further develop OpenAI’s processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it's adopting “in a manner that is consistent with safety and security.”

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of the AP’s text archives.

The Associated Press