
Biden administration to host international AI safety meeting in San Francisco after election

Government scientists and artificial intelligence experts from at least nine countries and the European Union will meet in San Francisco after the U.S. elections to coordinate on safely developing AI technology and averting its dangers.
FILE - Secretary of Commerce Gina Raimondo, left, and Secretary of State Antony Blinken, attend the U.S.-EU Trade and Technology Council Ministerial Meeting at the State Department, Jan. 30, 2024, in Washington. (AP Photo/Manuel Balce Ceneta, File)

President Joe Biden's administration on Wednesday announced a two-day international AI safety gathering planned for November 20 and 21. It will happen just over a year after delegates at an AI safety summit in the United Kingdom pledged to work together to contain the potentially catastrophic risks posed by AI advances.

U.S. Commerce Secretary Gina Raimondo told The Associated Press it will be the "first get-down-to-work meeting" after the UK summit and a May summit in Seoul that sparked a network of publicly backed safety institutes to advance research and testing of the technology.

Among the urgent topics likely to confront the experts are the steady rise of AI-generated fakery and the tricky problem of knowing when an AI system is so capable or dangerous that it needs guardrails.

"We're going to think about how do we work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by malicious actors," Raimondo said in an interview. "Because if we keep a lid on the risks, it's incredible to think about what we could achieve."

Situated in a city that's become a hub of the current wave of generative AI, the San Francisco meetings are designed as a technical collaboration on safety measures ahead of a broader AI summit set for February in Paris. They will occur about two weeks after a presidential election between Vice President Kamala Harris, who has helped shape the U.S. stance on AI risks, and former President Donald Trump, who has pledged to repeal Biden's signature AI policy.

Raimondo and Secretary of State Antony Blinken announced that their agencies will co-host the convening, which taps into a network of newly formed national AI safety institutes in the U.S. and UK, as well as Australia, Canada, France, Japan, Kenya, South Korea, Singapore and the 27-nation European Union.

The biggest AI powerhouse missing from the list of participants is China, which isn't part of the network, though Raimondo said "we're still trying to figure out exactly who else might come in terms of scientists."

"I think that there are certain risks that we are aligned in wanting to avoid, like AIs applied to nuclear weapons, AIs applied to bioterrorism," she said. "Every country in the world ought to be able to agree that those are bad things and we ought to be able to work together to prevent them."

Many governments have pledged to safeguard AI technology, but they've taken different approaches, with the EU the first to pass a comprehensive AI law that sets the strongest restrictions on the riskiest applications.

Biden last October signed an executive order on AI that requires developers of the most powerful AI systems to share safety test results and other information with the government. It also directed the Commerce Department to create standards to ensure AI tools are safe and secure before public release.

San Francisco-based OpenAI, maker of ChatGPT, said last week that before releasing its latest model, called o1, it granted early access to the U.S. and UK national AI safety institutes. The new product goes beyond the company's famous chatbot in being able to "perform complex reasoning" and produce a "long internal chain of thought" when answering a query, and poses a "medium risk" in the category of weapons of mass destruction, the company has said.

Since generative AI tools began captivating the world in late 2022, the Biden administration has been pushing AI companies to commit to testing their most sophisticated models before they're let out into the world.

"That is the right model," Raimondo said. "That being said, right now, it's all voluntary. I think we probably need to move beyond a voluntary system. And we need Congress to take action."

Tech companies have mostly agreed, in principle, on the need for AI regulation, but some have chafed at proposals they argue could stifle innovation. In California, Gov. Gavin Newsom on Tuesday signed legislation to crack down on political deepfakes ahead of the 2024 election, but he has yet to sign, or veto, a more controversial measure that would regulate extremely powerful AI models that don't yet exist but could pose grave risks if they're built.

Matt O'Brien, The Associated Press