Exec tells first UN council meeting that big tech can't be trusted to guarantee AI safety

In this photo provided by United Nations Photo, a wide view of the first ever Security Council meeting on artificial intelligence (AI) held Tuesday, July 18, 2023, at U.N. headquarters. This meeting, convened by the United Kingdom, addresses the topic "Artificial intelligence: opportunities and risks for international peace and security." The Secretary-General delivered remarks during the debate stating, "I urge the Council to approach this technology with a sense of urgency, a global lens, and a learner's mindset." (Eskinder Debebe/U.N. Photo via AP)

UNITED NATIONS (AP) — The handful of big tech companies leading the race to commercialize AI can't be trusted to guarantee the safety of systems we don't yet understand and that are prone to "chaotic or unpredictable behavior," an artificial intelligence company executive told the first U.N. Security Council meeting on AI's threats to global peace on Tuesday.

Jack Clark, co-founder of the AI company Anthropic, said that's why the development of AI cannot be left to private companies alone and governments must be involved.

Clark, who says his company bends over backwards to train its AI chatbot to emphasize safety and caution, said the most useful things that can be done now "are to work on developing ways to test for capabilities, misuses and potential safety flaws of these systems." Clark left OpenAI, creator of the best-known ChatGPT chatbot, to form Anthropic, whose competing AI product is called Claude.

He traced the growth of AI over the past decade, noting that by 2023 new AI systems can beat military pilots in air-fighting simulations, stabilize the plasma in nuclear fusion reactors, design components for next-generation semiconductors, and inspect goods on production lines.

But while AI will bring huge benefits, it also poses dangers: an AI system's understanding of biology, for example, could be misused to help produce biological weapons, he said.

Clark also warned of "potential threats to international peace, security and global stability" from two essential qualities of AI systems — their potential for misuse and their unpredictability — "as well as the inherent fragility of them being developed by such a narrow set of actors."

Clark stressed that across the world it's the tech companies that have the sophisticated computers, large pools of data and capital to build AI systems, and therefore they seem likely to continue to define their development.

In a video briefing to the U.N.'s most powerful body, Clark also expressed hope that global action will succeed.

He said he鈥檚 encouraged to see many countries emphasize the importance of safety testing and evaluation in their AI proposals, including the European Union, China and the United States.

Right now, however, there are no standards or even best practices on "how to test these frontier systems for things like discrimination, misuse or safety," which makes it hard for governments to create policies and lets the private sector enjoy an information advantage, he said.

"Any sensible approach to regulation will start with having the ability to evaluate an AI system for a given capability or flaw," Clark said. "And any failed approach will start with grand policy ideas that are not supported by effective measurements and evaluations."

With robust and reliable evaluation of AI systems, he said, "governments can keep companies accountable, and companies can earn the trust of the world that they want to deploy their AI systems into." But if there is no robust evaluation, he said, "we run the risk of regulatory capture compromising global security and handing over the future to a narrow set of private sector actors."

Other AI executives such as OpenAI's CEO, Sam Altman, have also called for regulation. But skeptics say regulation could be a boon for deep-pocketed first-movers led by OpenAI, Google and Microsoft as smaller players are elbowed out by the high cost of making their large language models adhere to regulatory strictures.

U.N. Secretary-General Antonio Guterres said the United Nations is "the ideal place" to adopt global standards to maximize AI's benefits and mitigate its risks.

He warned the council about the advent of generative AI, pointing to its potential use by terrorists, criminals and governments to cause "horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale."

As a first step to bringing nations together, Guterres said he is appointing a high-level Advisory Board for Artificial Intelligence that will report back on options for global AI governance by the end of the year.

The U.N. chief also said he welcomed calls from some countries for the creation of a new United Nations body to support global efforts to govern AI, "inspired by such models as the International Atomic Energy Agency, the International Civil Aviation Organization, or the Intergovernmental Panel on Climate Change."

Professor Zeng Yi, director of the Chinese Academy of Sciences Brain-inspired Cognitive Intelligence Lab, told the council "the United Nations must play a central role to set up a framework on AI for development and governance to ensure global peace and security."

Zeng, who also co-directs the China-UK Research Center for AI Ethics and Governance, suggested that the Security Council consider establishing a working group to consider near-term and long-term challenges AI poses to international peace and security.

In his video briefing, Zeng stressed that recent generative AI systems "are all information processing tools that seem to be intelligent" but don't have real understanding, and therefore "are not truly intelligent."

And he warned that "AI should never, ever pretend to be human," insisting that real humans must maintain control, especially of all weapons systems.

Britain's Foreign Secretary James Cleverly, who chaired the meeting as the UK holds the council presidency this month, said this autumn the United Kingdom will bring world leaders together for the first major global summit on AI safety.

"No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors," he said. "Our shared goal will be to consider the risks of AI and decide how they can be reduced through coordinated action."

___

AP Technology Writer Frank Bajak contributed to this report from Boston.

Edith M. Lederer, The Associated Press