TORONTO - Months after artificial intelligence luminaries began ringing alarm bells about the technology's risks, one of the field's pioneers says he feels like people are listening.
"I'm optimistic that people understood that there's this whole bunch of problems," Geoffrey Hinton said at a talk AI financier Radical Ventures hosted at the MaRS Discovery District in Toronto on Wednesday.
"I am quite optimistic that people are listening."
For the bulk of the year, the British-Canadian computer scientist, who won the A.M. Turing Award, known as the Nobel Prize of computing, in 2018 with Yoshua Bengio and Yann LeCun, has been on a crusade to make the public more aware of AI's dangers.
The so-called godfather of AI left a job at search engine giant Google recently so he can more freely discuss AI's dangers, which he has listed as bias and discrimination, joblessness, echo chambers, fake news and battle robots.
Though some, including fellow AI pioneer Yann LeCun, have downplayed his warnings about existential risk, Hinton has not backed down.
He said Wednesday that he's convinced of the existential risk because technology made by humanity that is smarter than us will create subgoals to achieve efficiency.
"There's a very obvious subgoal, which is if you want to get anything done, get more power," he said.
"If you get more control, it's going to be easier to do things."
That's where the problems can start.
"If things much more intelligent than us want to get control, they will. We won't be able to stop them," Hinton said.
"So we have to figure out how we stop them ever wanting to get control."
Hinton's remarks came the same day as Aidan Gomez, chief executive of Toronto-based AI darling Cohere, released a blog post saying, "spending our time and resources stoking existential fear of AI has served as a distraction."
"To those in the industry who earnestly believe that doomsday scenarios are the most serious risks that we face with AI, I welcome the difference of opinion, even as I respectfully disagree," Gomez said.
Rather than focus on existential risk, he said the globe should be rallying around three priorities: protecting sensitive data, mitigating bias and misinformation, and knowing when to keep humans in the loop for oversight.
"These three areas are perhaps less extraordinary than the notion of a technology-enabled terminator taking over the world," he said.
"However, they are the most likely and immediate threats to our collective well-being."
Fei-Fei Li, co-director of Stanford University鈥檚 Human-Centered AI Institute, who appeared in conversation with Hinton on Wednesday, said she grew "personally anxious" about the technology around 2018.
Conversations about privacy and surveillance were becoming the norm after Cambridge Analytica paid a Facebook app developer for access to the personal information of about 87 million users. That data was used to target U.S. voters during the 2016 presidential election, which ended with Donald Trump in power.
It made Li realize "we've got so many catastrophic risks and we need to get on this."
She agreed with Hinton that the world is starting to listen to their concerns.
Last week, Canada's Innovation Minister François-Philippe Champagne revealed a voluntary code of conduct for generative AI at a Montreal tech conference.
Adopters of the code -- Cohere, software company OpenText Corp. and cybersecurity firm BlackBerry Inc. among others -- agreed to a slew of promises including screening datasets for potential biases and assessing any AI they create for "potential adverse impacts."
But Tobi Lütke, founder and CEO of e-commerce goliath Shopify Inc., labelled the code "another case of EFRAID" - a reference to "electronic" and "afraid."
"I won't support it. We don't need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say 'come build here,'" Lütke posted on X, the social media platform formerly known as Twitter.
After hearing Lütke's remarks, Champagne highlighted that the code is voluntary.
"If he thinks that to promote his interests he doesn't need to sign the code, that's a decision for him to take. I respect that," Champagne said.
"On the other hand, there's a number of voices out there that are calling for a framework to be able to operate. It is in Canada's best interest, the best interest of companies, to be able to say that they will adhere to some basic principles on a voluntary basis that will allow for responsible innovation."
The federal government tabled a bill in June taking a general approach to AI regulation, but left many of the details for a later date. It is expected to be implemented no earlier than 2025.
This report by The Canadian Press was first published Oct. 4, 2023.
Tara Deschamps, The Canadian Press