If 2023 was a year of wonder about artificial intelligence, 2024 was the year to try to get that wonder to do something useful without breaking the bank.
There was a "shift from putting out models to actually building products," said Arvind Narayanan, a Princeton University computer science professor and co-author of the new book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference."
The first 100 million or so people who experimented with ChatGPT upon its release two years ago actively sought out the chatbot, finding it amazingly helpful at some tasks or laughably mediocre at others.
Now such generative AI technology is baked into an increasing number of technology services whether we're looking for it or not, through AI-generated answers in Google search results, for instance, or new AI techniques in photo editing tools.
"The main thing that was wrong with generative AI last year is that companies were releasing these really powerful models without a concrete way for people to make use of them," said Narayanan. "What we're seeing this year is gradually building out these products that can take advantage of those capabilities and do useful things for people."
At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced similarly performing AI large language models, these models have stopped getting significantly "bigger and qualitatively better," resetting overblown expectations that AI was racing every few months to some kind of better-than-human intelligence, Narayanan said. That's also meant that the public discourse has shifted from "is AI going to kill us?" to treating it like a normal technology, he said.
AI's sticker shock
On quarterly earnings calls this year, tech executives often heard questions from Wall Street analysts looking for assurances of future payoffs from huge spending on AI research and development. Building the AI systems behind generative AI tools like OpenAI's ChatGPT or Google's Gemini requires investing in energy-hungry computing systems running on powerful computer chips. They require so much electricity that tech giants announced deals this year to tap nuclear power to help run them.
"We're talking about hundreds of billions of dollars of capital that has been poured into this technology," said Goldman Sachs analyst Kash Rangan.
Another analyst at the New York investment bank drew attention over the summer by arguing AI isn't solving the complex problems that would justify its costs. He also questioned whether AI models, even as they're being trained on much of the written and visual data produced over the course of human history, will ever be able to do what humans do so well. Rangan has a more optimistic view.
"We had this fascination that this technology is just going to be absolutely revolutionary, which it has not been in the two years since the introduction of ChatGPT," Rangan said. "It's more expensive than we thought and it's not as productive as we thought."
Rangan, however, is still bullish about its potential and says that AI tools are already proving "absolutely incrementally more productive" in sales, design and a number of other professions.
AI and your job
Some workers wonder whether AI tools will be used to augment their work or to replace them as the technology continues to grow. The tech company Borderless AI has been using an AI chatbot from Cohere to write up employment contracts for workers in Turkey or India without the help of outside lawyers or translators.
Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists who went on strike in July said they feared AI could reduce or eliminate job opportunities because it could be used to replicate one performance into a number of other movements without their consent. Concerns over how movie studios will use AI also fueled last year's film and television strikes by the union, which lasted four months. Game companies have also signed side agreements with the union that codify certain AI protections in order to keep working with actors during the strike.
Musicians and authors have voiced similar concerns over AI scraping their voices and books. But generative AI still can't create unique work or "completely new things," said Walid Saad, a professor of electrical and computer engineering and AI expert at Virginia Tech.
"We can train it with more data so it has more information. But having more information doesn't mean you're more creative," he said. "As humans, we understand the world around us, right? We understand the physics. You understand if you throw a ball on the ground, it's going to bounce. AI tools currently don't understand the world."
AI can mimic what it learns from patterns, he said, but can't "understand the world so that they reason on what happens in the future." That, he said, is where AI falls short.
"It still cannot imagine things," he said. "And that imagination is what we hope to achieve later."
Saad pointed to a meme about AI as an example of that shortcoming. When someone prompted an AI engine to create an image of salmon swimming in a river, he said, the AI created a photo of a river with cut pieces of salmon found in grocery stores.
"What AI lacks today is the common sense that humans have, and I think that is the next step," he said.
An 'agentic future'
That type of reasoning is a key part of the process of making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Cisco's innovation and incubation arm, Outshift. AI developers are increasingly pitching the next wave of generative AI chatbots as AI "agents" that can do more useful things on people's behalf.
That could mean being able to ask an AI agent an ambiguous question and have the model able to reason and plan out steps to solving an ambitious problem, Pandey said. A lot of technology, he said, is going to move in that direction in 2025.
Pandey predicts that eventually, AI agents will be able to come together and perform a job the way multiple people come together and solve a problem as a team rather than simply accomplishing tasks as individual AI tools. The AI agents of the future will work as an ensemble, he said.
Future Bitcoin software, for example, will likely rely on the use of AI software agents, Pandey said. Those agents will each have a specialty, he said, with "agents that check for correctness, agents that check for security, agents that check for scale."
"We're getting to an agentic future," he said. "You're going to have all these agents being very good at certain skills, but also have a little bit of a character or color to them, because that's how we operate."
AI makes gains in medicine
AI tools have also streamlined work in the medical field, and in some cases lent a literal helping hand. This year's Nobel Prize in chemistry, one of two Nobels awarded for AI-related science, went to work led by Google DeepMind that could help discover new medicines.
Saad, the Virginia Tech professor, said that AI has helped bring faster diagnostics by quickly giving doctors a starting point to launch from when determining a patient's care. AI can't detect disease, he said, but it can flag anomalies and point out potential problem areas for a real doctor to investigate. As with other arenas, however, it poses a risk of perpetuating falsehoods.
Tech giant OpenAI has touted its AI-powered transcription tool Whisper as having near "human level robustness and accuracy," for example. But experts have said that Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences.
Pandey, of Cisco, said that some of the company's customers who work in pharmaceuticals have noted that AI has helped bridge the divide between "wet labs," in which humans conduct physical experiments and research, and "dry labs" where people analyze data and often use computers for modeling.
When it comes to pharmaceutical development, that collaborative process can take several years, he said; with AI, the process can be cut to a few days.
"That, to me, has been the most dramatic use," Pandey said.
Matt O'Brien and Sarah Parvini, The Associated Press