The 2010s were huge for artificial intelligence, thanks to advances in deep learning, a branch of AI that has become feasible because of the growing capacity to collect, store, and process large amounts of data. Today, deep learning is not just a topic of scientific research but also a key component of many everyday applications.
A decade’s worth of research and application has made it clear that, in its current state, deep learning is not the final answer to the ever-elusive challenge of creating human-level AI. What will push AI to the next level? More data and larger neural networks? New deep learning algorithms? Approaches other than deep learning?
These questions were at the center of “AI Debate 2: Moving AI Forward: An Interdisciplinary Approach,” a debate that brought together scientists from a range of backgrounds and disciplines.
Cognitive scientist Gary Marcus, who cohosted the debate, reiterated some of the key shortcomings of deep learning, including excessive data requirements, low capacity for transferring knowledge to other domains, opacity, and a lack of reasoning and knowledge representation.
Marcus, who is an outspoken critic of deep learning-only approaches, published a paper in early 2020 in which he suggested a hybrid approach that combines learning algorithms with rules-based software.
Other speakers also pointed to hybrid artificial intelligence as a possible solution to the challenges deep learning faces.
“One of the key questions is to identify the building blocks of AI and how to make AI more trustworthy, explainable, and interpretable,” computer scientist Luis Lamb said.
Lamb, who is a coauthor of the book Neural-symbolic Cognitive Reasoning, proposed a foundational approach for neural-symbolic AI that is based on both logical formalization and machine learning.
“We use logic and knowledge representation to represent the reasoning process that [it] is integrated with machine learning systems so that we can also effectively reform neural learning using deep learning machinery,” Lamb said.
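The hybrid idea Lamb describes can be illustrated with a minimal sketch: a learned statistical component proposes an answer, and a symbolic rule layer checks it against explicit, human-readable knowledge. Everything below — the features, the weights, and the single rule — is a hypothetical toy, not Lamb's actual system.

```python
# Toy neural-symbolic hybrid: a "learned" classifier filtered by symbolic rules.

def learned_classifier(features):
    """Stand-in for a trained neural model: scores whether an object
    is a 'bird' from two toy features (has_wings, can_swim)."""
    has_wings, can_swim = features
    score = 0.7 * has_wings + 0.1 * can_swim  # pretend-learned weights
    return "bird" if score > 0.5 else "not_bird"

# Symbolic knowledge base: explicit rules of the form
# "if the premise holds, the conclusion must also hold".
RULES = [
    (lambda f: f["label"] == "bird", lambda f: f["has_wings"] == 1),
]

def hybrid_predict(has_wings, can_swim):
    label = learned_classifier((has_wings, can_swim))
    facts = {"label": label, "has_wings": has_wings, "can_swim": can_swim}
    # Reject any statistical prediction that violates a symbolic rule.
    for premise, conclusion in RULES:
        if premise(facts) and not conclusion(facts):
            return "not_bird"  # the rule overrides the learned guess
    return label
```

The point of the sketch is the division of labor: the learned component handles pattern recognition, while the rule layer makes the system's reasoning inspectable and correctable — the trustworthiness and interpretability Lamb calls for.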
Work on image classification and computer vision helped trigger the deep learning revolution of the past decade. But computer scientist Fei-Fei Li pointed out that intelligence in humans and animals emerges from active perception and interaction with the world, a property that is sorely lacking in current AI systems, which rely on data curated and labeled by humans.
“There is a fundamentally critical loop between perception and actuation that drives learning, understanding, planning, and reasoning. And this loop can be better realized when our AI agent can be embodied, can dial between explorative and exploitative actions, is multi-modal, multi-task, generalizable, and oftentimes social,” she said.
OpenAI researcher Ken Stanley also discussed lessons learned from evolution.
“Reinforcement learning is the first computational theory of intelligence,” computer scientist Richard Sutton said, referring to the branch of AI in which agents are left to interact with an environment and discover for themselves which actions maximize their reward.
“Reinforcement learning is explicit about the goal, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To this end, the agent has to compute a policy, a value function, and a generative model,” Sutton said.
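Sutton's three terms — reward signal, value function, and policy — can be seen in a minimal tabular Q-learning sketch. The five-state chain environment and all hyperparameters below are invented for illustration; this is nothing like DeepMind's actual systems, just the bare mechanics of learning values from reward and deriving a policy from them.

```python
import random

# Toy chain environment: states 0..4, reaching state 4 yields reward 1.
N_STATES = 5
ACTIONS = [-1, +1]              # move left or right along the chain
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # value function

for _ in range(500):            # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current values, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt

# The policy is read off the learned value function: act greedily.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

After training, the greedy policy moves right from every non-terminal state — the agent has discovered, from the reward signal alone, how to reach the goal.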
He added that the field needs to further develop an agreed-upon computational theory of intelligence and said that reinforcement learning is currently the standout candidate, though he acknowledged that other candidates might be worth exploring.
Sutton is a pioneer of reinforcement learning and author of a seminal textbook on the topic.
DeepMind, the AI lab where he works, is deeply invested in deep reinforcement learning, a variation of the technique that integrates neural networks into classic reinforcement learning. In recent years, DeepMind has used deep reinforcement learning to master games such as Go, chess, and StarCraft 2.
While reinforcement learning bears striking similarities to the learning mechanisms in human and animal brains, it also suffers from the same challenges that plague deep learning.
Reinforcement learning models require extensive training to learn the simplest things and are rigidly constrained to the narrow domain they are trained on.
For the time being, developing deep reinforcement learning models requires very expensive compute resources, which limits research in the area to deep-pocketed companies such as Google, which owns DeepMind, and Microsoft, a major backer of OpenAI.

Another recurring theme was integrating world knowledge and common sense into AI. Computer scientist and Turing Award winner Judea Pearl, best known for his work on Bayesian networks and causal inference, stressed that AI systems need world knowledge and common sense to make the most efficient use of the data they are fed.
Humans, Pearl argued, do not learn from raw data alone. Instead, we employ the innate structures in our brains to interact with the world, and we use data to interrogate and learn from the world, as witnessed in newborns, who learn many things without being explicitly instructed.
“That kind of structure must be implemented externally to the data. Even if we succeed by some miracle to learn that structure from data, we still need to have it in the form that is communicable with human beings,” Pearl said.
University of Washington professor Yejin Choi also underlined the importance of common sense and the challenges its absence presents to current AI systems, which are focused on mapping input data to outcomes.
“We know how to solve a dataset without solving the underlying task with deep learning today,” Choi said.
Choi also pointed out that the space of reasoning is infinite, and reasoning itself is a generative task and very different from the categorization tasks today’s deep learning algorithms and evaluation benchmarks are suited for.
But how do we achieve common sense and reasoning in AI? Choi suggested a wide range of parallel research areas, including combining symbolic and neural representations, integrating knowledge into reasoning, and constructing benchmarks that go beyond categorization.