There’s an interesting debate going on in the AI community. It has actually been happening for a while; it has just recently become more public and more personal.
Not exactly the old nature vs. nurture discussion, but something similar. The essential question is: What is intelligence, and how do AI agents become intelligent (or more intelligent)? Let’s simplify the debate and make it about two people: Rich Sutton and Gary Marcus.
Rich Sutton is the leading proponent of reinforcement learning (trial-and-error, reward-based learning).
Gary Marcus is a cognitive scientist who has often called for the return of symbolic AI (e.g. GOFAI) and is currently advancing what he calls “robust AI”.
In a nutshell:
Sutton, in The Bitter Lesson (2019), believes that intelligence is computation: all we need to do is leverage computational scale, and intelligent agents/systems will emerge. Building in human knowledge – what AI folks call “priors” – is a “distraction”, not worth the effort. General methods always win.
Score one for nurture.
Marcus, in The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence (2020), does not dispute that computation has a role in intelligence; he just doesn’t think it is sufficient, or even efficient. People learn, in part, because they have “priors” – innate understanding and knowledge. We use those as building blocks to create more knowledge as we experience new situations. Why not, says Marcus, use those priors in AI to speed understanding and knowledge creation?
Score one for nature and nurture.
From my perspective, both approaches are interesting and, frankly, valid in their own ways. The difference for me is the outcome: given my bias and interest in machine information behaviour, an agent built on Sutton’s strategy will behave differently from one built on Marcus’s. And that’s OK. I think. Grin.
Periodically the AI field has entered an “AI Winter” where the dominant paradigm seems to have run its course and researchers look for new options.
Are we entering another AI Winter?
Three recent books suggest not so much renewed stormy weather as a need to broaden perspectives … some looking backward, some merely looking around.
The basic questions raised are simple: Is Deep Learning (the state of the art in machine learning) sufficient? Is it the path toward more intelligent machines (even AGI – artificial general intelligence)?
Russell is widely known as the co-author of Artificial Intelligence: A Modern Approach (3rd ed. 2009), the definitive textbook in the field. In the past few years he has been exploring the concept of “beneficial AI” and this book further articulates that concept.
“The history of AI has been driven by a single mantra: ‘The more intelligent the better.’ I am convinced that this is a mistake.”
Russell. Human Compatible (2019)
Russell’s concern is that the current path of increasing AI autonomy, fueled by more data, opaque algorithms, and enhanced computing, will lead to a loss of human control. His outlook is not as bleak as Bostrom’s Superintelligence (2014): Russell’s solution is a design concept – make intelligent systems defer to human preferences.
Russell has three guiding principles:
The machine’s only objective is to maximize the realization of human preferences.
The machine is initially uncertain about what those preferences are.
The ultimate source of information about human preferences is human behavior.
Putting humans at the center of intelligent machines seems reasonable and certainly desirable. But will it be effective and advance AI?
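Russell’s three principles can be read as a preference-learning loop. Here is a minimal toy sketch of that reading – my own illustration, not Russell’s formalism – in which a machine starts uncertain between two hypothetical preference functions, updates its beliefs from one observed human choice, and then acts to maximize expected human preference:

```python
# Toy sketch of Russell's three principles (illustrative only; the
# hypothesis names, utilities, and 0.9 "noisy rationality" likelihood
# are my assumptions, not from the book).

from collections import defaultdict

# Candidate human preference functions: utility of each option.
HYPOTHESES = {
    "likes_tea":    {"tea": 1.0, "coffee": 0.0},
    "likes_coffee": {"tea": 0.0, "coffee": 1.0},
}

def update_beliefs(prior, observed_choice, options):
    """Principle 3: human behavior is the evidence. Hypotheses under
    which the observed choice looks rational gain posterior weight."""
    posterior = {}
    for h, utils in HYPOTHESES.items():
        best = max(options, key=lambda o: utils[o])
        likelihood = 0.9 if observed_choice == best else 0.1
        posterior[h] = prior[h] * likelihood
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def choose(beliefs):
    """Principle 1: act to maximize expected human preference."""
    expected = defaultdict(float)
    for h, p in beliefs.items():
        for option, utility in HYPOTHESES[h].items():
            expected[option] += p * utility
    return max(expected, key=expected.get)

# Principle 2: start maximally uncertain about the preferences.
beliefs = {"likes_tea": 0.5, "likes_coffee": 0.5}
beliefs = update_beliefs(beliefs, observed_choice="tea",
                         options=["tea", "coffee"])
print(choose(beliefs))  # after watching the human pick tea: "tea"
```

The point of the sketch is the initial uncertainty: because the machine never assumes it already knows the objective, it keeps deferring to what the human actually does.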
The concern of Marcus (a long-standing and vocal critic of Deep Learning) and Davis is related to Russell’s, but the focus is different: not a control problem but a myopia problem – AI “doesn’t know what it’s talking about”; it doesn’t actually “understand” anything.
“The cure for risky AI is better AI, and the royal road to better AI is through AI that genuinely understands the world.” p. 199
Marcus & Davis. Rebooting AI (2019)
And the way to understand the world is through “common sense”. In part this looks back to the symbolic (logic) representations of GOFAI (“Good Old Fashioned AI”), and in part it is about training AI on “time, space, causality, basic knowledge of physical objects and their interactions, basic knowledge of humans and their interactions.” Getting there requires us to train AI the way children learn (an observation Turing made in 1950).
Smith picks up the issue of “understanding the world” and argues that AI must be “in the world” in a more visceral way – “deferring” to the world (reality) as we do. Two key concepts stand out: judgment and ontology.
Judgment: Smith makes the distinction between “reckoning” (which most machine learning systems accomplish – calculation and prediction) and “judgment” which he views as the essence of intelligence and the missing component in AI.
Ontology: Smith contends that machine learning has “broken ontology.” It has given us a view of the world as more “ineffably dense” than we have ever perceived. The complexity and richness of the world require us to conceptualize the world differently.
The arguments about judgment and ontology converge in a discussion about knowledge representation and point the way for machine learning to transcend its current limitations:
“If we are going to build a system that is itself genuinely intelligent, that knows what it is talking about, we have to build one that is itself deferential – that itself submits to the world it inhabits, and does not merely behave in ways that accord with our human deference.”
Smith. The Promise of AI (2019)
This book celebrates the power of machine learning while lamenting its shortcomings. However:
“I see no principled reason why systems capable of genuine judgment might not someday be synthesized – or anyway may not develop out of synthesized origins.”