There’s an interesting debate going on in the AI community. It has actually been happening for a while; it’s just that recently it has become more public and more personal.
Not exactly the old nature vs. nurture discussion, but something similar. The essential question is: What is intelligence, and how do AI agents become intelligent (or more intelligent)? Let’s simplify the debate and make it about two people: Rich Sutton and Gary Marcus.
Rich Sutton is the leading proponent of reinforcement learning (the trial-and-error, reward-based learning often associated with how humans and animals learn).
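To make that parenthetical concrete, here is a minimal sketch of trial-and-error, reward-based learning: an epsilon-greedy bandit agent that starts with no knowledge (no “priors”) and learns which action pays best purely from reward feedback. The arm means and parameter values are illustrative assumptions, not anything from Sutton’s work.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning on a multi-armed bandit.

    The agent mostly exploits the arm it currently believes is best,
    but with probability `epsilon` it explores a random arm. Reward
    estimates are built up incrementally from experience alone.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms   # learned value of each arm (starts at zero)
    counts = [0] * n_arms        # how often each arm has been tried

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = rng.gauss(true_means[arm], 1.0)                   # noisy reward
        counts[arm] += 1
        # incremental running mean of observed rewards for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates

# Hypothetical three-armed bandit: the agent converges on the best arm
# (mean 0.9) from reward signals alone, with no built-in knowledge.
est = epsilon_greedy_bandit([0.2, 0.5, 0.9])
```

This is the kernel of Sutton’s position: given enough interaction and computation, useful behaviour emerges from the reward signal without hand-built knowledge.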
Gary Marcus is a cognitive scientist who has often called for the return of symbolic AI (e.g. GOFAI) and is currently advancing what he calls “robust AI”.
In a nutshell:
Sutton, in The Bitter Lesson (2019), argues that intelligence is computation and that all we need to do is leverage computational scale; intelligent agents and systems will emerge. All the human knowledge, building in what AI folks call “priors”, is a “distraction” and not worth the effort. General methods always win.
Score one for nurture.
Marcus, in The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence (2020), does not dispute that computation has a role in intelligence; he just doesn’t think it’s sufficient or even efficient. People learn, in part, because they have “priors”: innate understanding and knowledge. We use those as building blocks to create more knowledge as we experience new situations. Why not, says Marcus, use such priors in AI to speed understanding and knowledge creation?
Score one for nature and nurture.
From my perspective, both approaches are interesting and, frankly, valid in their own way. The difference for me is the outcome. Given my bias and interest in machine information behaviour, an agent built on Sutton’s strategy will behave differently from one built on Marcus’s. And that’s OK. I think. Grin.
Join the debate.