Building Machines That Learn and Think Like People
Paper summary

TLDR: The authors explore the gap between deep learning methods and human learning. They argue that natural intelligence is still the best example of intelligence, so it is worth studying. To demonstrate their points, they examine two challenges:

1. Recognizing new characters and objects
2. Learning to play the game Frostbite

The authors make several arguments:

- Humans have an intuitive understanding of physics and psychology (understanding goals and agents) very early on. These two kinds of "start-up software" help them learn new tasks quickly.
- Humans build causal models of the world instead of just performing pattern recognition. These models allow humans to learn from far fewer examples than current deep learning methods: AlphaGo played on the order of a billion games, while Lee Sedol has perhaps played 50,000. Compositionality, learning-to-learn (transfer learning), and causality help humans build these models.
- Humans use both model-free and model-based learning algorithms.
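The model-free vs. model-based distinction in the last point can be made concrete with a toy sketch (not from the paper; the two-state MDP and all names here are hypothetical). Model-free learning estimates action values Q(s, a) directly from sampled experience, while model-based learning uses a transition model of the environment to plan, here via value iteration:

```python
import random

# Toy MDP (hypothetical, for illustration): states 0 and 1.
# Action "right" moves 0 -> 1 with reward 1; every other transition
# keeps the state and gives reward 0.
STATES = [0, 1]
ACTIONS = ["stay", "right"]

def step(state, action):
    """Return (next_state, reward) for the toy MDP."""
    if action == "right" and state == 0:
        return 1, 1.0
    return state, 0.0

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Model-free: learn Q(s, a) directly from sampled transitions."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(5):
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r = step(s, a)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

def value_iteration(gamma=0.9, iters=50):
    """Model-based: plan with the (here, known) transition model."""
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in STATES:
            v[s] = max(step(s, a)[1] + gamma * v[step(s, a)[0]] for a in ACTIONS)
    return v
```

The model-free learner needs many sampled episodes to discover that "right" is valuable in state 0, whereas the planner recovers the same conclusion immediately from the model, which loosely mirrors the sample-efficiency argument the authors make about human causal models.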
Brenden M. Lake and Tomer D. Ullman and Joshua B. Tenenbaum and Samuel J. Gershman
arXiv e-Print archive - 2016
Keywords: cs.AI, cs.CV, cs.LG, cs.NE, stat.ML


Summary by Denny Britz 4 years ago
