Building Machines That Learn and Think Like People
Paper summary

TLDR; The authors explore the gap between deep learning methods and human learning. They argue that natural intelligence is still the best example of intelligence, so it is worth studying. To demonstrate their points they examine two challenges:

1. Recognizing new characters and objects
2. Learning to play the game Frostbite

The authors make several arguments:

- Humans have an intuitive understanding of physics and psychology (understanding goals and agents) very early on. These two types of "start-up software" help them learn new tasks quickly.
- Humans build causal models of the world instead of just performing pattern recognition. These models allow humans to learn from far fewer examples than current deep learning methods. For example, AlphaGo played a billion games or so; Lee Sedol perhaps 50,000. Incorporating compositionality, learning-to-learn (transfer learning), and causality helps humans build these models.
- Humans use both model-free and model-based learning algorithms.
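The model-free vs. model-based distinction in the last point can be sketched with a toy example (not from the paper; the environment, parameters, and function names below are all illustrative). A model-free learner such as tabular Q-learning improves value estimates only from sampled experience, while a model-based learner plans with a transition/reward model directly, here via value iteration. Both recover the same policy on a tiny chain MDP:

```python
import random

# Tiny deterministic chain MDP (illustrative): states 0..4, actions -1/+1.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL, GAMMA = 5, 4, 0.9
ACTIONS = (-1, +1)

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

def q_learning(episodes=500, alpha=0.5, eps=0.1, seed=0):
    """Model-free: learn Q(s, a) purely from sampled transitions."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

def value_iteration(iters=50):
    """Model-based: plan with the known transition/reward model."""
    V = [0.0] * N_STATES
    for _ in range(iters):
        for s in range(N_STATES):
            if s == GOAL:
                continue  # terminal state keeps value 0
            V[s] = max(r + GAMMA * (0.0 if done else V[s2])
                       for s2, r, done in (step(s, a) for a in ACTIONS))
    return V

Q = q_learning()
V = value_iteration()
# Greedy policies from both agents: always move right toward the goal.
policy_mf = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
policy_mb = [max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
             for s in range(GOAL)]
print(policy_mf, policy_mb)  # both [1, 1, 1, 1]
```

The contrast the authors draw is about data efficiency: the model-based planner needs no environment samples at all once it has a model, whereas the model-free learner must revisit states many times, which is one reason humans (who build causal models) learn from far fewer examples.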
Lake, Brenden M. and Ullman, Tomer D. and Tenenbaum, Joshua B. and Gershman, Samuel J.
arXiv e-Print archive - 2016 via Local Bibsonomy
Keywords: dblp


Summary by Denny Britz 2 years ago

