Investigating Human Priors for Playing Video Games
## Paper summary

The authors investigate why humans play some video games much better than machines, particularly games that do not provide continuous rewards (e.g., scores). They experimented with a game -- inspired by _Montezuma's Revenge_ -- in which the player has to climb stairs, collect keys, and jump over enemies. Because there are no rewards during gameplay, an RL algorithm only learns whether it succeeded once it finishes the game, so RL agents tend to do much worse than humans in these games.

To compare humans and machines, the authors ran RL algorithms and recruited players from Amazon Mechanical Turk. Humans did much better than machines on the original game setup. The authors then wanted to measure the impact of semantics and prior knowledge on human performance, so they set up scenarios with different levels of reduced semantic information, as shown in Figure 2.

This is what the game originally looked like:

And this is the version with fewer semantic cues:

You can try it yourself on the [paper's website](

Not surprisingly, humans took much longer to complete the game in the scenarios with less semantic information, indicating that humans rely strongly on prior knowledge when playing video games. The authors argue that this prior knowledge should also somehow be incorporated into RL algorithms in order to move their efficiency toward the human level.

## Additional reading

[Why humans learn faster than AI—for now](
[OpenReview submission](
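To make the sparse-reward point concrete, here is a minimal toy sketch (hypothetical code, not from the paper): a random agent on a one-dimensional corridor. In the sparse setting, reward arrives only upon reaching the goal, mirroring games where the agent gets no score signal until the level is finished; a dense variant adds a small shaping reward for progress.

```python
import random

def play_episode(env_length=10, max_steps=50, sparse=True, rng=None):
    """Random-walk agent on a 1-D corridor with a goal at `env_length`.

    Sparse setting: reward only on reaching the goal, so an unfinished
    episode carries zero learning signal. Dense setting: small shaping
    reward for each step toward the goal. All names here are illustrative.
    """
    rng = rng or random.Random(0)
    pos, total_reward = 0, 0.0
    for _ in range(max_steps):
        step = rng.choice([-1, 1])          # random policy, no learning
        new_pos = max(0, pos + step)        # reflecting wall at position 0
        if not sparse:
            total_reward += 0.1 * (new_pos - pos)  # dense shaping signal
        pos = new_pos
        if pos == env_length:               # goal reached
            total_reward += 1.0
            return total_reward, True
    return total_reward, False              # episode ended without reward
```

Under sparse rewards, a short unsuccessful episode returns exactly zero reward, which is why exploration without priors is so inefficient in these games.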
Rachit Dubey and Pulkit Agrawal and Deepak Pathak and Thomas L. Griffiths and Alexei A. Efros
arXiv e-Print archive - 2018 via Local arXiv
Keywords: cs.AI, cs.LG


Summary by Fábio Perez 1 month ago
