Learning States Representations in POMDP
Paper summary

The authors present a model that learns representations of sequential inputs from random trajectories through the state space, then feeds those representations into a reinforcement learner in order to handle partially observable environments. They apply this to a POMDP version of the mountain car problem, in which the car's velocity is not observable and must be inferred from successive observations.
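To make the partial-observability setup concrete, here is a minimal sketch (not the authors' model): a mountain car whose observation is position only, where the hidden velocity can be recovered from two successive observations. The `POMountainCar` class and the finite-difference estimate are illustrative assumptions standing in for the paper's learned representation.

```python
import math

class POMountainCar:
    """Classic mountain-car dynamics; the agent observes position only,
    so velocity is hidden and must be inferred over time."""
    def __init__(self):
        self.position, self.velocity = -0.5, 0.0

    def step(self, action):  # action in {-1, 0, +1}
        self.velocity += 0.001 * action - 0.0025 * math.cos(3 * self.position)
        self.velocity = max(-0.07, min(0.07, self.velocity))
        self.position = max(-1.2, min(0.6, self.position + self.velocity))
        return self.position  # partial observation: no velocity

# Simplest stand-in for a learned state representation: estimate the
# hidden velocity as the difference of two consecutive observations.
env = POMountainCar()
prev = env.step(0)
curr = env.step(0)
estimated_velocity = curr - prev
print(abs(estimated_velocity - env.velocity) < 1e-3)  # prints True
```

The paper's approach replaces this hand-coded difference with representations learned from sequences of observations, which a standard reinforcement learner can then treat as (approximate) state.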

Short Science allows researchers to publish paper summaries that are voted on and ranked!