Learning to Pass Expectation Propagation Messages
Heess, Nicolas and Tarlow, Daniel and Winn, John (2013)

Paper summary

This paper proposes to learn expectation propagation (EP) message update operators from data, enabling fast and efficient approximate inference in situations where computing these operators exactly is intractable.
This paper attacks the problem of computing the intractable low-dimensional statistics in EP message passing by training a neural network. Training data are obtained using importance sampling, assuming the forward model is known. The paper appears technically correct, is honest about shortcomings, provides an original approach to a known challenge within EP, and nicely illustrates the developed method on a number of well-chosen examples.
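A minimal sketch of how such importance-sampling targets might be generated, assuming a one-dimensional variable whose Gaussian cavity distribution serves as the proposal. The function name and the example factor below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def moment_targets(cavity_mean, cavity_var, factor_loglik,
                   n_samples=100_000, seed=0):
    """Approximate the mean and variance of the tilted distribution
    p(x) proportional to N(x; cavity_mean, cavity_var) * exp(factor_loglik(x))
    by self-normalized importance sampling, using the Gaussian cavity
    as the proposal. (Illustrative sketch; not the paper's code.)"""
    rng = np.random.default_rng(seed)
    x = rng.normal(cavity_mean, np.sqrt(cavity_var), n_samples)
    # Proposal equals the cavity, so the importance weight is the factor itself.
    logw = factor_loglik(x)
    w = np.exp(logw - logw.max())   # subtract max for numerical stability
    w /= w.sum()
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return mean, var

# Hypothetical nonlinear factor: observation y = 1.0 with y ~ N(x**2, 0.1)
m, v = moment_targets(0.5, 1.0,
                      lambda x: -0.5 * (1.0 - x ** 2) ** 2 / 0.1)
```

Dividing the tilted moments by the cavity would then yield the target outgoing message, which the network learns to reproduce.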
The authors propose a method for learning a mapping from input messages to the output message in the context of expectation propagation. The method can be thought of as a sort of "compilation" step: there is a one-time cost of closely approximating the true output messages using importance sampling, after which a neural network is trained to reproduce the output messages for use in future inference queries.
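The "compilation" step can be sketched as fitting a small regression network from input-message parameters to output-message parameters. In the sketch below, a smooth synthetic map stands in for the importance-sampling targets; the dataset, architecture, and hyperparameters are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "message" dataset: inputs are parameters of incoming messages
# (e.g. a cavity mean and log-variance), targets are parameters of the
# oracle outgoing message. The oracle map here is a hypothetical smooth
# function standing in for the importance-sampling targets.
X = rng.normal(size=(2000, 2))
Y = np.stack([np.tanh(X[:, 0]), np.exp(0.5 * X[:, 1])], axis=1)

# One-hidden-layer network trained by full-batch gradient descent on MSE.
W1 = rng.normal(scale=0.5, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 2)); b2 = np.zeros(2)
lr = 0.05
for step in range(2000):
    H = np.tanh(X @ W1 + b1)        # hidden activations
    pred = H @ W2 + b2              # predicted output-message parameters
    err = pred - Y
    loss = np.mean(err ** 2)
    # Backpropagation through the two layers.
    dpred = 2 * err / len(X)
    dW2 = H.T @ dpred; db2 = dpred.sum(0)
    dH = dpred @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

At inference time, evaluating this trained network replaces the expensive sampling step, which is what makes the amortized message operator fast.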
