Early Inference in Energy-Based Models Approximates Back-Propagation
Paper summary by Peter O'Connor

# Very Short

The authors define a neural network as a nonlinear dynamical system whose fixed points correspond to the minima of some **energy function**. They then show that if one starts at a fixed point and *perturbs* the output units in the direction that minimizes a loss, the initial perturbation that flows back through the network is proportional to the gradient of the loss with respect to the neural activations. Thus, the early propagation of those perturbations (i.e. **early inference**) **approximates** the **back-propagated** gradients of the loss.
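To make the claim concrete, here is a minimal numerical sketch of the idea (not the paper's exact formulation: the paper works with a general nonlinearity ρ and arbitrary symmetric connectivity, while this toy uses a tanh chain with illustrative layer sizes, step sizes, and a squared-error loss). It relaxes an energy-based network to a fixed point with the input clamped, weakly nudges the output units toward a target, and checks that the early perturbation of the hidden layer points along the back-propagated gradient of the loss:

```python
# Minimal sketch: a chain x -> h1 -> h2 with symmetric feedforward/feedback
# weights, evolving by gradient flow on the energy
#   E = 0.5*||h1||^2 + 0.5*||h2||^2 - rho(h1)^T W1 rho(x) - rho(h2)^T W2 rho(h1)
# All sizes, step sizes, and the quadratic loss below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def rho(s):                 # nonlinearity (the paper uses a generic rho)
    return np.tanh(s)

def rho_prime(s):
    return 1.0 - np.tanh(s) ** 2

n_x, n_h, n_y = 5, 8, 3
W1 = rng.normal(0, 0.5, (n_h, n_x))   # input  -> hidden (and back, transposed)
W2 = rng.normal(0, 0.5, (n_y, n_h))   # hidden -> output (and back, transposed)

x = rng.normal(0, 1, n_x)             # clamped input
y = rng.normal(0, 1, n_y)             # target for the output units

def step(h1, h2, y=None, beta=0.0, dt=0.05):
    """One Euler step of the leaky dynamics s_dot = -dE/ds.
    With beta > 0 the output units are additionally nudged toward y,
    i.e. pushed along -beta * dL/dh2 for L = 0.5 * ||h2 - y||^2."""
    dh1 = -h1 + rho_prime(h1) * (W1 @ rho(x) + W2.T @ rho(h2))
    dh2 = -h2 + rho_prime(h2) * (W2 @ rho(h1))
    if beta > 0.0:
        dh2 = dh2 + beta * (y - h2)   # nudge = -beta * dL/dh2
    return h1 + dt * dh1, h2 + dt * dh2

# Phase 1: free relaxation to a fixed point of the energy (input clamped).
h1, h2 = np.zeros(n_h), np.zeros(n_y)
for _ in range(4000):
    h1, h2 = step(h1, h2)
h1_free, h2_free = h1.copy(), h2.copy()

# Phase 2: weakly push the output toward the target for a few early steps.
beta = 0.1
for _ in range(20):
    h1, h2 = step(h1, h2, y=y, beta=beta)
delta_h1 = h1 - h1_free                         # early perturbation of h1

# Back-propagated error at the fixed point: chain rule through W2.
bp_grad_h1 = rho_prime(h1_free) * (W2.T @ (rho_prime(h2_free) * (h2_free - y)))

cos = np.dot(delta_h1, -bp_grad_h1) / (
    np.linalg.norm(delta_h1) * np.linalg.norm(bp_grad_h1))
print(f"cosine(early perturbation, -backprop gradient) = {cos:.4f}")
```

Running the script should print a cosine similarity close to 1, illustrating that the first moments of inference after the output nudge carry (a scaled version of) the back-propagated error signal; the agreement degrades as the nudging strength `beta` or the number of perturbed steps grows.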
Yoshua Bengio and Asja Fischer
arXiv e-Print archive, 2015
Keywords: cs.LG
