Sparse Additive Text Models with Low Rank Background
Paper summary

This paper presents a model inspired by the SAGE (Sparse Additive GEnerative) model of Eisenstein et al. The authors take a different approach to modeling the "background" component. SAGE uses the same background model for all classes; the authors instead allow different backgrounds for different topics, classification labels, etc., but constrain the matrix of backgrounds to be low rank. To make inference faster under this low-rank constraint, they use a bound on the likelihood function that avoids the log-sum-exp calculations required by SAGE. Experimental results are positive on several different tasks.

Sparse additive models represent sets of distributions over large vocabularies as log-linear combinations of a dense, shared background vector and a sparse, distribution-specific vector. The paper presents a modification that allows distributions to have distinct background vectors, but requires that the matrix of background vectors be low rank. This method leads to better predictive performance in a labeled classification task and in a mixed-membership LDA-like setting.

Previous work on SAGE introduced a new model for text: it built each lexical distribution by adding deviation components to a fixed background. The model presented in this paper, SAM-LRB, builds on SAGE and claims to improve it with two additions: first, a unique background for each class/topic; second, an approximation of the log-likelihood that yields a faster learning and inference algorithm than SAGE's.
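The additive parameterization described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's actual estimator: variable names, sizes, and the random low-rank factorization are all assumptions, and the bound-based inference is not reproduced here. It only shows the structural difference between SAGE's single shared background and SAM-LRB's per-class backgrounds built as a low-rank product.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, R = 50, 4, 2  # vocabulary size, number of classes/topics, background rank (toy values)

# Sparse per-class deviations: most entries are exactly zero.
deviations = np.zeros((K, V))
active = rng.choice(V, size=5, replace=False)
deviations[:, active] = rng.normal(size=(K, 5))

def class_distributions(background, deviations):
    """Log-linear combination: p(w | k) proportional to exp(background[k, w] + deviations[k, w])."""
    logits = background + deviations              # broadcasts a shared (V,) background over classes
    logits -= logits.max(axis=1, keepdims=True)   # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# SAGE-style: one dense background vector shared by every class.
shared_background = rng.normal(size=V)
p_sage = class_distributions(shared_background, deviations)

# SAM-LRB-style: a distinct background per class, low rank by construction:
# the K x V background matrix is the product of K x R and R x V factors, R << min(K, V).
U = rng.normal(size=(K, R))
W = rng.normal(size=(R, V))
low_rank_backgrounds = U @ W
p_lrb = class_distributions(low_rank_backgrounds, deviations)

assert np.allclose(p_sage.sum(axis=1), 1.0)
assert np.allclose(p_lrb.sum(axis=1), 1.0)
assert np.linalg.matrix_rank(low_rank_backgrounds) <= R
```

The factored form `U @ W` is one common way to enforce a rank constraint by construction; the paper's contribution is fitting such backgrounds efficiently via a likelihood bound rather than direct log-sum-exp optimization.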
Shi, Lei
Neural Information Processing Systems Conference, 2013

Summary by NIPS Conference Reviews
