Optimal Neural Population Codes for High-dimensional Stimulus Variables
Paper summary

Identifying the objective functions that regions of the nervous system are optimized for is a central question in neuroscience, since each candidate objective provides a computational principle for the neural representation in a given region. One common objective is to maximize the Shannon information the neural response encodes about the input (infomax), which has some experimental support. Another is to minimize the decoding error when the population is decoded for a particular variable or set of variables, which also has some experimental support. The two objectives make similar predictions in some circumstances and diverge in others.

Studies that model the optimal distribution of neural population tuning under minimal decoding error (L2-min) have mostly considered one-dimensional stimuli. In this paper the authors extend this line of work substantially by developing analytical methods for finding the optimal distributions of neural tuning for higher-dimensional stimuli. Their methods apply under certain restricted conditions, such as when the number of neurons equals the number of stimulus dimensions (the diffeomorphic case). The authors compare their results to the infomax solution (in most detail for the 2D case) and find broadly similar results, but with two key differences: the L2-min basis functions are more orthogonal than the infomax ones, and L2-min yields discrete solutions rather than the continuum found for infomax. A consequence of these differences is that L2-min representations encode more correlated signals.
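For concreteness, here is a minimal sketch of the two objectives in the Fisher-information approximation that this line of work typically relies on (the notation below is mine, not necessarily the paper's). With stimulus $\mathbf{s} \in \mathbb{R}^D$, population response $\mathbf{r}$, and Fisher information matrix $J(\mathbf{s})$, infomax amounts (in the low-noise limit where mutual information is governed by Fisher information) to maximizing the expected log-determinant of $J$, while L2-min minimizes the Cramér-Rao bound on the mean squared decoding error:

$$\text{infomax:}\ \max\ \mathbb{E}_{\mathbf{s}}\!\left[\tfrac{1}{2}\log\det J(\mathbf{s})\right], \qquad \text{L2-min:}\ \min\ \mathbb{E}_{\mathbf{s}}\!\left[\operatorname{tr}\, J(\mathbf{s})^{-1}\right].$$

In one dimension the two criteria allocate Fisher information similarly, but in higher dimensions $\log\det$ and $\operatorname{tr}(\cdot)^{-1}$ penalize the eigenvalue spectrum of $J(\mathbf{s})$ differently, which is one way to see why the optimal basis functions can end up more orthogonal under L2-min than under infomax.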
Optimal Neural Population Codes for High-dimensional Stimulus Variables
Wang, Zhuo and Stocker, Alan A. and Lee, Daniel D.
Neural Information Processing Systems Conference - 2013 via Bibsonomy
Keywords: dblp
