Abstract
Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We show analytically that inference and learning for our generative model can be achieved by a neural circuit with intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behavior for artificial and natural stimuli. Our results link IP to non-trivial input statistics, in particular the statistics of stimulus intensities for classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations.
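The abstract describes the model only at a high level. As a purely illustrative sketch of the general idea (not the paper's actual generative model, derived circuit, or IP rule), the toy Python script below learns class patterns from intensity-varying inputs using a Hebbian-style weight update together with an IP-like homeostatic bias update that drives each unit's average activation toward a target rate. All names and parameter choices here (`target_rate`, the gamma intensity distribution, the learning rates) are assumptions made for illustration.

```python
# Illustrative sketch only: a toy mixture where each class k has a normalized
# pattern and samples vary in overall intensity. A soft winner-take-all layer
# infers the class, Hebbian updates learn the patterns, and an IP-like rule
# adapts each neuron's bias toward a target mean activation.
import numpy as np

rng = np.random.default_rng(0)

D, K, N = 20, 3, 5000                 # input dim, number of classes, samples
lr_w, lr_b, target_rate = 0.05, 0.01, 1.0 / K   # assumed hyperparameters

# Toy data: class patterns scaled by random per-sample intensities.
true_patterns = rng.random((K, D))
true_patterns /= true_patterns.sum(axis=1, keepdims=True)
labels = rng.integers(0, K, size=N)
intensities = rng.gamma(shape=5.0, scale=2.0, size=N)   # assumed intensity law
X = intensities[:, None] * true_patterns[labels] + 0.05 * rng.random((N, D))

# Model parameters: weights (learned patterns) and per-neuron biases (IP).
W = rng.random((K, D))
W /= W.sum(axis=1, keepdims=True)
b = np.zeros(K)

def softmax(v):
    v = v - v.max()
    e = np.exp(v)
    return e / e.sum()

for x in X:
    # Inference: responsibility of each unit from log-linear activations.
    u = W @ np.log(x + 1e-9) + b
    r = softmax(u)

    # Hebbian-style update pulls the winning pattern toward the input direction.
    x_dir = x / x.sum()
    W += lr_w * r[:, None] * (x_dir[None, :] - W)

    # IP-like homeostatic update: each neuron nudges its bias so its average
    # responsibility approaches the target rate, so no unit captures all inputs.
    b += lr_b * (target_rate - r)

# Each learned row of W should align with one true class pattern.
corr = np.corrcoef(np.vstack([W, true_patterns]))[:K, K:]
print("best-match correlations:", corr.max(axis=1).round(2))
```

In this sketch, the log-domain activation turns a global intensity change into an additive term shared across units, which the softmax competition cancels; it is meant only to convey why intensity-robust classification and excitability adaptation can interact, not to reproduce the circuit derived in the paper.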
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 4285-4293 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| State | Published - 2016 |
| Event | 30th Annual Conference on Neural Information Processing Systems, NIPS 2016, Barcelona, Spain (Dec 5–10, 2016) |
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing