Seminars & Colloquia Calendar
Embedding of Low-Dimensional Attractor Manifolds by Neural Networks
Remi Monasson - École Normale Supérieure, Paris
Date & time: Wednesday, 25 August 2021 at 10:45AM - 11:45AM
Recurrent neural networks (RNNs) have long been studied to explain how fixed-point attractors may emerge from noisy, high-dimensional dynamics. More recently, computational neuroscientists have devoted considerable effort to understanding how RNNs can embed attractor manifolds of finite dimension, in particular in the context of the representation of space by mammals. A natural open issue is the characterization of the trade-off between the quantity (number) and quality (accuracy of encoding) of the stored manifolds.
I will show how to fix the ~N^2 pairwise interactions in an RNN with N neurons to embed L manifolds of dimension D << N, optimally according to a dynamical stability criterion. The resulting capacity, i.e., the maximal ratio L/N, decreases as ~ |log ε|^-D, where ε is the error on the position encoded by the neural activity along each manifold. These results, derived using a combination of analytical tools from statistical mechanics and random matrix theory, show that RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution.
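To get a feel for the scaling stated in the abstract, the snippet below evaluates the capacity law L/N ~ |log ε|^-D for a few illustrative values of the encoding error ε and manifold dimension D. The unit prefactor is a placeholder, not part of the result; only the scaling with ε and D is taken from the abstract.

```python
import math

def capacity_scaling(eps, D, prefactor=1.0):
    """Illustrative capacity L/N ~ |log eps|^-D.

    eps: error on the encoded position along each manifold (0 < eps < 1).
    D:   dimension of the embedded manifolds.
    The prefactor is a hypothetical placeholder; only the scaling matters.
    """
    return prefactor * abs(math.log(eps)) ** (-D)

# Because the dependence on eps is only logarithmic, demanding a much
# smaller encoding error reduces the capacity rather slowly; higher
# manifold dimension D makes the decrease steeper.
for D in (1, 2):
    for eps in (1e-1, 1e-2, 1e-3):
        print(f"D={D}, eps={eps:g}: L/N ~ {capacity_scaling(eps, D):.4f}")
```

For instance, at D = 2 tightening ε from 1e-1 to 1e-3 divides the capacity by roughly nine, consistent with the claim that many manifolds can be stored at high spatial resolution.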