Learning a new skill requires our brain to assimilate the regularities of the external world and the way our body interacts with them as we engage in that skill. Mechanistically, this entails translating inputs, rules, and outputs into changes in the structure of neural networks in our brain. How this translation occurs is still largely unknown. We follow this process of assimilation using Trained Recurrent Neural Networks (TRNNs), which are increasingly used as models of the neural circuits of trained animals.
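For concreteness, the sketch below shows what "TRNN" refers to in practice: a vanilla recurrent network trained with gradient descent on a simple working-memory task, so that the task's regularities end up encoded in the recurrent connectivity. The specific task (report the sign of a brief pulse after a delay), network size, and hyperparameters are illustrative assumptions, not the setup used in this work.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_hidden, n_steps, batch = 64, 50, 128

class TRNN(nn.Module):
    """A minimal trained RNN: recurrent core plus linear readout."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)              # hidden-state trajectory: (batch, time, units)
        return self.readout(h[:, -1])   # report at the final time step

def make_batch():
    # A +1/-1 pulse at t=0; the target is its sign, reported after the delay.
    sign = torch.randint(0, 2, (batch, 1)).float() * 2 - 1
    x = torch.zeros(batch, n_steps, 1)
    x[:, 0, 0] = sign.squeeze()
    return x, sign

model = TRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    x, y = make_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the task structure is no longer explicit anywhere; it is distributed across the learned recurrent weights, which is what makes the translation from task to connectivity worth studying.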
Cancer cells embedded in healthy tissue can revert to normal cells, and, conversely, healthy cells in a tumor environment can become cancerous. This highlights two parallel learning processes, at the level of the cell and of the tissue, in the development or suppression of disease. Cancer cells use their intrinsic dynamic plasticity to escape and explore novel states. Simultaneously, tissue homeostasis is an objective of the collective of cells forming the tissue, which suppresses this exploration and keeps cell types stable. We use the language of machine learning to characterize these two learning processes.
Training machine learning algorithms often introduces the phenomenon of underspecification: a wide gap between the dataset used for training and the real task. A parallel phenomenon in neuroscience is the variety of strategies with which animals can approach a given task. These observations imply that for every task and training set there exists a space of solutions that are equivalent on that set. Neither the structure of this space nor the rules of motion within it is well understood. In this work, we study the space of solutions that emerges from these degrees of freedom in Recurrent Neural Networks (RNNs) trained on neuroscience-inspired tasks.
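The following sketch illustrates the underspecification point, under assumptions of our own (the toy task, probe, and sizes are not those studied here): two RNNs trained from different random seeds on the same data distribution reach comparable training loss, yet can behave differently when probed outside the training regime, revealing that the training set alone does not pin down a single solution.

```python
import torch
import torch.nn as nn

def make_batch(batch, n_steps):
    # Sign-memory task: a +1/-1 pulse at t=0, target is its sign after the delay.
    sign = torch.randint(0, 2, (batch, 1)).float() * 2 - 1
    x = torch.zeros(batch, n_steps, 1)
    x[:, 0, 0] = sign.squeeze()
    return x, sign

def train_rnn(seed, n_steps=30, steps=400):
    torch.manual_seed(seed)
    rnn = nn.RNN(1, 64, batch_first=True)
    readout = nn.Linear(64, 1)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)
    for _ in range(steps):
        x, y = make_batch(128, n_steps)
        h, _ = rnn(x)
        loss = nn.functional.mse_loss(readout(h[:, -1]), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return rnn, readout, loss.item()

nets = [train_rnn(seed) for seed in (0, 1)]
print("final training losses:", [round(n[2], 4) for n in nets])

# Probe outside the training regime: a much longer delay than seen in training.
x_probe, y_probe = make_batch(512, 120)
for i, (rnn, readout, _) in enumerate(nets):
    h, _ = rnn(x_probe)
    pred = readout(h[:, -1])
    acc = (pred.sign() == y_probe).float().mean().item()
    print(f"seed {i}: accuracy at long delay = {acc:.2f}")
```

Networks that are indistinguishable on the training distribution can thus occupy different points of the solution space, which is exactly the degeneracy whose structure and dynamics we set out to characterize.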