Much like artificial neural networks, the brain adapts the strength of synaptic weights to modify neuronal pathways and improve performance. Here, we explore learning through the lens of functional connectivity. We use instantaneous correlation matrices as proxies for functional connectivity, whose main challenge is that they do not obey Euclidean geometry: correlation matrices lie on a curved Riemannian manifold rather than in a flat vector space. Thus, we aim to develop an interpretable AI-based system that 1) respects the Riemannian relations between correlation matrices and 2) reveals the intrinsic network components that give rise to connectivity dynamics.
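The geometric point can be made concrete. Correlation matrices are symmetric positive-definite (SPD), and one standard non-Euclidean choice of distance on the SPD cone is the affine-invariant Riemannian metric. The sketch below (an illustration of the general idea, not the project's committed method; only NumPy is assumed) computes that distance:

```python
import numpy as np

def _spd_power(S, p):
    """Matrix power S**p of a symmetric positive-definite matrix S,
    computed via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w**p) @ V.T

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
        d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F.
    Unlike the Euclidean distance ||A - B||_F, it respects the curved
    geometry of the SPD cone in which correlation matrices live."""
    A_inv_sqrt = _spd_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt   # congruence transform, still SPD
    w = np.linalg.eigvalsh(M)         # eigenvalues of M are all > 0
    return float(np.sqrt(np.sum(np.log(w) ** 2)))
```

The distance depends only on the eigenvalues of A^{-1/2} B A^{-1/2}, is zero exactly when A = B, and is invariant under a common invertible change of basis, a property the Euclidean distance lacks.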
Feedback of error signals is essential for learning. But how does the brain incorporate error signals while it learns? In this project, we will study the fundamental differences between the two main strategies for reinforcement learning – policy-based and value-based. We will develop theoretical models incorporating both and use them to uncover the mechanisms underlying neuronal recordings from dopaminergic cells in learning animals.
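The contrast between the two strategies can be sketched on a toy problem (a minimal illustration under assumed settings, not the project's models; the two-armed bandit, its reward probabilities, and all learning rates below are hypothetical). A value-based learner updates action values from a reward prediction error, the TD-style error signal classically associated with dopaminergic activity, while a policy-based learner adjusts action preferences directly along the policy gradient, without explicit value estimates:

```python
import numpy as np

# Hypothetical two-armed bandit; reward probabilities are illustrative.
TRUE_MEANS = np.array([0.2, 0.8])

def pull(rng, arm):
    """Bernoulli reward from the chosen arm."""
    return float(rng.random() < TRUE_MEANS[arm])

def value_based(steps=5000, alpha=0.1, eps=0.1, seed=0):
    """Value-based learning: update action values Q with the reward
    prediction error (r - Q[a]); act epsilon-greedily on Q."""
    rng = np.random.default_rng(seed)
    Q = np.zeros(2)
    for _ in range(steps):
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q))
        r = pull(rng, a)
        Q[a] += alpha * (r - Q[a])   # prediction-error-driven update
    return Q

def policy_based(steps=5000, alpha=0.1, seed=0):
    """Policy-based learning (REINFORCE with a running-mean baseline):
    adjust action preferences h along the policy gradient."""
    rng = np.random.default_rng(seed)
    h = np.zeros(2)
    baseline = 0.0
    for t in range(1, steps + 1):
        p = np.exp(h - h.max())
        p /= p.sum()                      # softmax policy over actions
        a = int(rng.choice(2, p=p))
        r = pull(rng, a)
        baseline += (r - baseline) / t    # running mean of rewards
        grad = -p
        grad[a] += 1.0                    # d log pi(a) / dh
        h += alpha * (r - baseline) * grad
    return h
```

Both learners come to favor the better arm, but they do so through different internal quantities, explicit values versus action preferences, which is exactly the kind of signature the project's models would aim to distinguish in neuronal data.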