This project uses transformers and linear systems to forecast neural activity, aiming to learn predictive embeddings that capture the brain’s dynamics. By integrating these models, it seeks to model complex temporal dependencies in neural data and generate accurate forecasts, exploring the intersection of machine learning and neuroscience.
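As a minimal illustration of the linear-systems half of this idea (a sketch only; the simulated data, model, and variable names are assumptions, not the project's actual pipeline), one can fit a linear dynamical model x[t+1] ≈ A x[t] to an activity time series by least squares and use it to forecast the next step:

```python
import numpy as np

# Illustrative sketch: fit a linear dynamical model x[t+1] ~= A x[t]
# to simulated "neural activity", then forecast one step ahead.
rng = np.random.default_rng(0)

# Simulate activity from a slowly rotating, slightly decaying 2-D system.
theta = 0.1
A_true = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
T = 500
X = np.zeros((T, 2))
X[0] = rng.standard_normal(2)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.01 * rng.standard_normal(2)

# Least squares: solve X[:-1] @ M = X[1:], so M is the transpose of A.
M, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = M.T

# One-step-ahead forecast from the last observed state.
forecast = A_hat @ X[-1]
```

A transformer would replace the single matrix `A_hat` with a sequence model over a window of past activity, trading this closed-form fit for the capacity to capture nonlinear, long-range temporal dependencies.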
This project investigates the errors and vulnerabilities of Large Language Models (LLMs), aiming to identify the conditions under which they make mistakes and the ways those mistakes can be exploited. It will categorize LLM errors, design tasks that place humans under comparable conditions, and compare the resulting error patterns. This comparison will highlight where LLM and human reasoning diverge, offering insights for improving LLM robustness and reliability.
This project examines how reinforcement learning (RL) can use predictive maps to improve task performance, exploring how semi-supervised predictive models yield robust representations for RL. The resulting algorithms are tested on tasks drawn from neuroscience and compared against neural data to understand how the brain combines predictive and RL-based computations to solve tasks.
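One standard form of predictive map is the successor representation (SR), whose closed-form version is sketched below; the environment (a 5-state chain under a random-walk policy), reward placement, and names are illustrative assumptions, not the project's actual tasks:

```python
import numpy as np

# Sketch of a successor-representation "predictive map": for a fixed
# policy with transition matrix P, M = (I - gamma * P)^(-1) holds the
# expected discounted future occupancy of each state from each state,
# and state values follow as V = M @ r.
n = 5
gamma = 0.9

# Random-walk transition matrix on a chain with reflecting ends.
P = np.zeros((n, n))
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

M = np.linalg.inv(np.eye(n) - gamma * P)  # predictive map (SR)

r = np.zeros(n)
r[-1] = 1.0        # reward only at the last state
V = M @ r          # values rise monotonically toward the reward
```

Because the map `M` is learned from transitions alone, reward-free (here "semi-supervised") experience can build it in advance, and new reward functions are evaluated with a single matrix-vector product rather than fresh RL from scratch.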