Explore new theoretical models of learnability that aim to circumvent the worst-case nature of classical PAC learning theory. Focus on generalization and on differentially private learning.
Study different forms of algorithmic stability, such as differential privacy, PAC-Bayes, mutual information, and others. Explore their applications in generalization theory and in addressing emerging ethical issues such as privacy and algorithmic fairness.
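As a concrete illustration of how a stability notion yields generalization guarantees, one representative PAC-Bayes bound (in the style of McAllester) states that, with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $n$, simultaneously for all posterior distributions $Q$ over hypotheses (with the prior $P$ fixed before seeing the data):

```latex
\mathop{\mathbb{E}}_{h \sim Q} L_{\mathcal{D}}(h)
\;\le\;
\mathop{\mathbb{E}}_{h \sim Q} \hat{L}_{S}(h)
\;+\;
\sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{n}{\delta}}{2(n-1)}}
```

Here $L_{\mathcal{D}}$ denotes the population risk and $\hat{L}_{S}$ the empirical risk; the exact constants and logarithmic factors vary across versions of the bound, and this form is meant only as a representative example. The divergence term $\mathrm{KL}(Q\,\|\,P)$ is what quantifies the "stability" of the learner's output relative to its prior.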
Study and develop the already deep and fruitful links between machine learning and other fields of mathematics and computer science. Study frameworks for interactive learning and responsible learning, and use ideas based on data compression to develop algorithm-dependent generalization bounds. Such bounds are vital for explaining how algorithms with large parameter spaces generalize (e.g., deep neural networks).
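To illustrate the compression-based approach, consider a classical sample-compression bound in the spirit of Littlestone and Warmuth for the realizable setting: if the hypothesis $h$ returned on a sample of size $n$ can be reconstructed from a subsample of at most $k$ examples, and $h$ is consistent with the entire sample, then with probability at least $1-\delta$,

```latex
L_{\mathcal{D}}(h) \;\le\; \frac{k \ln n + \ln\frac{1}{\delta}}{\,n - k\,}
```

(again, constants and log factors differ across variants). The key point is that the bound depends on the compression size $k$ rather than on the number of parameters of the model, which is why such algorithm-dependent bounds can remain meaningful for heavily over-parameterized learners such as deep networks.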