MACHINE LEARNING FOUNDATIONS
My research centres on geometric machine learning, in particular the mathematical and algorithmic foundations of deep learning. It is guided by an ambition to solve core problems in medical image analysis, whilst aiming for generic solutions with a wide application scope. My current focus is on (generalizations of) group convolutional neural networks and on improving computational and representational efficiency through sparse and adaptive learning mechanisms.
Topic areas: deep learning, Bayesian statistics, probabilistic modelling, generative models, anomaly detection, medical image analysis, group convolutional neural networks, (sub-)Riemannian geometry
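As a concrete illustration of the group convolutional networks mentioned above (a minimal NumPy sketch, not code from any library referenced on this page): a lifting layer correlates the input with every rotated copy of a filter, producing one feature map per element of the four-fold rotation group C4. Rotating the input then rotates each feature map and cyclically permutes the channels, which is the equivariance property these networks are built on.

```python
import numpy as np

def correlate2d_valid(x, f):
    """Plain 'valid' cross-correlation of a 2-D array x with filter f."""
    H, W = x.shape
    h, w = f.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * f)
    return out

def c4_lifting_correlation(image, filt):
    """Correlate the image with all four 90-degree rotations of the filter,
    giving one feature map per element of the rotation group C4."""
    return np.stack([correlate2d_valid(image, np.rot90(filt, k))
                     for k in range(4)])
```

Because correlation commutes with rotation, rotating the image by 90 degrees produces outputs that are rotated copies of the original outputs with the channel index shifted by one.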
Patrick Forré’s research is centered around the study of mathematical aspects of machine learning, such as the analysis of causal and graphical models, information theory and conditional independence structures, and geometric deep learning and topology. Furthermore, he is enthusiastic about applications of machine learning techniques to scientific data problems and enjoys collaborating with researchers from other fields of science.
Topic areas: causal and graphical models, information theory, conditional independence structures, geometric deep learning, topology
My research focuses on causality and causality-inspired machine learning, i.e. applications of causal inference to machine learning that improve its robustness, safety and sample efficiency. My expertise is in learning causal relations jointly from different observational and experimental settings, especially in the presence of latent confounders and small samples, as well as in methods for designing experiments that allow one to learn causal relations in a sample-efficient and intervention-efficient way. Recently, I have been focusing on causality-inspired domain adaptation, both in the context of policy transfer in RL and for improving the translation of medical insights from mice to humans.
Topic areas: causality, causal inference, causality-inspired ML, robustness, safety
My research is focused on the design and study of statistical methods for the estimation of causal models from data and their use in predicting the effects of interventions on systems. Most of the applications I work on are in the biological sciences, the medical domain, and the design of recommender systems.
Topic areas: causality, causal modeling, causal inference, causal discovery
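The gap between observational prediction and predicting the effect of an intervention can be seen in a toy linear model with a confounder (an illustrative sketch, not one of the estimation methods developed in this research): regressing Y on X alone is biased by the confounder, while adjusting for it recovers the true causal coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.normal(size=n)                       # confounder
x = 0.8 * z + rng.normal(size=n)             # treatment, affected by Z
y = 2.0 * x + 1.5 * z + rng.normal(size=n)   # true causal effect of X on Y is 2.0

# Naive effect: simple regression of Y on X, biased upward by the confounder Z.
naive = np.cov(x, y)[0, 1] / np.var(x)

# Back-door adjustment: regress Y on both X and Z; the X coefficient now
# estimates the slope of the interventional quantity E[Y | do(X = x)].
A = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(A, y, rcond=None)[0][0]
```

Here the naive estimate lands well above 2.0, while the adjusted estimate is close to the true effect.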
My research is focused on building autonomous systems using probabilistic and statistical modelling. These systems should not simply demonstrate artificial intelligence but artificial humility as well: they need to be transparent about their beliefs and willing to admit when they might be wrong. My research consists of formulating general algorithms that use probabilistic reasoning to answer these questions of belief and uncertainty.
Topic areas: deep learning, Bayesian statistics, probabilistic modelling, generative models, anomaly detection
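A minimal version of likelihood-based anomaly detection (an illustrative sketch under a simple Gaussian model, not the deep generative models used in this research): fit a density to normal data and score test points by their negative log-likelihood, so improbable points receive high anomaly scores.

```python
import numpy as np

def fit_gaussian(x):
    """Maximum-likelihood mean and variance of 1-D 'normal' training data."""
    return x.mean(), x.var()

def log_likelihood(x, mu, var):
    """Gaussian log-density evaluated elementwise."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def anomaly_scores(train, test):
    """Higher score = more anomalous: negative log-likelihood of each test
    point under a Gaussian fit to the training data."""
    mu, var = fit_gaussian(train)
    return -log_likelihood(test, mu, var)
```

A point far in the tail of the training distribution then scores much higher than a typical point.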
My group combines probabilistic programming with deep learning to develop probabilistic models for machine learning, data science, and artificial intelligence. I am one of the creators of Anglican, a probabilistic programming system that is closely integrated with Clojure. Our group currently develops Probabilistic Torch, a library for deep generative models that extends PyTorch. Many of my students collaborate with other faculty to develop applications to neuroscience, health, natural language processing, and robotics.
Topic areas: probabilistic programming, differentiable programming, variational inference, Monte Carlo methods
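The core idea of probabilistic programming can be sketched in a few lines of plain Python (a generic illustration; this is not the Anglican or Probabilistic Torch API): a model is an ordinary program whose random choices play the role of sample statements and whose observations contribute likelihood weights, and inference reruns the program many times and reweights.

```python
import numpy as np

def coin_model(rng):
    """A tiny 'probabilistic program': draw a latent coin bias from its
    prior and return it with the likelihood weight of the observed data
    (7 heads out of 10 flips)."""
    p = rng.uniform(0.0, 1.0)        # sample statement (prior)
    weight = p ** 7 * (1 - p) ** 3   # observe statement (likelihood)
    return p, weight

def posterior_mean(model, n_samples=100_000, seed=0):
    """Self-normalized importance sampling over repeated runs of the program."""
    rng = np.random.default_rng(seed)
    samples, weights = zip(*(model(rng) for _ in range(n_samples)))
    return np.average(np.asarray(samples), weights=np.asarray(weights))
```

For this model the exact posterior is Beta(8, 4), so the estimate should be close to 8/12 ≈ 0.667.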
My research focuses on machine learning for autonomous robots in perceptually challenging environments. Currently, I’m working on new ways to exploit known robot models and/or simulators to make reinforcement learning more efficient. I am looking to use a generative model of the robot to characterise its belief over unknown parameters, and to pre-train a policy that learns to trade off exploration and exploitation based on this characterisation.
Topic areas: reinforcement learning, robotics, machine learning
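One standard way to trade off exploration and exploitation through a belief over unknown parameters, in the spirit of the paragraph above (a bandit-sized sketch, not the robot-learning setup itself), is Thompson sampling: maintain a posterior per action, sample from each belief, and act greedily with respect to the samples.

```python
import numpy as np

def thompson_bandit(true_probs, n_rounds=5000, seed=0):
    """Thompson sampling on a Bernoulli bandit: keep a Beta posterior per
    arm, draw one sample from each belief, and pull the arm whose sample
    is highest.  Returns how often each arm was pulled."""
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    wins = np.ones(k)     # Beta(1, 1) priors
    losses = np.ones(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(n_rounds):
        arm = int(np.argmax(rng.beta(wins, losses)))
        reward = rng.random() < true_probs[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Uncertain arms get sampled optimistically early on, while the posterior concentrating on the best arm makes play increasingly greedy.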
My research group designs mathematically well-founded machine learning methods for automatic hyper-parameter tuning in online convex optimization. This includes identifying statistically ‘easy’ situations (e.g. low noise, small-norm optimal parameters, etc.) in which it is possible to learn more efficiently, with less data. We then construct adaptive methods that exploit these easy cases when present, but automatically fall back to slower robust learning strategies when there is no easy structure. In addition, we have recently started on a formal mathematical analysis of explainability methods, which explain the black-box decisions made by machine learning systems to a user. In the past I have also worked on topics in information theory, model selection for Bayesian statistics, and PAC-Bayesian concentration inequalities.
Topic areas: online convex optimization, learning theory, explainable machine learning, model selection
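The "exploit easy cases, fall back to robust rates" idea can be sketched with an AdaGrad-style step size in one dimension (an illustrative sketch, not one of the group's published algorithms): the learning rate is set from the squared gradients observed so far, so small gradients keep it large, while adversarially large gradients recover the safe 1/sqrt(t) schedule.

```python
import numpy as np

def adagrad_1d(grad_fn, n_rounds=500, radius=1.0):
    """Projected online gradient descent on [-radius, radius] with an
    AdaGrad-style step size eta_t = radius / sqrt(sum of squared gradients).
    Returns the average iterate, the quantity that OCO regret bounds control."""
    x, g2, total = 0.0, 0.0, 0.0
    for _ in range(n_rounds):
        g = grad_fn(x)
        g2 += g * g
        step = radius / np.sqrt(g2) if g2 > 0 else radius
        x = float(np.clip(x - step * g, -radius, radius))
        total += x
    return total / n_rounds
```

Run on a fixed smooth loss such as f(x) = (x - 0.5)^2, the average iterate approaches the minimizer without any hand-tuned learning rate.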
Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and leads AMLAB. He is also VP at Qualcomm Technologies and a fellow of CIFAR and ELLIS. He co-directs two ICAI labs, QUVA and Delta, is a founding board member of the ELLIS society, and directs the Amsterdam ELLIS unit. Prof. Welling’s main interests are geometric deep learning, graphical and generative models, and quantum ML.
Topic areas: geometric deep learning, graphical and generative models, quantum machine learning