Data is becoming an abundant resource that we can only fully leverage by developing techniques that do not require manual human annotation. This is particularly true for image and video data, and it is why my research focuses on self-supervised learning: here, we aim to learn meaningful representations and solve various tasks without using annotations, making it possible to scale and generalize to new and larger settings without the burden of labels.
Topic areas: computer vision, self-supervised learning, representation learning, multi-modal learning
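To make the self-supervised idea above concrete, here is a minimal sketch (all names, dimensions, and the temperature value are illustrative, not from any particular paper) of a contrastive InfoNCE-style objective: two augmented views of the same image are pulled together while other images in the batch are pushed apart, with no labels involved.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss between two batches of embeddings.

    Row i of z1 and row i of z2 are two augmented views of the same image
    (the positive pair); all other rows act as negatives.
    """
    # L2-normalize so the dot product is a cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise similarity matrix
    # log-softmax over each row; positives sit on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When the two views of each image agree (diagonal similarities dominate), the loss is low; misaligned pairs raise it. No annotation enters the objective, which is what lets such methods scale.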
My research focuses on three axes: Temporal Machine Learning and Dynamics, Efficient Vision, and Machine Learning for Oncology. Here are some research questions I’m interested in. How do we model complex, perhaps continuous or even online, temporal and spatiotemporal data? Is there an ordinary, partial, or stochastic differential equation that corresponds to ImageNet? Is there a link between the consistent yet mysterious learning behavior of deep neural networks and chaotic behavior (or the lack thereof) in dynamical systems? Is there a causal connection between neural networks and dynamics? Can we learn spatiotemporal models that generalize beyond static and stationary data? These and many more are some of the fundamental questions that my colleagues and I are trying to answer.
Topic areas: temporal machine learning and dynamics, efficient vision, machine learning for oncology
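One concrete version of the network–dynamics link asked about above is the well-known observation that a residual block is an explicit Euler step of an ODE dh/dt = f(h). The sketch below (a toy illustration with hypothetical dimensions and a simple tanh vector field, not any specific model from this research) integrates such dynamics by stacking residual updates.

```python
import numpy as np

def f(h, W):
    """A simple learnable vector field: one tanh layer parameterized by W."""
    return np.tanh(h @ W)

def euler_resnet(h0, W, steps, dt=0.1):
    """Residual network as explicit Euler integration of dh/dt = f(h).

    Each iteration h <- h + dt * f(h) is one residual block; stacking
    `steps` blocks integrates the dynamics from t=0 to t=steps*dt.
    """
    h = h0
    for _ in range(steps):
        h = h + dt * f(h, W)  # one residual block = one Euler step
    return h
```

Halving the step size while doubling the depth approximates the same underlying trajectory, which is the discretization viewpoint behind neural-ODE-style models of temporal data.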
Our research is focused on computer vision and deep learning, and in particular image processing, 3D object understanding and human behavior analysis with industrial and societal applications.
Topic areas: computer vision
My research is focused on automatically understanding the content of videos, with a focus on bridging the gap between deep video learning and information from prior semantic and symbolic knowledge. The research agenda includes non-Euclidean manifolds for video representations, recognising actions without training examples, learning the common structure between video examples, and reasoning beyond action labels.
Topic areas: computer vision, video understanding, structured video representations
Arnold Smeulders is professor of artificial intelligence at the University of Amsterdam and currently envoy for ELLIS to the EU. He is a recipient of the Korteweg medallion and old enough to receive the ACM SIGMM lifetime award. He is a member of the Academia Europaea. He has graduated 60 PhD students.
Topic areas: computer vision
Prof. dr. Cees Snoek heads the Video & Image Sense Lab, where we make sense of video and images with artificial and human intelligence. We study computer vision, deep learning and cognitive science. The VIS Lab also embeds four public-private AI labs: QUVA Lab with Qualcomm, Delta Lab with Bosch, Atlas Lab with TomTom, and AIM Lab with the Inception Institute of Artificial Intelligence. For our most recent work, please check: https://ivi.fnwi.uva.nl/vislab/.
Topic areas: computer vision, deep learning, cognitive science
We research multimedia analytics by developing AI techniques for extracting the richest information possible from the data (images/videos/text/graphs), interactions that surpass what human or machine intelligence can achieve alone, and visualizations blending it all into effective interfaces for applications in health, forensics and law enforcement, cultural heritage, urban livability, and social media analysis.
Topic areas: multimedia integration, interactive learning, visual analytics
My research focuses on physics-based computer vision: light traveling in the 3D world interacts with the scene through intricate processes before being captured by a camera. These processes result in dazzling effects such as color and shading, complex surface and material appearance, and various weathering effects, to name a few. Physics-based vision aims to invert these processes to recover scene properties, such as shape, reflectance, light distribution, and medium properties, from images, by modelling and analyzing the imaging process to extract the desired features or information.
Topic areas: computer vision, physics-based vision, perception-based vision, 3D geometry
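As a toy illustration of the inverse problem described above (a sketch under simplifying assumptions, not the author's actual method): under a Lambertian imaging model, intensity is albedo times the clamped dot product of surface normal and light direction. Given intensities of one scene point under several known lights, the normal and albedo can be recovered by least squares, as in classic photometric stereo.

```python
import numpy as np

def render_lambertian(normal, albedo, lights):
    """Forward imaging model: one intensity per light, albedo * max(0, n . l)."""
    return albedo * np.clip(lights @ normal, 0.0, None)

def recover_normal_albedo(intensities, lights):
    """Invert the model by least squares (assumes every light illuminates the point).

    Solves lights @ g = intensities for g = albedo * normal, then factors
    g into a unit normal and a scalar albedo.
    """
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```

The forward pass models the imaging process; the least-squares step is the "inversion" that physics-based vision generalizes to far richer reflectance, lighting, and medium models.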
My research is focused on computer vision, machine learning and medical image analysis. My current research topics include meta-learning, variational Bayesian inference and their applications to few-shot learning, domain generalisation, continual learning, multi-task learning, multi-modal learning, semantic segmentation and automatic report generation. I am also intrigued by interdisciplinary topics between cognitive science and artificial intelligence, e.g., memory and attention mechanisms.
Topic areas: meta-learning, variational Bayesian inference, out-of-distribution generalisation, neural memory