Our Mission
UvA
Qualcomm

The mission of the QUVA-lab is to perform world-class research on deep vision. Deep vision strives to automatically interpret, with the aid of deep learning, what happens where, when, and why in images and video. Deep learning is a form of machine learning with neural networks, loosely inspired by how neurons process information in the brain. Research projects in the lab focus on learning to recognize objects in images from a single example, personalized event detection and summarization in video, and privacy-preserving deep learning. The research is published in the best academic venues and secured in patents.

Our Projects

Harmonic Analysis for Stochastic Neural Networks

Adeel Pervez

Stochastic neural networks are widely used in probabilistic modeling but often suffer from instability during training. The type of stochasticity that can be usefully employed is also a limiting factor. In this project we apply harmonic analysis techniques to improve learning of stochastic models and to develop methods for efficient training of discrete stochastic models.
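The project's harmonic-analysis techniques aside, a common baseline for training discrete stochastic models is the Gumbel-softmax relaxation, sketched below in plain NumPy; the logits and temperature are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Sample a relaxed one-hot vector from a categorical distribution.

    Adding Gumbel noise to the logits and applying a temperature-scaled
    softmax gives a differentiable surrogate for a discrete sample.
    """
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()

logits = np.array([1.0, 2.0, 0.5])
sample = gumbel_softmax(logits, tau=0.5)
print(sample)  # lies on the probability simplex; sums to (numerically) 1
```

As the temperature tau approaches zero, the relaxed samples approach true one-hot vectors, at the cost of noisier gradients.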

Temporal Causality in Machine Learning

Phillip Lippe

Identifying causal relations beyond correlations from data is a key step towards generalization in deep learning. In this project we study and develop new methods to incorporate causal understanding in machine learning models and identify causal variables from high-dimensional observations, with a focus on temporal relations.

Unsupervised Learning for Source Compression

Natasha Butt

Learned compression has recently shown advantages over traditional compression codecs. In this project, we will explore new methods for lossless and lossy compression with unsupervised learning.
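A minimal illustration of why a learned model compresses: an entropy coder driven by a probability model needs about -log2 p(x) bits per symbol, so a model fit to the data beats a fixed-length code. The toy stream and three-symbol alphabet below are assumptions for illustration:

```python
import numpy as np

# Toy symbol stream with a skewed distribution.
data = np.array([0, 0, 0, 1, 0, 2, 0, 1] * 100)

# "Learned" model: empirical symbol probabilities estimated from the data.
counts = np.bincount(data, minlength=3)
probs = counts / counts.sum()

# An ideal entropy coder spends -log2 p(x) bits per symbol.
bits_model = -np.log2(probs[data]).sum()
bits_uniform = len(data) * np.log2(3)  # fixed-length code over 3 symbols

print(round(bits_model), round(bits_uniform))  # model code is shorter
```

Practical learned codecs replace the empirical counts with a neural density model and a real entropy coder, but the accounting is the same.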

Federated Learning

Rob Romijnders

The future of machine learning will see data distributed across multiple devices. This project studies effective model learning when data is distributed and communication bandwidth between devices is limited.
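A minimal sketch of one standard approach, federated averaging (FedAvg), on a toy linear-regression task; the clients, model, and learning rate are illustrative, not the project's actual method:

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a client's local data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(w, clients, rounds=100):
    """Clients train locally; only weight vectors are communicated."""
    for _ in range(rounds):
        updates = [local_step(w, X, y) for X, y in clients]
        w = np.mean(updates, axis=0)  # server averages the client updates
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w) for X in (rng.normal(size=(32, 2)) for _ in range(4))]

w = fed_avg(np.zeros(2), clients)
print(np.round(w, 2))  # close to true_w = [2, -1]
```

Bandwidth limits enter through what is communicated: here one weight vector per client per round, instead of any raw data.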

Efficient Video Representation Learning

Mohammadreza Salehi

Despite enormous advances in image representation learning, video representation learning remains underexplored because of its higher computational cost and the space-time dynamics that shape the content of a video. In this research, we aim to capture the content of a video more efficiently in several respects, such as data and computational efficiency.

Video Action Recognition

Pengwan Yang

In this project, we focus on video understanding with the goal of alleviating the dependency on labels. We develop methods that leverage few-shot, weakly-supervised, and unsupervised learning signals.

Hardware-Aware Learning

Winfried van den Dool

We study and develop novel approaches for hardware-aware learning, focusing on actual hardware constraints, and work towards a unified framework for scaling and improving noisy and low-precision computing.
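One common way to study hardware constraints in software is to simulate low-precision arithmetic with "fake" quantization; the sketch below (an assumption for illustration, not the project's framework) shows how reconstruction error grows as bit width shrinks:

```python
import numpy as np

def fake_quantize(x, bits=8):
    """Simulate fixed-point hardware by rounding to a low-precision grid."""
    scale = (2 ** (bits - 1) - 1) / np.abs(x).max()
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000)  # stand-in for a layer's weights

errs = {}
for bits in (8, 4, 2):
    errs[bits] = np.abs(fake_quantize(w, bits) - w).max()
    print(bits, errs[bits])  # error grows as precision drops
```

Training through such a simulated quantizer (with a straight-through gradient) is a standard way to make networks robust to the noise of low-precision hardware.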

Generalizable Video Representation Learning

Michael Dorkenwald

In this project we aim to develop self-supervised methods that obtain generalizable video representations and solve novel tasks for which the use of multiple modalities is a necessity, such as video scene understanding.

Quantum Machine Learning

Evgenii Egorov

We develop the theory of quantum neural networks and its Bayesian formulation, and will construct quantum-inspired classical deep learning models as a new class of classically efficient neural networks that use quantum statistics as their inference model.

Geometric Deep Learning

Gabriele Cesa

Many machine learning tasks come with some intrinsic geometric structure. In this project, we will study how to encode the geometry of a problem into neural-network architectures to achieve improved data efficiency and generalization. A particular focus will be given to 3D data and the task of 3D reconstruction, where the global 3D structure is only accessible through a number of 2D observations.
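A one-line example of encoding geometry into an architecture: ordinary (circular) convolution is equivariant to translations, meaning that shifting the input and then convolving gives the same result as convolving and then shifting. A quick numerical check, with an arbitrary signal and kernel:

```python
import numpy as np

def circ_conv(x, k):
    """Circular 1-D convolution, a translation-equivariant linear map."""
    n = len(x)
    return np.array([sum(k[j] * x[(i - j) % n] for j in range(len(k)))
                     for i in range(n)])

x = np.arange(8.0)               # arbitrary input signal
k = np.array([1.0, -2.0, 1.0])   # arbitrary kernel

# Equivariance: shift-then-convolve equals convolve-then-shift.
lhs = circ_conv(np.roll(x, 3), k)
rhs = np.roll(circ_conv(x, k), 3)
print(np.allclose(lhs, rhs))  # True
```

Equivariant architectures extend this idea from translations to richer groups such as 3-D rotations, which is what makes them attractive for 3-D reconstruction from 2-D views.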

Symmetries and Causality

Pim de Haan

Most types of data possess symmetries, and neural networks can be made to respect them. In this project, we develop ways to generalize these notions, introduce local symmetries, and explore connections between symmetries and interventions on causal models.

Previous Projects

Temporal modeling in videos

Amir Ghodrati

This project aims to learn representations that maintain the sequential structure of video, for use in temporal video prediction tasks.

Fine-grained object recognition

Shuai Liao

Automatically recognize fine-grained categories with interactive accuracy, using very deep convolutional representations computed from automatically segmented objects and automatically selected features.

Personal event detection and recounting

Kirill Gavrilyuk

Automatically detect events in a set of videos with interactive accuracy for the purpose of personal video retrieval and summarization. We strive for a generic representation that covers detection, segmentation, and recounting simultaneously, learned from few examples.

Counting

Tom Runia

The goal of this project is to accurately count the number of arbitrary objects in an image or video, independent of their apparent size, partial presence, and other practical distractors, for use cases such as the Internet of Things or robotics. Completed PhD thesis

Robust Mobile Tracking

Ran Tao

The objective is to track the target's position over time, given either a starting box in frame 1 or the target's typed category, with an emphasis on long-term, robust tracking.

One shot visual instance search

Berkay Kıcanaoglu

Often when searching for something, a user will have available just one or very few images of the instance being searched for, with varying degrees of background knowledge.

Statistical machine translation

Mert Kilickaya

The objective of this work package is to automatically generate grammatical descriptions of images that represent the meaning of a single image, based on the annotations resulting from the above projects.

The story of this

Noureldien Hussein

Often when telling a story one is not interested in what happens in general in the video, but in what happens to this instance (a person, a car to pursue, a boat participating in a race). The goal is to infer what the target encounters and to describe the events that occur to it. Completed PhD thesis

Distributed deep learning

Matthias Reisser

Future applications of deep learning will run on mobile devices and use data from distributed sources. In this project we will develop new efficient distributed deep learning algorithms to improve the efficiency of learning and to exploit distributed data sources.

Automated Hyper-parameter Optimization

Changyong Oh

Deep neural networks have a very large number of hyper-parameters. In this project we develop new methods to automatically and efficiently determine these hyper-parameters from data.
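As a point of reference, the simplest automated method is random search over log-uniform ranges; the sketch below uses a synthetic validation loss in place of real network training, and the search ranges are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_loss(lr, wd):
    """Stand-in for training a network and measuring validation loss.

    This synthetic bowl has its optimum at lr = 1e-3, wd = 1e-5.
    """
    return (np.log10(lr) + 3) ** 2 + (np.log10(wd) + 5) ** 2

best = None
for _ in range(100):
    # Sample hyper-parameters log-uniformly, the usual choice for rates.
    lr = 10 ** rng.uniform(-6, 0)
    wd = 10 ** rng.uniform(-8, -2)
    loss = validation_loss(lr, wd)
    if best is None or loss < best[0]:
        best = (loss, lr, wd)

print(best)  # best (loss, lr, wd) found over 100 random trials
```

More sample-efficient methods, such as Bayesian optimization, replace the blind sampling with a surrogate model that proposes promising configurations.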

Symmetry adapted network architectures

Maurice Weiler

The training of deep neural networks requires huge datasets, which are expensive to collect. In this project we aim to improve the networks' data efficiency by encoding domain-adapted prior knowledge, such as symmetry properties of the data, into the network architecture.

New learning rules for deep generative models

Peter O'Connor

Successful deep learning on massive datasets, distributed over many CPUs and GPUs, requires dedicated algorithms. In this project we will develop novel deep learning algorithms that process observations online and make effective use of memory and computational resources.