Research collaboration between Qualcomm and UvA

Our Mission


The mission of the QUVA Lab is to perform world-class research on deep vision: automatically interpreting, with the aid of deep learning, what happens where, when, and why in images and video. Deep learning is a form of machine learning with neural networks, loosely inspired by how neurons process information in the brain. Research projects in the lab focus on learning to recognize objects in images from a single example, personalized event detection and summarization in video, and privacy-preserving deep learning. The research is published in the best academic venues and secured in patents.

Learn More



QUVA Lustrum 2020

Oct 6, 2020

On Oct 6, we celebrate the lustrum of the QUVA deep vision lab, a collaboration between Qualcomm and U. Amsterdam. Please join us here. Keynote by Aapo Hyvarinen at 17:50 CEST and a panel discussion at 18:30.

Continue Reading...

QUVA Colloquium

The Qualcomm-UvA Deep Vision Seminar series is an Amsterdam meetup for people who are passionate about AI, machine learning, deep learning, and computer vision. Our guest speakers come from both industry and academia, working for organizations such as DeepMind, Oxford University, Toyota Research Center, and more. Subscribe to our Meetup page to be notified about upcoming talks, or watch past presentations here.

Work with the QUVA Lab

Interested in working with the QUVA Lab? We have open positions! See all here.

PhD student | Video action recognition

Cees Snoek

In this work package we study, develop and benchmark new data-efficient learning algorithms and architectures for spatio-temporal video action recognition that exploit the sensory, semantic and streaming ability of the video medium in an offline and online fashion.

Details and application

PhD student | Multi-task multi-modal learning

Cees Snoek

In this work package we study, develop and benchmark new data-efficient algorithms and architectures for multi-task learning that exploit the commonalities and differences between modalities at sensory, representation and semantic levels.

Details and application

PhD student | Video representation and efficiency

Cees Snoek, Efstratios Gavves

In this work package we study, develop and benchmark new data-efficient and computationally-efficient neural network models and architectures that are optimal for videos of varying lengths and complexities.

Details and application

PhD student | Hardware-aware learning

Max Welling

In this work package, we study and develop novel approaches for hardware-aware learning, focusing on actual hardware constraints, and work towards a unified framework for hardware-aware learning.

Details and application

PhD student | Federated learning

Max Welling

In this work package we will study and develop novel robust distributed algorithms and techniques that will advance the state of the art in federated learning while focusing on the privacy and safety aspects.

Details and application

PhD student | Combinatorial optimization

Max Welling

In this work package we will study new Bayesian optimization methods, improve reinforcement-learning-based methods, and incorporate classical combinatorial solvers into deep neural architectures.

Details and application

PhD student | Unsupervised learning for source compression

Max Welling

The goal of this project is to further improve deep-learning-based lossy and lossless compression methods in terms of their rate-distortion performance and visual quality.

Details and application


University of Amsterdam

  • Science Park 904, Room C3.250a
    1098XH Amsterdam
    The Netherlands


  • Virgine Mes 
      v.m.mes at uva dot nl

  •  Twitter: @quvalab