Research collaboration between Qualcomm and UvA
“It is amazing to observe that these days the fruits of fundamental research are transferred almost instantaneously into industrial applications. In the QUVA Lab we perform bleeding-edge fundamental research with 10+ PhD students and postdocs and work with researchers at Qualcomm AI Research to transfer those results to real-world applications.”
“The research collaboration in the QUVA Lab is not only good for Qualcomm, the University and the talent, but also strengthens the larger AI-ecosystem in Amsterdam and the Netherlands.”
“Intelligence is the ability to adapt to change. With QUVA Lab, the University of Amsterdam and Qualcomm we are adapting and breaking ground, not only academically but also societally, making Amsterdam an AI center of excellence. Come join us!”
“The QUVA joint effort is very important to Qualcomm AI Research. It has successfully established a close academic research collaboration and published high-quality papers at top ML/AI conferences. The fundamental ML research conducted at QUVA is very exciting and has developed technology assets and insights in the areas of power efficiency, symmetry, and representation learning. Furthermore, computer vision research advances how AI can perceive and understand the world through digital images and videos, which has been a main driver of AI and interdisciplinary research innovation in recent years. Looking forward, we expect that the collaboration will continue to push the research frontiers and drive thought leadership in exciting areas such as quantum ML, causality, and reasoning.”
The mission of the QUVA Lab is to perform world-class research on deep vision: automatically interpreting, with the aid of deep learning, what happens where, when, and why in images and video. Deep learning is a form of machine learning with neural networks, loosely inspired by how neurons process information in the brain. Research projects in the lab focus on learning to recognize objects in images from a single example, personalized event detection and summarization in video, and privacy-preserving deep learning. The research is published in the best academic venues and secured in patents.
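To make the notion of "machine learning with neural networks" concrete, here is a minimal toy sketch of what such a model computes: a stack of learned linear transformations with nonlinearities in between. All sizes, weights, and names below are hypothetical illustrations, not the lab's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Elementwise nonlinearity, loosely analogous to a neuron firing."""
    return np.maximum(0.0, x)

def forward(x, w1, b1, w2, b2):
    """One forward pass: linear layer -> nonlinearity -> linear layer."""
    hidden = relu(x @ w1 + b1)   # learned intermediate representation
    return hidden @ w2 + b2      # output scores (e.g. one per class)

x = rng.normal(size=(1, 8))                      # a single 8-dimensional input
w1 = rng.normal(size=(8, 16)); b1 = np.zeros(16) # first-layer parameters
w2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)  # second-layer parameters

logits = forward(x, w1, b1, w2, b2)
print(logits.shape)  # (1, 3): one score per hypothetical class
```

In practice the parameters are not random but learned from data by gradient descent; "deep" refers to stacking many such layers.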
The Qualcomm-UvA Deep Vision Seminars are an Amsterdam meetup for people who are passionate about AI, machine learning, deep learning, and computer vision. Our guest speakers come from both industry and academia, working for organizations such as DeepMind, Oxford University, Toyota Research Center, and more. Subscribe to our Meetup page to be notified about upcoming talks, or watch past presentations here.
Interested in working with the QUVA Lab? We have open positions! See all here.
In this work package we study, develop and benchmark new data-efficient learning algorithms and architectures for spatio-temporal video action recognition that exploit the sensory, semantic and streaming ability of the video medium in an offline and online fashion.
In this work package we study, develop and benchmark new data-efficient algorithms and architectures for multi-task learning that exploit the commonalities and differences between modalities at sensory, representation and semantic levels.
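One common way to exploit commonalities and differences between tasks is hard parameter sharing: a shared encoder learns features common to all tasks, while small task-specific heads capture what differs. The sketch below is a hypothetical illustration of that pattern only; the task names, sizes, and weights are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

# Shared encoder: one set of parameters reused by every task (commonalities).
w_shared = rng.normal(size=(10, 32))

# Task-specific heads: small per-task output layers (differences).
heads = {
    "classify": rng.normal(size=(32, 5)),   # e.g. scores for 5 classes
    "regress":  rng.normal(size=(32, 4)),   # e.g. 4 continuous outputs
}

def multi_task_forward(x):
    """Compute all task outputs from one shared representation."""
    shared = relu(x @ w_shared)
    return {task: shared @ w for task, w in heads.items()}

out = multi_task_forward(rng.normal(size=(2, 10)))
print(out["classify"].shape, out["regress"].shape)  # (2, 5) (2, 4)
```

Training would sum the per-task losses, so gradients from every task shape the shared encoder.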
Cees Snoek, Efstratios Gavves
In this work package we study, develop and benchmark new data-efficient and computationally-efficient neural network models and architectures that are optimal for videos of varying lengths and complexities.
In this work package, we study and develop novel approaches for hardware-aware learning, focusing on actual hardware constraints, and work towards a unified framework for hardware-aware learning.
In this work package we will study and develop novel robust distributed algorithms and techniques that will advance the state of the art in federated learning while focusing on the privacy and safety aspects.
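The privacy angle of federated learning comes from its basic building block, federated averaging (FedAvg-style): clients train on their own data and share only model updates, so raw data never leaves the device. Below is a toy sketch of one such round; the least-squares objective, learning rate, and size-weighted averaging are illustrative assumptions, not the work package's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_update(global_model, local_data, lr=0.1):
    """One local gradient step on a toy least-squares objective."""
    x, y = local_data
    grad = x.T @ (x @ global_model - y) / len(y)
    return global_model - lr * grad

def federated_round(global_model, clients):
    """Average client models, weighted by local dataset size.
    Only the model vectors are aggregated; the data stays with each client."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = [local_update(global_model, c) for c in clients]
    weights = sizes / sizes.sum()
    return sum(w * u for w, u in zip(weights, updates))

model = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
for _ in range(5):
    model = federated_round(model, clients)
print(model.shape)  # (3,)
```

Robustness and safety research then asks what happens when some clients send corrupted or adversarial updates to this averaging step.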
We will study new Bayesian optimization methods, improve RL based methods, and incorporate classical combinatorial solvers into deep neural architectures.
The goal of this project is to further improve deep learning based lossy and lossless compression methods in terms of their rate/distortion performance and visual quality.
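Learned lossy compression is commonly trained on a rate-distortion objective L = R + λ·D, trading the bits spent (rate) against reconstruction error (distortion). The sketch below illustrates that trade-off with a toy uniform quantizer and an idealized entropy-based rate estimate; the quantization step, λ, and distortion measure are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def rate_distortion_loss(rate, distortion, lam=0.01):
    """L = R + lambda * D: smaller lambda favors fewer bits over fidelity."""
    return rate + lam * distortion

x = rng.normal(size=1000)
step = 0.5
x_hat = np.round(x / step) * step                 # uniform quantization

distortion = np.mean((x - x_hat) ** 2)            # D: mean squared error

# Idealized rate: entropy of the quantized symbols, in bits per sample.
_, counts = np.unique(np.round(x / step), return_counts=True)
probs = counts / counts.sum()
rate = -np.sum(probs * np.log2(probs))

loss = rate_distortion_loss(rate, distortion)
print(loss)
```

A coarser step lowers the rate but raises the distortion; learned codecs optimize this balance end to end, with visual quality metrics refining the plain squared-error distortion used here.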