November 22, 2024

HAVA-Lab: Pioneering research on Human-Aligned Video-AI

Under the leadership of Prof. Dr. Cees Snoek, head of the ELLIS unit Amsterdam, the newly formed HAVA-Lab aims to define human-aligned video-AI, develop computable models of it, and examine the factors that determine its societal acceptance. The lab takes a decidedly interdisciplinary approach, drawing expertise from all seven faculties of the University of Amsterdam (UvA). This aligns with the recently introduced interdisciplinary track of the European Laboratory for Learning and Intelligent Systems (ELLIS), which supports PhD candidates and postdocs in fostering research collaborations with fields traditionally not associated with machine learning and AI, such as law, biology, the social sciences, and the humanities. The HAVA-Lab is the first project at UvA to involve all seven faculties, underlining the importance of interdisciplinary collaboration in addressing complex societal challenges.

Video-AI - A Powerful AI Application 

Video-AI refers to all applications of artificial intelligence (AI) that analyze, interpret, and manipulate video content. This technology holds significant potential for scientific exploration, business applications, and improving well-being. However, while video-AI can drive innovations such as automated video surveillance and autonomous vehicles, it also brings risks like deepfakes, misinformation, and privacy invasion.

Human-Aligned AI  

Human-aligned AI is defined here as the process of ensuring that artificial intelligence systems operate in ways that are compatible with human values, ethics, and intentions. It involves designing and programming AI so that its goals, behaviors, and outputs align with what humans consider beneficial and safe.

 

»Video-AI holds the promise to explore what is unreachable, monitor what is imperceivable and to protect what is most valuable.«
– Prof. Dr. Cees Snoek –

Human Alignment - The Focus of the HAVA-Lab

The same video-AI that has proven so useful in many domains is also responsible for self-driving cars crashing into pedestrians, deepfakes making us believe misinformation, and mass-surveillance systems monitoring our behavior. Furthermore, while current video-AI systems can recognize objects, activities, and their interactions using deep learning algorithms, they often fail in real-world situations that differ from their training environments. Scaling up training data to improve performance is not a sustainable solution, given the high computational costs and the risk of inherent biases in the data. Moreover, in Europe there is growing resistance to the ethical implications of storing and processing massive amounts of video data without consent. For video-AI to deliver on its promise, human alignment is therefore key. The HAVA-Lab is the first of its kind to study video-AI from an interdisciplinary perspective, seeking to understand what defines human-aligned video-AI, how it can be made computable, and what determines its societal acceptance.

Interdisciplinarity is crucial to the success of the HAVA-Lab. According to Prof. Dr. Snoek, the seven faculties of the University of Amsterdam »provide expertise on the different axes of human and societal values alignment, inspire us with real-world problems where human-aligned Video-AI is urgent, and they are our target audience for democratization of human-aligned Video-AI.«

An example of the need for interdisciplinary research is the work by criminologist Marie Lindegaard, which highlights how video-based observations can provide unbiased insights into crime, in contrast to traditional, often biased, data sources. Developing Video-AI tools to detect shoplifting, for instance, not only requires alignment with criminologists who can distinguish and define common shoplifting behavior, but also the involvement of legal and ethical scholars to study what determines the societal acceptance of such video surveillance.

The HAVA-Lab enables human-aligned video-AI by addressing three research objectives.

  • Objective 1 will focus on the development of human-aligned values in Video-AI.
  • Objective 2 will focus on algorithmic development of Video-AI with human alignment for real-world scenarios.

Objectives 1 and 2 form a continuous feedback loop: developing new alignment will drive algorithmic development, while algorithmic development will shed new light on where alignment is most needed.

  • Objective 3 strives to democratize human-aligned Video-AI by creating new knowledge, bringing the knowledge-creation efforts at all UvA faculties to a higher level, and attracting and developing new talent.

Official opening of the HAVA-Lab on Wednesday 16 October by the University of Amsterdam Rector Magnificus, Peter-Paul Verbeek

 

Composition of the HAVA-Lab

The HAVA-Lab team includes seven PhD students, each supervised by two experts: one specializing in human alignment and one in Video-AI. The team also comprises seven co-principal investigators from all seven faculties of the University of Amsterdam, providing diverse expertise. Prof. Dr. Snoek leads the lab, with Pascal Mettes as lab manager. The lab is located in the University Library in Amsterdam’s vibrant city center, offering a dynamic environment for research. Prof. Dr. Snoek points out: »Anyone with an interest in human-aligned video-AI is most welcome to visit us.«

Lineup of Collaborations

A wide variety of projects is planned for the HAVA-Lab. One project, in collaboration with Heleen Janssen from the Faculty of Law, will consider Video-AI compliance with the fundamental rights and ethical values our European societies are based on. Prof. Dr. Snoek formulates the research questions as follows: »Can we incorporate privacy and legal standards of non-maleficence, equity, or justice by design? Can we develop human-aligned video-AI that accords with legal and regulatory concerns, while grounding legal and policy discussions in technical realities?« Other projects include a collaboration with Stevan Rudinac from the Faculty of Economics and Business, which will study responsible marketing with a focus on the role of videos in social media analysis, as well as a collaboration with Erwin Berkhout from the Faculty of Dentistry, which aims to improve diagnostic training for dental students by using Video-AI to analyze student behavior and gaze patterns during assessments.

 

Example of Object Classification in Video Scenes

 

Envisioned Impact and Future Vision

The HAVA-Lab aims to lead advancements in Video-AI by integrating human values into its core. Prof. Dr. Snoek envisions the lab as a central hub for advising policymakers on the responsible deployment of Video-AI, potentially influencing practices across Europe and beyond. By fostering interdisciplinary collaboration and innovation, the HAVA-Lab aims to set new standards for ethical and beneficial AI technologies. He expects the lab's impact to extend nationally as well as internationally, with potential spin-off companies bringing human-aligned Video-AI from the lab to society at large.

»Research on video-AI technologies and on AI alignment is currently mostly developed in isolation. The HAVA-Lab offers the opportunity to collaborate intensively at this urgent intersection.«

The HAVA-Lab was selected in response to the interdisciplinary research call launched by the University of Amsterdam Data Science Centre in 2023. While the initial funding is set for five years, Prof. Dr. Snoek expects the HAVA-Lab to pave the way for even more collaborations across UvA disciplines. Joint proposals are already planned for the NWO Call on Collaboration between Humans and (semi-)Autonomous systems and NWO’s Open Technology programme, as well as for personal grants innovating with Video-AI outside computer science.

On Wednesday, 16 October 2024, Peter-Paul Verbeek, Rector Magnificus of the University of Amsterdam, officially opened the Human-Aligned Video-AI Laboratory (HAVA-Lab), launching a groundbreaking research project focusing on the human alignment of video-Artificial Intelligence.

Read more about the HAVA-Lab.

 
