NeurIPS Fest 2023
Annual NeurIPS-preview party spotlighting Amsterdam’s finest and latest research in machine learning
- Date
December 7, 2023
- Time
17:00-20:00
- Location
Lab42, Amsterdam Science Park
Come and take a deep dive into the state-of-the-art machine learning research that will be showcased at the NeurIPS Conference in New Orleans!
Event programme
17:00-18:00
Keynote Presentation
Prof. Juergen Gall (University of Bonn)
Venue: L1.01
18:00-20:00
Poster Session
with bites & drinks
Venue: ground floor
Prof. Dr. Juergen Gall
Professor and Head of Computer Vision Group at the University of Bonn
- Keynote speaker
About the keynote speaker
Prof. Dr. Juergen Gall has been Professor and Head of the Computer Vision Group at the University of Bonn since 2013. He is spokesperson of the Transdisciplinary Research Area “Mathematics, Modelling and Simulation of Complex Systems” and a member of the Lamarr Institute for Machine Learning and Artificial Intelligence. After receiving his Ph.D. in computer science from Saarland University and the Max Planck Institute for Informatics, he was a postdoctoral researcher at the Computer Vision Laboratory, ETH Zurich, from 2009 until 2012 and a senior research scientist at the Max Planck Institute for Intelligent Systems in Tübingen from 2012 until 2013. He received a grant for an independent Emmy Noether research group from the German Research Foundation (DFG) in 2013, the German Pattern Recognition Award of the German Association for Pattern Recognition (DAGM) in 2014, an ERC Starting Grant in 2016, and an ERC Consolidator Grant in 2022.
Anticipation: From Human Motion to Wildfires
In this talk, he will give an overview of some recent works on anticipating human motion. In particular, he will discuss Social Diffusion, a diffusion approach for short-term and long-term forecasting of the motion of multiple persons as well as their social interactions. He will also introduce the “Humans in Kitchens” dataset, a new benchmark for multi-person human motion forecasting with scene context. Finally, he will briefly describe an approach for forecasting unintentional actions and, if time permits, he will also discuss how wildfires and agricultural droughts can be forecast.
Posters showcased at the event
Main track papers
Flow Factorized Representation Learning
- Yue Song
- Thomas Anderson Keller
- Nicu Sebe
- Max Welling
Implicit Convolutional Kernels for Steerable CNNs
- Maksim Zhdanov
PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers
- Phillip Lippe
- Bastiaan S. Veeling
- Paris Perdikaris
- Richard E. Turner
- Johannes Brandstetter
Towards Characterizing the First-order Query Complexity of Learning (Approximate) Nash Equilibria in Zero-sum Matrix Games
- H. Hadiji
- S. Sachs
- T. van Erven
- W. M. Koolen
Latent Field Discovery in Interacting Dynamical Systems with Neural Fields
- Miltiadis Kofinas
- Erik J. Bekkers
- Naveen Shankar Nagaraja
- Efstratios Gavves
Star-Shaped Denoising Diffusion Probabilistic Models
- Andrey Okhotin
- Dmitry Molchanov
- Vladimir Arkhipkin
- Grigory Bartosh
- Viktor Ohanesian
- Aibek Alanov
- Dmitry Vetrov
TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion
- Preetha Vijayan
- Prashant Shivaram Bhat
- Bahram Zonooz
- Elahe Arani
Don’t just prune by magnitude! Your mask topology is a secret weapon
- Duc Hoang
- Souvik Kundu
- Shiwei Liu
- Zhangyang Wang
Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery
- Sarah Rastegar
- Hazel Doughty
- Cees G. M. Snoek
Learning to Learn Prototypical Networks by Task-Guided Diffusion
- Yingjun Du
- Zehao Xiao
- Shengcai Liao
- Cees Snoek
Episodic Multi-Task Learning with Heterogeneous Neural Processes
- Jiayi Shen
- Xiantong Zhen
- Qi (Cheems) Wang
- Marcel Worring
Towards Anytime Classification in Early-Exit Architectures by Enforcing Conditional Monotonicity
- Metod Jazbec
- James Urquhart Allingham
- Dan Zhang
- Eric Nalisnick
Geometric Algebra Transformer
- Johann Brehmer
- Pim de Haan
- Sönke Behrends
- Taco Cohen
Learning Dynamic Attribute-factored World Models for Efficient Multi-object Reinforcement Learning
- Fan Feng
- Sara Magliacane
Adapting Neural Link Predictors for Data-Efficient Complex Query Answering
- Erik Arakelyan
- Pasquale Minervini
- Daniel Daza
- Michael Cochez
- Isabelle Augenstein
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
- Ajay Jaiswal
- Shiwei Liu
- Tianlong Chen
- Zhangyang Wang
Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?
- Hoang Pham
- Anh Ta
- Shiwei Liu
- Dung D. Le
- Long Tran-Thanh
Kernelized Reinforcement Learning with Order Optimal Regret Bounds
- Sattar Vakili
- Julia Olkhovskaya
Learning Unseen Modality Interaction
- Yunhua Zhang
The Memory-Perturbation Equation: Understanding Model’s Sensitivity to Data
- Peter Nickl
- Lu Xu
- Dharmesh Tailor
- Thomas Möllenhoff
- Mohammad Emtiyaz Khan
Rotating Features for Object Discovery
- Sindy Löwe
- Phillip Lippe
- Francesco Locatello
- Max Welling
First- and Second-Order Bounds for Adversarial Linear Contextual Bandits
- J. Olkhovskaya
- J. Mayo
- T. van Erven
- G. Neu
- C. Wei
Adaptive Selective Sampling for Online Prediction with Experts
- R. M. Castro
- F. Hellström
- T. van Erven
Clifford Group Equivariant Neural Networks
- David Ruhe
- Johannes Brandstetter
- Patrick Forré
Modulated Neural ODEs
- Ilze Amanda Auzina
- Cagatay Yildiz
- Sara Magliacane
- Matthias Bethge
- Efstratios Gavves
Dynamic Sparsity Is Channel-Level Sparsity Learner
- Lu Yin
- Gen Li
- Meng Fang
- Li Shen
- Tianjin Huang
- Zhangyang Wang
- Vlado Menkovski
- Xiaolong Ma
- Mykola Pechenizkiy
- Shiwei Liu
Equivariant Neural Simulators for Stochastic Spatiotemporal Dynamics
- Koen Minartz
- Yoeri Poels
- Simon Koop
- Vlado Menkovski
Workshop papers
Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers
- Pim De Haan
- Taco Cohen
- Johann Brehmer
- Max Welling
Causal Representation Learning Workshop at NeurIPS 2023
Multi-View Causal Representation Learning with Partial Observability
- Dingling Yao
- Danru Xu
- Sebastien Lachapelle
- Sara Magliacane
- Perouz Taslakian
- Georg Martius
- Julius von Kügelgen
- Francesco Locatello
DONUT-hole: DONUT Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency
- Azhar Syaikh
- Sahar Yousefi
Hierarchical Causal Representation Learning
- Angelos Nalmpantis
- Phillip Lippe
- Sara Magliacane
ProtoHG: Prototype-Enhanced Hypergraph Learning for Heterogeneous Information Networks
- Shuai Wang
- Jiayi Shen
- Athanasios Efthymiou
- Stevan Rudinac
- Monika Kackovic
- Nachoem Wijnberg
- Marcel Worring
LightGCN: Evaluated and Enhanced
- Milena Kapralova
- Luca Pantea
- Andrei Blahovici
[Re] FairCal: Fairness Calibration for Face Verification
- Marga Don
- Satchit Chatterji
- Milena Kapralova
- Ryan Amaudruz
Beyond Top-Class Agreement: Using Divergences to Forecast Performance under Distribution Shift
- Mona Schirmer
- Dan Zhang
- Eric Nalisnick
Reproducibility study of “Quantifying societal bias amplification in image captioning”
- Farrukh Baratov
- Goksenin Yuksel
- Darie Petcu
- Jan Bakker
A Sparsity Principle for Partially Observable Causal Representation Learning
- Danru Xu
- Dingling Yao
- Sebastien Lachapelle
- Perouz Taslakian
- Julius von Kügelgen
- Francesco Locatello
- Sara Magliacane
[Re] RELIC: Reproducibility and Extension on LIC metric in quantifying bias in captioning models
- Paula Antequera
- Egoitz Gonzalez
- Marta Grasa
- Martijn van Raaphorst
On the Reproducibility of CartoonX
- Elias Dubbeldam
- Aniek Eijpe
- Jona Ruthardt
- Robin Sasse
GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks
- Taraneh Younesian
- Thiviyan Thanapalasingam
- Emile van Krieken
- Daniel Daza
- Peter Bloem
NeurIPS Fest 2023 is ELLIS unit Amsterdam’s annual NeurIPS-preview party spotlighting Amsterdam’s finest and latest research in machine learning.