January 19, 2023

Scaling vs Modelling Round Table Discussion of The Deep Thinking Hour @ Amsterdam

What does it mean to accurately model with generative models? Is it about building informative representations of real-world data? Do such models allow us to investigate questions and ideas about the world that we could not before? Machine learning researchers often construct domain-specific methods that exploit available prior knowledge about a problem, effectively shrinking the hypothesis space of function approximators. Meanwhile, recent developments such as DALL·E, Imagen, ChatGPT, and diffusion models have become enormously popular by using scale to achieve remarkable results. The best-performing diffusion models use very simple building blocks and appear to owe their performance largely to their ability to exploit the vast compute resources available.

When will this stop? What are the limits of scale? Should researchers focus their attention on better scaling algorithms, or on more specific models grounded in prior knowledge? Will research soon reach a plateau? What role does accurate mathematical modelling play in better-performing methods?

These questions and more will be discussed by a lineup of senior researchers (Jakub Tomczak, Emiel Hoogeboom, Yuki Asano, Efstratios Gavves, and more) in a round-table format. Join us in room L3.36 of Lab42 from 14:00 to 16:00 (CET) on January 19th; more details at https://thedeepthinkinghour.github.io/.