We conduct research on state-of-the-art deep generative models that enable real-world Bosch systems to be data-efficient. We are looking for a PhD student interested in researching creative applications of generative models (e.g. Stable Diffusion) as a controllable dataset representation for training and validating networks for downstream tasks.
Not all data points in a dataset are equally important for the performance of a neural network. As training progresses, the loss on some data points becomes uninformative because the network has already learned what it can from them. It can therefore be advantageous to observe the network during training and serve it the right type of data at the right time. However, simply selecting data from a fixed dataset can be problematic when no image with the precise mix of attributes exists. The goal of this PhD project is to develop new learning algorithms for generating relevant data "on demand" in response to the needs of the target network. This includes improving training efficiency by synthesizing the most relevant data, enforcing desired invariances by creating targeted examples, and more.
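To make the idea concrete, below is a minimal illustrative sketch, not the project's actual method, of loss-aware data selection in PyTorch: each example is scored by its current loss, and only the examples the network has not yet mastered are used for the next update. The toy data, model, and the comment marking where a generative model would synthesize missing examples are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Toy stand-ins for a real downstream task (hypothetical, for illustration only).
torch.manual_seed(0)
X = torch.randn(512, 16)
y = (X.sum(dim=1) > 0).long()
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses

for epoch in range(10):
    # 1) Score every example by its current loss, i.e. how informative it still is.
    with torch.no_grad():
        losses = criterion(model(X), y)

    # 2) Train only on the examples the network has not yet learned.
    hard_idx = losses.topk(k=128).indices
    xb, yb = X[hard_idx], y[hard_idx]
    optimizer.zero_grad()
    batch_loss = criterion(model(xb), yb).mean()
    batch_loss.backward()
    optimizer.step()

    # 3) In the envisioned "on demand" setting, examples missing from the dataset
    #    would instead be synthesized here by a generative model (e.g. a diffusion
    #    model) conditioned on the attributes the network currently struggles with.
    print(f"epoch {epoch}: mean loss on selected batch = {batch_loss.item():.3f}")
```

The sketch only reweights an existing dataset; the research question of the project is how to replace step 3's placeholder with a generative model that produces genuinely new, targeted data.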