Archived News

Sindy Löwe wins UvA Thesis Prize 2020

In her winning thesis, Sindy Löwe describes an original and innovative concept within Artificial Intelligence, which she conceived and developed independently. It concerns a new algorithm for training neural networks and offers a better explanation of, and insight into, the way neural networks in the brain learn. The jury believes that Löwe shows the ability to develop a groundbreaking and innovative idea at a high academic level in a scientifically sound manner.


Virtual UvA-Bosch Delta Lab Deep Learning Seminar
Robots Learning (Through) Interactions
Jens Kober – May 7, 2020, 11:00.

Abstract:
The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. Reinforcement learning and imitation learning are two different but complementary machine learning approaches commonly used for learning motor skills.
In this seminar, Jens Kober will discuss various learning techniques his group has developed that can handle complex interactions with the environment. Complexity arises from non-linear dynamics in general and contacts in particular, from taking multiple reference frames into account, from dealing with high-dimensional input data, from interacting with humans, and so on. A human teacher is always involved in the learning process, either directly (providing demonstrations) or indirectly (designing the optimization criterion), which raises the question: how best to make use of the interactions with the human teacher to render the learning process efficient and effective?
All these concepts will be illustrated with benchmark tasks and real robot experiments ranging from fun (ball-in-a-cup) to more applied (unscrewing light bulbs).

A replay of the talk is available, as are the slides (as a PDF file, with links to better-quality videos).

Jens Kober is an associate professor at TU Delft, Netherlands. He worked as a postdoctoral scholar jointly at the CoR-Lab, Bielefeld University, Germany, and at the Honda Research Institute Europe, Germany. He graduated in 2012 with a PhD degree in engineering from TU Darmstadt and the MPI for Intelligent Systems. For his research he received the Georges Giralt PhD Award, awarded annually for the best PhD thesis in robotics in Europe, and the 2018 IEEE RAS Early Academic Career Award, and he has received an ERC Starting Grant. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.

UvA-Bosch Delta Lab Deep Learning Seminar
The Blessings of Multiple Causes
David Blei – October 17, 2019

Abstract: Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods require that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference with weaker assumptions than the classical methods require.
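
The method behind the title is laid out in Wang and Blei's accompanying paper as a two-step recipe: fit a factor model to the multiple causes alone, then adjust for the inferred factors as a "substitute confounder" when estimating effects. The sketch below is a rough, hypothetical illustration only, in a linear toy setting; FactorAnalysis and LinearRegression stand in for the paper's more general models, and the paper's model-checking step is omitted.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# Toy data: many causes A driven partly by one unobserved confounder.
rng = np.random.default_rng(0)
n, num_causes = 500, 30
confounder = rng.normal(size=(n, 1))                    # never observed
A = confounder @ rng.normal(size=(1, num_causes)) \
    + rng.normal(size=(n, num_causes))
true_effects = rng.normal(size=num_causes)
y = A @ true_effects + 3.0 * confounder[:, 0] + rng.normal(size=n)

# Step 1: fit a factor model to the causes alone; its inferred
# per-datapoint factor acts as a substitute confounder.
substitute = FactorAnalysis(n_components=1).fit_transform(A)

# Step 2: estimate the causes' effects while adjusting for the
# substitute confounder instead of the unobserved one.
X = np.hstack([A, substitute])
effects = LinearRegression().fit(X, y).coef_[:num_causes]
```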

David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. He studies probabilistic machine learning, including its theory, algorithms, and application. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), ACM-Infosys Foundation Award (2013), a Guggenheim fellowship (2017), and a Simons Investigator Award (2019). He is the co-editor-in-chief of the Journal of Machine Learning Research.  He is a fellow of the ACM and the IMS.

UvA-Bosch Delta Lab Deep Learning Seminar
How Do Neural Networks See?
Christopher Olah – January 29, 2019

Abstract: Neural networks greatly exceed anything humans can design directly for computer vision, by building up their own hierarchy of internal visual concepts. So, what are they detecting? How do they implement these detectors? How do the detectors fit together to create the behavior of the network as a whole? At a more practical level, can we use these techniques to audit neural networks? Or find cases where the right decision is made for bad reasons? To allow human feedback on the decision process, rather than just the final decision? Or to improve our ability to design models?

Chris Olah is best known for DeepDream, the Distill journal, and his blog. He spent five years at Google Brain, where he focused on neural network interpretability and safety. He’s also worked on various other projects, including early TensorFlow, generative models, and NLP. Prior to Google Brain, Chris dropped out of university and did deep learning research independently as a Thiel Fellow. Chris recently joined OpenAI to start a new interpretability team there.

Vacancy – PhD position in Machine Learning and Deep Learning. [closed]

PhD position in Machine Learning and Deep Learning. This position is funded by Bosch Research and is within the UvA-Bosch Delta Lab, whose research focuses on deep learning and applications to intelligent vehicles. As a PhD candidate, you will perform cutting-edge research in AMLAB in the field of machine learning, supervised by Prof. dr. Max Welling (promotor). The research topic is 'Methods for Robust Feature Learning', where the goals are to learn features that are robust or invariant to changing conditions, to learn classifiers that perform well on multiple domains, to learn representations that can be transferred between domains, and to learn classifiers that do not suffer from adversarial examples. Keywords: Bayesian Regularization, Model Uncertainty, Domain Transfer, Representation Learning.

Vacancy – PhD position in Machine Learning and Deep Learning. [closed]

PhD position in Machine Learning and Deep Learning. This position is funded by Bosch Research and is within the UvA-Bosch Delta Lab, whose research focuses on deep learning and applications to intelligent vehicles. As a PhD candidate, you will perform cutting-edge research in AMLAB in the field of machine learning, supervised by Prof. dr. Arnold Smeulders (promotor). The research topic is 'Learning to Follow Objects over Multiple Cameras', where the goal is to follow objects over multiple cameras regardless of variations in pose, illumination, occlusion, scale, and other sources of variance, even though most subjects are seen for the first time. This is a hard problem which requires learning generic models from many examples and subsequently tailoring them to the current specific case. Keywords: Object Tracking, Generic to Specific, Representation Learning.

Colloquium at the ICAI opening and the 1st UvA-Bosch Delta Lab Deep Learning Seminar
Adaptive Deep Learning for Perception, Action, and Explanation
Prof. Dr. Trevor Darrell – October 19, 2018

Abstract: Learning of layered or “deep” representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data, where the model lacked interpretability. New results in adversarial adaptive representation learning show how such methods can also excel when learning across modalities and domains, and further can be trained or constrained to provide natural language explanations or multimodal visualizations to their users. I’ll present recent long-term recurrent network models that learn cross-modal description and explanation, using implicit and explicit approaches, which can be applied to domains including fine-grained recognition and visuomotor policies.

Lecture recording is now online!

Prof. Darrell is on the faculty of the CS and EE Divisions of the EECS Department at UC Berkeley. He leads Berkeley’s DeepDrive (BDD) Industrial Consortium, is co-Director of the Berkeley Artificial Intelligence Research (BAIR) lab, and is Faculty Director of PATH at UC Berkeley. Darrell’s group develops algorithms for large-scale perceptual learning, including object and activity recognition and detection, for a variety of applications including autonomous vehicles, media search, and multimodal interaction with robots and mobile devices. His areas of interest include computer vision, machine learning, natural language processing, and perception-based human-computer interfaces. Prof. Darrell previously led the vision group at the International Computer Science Institute in Berkeley, and was on the faculty of the MIT EECS department from 1999 to 2008, where he directed the Vision Interface Group. He was a member of the research staff at Interval Research Corporation from 1996 to 1999, and received the S.M. and Ph.D. degrees from MIT in 1992 and 1996, respectively. He obtained the B.S.E. degree from the University of Pennsylvania in 1988.

Prof. Darrell also serves as consulting Chief Scientist for the start-up Nexar, and is a technical consultant on deep learning and computer vision for Pinterest. Darrell is on the scientific advisory board of several other ventures, including DeepScale, WaveOne, SafelyYou, and Graymatics. Previously, Darrell advised Tyzx (acquired by Intel), IQ Engines (acquired by Yahoo), Koozoo, BotSquare/Flutter (acquired by Google), and MetaMind (acquired by Salesforce). As time permits, Darrell has served and is available as an expert witness for patent litigation relating to computer vision.

Colloquium – Computational Sensorimotor Learning
Pulkit Agrawal – July 6, 2018

Abstract: An open question in artificial intelligence is how to endow agents with the common sense knowledge that humans naturally seem to possess. A prominent theory in child development posits that human infants gradually acquire such knowledge through the process of experimentation. According to this theory, even the seemingly frivolous play of infants is a mechanism for them to conduct experiments to learn about their environment. Inspired by this view of biological sensorimotor learning, I will present my work on building artificial agents that use the paradigm of experimentation to explore and condense their experience into models that enable them to solve new problems. I will discuss the effectiveness of my approach and the open issues, using case studies of a robot learning to push objects, manipulate ropes, and find its way in office environments, and of an agent learning to play video games merely based on the incentive of conducting experiments.
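
One widely used way to formalize "the incentive of conducting experiments" is curiosity as prediction error: the agent is rewarded where its own learned forward model is still wrong, which drives it toward dynamics it has not yet mastered. Below is a minimal sketch of that idea; the dimensions and architecture are made up, and it illustrates the paradigm rather than the speaker's exact models.

```python
import torch
import torch.nn as nn

OBS_DIM, ACTION_DIM = 32, 4  # illustrative sizes

# Forward model: predict the next observation from the current
# observation and the chosen action.
forward_model = nn.Sequential(
    nn.Linear(OBS_DIM + ACTION_DIM, 128), nn.ReLU(),
    nn.Linear(128, OBS_DIM))

def intrinsic_reward(obs, action_onehot, next_obs):
    """Prediction error of the agent's own model: large exactly where
    the environment can still surprise it, so experiments pay off."""
    pred = forward_model(torch.cat([obs, action_onehot], dim=-1))
    return (pred - next_obs).pow(2).mean()

# Usage: score a transition with no external reward signal at all.
obs, next_obs = torch.randn(OBS_DIM), torch.randn(OBS_DIM)
action = torch.zeros(ACTION_DIM)
action[2] = 1.0                       # one-hot action
r_int = intrinsic_reward(obs, action, next_obs)
```
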
Pulkit Agrawal is a Ph.D. student in the Department of Computer Science at UC Berkeley. He is advised by Dr. Jitendra Malik, and his research spans robotics, deep learning, computer vision, and computational neuroscience. Pulkit completed his bachelor’s in Electrical Engineering at IIT Kanpur and was awarded the Director’s Gold Medal. His work has appeared multiple times in MIT Tech Review, Quanta, New Scientist, and the New York Post, among other outlets. He is a recipient of the Signatures Fellow Award, the Fulbright Science and Technology Award, the Goldman Sachs Global Leadership Award, OPJEMS, the Sridhar Memorial Prize, and IIT Kanpur’s Academic Excellence Awards, among others. Pulkit holds a “Sangeet Prabhakar” (equivalent to a bachelor’s in Indian classical music) and occasionally performs in music concerts.

Colloquium – Feature Generating Networks for Zero-Shot Learning
Yongqin Xian – May 9, 2018

Abstract: Suffering from the extreme training-data imbalance between seen and unseen classes, most existing state-of-the-art approaches fail to achieve satisfactory results on the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets (CUB, FLO, SUN, AWA and ImageNet) in both the zero-shot learning and generalized zero-shot learning settings.
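
As a rough sketch of the recipe in the abstract: a generator maps a class's semantic descriptor plus noise to a CNN feature, a conditional Wasserstein critic scores (feature, descriptor) pairs, and a classification loss keeps generated features discriminative. All sizes and names below are illustrative, and WGAN training details such as the gradient penalty and the alternating critic updates are omitted; see the paper for the actual f-CLSWGAN model.

```python
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, NOISE_DIM, NUM_CLASSES = 2048, 312, 312, 150  # illustrative

class Generator(nn.Module):
    """Maps a class-level attribute vector plus noise to a synthetic CNN feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ATTR_DIM + NOISE_DIM, 4096), nn.LeakyReLU(0.2),
            nn.Linear(4096, FEAT_DIM), nn.ReLU())  # CNN features are non-negative

    def forward(self, attr, noise):
        return self.net(torch.cat([attr, noise], dim=1))

class Critic(nn.Module):
    """Scores (feature, attribute) pairs for the conditional Wasserstein objective."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 4096), nn.LeakyReLU(0.2),
            nn.Linear(4096, 1))

    def forward(self, feat, attr):
        return self.net(torch.cat([feat, attr], dim=1))

# A softmax classifier over seen classes supplies the classification
# loss that makes generated features sufficiently discriminative.
classifier = nn.Linear(FEAT_DIM, NUM_CLASSES)
ce = nn.CrossEntropyLoss()

def generator_loss(G, D, attr, labels):
    noise = torch.randn(attr.size(0), NOISE_DIM)
    fake = G(attr, noise)
    return -D(fake, attr).mean() + ce(classifier(fake), labels)

# Usage: one generator step on a batch of (attribute, label) pairs.
attrs = torch.randn(8, ATTR_DIM)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = generator_loss(Generator(), Critic(), attrs, labels)
```
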
Yongqin Xian received his B.Sc. degree from Beijing Institute of Technology, China, in 2013 and his M.Sc. degree with honors in computer science from Saarland University, Germany in 2016. He is currently a PhD student with Prof. Bernt Schiele and dr. Zeynep Akata at the Max-Planck Institute for Informatics in Germany. He has served as a reviewer for CVPR 2018, IEEE TPAMI and IEEE TIP. He is a Qualcomm Innovation Fellowship Finalist. His research interests lie in zero-shot learning, few-shot learning, and generative adversarial networks.

Vacancy – PhD position in Machine Learning and Deep Learning. [filled]

PhD position in Machine Learning and Deep Learning. This position is funded by Bosch Research, is within UvA-Bosch Delta Lab whose research focuses on deep learning and applications to intelligent vehicles. As PhD candidate,  you will perform cutting edge research in AMLAB in the field of machine learning, will be supervised by Prof dr. Max Welling (promotor) and dr. Zeynep Akata (second supervisor). The research topic is ‘Methods for Semi-supervised Learning and Active Labeling’ where the goals are to use both labeled and unlabeled data in training a classifier, to train classifiers when the number of (labeled) examples is small and to detect bad labels and to suggest informative examples to be labeled.

Colloquium – Challenges of multiple instance learning in medical image analysis
Veronika Cheplygina – Friday, January 26, 2018

Abstract: Data is often only weakly annotated: for example, for a medical image, we might know the patient’s diagnosis, but not where the abnormalities are located. Multiple instance learning (MIL) is aimed at learning classifiers from such data. In this talk, I will share a number of lessons I have learnt about MIL so far: (1) researchers do not agree on what MIL is, (2) there is no “one size fits all” approach, and (3) we need more thorough evaluation methods. I will give examples from several applications, including computer-aided diagnosis in chest CT images. I will also briefly discuss my work on crowdsourcing medical image annotations, and why MIL might be useful in this case.
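
To make the weak-annotation setting concrete: under the classic "standard MIL assumption", a bag (say, a scan represented as a set of patch descriptors) is positive if at least one of its instances is positive, which suggests taking the maximum of per-instance scores as the bag score. The sketch below is only one such formalization (and, as the talk argues, researchers disagree on which one defines MIL); all sizes and names are made up.

```python
import torch
import torch.nn as nn

class MaxPoolingMIL(nn.Module):
    """Standard-assumption MIL: a bag is positive iff at least one
    instance is, so the bag logit is the max instance logit."""
    def __init__(self, instance_dim):
        super().__init__()
        self.instance_scorer = nn.Sequential(
            nn.Linear(instance_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, bag):                 # bag: (num_instances, instance_dim)
        scores = self.instance_scorer(bag)  # per-instance evidence
        return scores.max()                 # bag-level logit

# Usage: a scan as a bag of 50 patch descriptors, labeled only as a
# whole (diagnosis known, abnormality locations unknown).
bag = torch.randn(50, 128)
model = MaxPoolingMIL(instance_dim=128)
loss = nn.functional.binary_cross_entropy_with_logits(
    model(bag), torch.tensor(1.0))
```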

Veronika Cheplygina has been an assistant professor in the Medical Image Analysis group at Eindhoven University of Technology since February 2017. She received her Ph.D. from the Delft University of Technology in 2015 for her thesis “Dissimilarity-Based Multiple Instance Learning”. As part of her PhD, she was a visiting researcher at the Max Planck Institute for Intelligent Systems in Tuebingen, Germany. From 2015 to 2016 she was a postdoc at the Biomedical Imaging Group Rotterdam, Erasmus MC. Her research interests are centered around learning scenarios where few labels are available, such as multiple instance learning, transfer learning, and crowdsourcing. Next to research, Veronika blogs about academic life at http://www.veronikach.com.

Colloquium – Modular Multitask Reinforcement Learning with Policy Sketches
Jacob Andreas – September 17, 2017

Abstract: We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate tasks with sequences of named subtasks, providing information about high-level structural relationships among tasks but not how to implement them—specifically not providing the detailed guidance used by much previous work on learning policy abstractions for RL. We present a model that associates every subtask with a modular subpolicy, and jointly maximizes reward over full task-specific policies by tying parameters across shared subpolicies. Experiments show that using our approach to learn policies guided by sketches gives better performance than existing techniques for learning task-specific or shared policies, while naturally inducing a library of interpretable primitive behaviors that can be recombined to rapidly adapt to new tasks.
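
A minimal sketch of the parameter-tying idea: every named subtask owns one subpolicy module, and a task's policy is just its sketch's subpolicies executed in order, so tasks sharing a subtask share those parameters. The crafting-style task and subtask names below are illustrative, and the paper's learned subpolicy termination is simplified away (the current sketch step is passed in explicitly).

```python
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 16, 5  # illustrative sizes

def make_subpolicy():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                         nn.Linear(64, NUM_ACTIONS))

# One module per named subtask; its parameters are tied across every
# task whose sketch mentions that subtask.
subpolicies = nn.ModuleDict({name: make_subpolicy()
                             for name in ("get_wood", "get_iron", "use_workbench")})

# Sketches annotate tasks with sequences of named subtasks.
sketches = {
    "make_plank": ["get_wood", "use_workbench"],
    "make_axe":   ["get_wood", "get_iron", "use_workbench"],
}

def act(task, step, state):
    """Sample an action from the subpolicy of the current subtask."""
    logits = subpolicies[sketches[task][step]](state)
    return torch.distributions.Categorical(logits=logits).sample()

action = act("make_axe", 0, torch.randn(STATE_DIM))
```
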
Jacob Andreas is a fifth-year PhD student at UC Berkeley working with Dan Klein. His current research focuses on using natural language to more effectively train and understand machine learning models. Jacob received a B.S. from Columbia in 2012 and an M.Phil. from Cambridge in 2013. He was a Churchill scholar from 2012–2013, an NSF graduate fellow from 2013–2016, and is currently supported by a Facebook fellowship.

Delta Lab Opening – Towards Affordable Self-driving Cars
Raquel Urtasun – April 6, 2017

Abstract: The revolution of self-driving cars will happen in the near future. Most solutions rely on expensive 3D sensors such as LIDAR, as well as hand-annotated maps. Unfortunately, this is neither cost-effective nor scalable, as one needs to have a very detailed, up-to-date map of the world. In this talk, I’ll review our current efforts in the domain of autonomous driving. In particular, I’ll present our work on stereo, optical flow, appearance-less localization, and 3D object detection, as well as on creating HD maps from visual information alone. This results in a much more scalable and cost-effective solution for self-driving cars.

Raquel Urtasun is an Associate Professor in the Department of Computer Science at the University of Toronto and a Canada Research Chair in Machine Learning and Computer Vision. Prior to this, she was an Assistant Professor at the Toyota Technological Institute at Chicago (TTIC), an academic computer science institute affiliated with the University of Chicago. She received her Ph.D. degree from the Computer Science department at École Polytechnique Fédérale de Lausanne (EPFL) in 2006 and did her postdoc at MIT and UC Berkeley. Her research interests include machine learning, computer vision, robotics, and remote sensing. Her recent work involves perception algorithms for self-driving cars, deep structured models, and exploring problems at the intersection of vision and language. Her lab was selected as an NVIDIA NVAIL lab. She is a recipient of an NSERC EWR Steacie Award (awarded to the top six scientists in Canada), an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, and a Best Paper Runner-up Prize awarded at the Conference on Computer Vision and Pattern Recognition (CVPR). She is also Program Chair of CVPR 2018, an Editor of the International Journal of Computer Vision (IJCV), and has served as Area Chair of multiple machine learning and vision conferences (e.g., NIPS, UAI, ICML, ICLR, CVPR, ECCV, ICCV).

Program
15.00-15.30 Registration
15.30-15.35 Welcome by Prof. Max Welling, Faculty of Science
15.35-15.40 Prof. Peter van Tienderen, Dean Faculty of Science
15.40-16.25 Keynote speech by Prof. Raquel Urtasun, University of Toronto
Title: Towards Affordable Self-driving Cars
16.25-16.35 Dr. Michael Bolle, President Corporate Research and Advanced Development, Robert Bosch GmbH
16.35-16.40 Prof. dr. ir. K.I.J. (Karen) Maex, Rector Magnificus, University of Amsterdam
16.40-16.45 Ms. Simone Kukenheim, Alderman City of Amsterdam
16.45-16.50 Opening UvA-Bosch DELTA Lab
16.50-18.00 Drinks and Closure