My work concerns the design of models and algorithms that learn to represent, understand, and generate language data. Specific problems I am interested in include language modelling, machine translation, syntactic parsing, textual entailment, text classification, and question answering. I also develop techniques for general machine learning problems such as probabilistic inference, gradient estimation, and density estimation.

Topic areas: natural language processing, statistics, machine learning, approximate inference, global optimisation, formal languages, computational linguistics

I work on responsible machine learning and representation learning for natural language processing. I am interested in tasks and applications where commonsense and real-world knowledge are necessary, including vision & language and applications in medicine and psychology.

Topic areas: natural language processing, vision & language, commonsense knowledge, medicine, psychology

My research concerns natural language processing, with a focus on linguistically and cognitively inspired approaches to conversational AI. In a nutshell, my group investigates how dialogue interaction shapes learning — about the world and about language itself. Our interests include language generation, vision and language modelling, uncertainty in NLP, and computational pragmatics.

Topic areas: natural language processing, dialogue, visual grounding, cognition

My group investigates intelligent systems that support people in their work with data and information from diverse sources. This includes addressing problems related to the preparation, management, integration and reuse of both structured and unstructured data. Topics include: data management for machine learning, information integration, causality-inspired machine learning, automated knowledge graph construction, data provenance.

Topic areas: knowledge graphs, data management for ML, data reuse, data provenance

My research focuses on automated information access, in particular access across languages. Topics of interest include statistical machine translation, cross-language information retrieval, and data mining for natural language processing.

Topic areas: machine translation, natural language processing

My research is in the area of natural language processing, with a specific focus on machine learning for natural language understanding tasks. My current interests include few-shot learning and meta-learning, cognitively inspired models of language, joint modelling of language and vision, and multilingual NLP. My work also explores practical applications of NLP with direct societal impact, for instance in hate speech and misinformation detection. At the UvA, I lead the Amsterdam Natural Language Understanding Lab, actively collaborating with industrial partners such as Google, Facebook, and Deloitte.

Topic areas: natural language processing, machine learning, meta-learning, cognitive science

Our research concentrates on statistical learning for language understanding and for modelling human language processing phenomena. Earlier work focused on developing statistical learning algorithms for NLP and on devising structured statistical models for machine translation, paraphrasing, and semantic and morpho-syntactic parsing. We collaborate with industrial partners to exchange knowledge and research outcomes, leading to the development and deployment of systems in practical settings.

Topic areas: language understanding, machine translation, paraphrasing, parsing, statistical NLP

My research is primarily on natural language understanding (e.g., question answering, information extraction, and semantic parsing) and language generation (e.g., summarization and machine translation). In my group, we are specifically interested in developing methods for reducing the need for expensive human annotation (semi-supervised learning, self-training, integrating inductive biases), making models robust under changes in data distribution (including systematic, compositional generalization), and building models that are interpretable to human users.

Topic areas: natural language processing, generation, interpretability, systematic generalization

My group does research in natural language processing, with a focus on interpretability techniques and the cognitive and neural relevance of modern language models, venturing into the domains of music processing and language evolution. Our contributions include work on iterated learning, techniques for analyzing grammar learning in children and non-human animals, constituency and dependency parsing, tree-shaped LSTMs and other neural networks, interpretability techniques such as diagnostic probes and Shapley-based attributions, and correlating brain activity with word and sentence embeddings. We publish both at AI venues (NeurIPS, ACL, EMNLP, JAIR) and in cognitive science journals and conferences (PNAS, TopiCS, CogSci).

Topic areas: natural language processing, music processing, language evolution