Transfer, low-shot, semi- and unsupervised learning

Probabilistic Test-Time Generalization by Variational Neighbor-Labeling

This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains. We follow the strict separation of source training and target testing, but exploit the value of the …

Low-Resource Vision Challenges for Foundation Models

Low-resource settings are well-established in natural language processing, where many languages lack sufficient data for deep learning at scale. However, low-resource problems are under-explored in computer vision. In this paper, we address this gap …

Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery

In the quest for unveiling novel categories at test time, we confront the inherent limitations of traditional supervised recognition models that are restricted by a predefined category set. While strides have been made in the realms of …

Order-preserving Consistency Regularization for Domain Adaptation and Generalization

Deep learning models fail on cross-domain challenges if they are oversensitive to domain-specific attributes, e.g., lighting, background, camera angle, etc. To alleviate this problem, data augmentation coupled with consistency regularization are …
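
As background for this abstract, the sketch below shows plain augmentation-based consistency regularization in PyTorch: the model is pushed to give similar predictions for a weakly and a strongly augmented view of the same images. It is a minimal illustration of the generic technique the abstract builds on, not the order-preserving variant proposed in the paper; the augmentation callables and loss weight are assumptions.

import torch
import torch.nn.functional as F

def consistency_loss(model, images, weak_aug, strong_aug, weight=1.0):
    # Generic augmentation-based consistency regularization (a common baseline,
    # not the paper's order-preserving variant). `weak_aug` and `strong_aug`
    # are assumed callables mapping an image batch to augmented batches.
    with torch.no_grad():
        # Predictions on the weakly augmented view serve as soft targets.
        targets = F.softmax(model(weak_aug(images)), dim=-1)
    # Predictions on the strongly augmented view must stay consistent with them.
    log_probs = F.log_softmax(model(strong_aug(images)), dim=-1)
    return weight * F.kl_div(log_probs, targets, reduction="batchmean")

In practice such a consistency term is added to the usual supervised loss on the labeled source data.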

Dynamic Transformer for Few-shot Instance Segmentation

Few-shot instance segmentation aims to train an instance segmentation model that can quickly adapt to novel classes with only a few reference images. Existing methods are usually derived from standard detection models and tackle few-shot instance …

Association Graph Learning for Multi-Task Classification with Category Shifts

In this paper, we focus on multi-task classification, where related classification tasks share the same label space and are learned simultaneously. In particular, we tackle a new setting, which is more realistic than currently addressed in the …

Variational Model Perturbation for Source-Free Domain Adaptation

We aim for source-free domain adaptation, where the task is to deploy a model pre-trained on source domains to target domains. The challenges stem from the distribution shift from the source to the target domain, coupled with the unavailability of …

Few-shot Semantic Segmentation with Support-induced Graph Convolutional Network

Few-shot semantic segmentation (FSS) aims to segment novel objects with only a few annotated samples and has made great progress recently. Most existing FSS models focus on feature matching between support and query to tackle …

Dynamic Prototype Convolution Network for Few-shot Semantic Segmentation

The key challenge for few-shot semantic segmentation (FSS) is how to tailor a desirable interaction among support and query features and/or their prototypes, under the episodic training scenario. Most existing FSS methods implement such …
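
Since this abstract and the previous one both revolve around matching query features against support prototypes, the sketch below illustrates the standard FSS baseline: masked average pooling of support features into a class prototype, followed by cosine-similarity matching on the query. It is a generic illustration under assumed tensor shapes, not the dynamic prototype convolution proposed here.

import torch
import torch.nn.functional as F

def masked_average_pooling(support_feat, support_mask):
    # support_feat: (B, C, H, W) backbone features; support_mask: (B, 1, h, w) binary foreground mask.
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:], mode="nearest")
    # Average the foreground features into a single class prototype per support image.
    return (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # (B, C)

def match_query(query_feat, prototype):
    # Cosine similarity between every query location and the prototype yields a score map.
    return F.cosine_similarity(query_feat, prototype[:, :, None, None], dim=1)  # (B, H, W)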

Hierarchical Variational Memory for Few-shot Learning Across Domains

Neural memory enables fast adaptation to new tasks with just a few training samples. Existing memory models store features only from the single last layer, which does not generalize well in the presence of a domain shift between training and test …
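
To make the contrast with single-layer memories concrete, the sketch below collects features from several backbone stages and forms one class prototype per level. The backbone, the chosen layers, and the pooling are illustrative assumptions, not the hierarchical variational memory introduced in the paper.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18
from torchvision.models.feature_extraction import create_feature_extractor

# Expose several intermediate stages instead of only the final layer (assumed choices).
layers = {"layer1": "low", "layer2": "mid1", "layer3": "mid2", "layer4": "high"}
extractor = create_feature_extractor(resnet18(weights=None), return_nodes=layers)

def hierarchical_prototypes(support_images, support_labels, num_classes):
    # One prototype per class at every chosen feature level; assumes every class
    # appears at least once among the support labels.
    feats = extractor(support_images)  # dict of (B, C_l, H_l, W_l) tensors
    prototypes = {}
    for level, fmap in feats.items():
        vec = F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (B, C_l)
        prototypes[level] = torch.stack(
            [vec[support_labels == c].mean(dim=0) for c in range(num_classes)]
        )  # (num_classes, C_l)
    return prototypes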