Low-shot

Probabilistic Prototype Calibration of Vision-Language Models for Generalized Few-shot Semantic Segmentation

Generalized Few-Shot Semantic Segmentation (GFSS) aims to extend a segmentation model to novel classes with only a few annotated examples while maintaining performance on base classes. Recently, pretrained vision-language models (VLMs) such as CLIP …

SimZSL: Zero-Shot Learning Beyond a Pre-defined Semantic Embedding Space

Zero-shot recognition centers on learning representations that transfer knowledge from seen to unseen classes. Whereas foundational approaches perform the transfer with semantic embedding spaces, e.g., from attributes or word vectors, the current …
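The listing truncates before the method; as background, here is a minimal sketch of the foundational semantic-embedding transfer the abstract contrasts against, where visual features are projected into an attribute space and unseen images are matched to class attribute vectors. All shapes, names, and the projection `W` are illustrative assumptions; this is not SimZSL itself.

```python
import numpy as np

# Sketch of classic semantic-embedding zero-shot recognition (the
# "foundational approach" the abstract contrasts with), NOT SimZSL.
# All shapes and values are assumptions for illustration.

rng = np.random.default_rng(0)
d_vis, d_attr = 512, 85                    # visual / attribute dimensions
W = rng.normal(size=(d_vis, d_attr))       # stand-in for a projection learned on seen classes

# Per-class attribute vectors for unseen classes (e.g., hand-annotated attributes).
unseen_attrs = rng.normal(size=(10, d_attr))          # 10 unseen classes
unseen_attrs /= np.linalg.norm(unseen_attrs, axis=1, keepdims=True)

def zero_shot_predict(visual_feat: np.ndarray) -> int:
    """Project a visual feature into attribute space and pick the
    nearest unseen-class attribute vector by cosine similarity."""
    z = visual_feat @ W
    z /= np.linalg.norm(z)
    return int(np.argmax(unseen_attrs @ z))

print(zero_shot_predict(rng.normal(size=d_vis)))
```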

DynaPrompt: Dynamic Test-Time Prompt Tuning

Test-time prompt tuning enhances zero-shot generalization of vision-language models but tends to ignore the relatedness among test samples during inference. Online test-time prompt tuning provides a simple way to leverage the information in previous …
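For context on what online test-time prompt tuning looks like in practice, the following is a hedged sketch: a learnable prompt is adapted on each incoming test sample by entropy minimization and carried over to later samples. The `clip_logits` stand-in and all shapes are hypothetical placeholders; this illustrates the generic online-tuning setup the abstract builds on, not the DynaPrompt method.

```python
import torch
import torch.nn.functional as F

def clip_logits(image: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in: a real implementation would encode prompts and
    # images with frozen CLIP encoders and return class logits.
    return image @ prompt.T

num_classes, dim = 10, 512
prompt = torch.randn(num_classes, dim, requires_grad=True)  # learnable prompt parameters
opt = torch.optim.SGD([prompt], lr=1e-3)

def online_step(image: torch.Tensor) -> torch.Tensor:
    """One online update: minimize the entropy of the prediction for this
    test sample, keeping the adapted prompt for subsequent samples."""
    logits = clip_logits(image, prompt)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
    opt.zero_grad()
    entropy.backward()
    opt.step()
    return logits.detach().argmax(-1)

for _ in range(3):                          # a small stream of test samples
    online_step(torch.randn(dim))
```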

On the Transfer of Object-Centric Representation Learning

The goal of object-centric representation learning is to decompose visual scenes into a structured representation that isolates each entity in its own vector. Recent successes have shown that object-centric representation learning can be …

MetaKernel: Learning Variational Random Features with Limited Labels

Few-shot learning addresses the fundamental and challenging problem of learning from only a few annotated samples while still generalizing well to new tasks. The crux of few-shot learning is to extract prior knowledge from related tasks to enable …
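The "random features" in the title refer to random Fourier features, a standard approximation of shift-invariant kernels. Below is a sketch of that plain building block (non-variational and non-meta-learned), not MetaKernel itself; all dimensions are arbitrary choices.

```python
import torch

# Plain random Fourier features approximating an RBF kernel: the classical
# building block behind "random features", NOT the variational, meta-learned
# version MetaKernel proposes.

def random_fourier_features(x: torch.Tensor, W: torch.Tensor,
                            b: torch.Tensor) -> torch.Tensor:
    """phi(x) = sqrt(2/D) * cos(x W^T + b), so that phi(x) @ phi(y)
    approximates the RBF kernel k(x, y)."""
    D = W.shape[0]
    return (2.0 / D) ** 0.5 * torch.cos(x @ W.T + b)

d, D, sigma = 16, 1024, 1.0
W = torch.randn(D, d) / sigma              # spectral samples of the RBF kernel
b = 2 * torch.pi * torch.rand(D)           # random phases

x, y = torch.randn(1, d), torch.randn(1, d)
approx = random_fourier_features(x, W, b) @ random_fourier_features(y, W, b).T
exact = torch.exp(-((x - y) ** 2).sum() / (2 * sigma ** 2))
print(float(approx), float(exact))         # close for large D
```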

ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion

Prototype-based meta-learning has emerged as a powerful technique for addressing few-shot learning challenges. However, estimating a deterministic prototype using a simple average function from a limited number of examples remains a fragile process. …
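The "simple average function" the abstract calls fragile is the standard prototypical-network estimate. A minimal sketch of that baseline follows, purely to make the failure mode concrete; it is the starting point ProtoDiff refines, not ProtoDiff itself.

```python
import torch

# Standard prototypical-network baseline: per-class prototype = mean of
# support embeddings, queries assigned to the nearest prototype.

def prototypes(support_feats: torch.Tensor, support_labels: torch.Tensor,
               num_classes: int) -> torch.Tensor:
    """Per-class prototype = mean of that class's support embeddings."""
    return torch.stack([
        support_feats[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])

def classify(query_feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each query to the nearest prototype in Euclidean distance."""
    dists = torch.cdist(query_feats, protos)    # (num_query, num_classes)
    return dists.argmin(dim=1)

# 5-way 1-shot toy episode: with one sample per class, the mean collapses
# to that single (noisy) embedding, which is why the estimate is fragile.
feats, labels = torch.randn(5, 64), torch.arange(5)
preds = classify(torch.randn(8, 64), prototypes(feats, labels, 5))
```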

EMO: Episodic Memory Optimization for Few-Shot Meta-Learning

Few-shot meta-learning presents a challenge for gradient descent optimization due to the limited number of training samples per task. To address this issue, we propose an episodic memory optimization for meta-learning, called EMO, which is …

MetaModulation: Learning Variational Feature Hierarchies for Few-Shot Learning with Fewer Tasks

Meta-learning algorithms are able to learn a new task using previously learned knowledge, but they often require a large number of meta-training tasks, which may not be readily available. To address this issue, we propose a method for few-shot …