Few-Shot Learning

Few-shot learning is a machine learning paradigm in which a model must learn a new task, or recognize new classes, from only a small number of labeled examples. This typically works by leveraging prior knowledge acquired through pretraining, transfer learning, or meta-learning so that the model can generalize from limited data.

In the classical few-shot setting, often framed as N-way K-shot classification, models are trained to adapt rapidly from a handful of labeled examples per class, typically between one and ten. Common approaches include metric-based matching (as in prototypical networks), optimization-based initialization (as in MAML), and memory-augmented architectures.
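As a minimal sketch of the metric-based approach, the toy example below classifies queries by distance to class prototypes, in the spirit of prototypical networks. The 2-D Gaussian "embeddings" and all numbers are illustrative assumptions; in practice the embeddings would come from a pretrained encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embeddings": each class is a Gaussian cluster in 2-D feature space.
# In a real system, these vectors would come from a pretrained encoder.
def sample_class(center, n):
    return rng.normal(loc=center, scale=0.5, size=(n, 2))

# 3-way 5-shot support set: 3 classes, 5 labeled examples each.
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
support = np.stack([sample_class(c, 5) for c in centers])  # shape (3, 5, 2)

# Prototype = mean embedding of each class's support examples.
prototypes = support.mean(axis=1)  # shape (3, 2)

# Classify a query by its nearest prototype (Euclidean distance).
def classify(query):
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(np.argmin(dists))

query = sample_class(centers[1], 1)[0]
print(f"query {query} -> predicted class {classify(query)}")  # expected: class 1
```

Because each prototype is just the mean of a class's support embeddings, adapting to new classes requires no gradient updates, only a handful of labeled examples.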

In the context of large language models (LLMs), few-shot prompting, also called in-context learning, means placing a few input-output demonstrations directly in the prompt, without updating the model's parameters, and then asking the model to perform the task by analogy.
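Here's a minimal sketch of assembling such a prompt for a made-up sentiment task. The task, labels, and formatting are assumptions for illustration, and the call to an actual LLM is omitted:

```python
# Each demonstration pairs an input with its expected output.
demos = [
    ("I loved this movie!", "positive"),
    ("The plot made no sense.", "negative"),
    ("An instant classic.", "positive"),
]

def build_prompt(demos, new_input):
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in demos:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_input}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt(demos, "I want those two hours of my life back.")
print(prompt)
# The assembled prompt would then be sent to an LLM, whose completion
# ("negative") performs the task without any parameter updates.
```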

While few-shot methods reduce the need for large labeled datasets, they tend to be more sensitive to domain shift, example selection, and prompt design. Because of this brittleness, evaluations usually compare few-shot performance against zero-shot, one-shot, and fully supervised many-shot baselines.
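To make the sensitivity to example selection concrete, the toy sketch below (assumed synthetic data, not a benchmark) repeatedly resamples the support set for a nearest-mean classifier. With a single example per class, accuracy varies widely between episodes, and the spread narrows as more shots are added:

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [2.0, 0.0]])  # two overlapping classes

def run_episode(k):
    """Accuracy of a nearest-mean classifier with k support examples per class."""
    support = rng.normal(centers[:, None, :], 1.0, size=(2, k, 2))
    prototypes = support.mean(axis=1)
    queries = rng.normal(centers[:, None, :], 1.0, size=(2, 200, 2))
    correct = 0
    for label in (0, 1):
        d = np.linalg.norm(queries[label][:, None, :] - prototypes, axis=2)
        correct += (d.argmin(axis=1) == label).sum()
    return correct / 400

# More shots -> higher mean accuracy and lower variance across episodes.
for k in (1, 5, 25):
    accs = [run_episode(k) for _ in range(200)]
    print(f"{k:>2}-shot: mean acc {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
```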


By Leodanis Pozo Ramos • Updated Nov. 3, 2025