Zero-Shot Learning

Zero-shot learning (ZSL) is a machine learning setting in which a model must handle classes or tasks it never encountered during training, relying on auxiliary semantic information or on generalized representations acquired during pretraining.

Classical ZSL methods use this semantic information, such as class attribute vectors or text embeddings, to link seen and unseen classes so that, at test time, the model can infer the correct labels for classes it has never seen, as the sketch below illustrates.
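
Here's a minimal sketch of the classical attribute-based approach. The class names, attribute vectors, and synthetic features are all hypothetical, invented only to keep the example self-contained: a linear map from features to attribute space is fit on the seen classes, and an unseen class is recognized by cosine similarity to its attribute vector:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical attribute vectors: [has_stripes, has_hooves, is_carnivore].
# The model trains only on the seen classes; "zebra" appears only at test time.
attributes = {
    "horse": np.array([0.0, 1.0, 0.0]),  # seen
    "tiger": np.array([1.0, 0.0, 1.0]),  # seen
    "lion":  np.array([0.0, 0.0, 1.0]),  # seen
    "zebra": np.array([1.0, 1.0, 0.0]),  # unseen during training
}
seen = ["horse", "tiger", "lion"]

# Synthetic "image features": an unknown linear map from attribute space
# to feature space plus noise, just to keep the sketch self-contained.
feature_dim = 16
mix = rng.normal(size=(feature_dim, 3))

def sample_features(label, n):
    clean = attributes[label] @ mix.T  # shape (feature_dim,)
    return clean + 0.05 * rng.normal(size=(n, feature_dim))

X = np.vstack([sample_features(c, 50) for c in seen])
S = np.vstack([np.tile(attributes[c], (50, 1)) for c in seen])

# Training: learn a linear map W from features to attributes (least squares).
W, *_ = np.linalg.lstsq(X, S, rcond=None)

def predict(x, candidates):
    """Project features into attribute space, return the cosine-nearest class."""
    s = x @ W
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(candidates, key=lambda c: cosine(s, attributes[c]))

# Test time: classify a sample from the unseen class among all classes.
test_x = sample_features("zebra", 1)[0]
print(predict(test_x, list(attributes)))  # expected: "zebra"
```

The key point is that the model never sees a zebra during training. It only learns how features relate to attributes, and the attribute vector for "zebra" supplies enough information to identify it at test time.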

In the context of large language models (LLMs), zero-shot usually refers to prompt-based use: a pretrained model performs a new task or follows new instructions without being given any task-specific examples, as in the example below.
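
For instance, Hugging Face's transformers library offers a zero-shot classification pipeline that scores arbitrary candidate labels against a text without any fine-tuning on those labels. The input sentence and labels below are illustrative, and the sketch assumes transformers (plus a backend like PyTorch) is installed:

```python
from transformers import pipeline

# The model was never fine-tuned on these labels; it scores each
# candidate label against the text via natural language inference.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

result = classifier(
    "The new graphics card renders 4K scenes in real time.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0])  # highest-scoring label, likely "technology"
```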


By Leodanis Pozo Ramos • Updated Nov. 3, 2025