Zero-shot learning means performing a task without any task-specific training examples, relying solely on instructions and prior knowledge. Unlike in-context learning, which supplies worked examples in the prompt, zero-shot approaches rely entirely on knowledge acquired during pretraining.

Zero-shot vs. few-shot

Zero-shot: Task description only, no examples
One/few-shot (in-context learning): Examples provided in prompt
Many-shot: Traditional supervised learning or fine-tuning with thousands of labeled examples
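The zero-shot vs. few-shot distinction above comes down to prompt construction. A minimal sketch (the sentiment task, review texts, and labels are illustrative, not from any specific system):

```python
# Contrast a zero-shot prompt with a few-shot (in-context learning) prompt
# for a hypothetical sentiment-classification task.

task = "Classify the sentiment of the review as positive or negative."
query = "Review: The plot dragged on forever.\nSentiment:"

# Zero-shot: task description only, no examples.
zero_shot_prompt = f"{task}\n\n{query}"

# Few-shot: the same task description plus worked examples in the prompt.
examples = [
    ("An instant classic, loved every minute.", "positive"),
    ("A waste of two hours.", "negative"),
]
demos = "\n\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
few_shot_prompt = f"{task}\n\n{demos}\n\n{query}"

print(zero_shot_prompt)
print("---")
print(few_shot_prompt)
```

Either string would then be sent to the model; in the zero-shot case the model must infer the input-output mapping from the instruction alone.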

EXAMPLE

SPRING (Studying the Paper and Reasoning to Play Games): Achieves state-of-the-art Crafter performance by reading the game's documentation instead of task-specific training.
CLIP: Classifies images into arbitrary categories never seen during training by matching image-text embeddings.
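The CLIP-style matching can be sketched with plain NumPy: embed each candidate label as text, embed the image, and pick the label whose embedding has the highest cosine similarity. The vectors below are made-up stand-ins for encoder outputs, not real CLIP features.

```python
import numpy as np

def normalize(v):
    # Scale vectors to unit length so dot products equal cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Candidate categories, expressed as text prompts (CLIP's usual template).
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

# Stand-in encoder outputs; a real system would call CLIP's text encoder.
text_emb = normalize(np.array([
    [0.9, 0.1, 0.0],   # "dog" direction
    [0.1, 0.9, 0.0],   # "cat" direction
    [0.0, 0.1, 0.9],   # "car" direction
]))

# Stand-in image embedding, closest to the "cat" direction.
image_emb = normalize(np.array([0.15, 0.85, 0.05]))

# Cosine similarity of the image against every label; argmax is the prediction.
similarities = text_emb @ image_emb
prediction = labels[int(np.argmax(similarities))]
print(prediction)  # → a photo of a cat
```

Because the label set is just a list of strings, new categories can be added at inference time without any retraining, which is what makes the classification zero-shot.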