Few-shot
Providing a small number of examples in a prompt to guide an LLM's output format and behavior.
Few-shot prompting is the technique of providing a small number of input/output examples within your prompt to guide a large language model's behavior. Instead of just describing what you want (zero-shot), you show the model 2-5 examples of the desired input-output pattern, and the model generalizes from those examples to handle new inputs in the same style.
Why it matters: few-shot prompting dramatically improves LLM output quality and consistency, especially for tasks where describing the desired output precisely in words is difficult but showing examples is easy. It is one of the most practical prompt engineering techniques because it requires no model fine-tuning, works with any LLM, and can be implemented immediately. For marketing teams using AI for content generation, data formatting, or classification, few-shot examples are often the difference between usable and unusable output.
How it works: you include 2-5 examples in your prompt, each showing an input and the corresponding desired output. The model learns the pattern and applies it to a new input. For example, if you want an LLM to classify customer feedback as positive, negative, or neutral, you provide 3-5 classified examples, then give it a new piece of feedback to classify. The examples teach format, tone, length, and reasoning style far more effectively than written instructions alone.
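The assembly described above can be sketched as a small helper that stitches an instruction, a handful of labeled examples, and a new input into one prompt string. This is a minimal illustration, not any particular vendor's API; the function name, the feedback lines, and their labels are all invented for the sketch.

```python
# Hypothetical sketch: assembling a few-shot sentiment-classification prompt.
# The example feedback and labels below are invented for illustration.

def build_few_shot_prompt(examples, new_input, instruction):
    """Combine an instruction, labeled examples, and a new input into one prompt."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Feedback: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line between examples
    lines.append(f"Feedback: {new_input}")
    lines.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(lines)

examples = [
    ("The onboarding was quick and painless.", "positive"),
    ("Support never replied to my ticket.", "negative"),
    ("The invoice arrived on the usual date.", "neutral"),
]

prompt = build_few_shot_prompt(
    examples,
    new_input="Love the new dashboard, but exports are slow.",
    instruction="Classify each piece of customer feedback as positive, negative, or neutral.",
)
print(prompt)
```

The prompt ends mid-pattern (`Sentiment:`), which nudges the model to continue with a label rather than free-form text.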
When to use few-shot vs. zero-shot: use zero-shot for simple, well-defined tasks where the model already performs well (summarization, translation, basic Q&A). Use few-shot when you need a specific output format, when the task involves domain-specific conventions, when zero-shot output is inconsistent, or when you need the model to adopt a particular reasoning approach. The incremental cost of adding examples (more tokens) is almost always worth the improvement in output quality.
Best practices: choose diverse examples that cover the range of expected inputs (not five examples that all look the same). Order examples from simple to complex. Include edge cases if the model is likely to encounter them. Keep examples clean and unambiguous. If the model is still not getting it right with 3 examples, try 5. If 5 does not work, the task may require fine-tuning or a different approach.
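One rough way to apply the "simple to complex" ordering above is to sort examples by input length before assembling the prompt. Length is only a crude proxy for complexity (an assumption of this sketch, not a stated rule), but it is a cheap default when no better signal exists.

```python
# Sketch: ordering few-shot examples roughly from simple to complex.
# Input length is used as a crude complexity proxy (an assumption, not a rule).

def order_examples(examples):
    """Sort (input, output) pairs by input length, shortest first."""
    return sorted(examples, key=lambda pair: len(pair[0]))

examples = [
    ("Love the app, but the mobile version crashes whenever I open settings.", "negative"),
    ("Great tool.", "positive"),
    ("Pricing page loads fine on my end.", "neutral"),
]

for text, label in order_examples(examples):
    print(f"{label}: {text}")
```

A team with a better complexity signal (number of entities, ambiguity, edge-case status) could swap the sort key without changing the rest of the pipeline.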
Common mistakes: providing too many examples (beyond roughly 5-7, additional examples consume tokens with little further gain in performance). Choosing unrepresentative examples that do not cover the actual input variation. Failing to update examples as your requirements change. Including examples with subtle errors that the model then replicates.
Practical example: a marketing team wants their LLM to generate product descriptions in a specific style. Zero-shot prompting ("Write a product description that is concise, benefit-focused, and uses active voice") produces inconsistent results. They add three examples of existing product descriptions they like to the prompt. The LLM now consistently produces descriptions matching their preferred style: same length, same tone, same structure. They save these few-shot examples as a reusable prompt template across their content production workflow.
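A reusable template like the one the team saves could look like the sketch below: the approved style examples live in one place, and each new product name is filled into the same few-shot scaffold. The product names and descriptions here are invented stand-ins for a team's real approved copy.

```python
# Sketch of a reusable few-shot template for product descriptions.
# The style examples are invented stand-ins for a team's approved copy.

from string import Template

TEMPLATE = Template(
    "Write a product description matching the style of these examples.\n\n"
    "$examples\n"
    "Product: $product\n"
    "Description:"
)

STYLE_EXAMPLES = [
    ("Trail Kettle", "Boils water in under three minutes, so you spend less time waiting and more time hiking."),
    ("Desk Riser", "Lifts your monitor to eye level and frees up space for the work that matters."),
]

def render_prompt(product_name):
    """Fill the saved few-shot template with a new product name."""
    examples = "\n".join(
        f"Product: {name}\nDescription: {desc}\n" for name, desc in STYLE_EXAMPLES
    )
    return TEMPLATE.substitute(examples=examples, product=product_name)

print(render_prompt("Travel Mug"))
```

Keeping the examples in one constant means updating the house style is a one-place change, which addresses the "not updating examples as requirements change" mistake noted earlier.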
Related terms
Zero-shot prompting: Asking an LLM to perform a task with no examples, relying entirely on the model's pre-trained knowledge and instructions.
Prompt engineering: The practice of crafting inputs to an LLM to reliably produce desired outputs, including system prompts and few-shot examples.
LLM: Large Language Model. A neural network trained on massive text data that can generate, summarize, and reason about language.
Fine-tuning: Training a pre-existing model on a specific dataset to improve its performance on a narrow task or domain.