Mastering Generative AI with OpenAI

Understanding Prompt Engineering

Key Techniques of Prompt Engineering

Unlock the full potential of large language models (LLMs) by mastering three core prompt engineering strategies: zero-shot, one-shot, and few-shot prompting. Each technique offers a different balance between simplicity and control, helping you guide an LLM toward your exact needs.

Prompt Engineering Techniques Overview


When designing prompts, you can choose:

  • Zero-Shot Prompting: Direct instruction, no examples.
  • One-Shot Prompting: Single example to illustrate format or style.
  • Few-Shot Prompting: Multiple examples demonstrating the pattern.
| Technique | Definition | Example Task |
| --- | --- | --- |
| Zero-Shot | Direct instruction without examples. | Summarize this article in 100 words. |
| One-Shot | One sample input–output pair. | Example: “Hello → Hola”. Then translate “Hi”. |
| Few-Shot | Several input–output pairs in the prompt. | Classifying animals by description. |

Note

Choose zero-shot for quick tasks, one-shot when you need consistent formatting with minimal context, and few-shot for complex patterns or strict constraints.


1. Zero-Shot Prompting

The image illustrates the concept of zero-shot prompting for a large language model, showing a task input leading to a response output.

In zero-shot prompting, you present only an instruction. The model draws on its pre-training to handle the request.

Write a poem about love.

Because LLMs have processed vast amounts of text and code during training, they can perform many tasks right away:


Common zero-shot tasks include:

  • Translate a sentence from English to French.
  • Summarize this article in 100 words.
  • Answer: “What is the capital of Japan?”
  • Write a Python function to reverse a string.


Tip

Zero-shot is fast to set up but may require more precise wording for specialized or nuanced tasks.
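A zero-shot request needs nothing beyond the instruction itself. The sketch below separates prompt construction from the API call; it assumes the `openai` Python SDK (v1+) is installed, an `OPENAI_API_KEY` is set in the environment, and the model name is purely illustrative:

```python
def build_zero_shot(task: str) -> list[dict]:
    # Zero-shot: the prompt is the bare instruction -- no examples attached.
    return [{"role": "user", "content": task}]

def ask_zero_shot(task: str, model: str = "gpt-4o-mini") -> str:
    # Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=build_zero_shot(task),
    )
    return resp.choices[0].message.content

# Example call (requires network access and an API key):
# ask_zero_shot("Summarize this article in 100 words: <article text>")
```

Keeping the message-building step separate makes it easy to switch the same task between zero-, one-, and few-shot variants later.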


2. One-Shot Prompting

The image explains one-shot prompting for large language models (LLMs), highlighting how a single example can teach an LLM to perform a task.

One-shot prompting provides exactly one example to illustrate the desired output. The LLM uses that single sample as a template:

Example: “Translate ‘Hello, world!’ to Spanish → ‘¡Hola, mundo!’”
Now translate ‘Good morning!’ to Spanish →

This approach often yields more consistent results than zero-shot:

  • Write a short story about a detective solving a mystery.
  • Describe symptoms and treatments for seasonal allergies.
  • Provide steps to make a classic margherita pizza.
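One common way to supply the single example (a sketch, not the only possible framing) is to present it as a prior user/assistant turn, so the model sees it as a completed exchange before the real request:

```python
def build_one_shot(example_input: str, example_output: str, task: str) -> list[dict]:
    # The worked example is framed as a finished exchange; the real task follows.
    return [
        {"role": "user", "content": example_input},
        {"role": "assistant", "content": example_output},
        {"role": "user", "content": task},
    ]

# Using the translation example from above:
messages = build_one_shot(
    "Translate 'Hello, world!' to Spanish.",
    "¡Hola, mundo!",
    "Translate 'Good morning!' to Spanish.",
)
# Pass `messages` to the chat completions endpoint as usual.
```

Because the example occupies its own turns, the final user message stays clean, and the model tends to mirror the example's format in its reply.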



3. Few-Shot Prompting

The image shows examples of few-shot prompting with inputs describing animals and corresponding outputs naming the animals: camel, cat, and giraffe.

Few-shot prompting includes several illustrative examples. By showing multiple input–output pairs, you help the LLM infer the pattern:

Q: A tall mammal with a long neck, spotted coat → Answer: Giraffe
Q: A large aquatic mammal known for its intelligence and sonar → Answer: Dolphin
Q: A desert animal with humps for fat storage → Answer:

This technique typically achieves higher accuracy, especially when output format or domain knowledge is crucial.
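The few-shot pattern generalizes the one-shot idea: each input–output pair becomes a completed user/assistant exchange. A minimal sketch using the animal examples above (helper names are illustrative):

```python
def build_few_shot(examples: list[tuple[str, str]], task: str) -> list[dict]:
    # Each (input, output) pair becomes a completed user/assistant exchange,
    # so the model can infer the pattern before answering the real task.
    messages = []
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": task})
    return messages

animal_examples = [
    ("A tall mammal with a long neck, spotted coat", "Giraffe"),
    ("A large aquatic mammal known for its intelligence and sonar", "Dolphin"),
]
messages = build_few_shot(
    animal_examples,
    "A desert animal with humps for fat storage",
)
```

Adding or swapping pairs in `animal_examples` changes the demonstrated pattern without touching the rest of the code.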

Warning

Including many examples can increase token usage and latency. Keep your prompt concise to stay within model limits.
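As a rough sanity check before sending a long few-shot prompt, you can estimate its size. The four-characters-per-token rule of thumb below is only an approximation; actual counts depend on the model's tokenizer:

```python
def approx_tokens(messages: list[dict]) -> int:
    # Rough heuristic: English text averages about 4 characters per token.
    chars = sum(len(m["content"]) for m in messages)
    return chars // 4
```

For exact counts, OpenAI's `tiktoken` library tokenizes text with the same encodings the models use.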

By experimenting with zero-, one-, and few-shot prompts—and refining your instructions and examples—you’ll identify the optimal strategy for any LLM-powered application.

