Master prompt engineering using LangChain to get consistent, useful, and controllable outputs from LLMs. This guide demonstrates practical prompting techniques — Zero-Shot, One-Shot, Few-Shot, and Chain-of-Thought — with runnable examples and recommended best practices.

Why this matters: LLMs often produce vague, inconsistent, or incomplete responses when prompts lack structure. The techniques below help you control format, tone, length, and reasoning so outputs match your requirements.
[Screenshot: "Master Prompt Engineering with LangChain" tutorial overview listing the four prompting techniques, with task scripts in the sidebar]
High-level tip: choose the prompting technique that matches your goal — speed, format consistency, tone, or complex reasoning — and provide explicit constraints (format, length, audience) to get predictable outputs.
Environment Verification

Before starting the exercises, confirm your development environment is ready: virtualenv activated, LangChain installed, OpenAI credentials configured, and an LLM connection working. This prevents runtime errors and lets you focus on prompt quality during experiments. Run this one-line environment check:
source /root/venv/bin/activate && python /root/code/verify_environment.py
Expected verification output (example):
🔧 Verifying Prompt Engineering Lab Environment...
========================================
✅ Virtual environment is active

✅ LangChain available (version: 0.3.27)

✅ OpenAI configuration found
API Base: https://dev.kk-ai-keys.kodekloud.com/v1

✅ LLM connection test passed

🎉 All environment checks passed!
Your prompt engineering lab environment is ready.
Do not commit API keys or secret files to version control. Keep credentials in environment variables or a secure secret store and verify access only from trusted machines.
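The contents of verify_environment.py are not shown in this lab, but a minimal sketch of the kinds of checks such a script might perform (the function name and check list here are illustrative assumptions, not the real script) could look like:

```python
# Hypothetical sketch of an environment check, loosely modeled on the
# verify_environment.py output shown above. Names are illustrative.
import importlib.util
import os
import sys

def run_checks():
    """Return a list of (check_name, passed) tuples."""
    checks = []
    # A virtualenv typically sets sys.prefix != sys.base_prefix.
    checks.append(("virtualenv active", sys.prefix != sys.base_prefix))
    # Is LangChain importable?
    checks.append(("langchain installed",
                   importlib.util.find_spec("langchain") is not None))
    # Are OpenAI credentials present in the environment?
    checks.append(("OpenAI config found",
                   bool(os.environ.get("OPENAI_API_KEY"))))
    return checks

if __name__ == "__main__":
    for name, ok in run_checks():
        print(f"{'✅' if ok else '❌'} {name}")
```

A real verification script would also make a small test call to the LLM, as the expected output above suggests; that step is omitted here to keep the sketch offline.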
Once verification passes and prompt utilities are available, proceed to the tasks below. Each task includes the concept, an example prompt or script, expected behavior, and best practices to help you reproduce consistent results.

Task 1 — Zero-Shot Prompting

Zero-shot prompting asks the model to perform a task with no examples. The quality of the output depends heavily on how explicit and constrained the instruction is.
[Screenshot: "Task 1: Zero-Shot Prompting (2 minutes)" slide contrasting vague vs. specific prompts, with task_1_zero_shot.py in the sidebar]
Key idea: prefer a specific instruction that includes audience, jurisdiction, required sections, and constraints (word count, tone, or format). Illustrative Python comparison (vague vs. specific zero-shot prompts):
# task_1_zero_shot.py
def main(llm):
    # llm: an initialized LangChain chat model (e.g., ChatOpenAI)
    vague_prompt = "Write a privacy policy."
    vague_response = llm.invoke(vague_prompt)
    print(f"\nVague response preview: {vague_response.content[:100]}...")
    print("Problem: Too generic, not useful for our company!")

    print("\n✅ Specific Zero-Shot Prompting")
    specific_prompt = (
        "Write a 200-word data privacy policy for European customers "
        "in compliance with the General Data Protection Regulation (GDPR). "
        "Include retention (30 days), data subject rights, and data transfer rules."
    )
    specific_response = llm.invoke(specific_prompt)
    print(f"\nSpecific response preview: {specific_response.content[:200]}...")
    print("Success: Clear, actionable, company-specific!")

    print("\n📊 Comparison Results:")
    print(f"Vague response length: {len(vague_response.content)} characters")
    print(f"Specific response length: {len(specific_response.content)} characters")
Example console output (trimmed):
Vague response preview: We are committed to protecting your privacy...
Problem: Too generic, not useful for our company!

✅ Specific Zero-Shot Prompting

Specific response preview: We are committed to protecting the privacy of our European customers in accordance with the GDPR. This policy covers...
Success: Clear, actionable, company-specific!

📊 Comparison Results:
Vague response length: 263 characters
Specific response length: 1369 characters
Zero-shot best practices:
  • State the exact task and desired length.
  • Define context (jurisdiction, audience, domain).
  • List required sections or bullet points the output must include.
  • Constrain format where necessary (e.g., JSON, Markdown, or numbered sections).
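These best practices can be applied mechanically by assembling the prompt from named constraints. A sketch (the helper name and constraint values are illustrative; the resulting string would be passed to llm.invoke as in the task above):

```python
# Sketch: assembling an explicit zero-shot prompt from named constraints.
def build_zero_shot_prompt(task, audience, jurisdiction, word_limit, sections):
    """Combine task, context, and constraints into one explicit instruction."""
    section_list = "\n".join(f"- {s}" for s in sections)
    return (
        f"{task}\n"
        f"Audience: {audience}\n"
        f"Jurisdiction: {jurisdiction}\n"
        f"Length: at most {word_limit} words\n"
        f"Required sections:\n{section_list}\n"
        "Format: Markdown with one heading per section."
    )

prompt = build_zero_shot_prompt(
    task="Write a data privacy policy.",
    audience="European customers",
    jurisdiction="GDPR",
    word_limit=200,
    sections=["Retention (30 days)", "Data subject rights", "Data transfer rules"],
)
print(prompt)
```

Parameterizing the constraints this way makes it easy to vary one factor (say, word limit or jurisdiction) while keeping the rest of the prompt stable across experiments.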

Task 2 — One-Shot Prompting

One-shot prompting supplies a single example that demonstrates the desired format, tone, or structure. It's useful when you want the model to reproduce a template or layout across many inputs.

Example: provide a refund policy template as the one-shot example, then ask the model to produce a remote work policy using the same structure. One-shot example (refund policy template):
1. Eligibility: Within 30 days of purchase
2. Conditions: Product unused and in original packaging
3. Process: Submit request via support@company.com
4. Timeline: Refund processed within 5-7 business days
5. Exceptions: Digital products and custom orders non-refundable
Then ask the model:
🧪 Using the template above, create a REMOTE WORK POLICY for our company with the same five-section format.
Generated result (example):
REMOTE WORK POLICY
1. Eligibility: Employees approved by management for remote work
2. Conditions: Maintain a dedicated workspace and reliable internet connection
3. Process: Submit remote work request to HR at hr@company.com
4. Timeline: Approval communicated within 3 business days
5. Exceptions: Positions requiring on-site presence and confidential projects not eligible for remote work
One-shot benefits:
  • Enforces a single template's formatting.
  • Fast to set up for repetitive, structured documents.
  • Good when you want to preserve a strict layout without many examples.
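The one-shot flow above can be sketched in a few lines of Python. The refund template and request wording come from this task; the helper function name is illustrative:

```python
# Sketch: a one-shot prompt pairs a single worked example with the new request.
REFUND_TEMPLATE = """\
1. Eligibility: Within 30 days of purchase
2. Conditions: Product unused and in original packaging
3. Process: Submit request via support@company.com
4. Timeline: Refund processed within 5-7 business days
5. Exceptions: Digital products and custom orders non-refundable"""

def build_one_shot_prompt(example, request):
    """Place the example first so the model sees the target format."""
    return f"Example policy:\n{example}\n\n{request}"

prompt = build_one_shot_prompt(
    REFUND_TEMPLATE,
    "Using the template above, create a REMOTE WORK POLICY for our "
    "company with the same five-section format.",
)
# Pass `prompt` to llm.invoke(prompt) as in the earlier tasks.
```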

Task 3 — Few-Shot Prompting

Few-shot prompting gives the model several diverse examples so it can learn format, tone, and response patterns. This technique is ideal for customer support, marketing copy, or any content that requires consistent voice across variations.
[Screenshot: "Task 3: Few-Shot Prompting (3 minutes)" slide explaining why multiple examples matter, with task_2_one_shot.py and task_3_few_shot.py in the sidebar]
Example few-shot training set (customer support style examples):
Customer Issue: The product arrived damaged.
Support Response: I'm so sorry to hear that. Please send a photo to support@company.com so we can open a replacement or refund immediately. We'll respond within 2 business days.

Customer Issue: I haven't received my order.
Support Response: I apologize for the delay. Please share your order number and I'll check the shipping status. Expect an update within 24 hours.

Customer Issue: I need to change my billing address.
Support Response: Thanks for letting us know. Please confirm the new billing address and we'll update it for future invoices. This change will reflect within 1 business day.
New prompt and model output:
🧾 New customer issue:
Product not working

🤖 AI Response:
I'm sorry to hear the product isn't working as expected. Could you please provide a brief description of the issue and any error messages? Meanwhile, I'll check if there are known troubleshooting steps or recalls related to your product.
Quick response analysis example:
✓ Shows empathy: True
✓ Takes action (asks for next steps): True
✓ Provides timeline: False
Quality Score: 2/3
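The quick analysis above can be automated with simple keyword heuristics. A sketch (the keyword lists are illustrative assumptions, not part of the lab scripts, and a production rubric would use more robust signals):

```python
# Sketch: a keyword-based rubric matching the empathy/action/timeline checks.
def score_response(text):
    """Return per-criterion booleans and a total quality score out of 3."""
    text = text.lower()
    checks = {
        "empathy": any(w in text for w in ["sorry", "apologize", "understand"]),
        "action": any(w in text for w in ["please provide", "please share", "could you"]),
        "timeline": any(w in text for w in ["business day", "hours", "within"]),
    }
    return checks, sum(checks.values())

reply = ("I'm sorry to hear the product isn't working as expected. "
         "Could you please provide a brief description of the issue?")
checks, score = score_response(reply)
print(checks, f"Quality Score: {score}/3")  # → empathy/action True, timeline False, 2/3
```

A rubric like this is crude, but it lets you compare prompt variants quantitatively across many generated responses instead of eyeballing each one.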
Few-shot advantages:
  • Learns subtleties of tone and phrasing across examples.
  • Keeps responses consistent across agents or channels.
  • Reduces the need for labeled fine-tuning in many use cases.

Task 4 — Chain-of-Thought (CoT) Prompting

Chain-of-Thought prompting encourages the model to expose intermediate reasoning steps. This produces more accurate and defensible answers for complex or multi-step tasks. Use CoT when you need the model to enumerate assumptions, weigh options, or provide a stepwise troubleshooting path. Example LangChain prompt templates:
# task_4_chain_of_thought.py
from langchain.prompts import PromptTemplate, FewShotPromptTemplate

# Example list of examples (each is a dict with 'input' and 'output')
examples = [
    {"input": "User cannot connect to WiFi", "output": "Step 1: Ask for error details. Step 2: Confirm SSID and password. Step 3: Suggest restart and driver update."},
    {"input": "App crashes on startup", "output": "Step 1: Ask for device and OS. Step 2: Ask for app version and logs. Step 3: Suggest clearing cache or reinstalling."},
    {"input": "Invoice not received", "output": "Step 1: Verify order number. Step 2: Confirm billing email. Step 3: Resend invoice and confirm delivery."}
]

# Create the example template
example_prompt = PromptTemplate(
    template="Customer Issue: {input}\nSupport Response: {output}",
    input_variables=["input", "output"]
)

# Create the few-shot prompt template
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="You are a helpful customer support agent. Here are examples of how to break problems down step-by-step:\n\n",
    suffix="Customer Issue: {input}\nSupport Response:",
    input_variables=["input"]
)

def generate_support_response(llm, user_issue):
    prompt = few_shot_prompt.format(input=user_issue)
    response = llm.invoke(prompt)
    return response.content
Simple CoT instruction pattern:
When solving the problem, think through it step-by-step:
1. Identify the main issue.
2. List possible causes.
3. Propose troubleshooting steps in order of likelihood.
4. Provide a recommended next action.
CoT best practices:
  • Provide worked examples that demonstrate intermediate steps.
  • Ask the model to enumerate assumptions and order steps by likelihood.
  • Use models/configurations that support longer contexts for full reasoning chains.
  • Prefer CoT when correctness and traceability matter.
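The four-step instruction pattern shown above can be reused as a plain Python format string (or wrapped in a LangChain PromptTemplate, as in task_4_chain_of_thought.py). A minimal sketch, with the problem text taken from the task_4 examples:

```python
# Sketch: reusing the step-by-step CoT instruction pattern as a format string.
COT_INSTRUCTIONS = (
    "When solving the problem, think through it step-by-step:\n"
    "1. Identify the main issue.\n"
    "2. List possible causes.\n"
    "3. Propose troubleshooting steps in order of likelihood.\n"
    "4. Provide a recommended next action.\n\n"
    "Problem: {problem}"
)

prompt = COT_INSTRUCTIONS.format(problem="User cannot connect to WiFi")
# Pass `prompt` to llm.invoke(prompt) as in the earlier tasks.
```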

Task 5 — Technique Showdown (Comparison)

Compare the techniques by running the same task through each style and evaluating differences in structure, tone, and completeness.
[Screenshot: "Task 5: Technique Showdown" slide summarizing the four prompting techniques with a key insight box, with task filenames in the sidebar]
Example Python script comparing all four techniques:
# task_5_comparison.py
def main(llm):
    test_problem = "Create an employee remote work policy"
    print(f"🧪 Test Problem: {test_problem}\nTesting all 4 prompting techniques...\n")

    results = {}

    # 1. ZERO-SHOT PROMPTING
    print("1️⃣ Zero-Shot Prompting")
    zero_shot_result = llm.invoke(test_problem)
    results["zero_shot"] = zero_shot_result.content
    print(f"Response length: {len(zero_shot_result.content)} characters")
    print(f"Preview: {zero_shot_result.content[:100]}...\n")

    # 2. ONE-SHOT PROMPTING
    print("2️⃣ One-Shot Prompting")
    one_shot_example = (
        "REMOTE WORK POLICY\n"
        "1. Eligibility: ...\n"
        "2. Conditions: ...\n"
        "3. Process: ...\n"
        "4. Timeline: ...\n"
        "5. Exceptions: ..."
    )
    one_shot_prompt = one_shot_example + "\n\nPlease create a remote work policy in the same format for our company."
    one_shot_result = llm.invoke(one_shot_prompt)
    results["one_shot"] = one_shot_result.content
    print(f"Response length: {len(one_shot_result.content)} characters\n")

    # 3. FEW-SHOT PROMPTING
    print("3️⃣ Few-Shot Prompting")
    few_shot_prompt = "Examples:\n" + one_shot_example + "\n\n[additional examples]\n\nNow create a remote work policy:"
    few_shot_result = llm.invoke(few_shot_prompt)
    results["few_shot"] = few_shot_result.content
    print(f"Response length: {len(few_shot_result.content)} characters\n")

    # 4. CHAIN-OF-THOUGHT PROMPTING
    print("4️⃣ Chain-of-Thought Prompting")
    cot_prompt = (
        "You are an expert HR advisor. When drafting the policy, think through it step-by-step:\n"
        "1. Identify objectives.\n2. Define eligibility and conditions.\n3. Describe the process and timelines.\n4. Note exceptions and compliance.\n\n"
        "Now create an employee remote work policy based on that reasoning."
    )
    cot_result = llm.invoke(cot_prompt)
    results["chain_of_thought"] = cot_result.content
    print(f"Response length: {len(cot_result.content)} characters\n")

    # Comparative summary
    for k, v in results.items():
        print(f"{k}: {len(v)} characters")
Typical comparative observations:
  • Zero-Shot: fastest but may miss company specifics or required sections.
  • One-Shot: enforces a strict format from a single template.
  • Few-Shot: matches tone and variations across multiple examples.
  • Chain-of-Thought: produces longer, structured reasoning and more comprehensive policies.
Comparison table (quick reference):

Technique        | Primary Strength     | When to Use                             | Example Outcome
Zero-Shot        | Fast, minimal setup  | Quick answers, prototypes               | Short, generic policy
One-Shot         | Template enforcement | When strict format matters              | Policy that matches template exactly
Few-Shot         | Tone + consistency   | Customer support, brand voice           | Consistent, styled responses
Chain-of-Thought | Detailed reasoning   | Complex troubleshooting, policy design  | Multi-step, defensible recommendations

Wrap-up and Practical Tips

By completing these exercises you should now be able to:
  • Choose the right prompting technique based on goals (speed, format, tone, or reasoning).
  • Design explicit constraints (format, length, audience) to get predictable outputs.
  • Use One-Shot and Few-Shot prompts to enforce structure and brand voice.
  • Use Chain-of-Thought when you need traceable, stepwise reasoning.
Quick checklist before running experiments:
  • Provide role/context to the model (e.g., “You are an expert HR advisor”).
  • Include required sections and constraints.
  • Use examples to teach format or tone when needed.
  • Keep a simple evaluation rubric (empathy, actionability, timeline) to compare outputs.
Recommended next steps:
  • Run the provided tasks in your environment and compare outputs across multiple model sizes.
  • Iterate on prompts and evaluate with a small rubric (correctness, format, tone).
  • Automate comparisons with simple scripts (as shown in task_5_comparison.py) to measure improvements over prompt versions.
