Applying prompt engineering

This module builds on techniques for integrating Azure OpenAI via REST APIs and SDKs and focuses on prompt engineering: the practice of crafting inputs that produce reliable, relevant, and controllable outputs from AI models. Prompt engineering helps you reduce unexpected responses, increase accuracy, and align model outputs with application requirements. In this lesson we will:
  • Define prompt engineering and explain its importance for production systems.
  • Show how to design prompts tailored to different endpoints (for example, completions vs. chat).
  • Explore advanced techniques to improve consistency, accuracy, and safety of model responses and how to apply them in real applications.
[Slide: "Learning Objectives" — 01 Defining Prompt Engineering, 02 Optimizing for Different Endpoints, 03 Exploring Advanced Techniques]
What you’ll gain from this module
  • Practical strategies to convert user intent into structured prompts.
  • Guidance on selecting and adapting prompts depending on whether you call completion, chat, or function-calling endpoints.
  • Methods to increase output determinism (temperature & sampling strategies), enforce format constraints, and apply guardrails for safety and compliance.
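For instance, determinism and format constraints can be set directly in the request parameters. The sketch below builds a request payload for the chat completions endpoint; `temperature`, `top_p`, `seed`, and `response_format` are standard OpenAI API parameters, while the helper function and model name are illustrative assumptions.

```python
# Sketch: build a chat-completions request that favors deterministic,
# JSON-constrained output. The helper and model name are assumptions;
# temperature, top_p, seed, and response_format are real API parameters.

def build_deterministic_request(system: str, user: str) -> dict:
    """Return keyword arguments for client.chat.completions.create(...)."""
    return {
        "model": "gpt-4o-mini",          # assumed deployment/model name
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "temperature": 0,                 # minimize sampling randomness
        "top_p": 1,                       # keep nucleus sampling neutral
        "seed": 42,                       # best-effort reproducibility
        "response_format": {"type": "json_object"},  # require JSON output
    }

request = build_deterministic_request(
    "You are a billing assistant. Reply only with JSON.",
    "Summarize invoice status as JSON with keys total and currency.",
)
print(request["temperature"])  # → 0
```

With the `openai` Python package, these kwargs would be passed as `client.chat.completions.create(**request)`.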
| Learning Objective | Why it matters | Example outcome |
| --- | --- | --- |
| Define prompt engineering | Establishes a repeatable process for creating effective inputs | Clear, testable prompt templates |
| Optimize for endpoints | Different endpoints expect different input formats and context handling | Correct use of `messages` for chat vs. `prompt` for completions |
| Apply advanced techniques | Improve model reliability, reduce hallucination, and enforce structure | Higher-fidelity JSON outputs and safer responses |
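The chat-vs-completions distinction can be sketched as two formatters for the same intent. The helper names below are assumptions, but the role-tagged `messages` list matches the chat completions API shape, while the single flattened string matches the legacy completions style.

```python
# Sketch: express the same instruction for the two endpoint styles.
# Helper names are illustrative assumptions.

def as_chat_messages(system: str, user: str) -> list[dict]:
    """Chat endpoint: a structured list of role-tagged messages."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

def as_completion_prompt(system: str, user: str) -> str:
    """Completions endpoint: roles flattened into one prompt string."""
    return f"{system}\n\nUser: {user}\nAssistant:"

system = "You translate English to French. Answer with the translation only."
user = "Good morning"
print(as_chat_messages(system, user)[1]["content"])  # → Good morning
```

The chat shape lets the service distinguish instructions from user input, which is why system-level constraints tend to hold up better there than in a flattened prompt.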
Prompt engineering is both art and engineering: iterate with small, measurable changes (e.g., tweak temperature, add examples, constrain formats) and validate outputs with automated tests before deploying.
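The iterate-and-validate loop above can be made concrete with a small automated check that runs before a prompt change ships. The constraint set here (required JSON keys, banned phrases) is an illustrative assumption.

```python
import json

# Sketch: a prompt-regression check suitable for CI. The specific
# constraints (required keys, banned phrases) are assumptions.

def validate_output(raw: str, required_keys: set[str], banned: set[str]) -> list[str]:
    """Return a list of violations; an empty list means the output passes."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    problems = []
    missing = required_keys - data.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    lowered = raw.lower()
    problems += [f"banned phrase: {p}" for p in banned if p in lowered]
    return problems

# Run against captured model responses before deploying a prompt change.
sample = '{"summary": "Invoice paid.", "sentiment": "positive"}'
print(validate_output(sample, {"summary", "sentiment"}, {"as an ai"}))  # → []
```

Checks like this turn "the prompt seems better" into a pass/fail signal you can gate deployments on.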
Key topics covered (high-level)
  • Prompt structure and components: system instructions, user instructions, examples, and constraints.
  • Endpoint considerations: completions vs. chat — when to use each and how to format prompts.
  • Advanced prompt patterns: few-shot examples, chain-of-thought prompting, step-by-step decomposition, and output validation.
  • Safety and control: using instructions, post-processing, and automated checks to reduce harmful or incorrect outputs.
  • Testing & monitoring: QA approaches to ensure prompt changes don’t degrade performance in production.
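As one example of the patterns listed above, few-shot prompting can be implemented by interleaving example user/assistant turns before the real query. The example data and helper below are illustrative assumptions; the message shape matches the chat completions API.

```python
# Sketch: assemble a few-shot chat request. The examples and helper
# are illustrative assumptions.

def few_shot_messages(system: str,
                      examples: list[tuple[str, str]],
                      query: str) -> list[dict]:
    """Interleave (input, output) example pairs before the real query."""
    messages = [{"role": "system", "content": system}]
    for example_in, example_out in examples:
        messages.append({"role": "user", "content": example_in})
        messages.append({"role": "assistant", "content": example_out})
    messages.append({"role": "user", "content": query})
    return messages

messages = few_shot_messages(
    "Classify ticket priority as low, medium, or high. Answer with one word.",
    [("The app crashes on login for every user.", "high"),
     ("Please update my billing address.", "low")],
    "Export to CSV sometimes times out.",
)
print(len(messages))  # → 6
```

The worked examples anchor both the output format (one word) and the decision boundary, which usually improves consistency more than lengthening the system instruction alone.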
Always validate generation outputs against concrete requirements (format, factuality, safety). Small prompt changes can substantially alter model behavior — use automated tests and monitoring before pushing updates to production.
