| Objective | What you’ll learn | Where it applies |
|---|---|---|
| Understand generative AI | Distinguish generative AI from traditional ML, and recognize when to use each | Evaluating use cases such as chatbots, content generation, and summarization |
| Deploy and configure models | Create Azure OpenAI resources, pick models, and set up endpoints | Production and development deployments, SDK and REST usage |
| Use Azure AI Foundry portal | Run experiments, catalog models, and manage AI assets | Experimentation, governance, and model lifecycle management |

This module covers:

- A concise overview of generative AI concepts and how they differ from predictive or classification models.
- Step-by-step guidance for provisioning an Azure OpenAI resource, selecting a model (e.g., GPT family), and creating an API endpoint.
- Introduction to the Azure AI Foundry portal for experimentation, model versioning, and asset management.

Prerequisites:

- An active Azure subscription and permission to create Cognitive Services/OpenAI resources.
- Basic familiarity with REST APIs or one of the Azure SDKs for your preferred language.
- A basic understanding of prompt design, token usage, and the cost implications of large models.
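
Because billing for large models is token-based, it helps to estimate token counts and cost before sending requests. The sketch below uses a rough characters-per-token heuristic and an illustrative price; both are assumptions for demonstration only. For accurate counts, use a real tokenizer (such as tiktoken) and the current Azure OpenAI pricing page.

```python
# Rough token and cost estimation for a prompt.
# ASSUMPTIONS: the ~4 characters-per-token heuristic and the price below
# are illustrative only; use a real tokenizer (e.g. tiktoken) and current
# Azure OpenAI pricing for accurate figures.

def estimate_tokens(text: str) -> int:
    """Very rough estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, completion_tokens: int,
                  price_per_1k_tokens: float = 0.002) -> float:
    """Estimate request cost as (prompt tokens + completion tokens) * unit price."""
    total_tokens = estimate_tokens(prompt) + completion_tokens
    return total_tokens / 1000 * price_per_1k_tokens

prompt = "Summarize the following support ticket in two sentences: ..."
print(estimate_tokens(prompt))
print(estimate_cost(prompt, completion_tokens=100))
```

Even a crude estimate like this is useful for spotting prompts that will be disproportionately expensive before they reach a paid endpoint.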
Before you begin: access to Azure OpenAI may require requesting access or enabling preview features depending on your subscription and region. Check the Azure OpenAI quickstart and subscription requirements before provisioning resources.
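
Once a resource and model deployment exist, requests go to a deployment-specific REST endpoint. The sketch below only assembles the request; the endpoint, deployment name, API version, and key are placeholders you would replace with values from your own resource, and you should confirm the current API version against the Azure OpenAI quickstart.

```python
import json

# Sketch of assembling a chat-completions REST call for an Azure OpenAI
# deployment. All values passed in below are PLACEHOLDERS -- substitute
# your own resource endpoint, deployment name, API version, and key.

def build_chat_request(endpoint: str, deployment: str, api_version: str,
                       api_key: str, messages: list) -> tuple:
    """Return (url, headers, body) for a chat completions request."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = json.dumps({"messages": messages, "max_tokens": 100})
    return url, headers, body

url, headers, body = build_chat_request(
    "https://my-resource.openai.azure.com",  # placeholder endpoint
    "my-gpt-deployment",                     # placeholder deployment name
    "2024-02-01",                            # example API version; verify current
    "<your-api-key>",
    [{"role": "user", "content": "Summarize: Azure AI Foundry basics."}],
)
# Send with any HTTP client (urllib.request, requests, etc.), or use an
# Azure SDK, which wraps this same endpoint shape for you.
```

Keeping request construction separate from sending makes it easy to inspect and log what will be billed before anything leaves your machine.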

Resources:

- Azure OpenAI documentation and quickstarts: https://learn.microsoft.com/azure/cognitive-services/openai/quickstart?tabs=command-line
- Azure AI overview and services: https://learn.microsoft.com/azure/ai-services/
- OpenAI research (GPT family): https://openai.com/research/gpt-4