In this guide, you’ll create a simple “Hello World” LCEL chain that connects a prompt template, a language model (LLM), and an output parser. You’ll then invoke the chain, inspect its input/output schemas, and explore individual component schemas.
Table of Contents
- Prerequisites
- Setup
- Build the Chain
- Invoke the Chain
- Inspect Input Schema
- Inspect Output Schema
- Explore Component Schemas
- Next Steps
- References
1. Prerequisites
- Python 3.8+
- `langchain-core` and `langchain-openai` installed
Ensure your OpenAI API key is set in the OPENAI_API_KEY environment variable before running any examples.
2. Setup
Import the necessary classes and define your prompt template, LLM, and output parser:
3. Build the Chain
Chain the components together using the pipe operator (|):
4. Invoke the Chain
Provide the question input and call the chain:
5. Inspect Input Schema
Every LCEL chain exposes its input schema. To view it:
6. Inspect Output Schema
Similarly, the chain’s output schema shows the format of the result:
7. Explore Component Schemas
You can inspect schemas for each component in the chain.
7.1 LLM Input Schema
| Property | Type | Description |
|---|---|---|
| content | string or object | Message content |
| additional_kwargs | object | Extra LLM parameters |
| type | string (default “ai”) | Message role (AI only) |
7.2 LLM Output Schema
8. Next Steps
- Integrate custom prompt templates and parsers
- Combine multiple LLMs or logic in a single chain
- Use the pass-through element to include custom Python code