Receiving the LLM Response
Typically, a request sends a prompt (and optional context) to the model and receives the completion back as plain text.

Formatting and Parsing LLM Output
Below are patterns for transforming raw LLM responses into structured data.

Example: Calculating Date Differences
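To make the date-difference example concrete, here is a sketch that parses two dates out of a model reply and computes the number of days between them. The reply text, its `Start:`/`End:` labels, and the ISO date format are all assumptions for illustration:

```python
from datetime import datetime

# Assumed raw model reply; real replies may vary in wording and format
raw_reply = "Start: 2024-01-15\nEnd: 2024-03-01"

def parse_date(line: str) -> datetime:
    # Take the text after the first colon and strip surrounding whitespace
    return datetime.strptime(line.split(":", 1)[1].strip(), "%Y-%m-%d")

start_line, end_line = raw_reply.strip().splitlines()
start, end = parse_date(start_line), parse_date(end_line)
print((end - start).days)  # → 46
```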
Suppose the LLM returns two dates in plain text. Always strip() your strings to remove extra whitespace or escape characters before parsing.

Structuring Output in Desired Formats
As an alternative, ask the LLM to return JSON for easier parsing:

| Format | Library | Use Case |
|---|---|---|
| JSON | json | Fast and ubiquitous |
| YAML | PyYAML | Human-readable configuration |
| XML | xml.etree | Interoperability with legacy |
| CSV | csv | Tabular data exchange |
Choose the format that best fits your downstream requirements and existing data pipeline.
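For the JSON route, parsing is a one-liner once the reply is clean. A minimal sketch, assuming the model was instructed to return a bare JSON object (the field names and reply string below are assumptions; real replies may wrap the JSON in extra prose that you would need to strip first):

```python
import json

# Assumed raw model reply containing only a JSON object
raw_reply = '{"start_date": "2024-01-15", "end_date": "2024-03-01"}'

# strip() first, in case the reply carries stray whitespace or newlines
data = json.loads(raw_reply.strip())
print(data["start_date"])  # → 2024-01-15
```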
Cleaning and Validating the Output
A robust post-processing pipeline should:

- Trim whitespace and remove unwanted escape characters
- Validate date formats (e.g., using datetime.strptime)
- Check numeric ranges and required fields
- Handle missing or malformed values gracefully
Skipping validation can lead to runtime errors or corrupted data in production.
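The checks above can be sketched as a small validator that collects errors instead of raising on the first problem. The field names (`date`, `amount`) and numeric bounds are assumptions for illustration:

```python
from datetime import datetime

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors (empty list means valid)."""
    errors = []

    # Required fields (assumed schema)
    for field in ("date", "amount"):
        if field not in record or record[field] in ("", None):
            errors.append(f"missing field: {field}")

    # Date format check via strptime
    if record.get("date"):
        try:
            datetime.strptime(str(record["date"]).strip(), "%Y-%m-%d")
        except ValueError:
            errors.append("bad date format")

    # Numeric range check (assumed bounds)
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and not (0 <= amount <= 1_000_000):
        errors.append("amount out of range")

    return errors

print(validate_record({"date": "2024-03-01", "amount": 42}))   # → []
print(validate_record({"date": "03/01/2024", "amount": -5}))   # → ['bad date format', 'amount out of range']
```

Returning a list of errors, rather than raising, lets the caller decide whether to retry the model, repair the record, or drop it.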
Next Steps: Model I/O Patterns
Explore best practices for serializing and deserializing data when calling language models. Learn how to:

- Stream outputs incrementally
- Implement retry logic and error handling
- Cache or batch requests for performance
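As a taste of the retry pattern, here is a generic sketch with exponential backoff; `call` stands in for any hypothetical function that invokes the model and may fail transiently:

```python
import time

def call_with_retries(call, max_attempts=3, base_delay=1.0):
    """Call `call()` and retry on exception with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # all attempts exhausted; surface the last error
            time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...
```

In production you would typically narrow the `except` clause to the provider's transient error types (timeouts, rate limits) rather than catching every exception.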