Batch Processing
Learn how to send multiple prompts in one go using OpenAI’s Python client. This guide covers setup, code structure, and best practices for efficient batch processing.
Table of Contents
- Prerequisites
- Install & Import
- Define Your Prompts
- Helper Function for a Single Prompt
- Batch Processing Loop
- Inspecting Results
- Run the Script
- Reference Links
Prerequisites
- Python 3.7+
- An OpenAI API key
- The openai Python package
Note
Store your API key as an environment variable for security:
export OPENAI_API_KEY="sk-your-api-key-here"
Alternatively, pass it directly in code (not recommended for production).
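If you rely on the environment variable, it helps to fail fast when it is missing. This small helper is illustrative (not part of the OpenAI client); note that `OpenAI()` also picks up `OPENAI_API_KEY` automatically when no `api_key` argument is given:

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the environment, raising a clear error if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable first.")
    return key
```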
Install & Import
Install the OpenAI Python client:
pip install openai
Then import and initialize the client:
from openai import OpenAI
client = OpenAI(api_key="sk-your-api-key-here")
Define Your Prompts
Build a list of user prompts to process in batch:
prompts = [
    "Tell me a story about a warrior princess",
    "Generate a list of 5 business ideas",
    "Explain the theory of relativity in simpler terms",
    "Write a poem about Michael Jordan"
]
Feel free to extend this list to dozens or hundreds of items.
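For larger lists, you may want to process prompts in fixed-size chunks (for example, to pause between groups and stay under rate limits). A minimal chunking helper, assuming nothing beyond the standard library:

```python
from typing import Iterator, List

def chunked(items: List[str], size: int) -> Iterator[List[str]]:
    """Yield successive slices of `items`, each at most `size` long."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```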
Helper Function for a Single Prompt
Encapsulate the API call in a reusable function:
def process_prompt(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=250,
        temperature=0.8
    )
    return response.choices[0].message.content
Model Parameters
| Parameter | Description | Example |
|---|---|---|
| model | The chat model to use | gpt-4 |
| messages | Conversation history for the model | [...] |
| max_tokens | Maximum number of tokens in the reply | 250 |
| temperature | Controls randomness (0.0–2.0) | 0.8 |
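API calls can fail transiently (rate limits, timeouts), so it is worth wrapping `process_prompt` in a retry loop before running a large batch. A minimal sketch with exponential backoff; the wrapper name and defaults are illustrative choices, not part of the OpenAI client:

```python
import time
from typing import Callable

def with_retries(fn: Callable[[str], str], prompt: str,
                 attempts: int = 3, delay: float = 1.0) -> str:
    """Call fn(prompt), retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn(prompt)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original error
            time.sleep(delay * (2 ** attempt))
```

In the batch loop below, `with_retries(process_prompt, prompt)` can be dropped in wherever `process_prompt(prompt)` is called.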
Batch Processing Loop
Iterate through all prompts and collect responses:
results = []
for prompt in prompts:
    reply = process_prompt(prompt)
    results.append(reply)
Warning
Batch requests can incur higher costs and rate limits. Monitor your usage in the OpenAI Dashboard.
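Because each call waits for the previous one to finish, the sequential loop above can be slow for long lists. If your rate limits allow it, a thread pool can issue several requests concurrently; this sketch uses only the standard library, and `executor.map` keeps results in the same order as the input prompts:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def process_batch(prompts: List[str], fn: Callable[[str], str],
                  max_workers: int = 4) -> List[str]:
    """Apply fn to every prompt concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(fn, prompts))
```

Usage would be `results = process_batch(prompts, process_prompt)`; keep `max_workers` modest to avoid tripping rate limits.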
Inspecting Results
Print each prompt alongside its generated response:
for idx, (prompt, reply) in enumerate(zip(prompts, results), start=1):
    print(f"Prompt {idx}: {prompt}")
    print(f"Response {idx}:\n{reply}\n")
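Beyond printing, you will often want to persist the results for later analysis. A minimal sketch that pairs each prompt with its response and writes them to a JSON file (the filename is an arbitrary choice):

```python
import json
from typing import List

def save_results(prompts: List[str], results: List[str],
                 path: str = "batch_results.json") -> None:
    """Write prompt/response pairs to a JSON file."""
    records = [{"prompt": p, "response": r} for p, r in zip(prompts, results)]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2, ensure_ascii=False)
```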
Sample output:
Prompt 1: Tell me a story about a warrior princess
Response 1:
Once upon a time in the verdant kingdom of Eldoria, a fierce warrior princess...
Prompt 2: Generate a list of 5 business ideas
Response 2:
1. Eco-friendly packaging startup
2. Virtual event planning service
...
Prompt 3: Explain the theory of relativity in simpler terms
Response 3:
The theory of relativity, proposed by Albert Einstein, shows how time and space are linked...
Prompt 4: Write a poem about Michael Jordan
Response 4:
In courts of hardwood, he stood so tall...
Run the Script
Save the code to batch_processing.py and execute:
python batch_processing.py
Extend or customize the process_prompt function to add streaming output, error handling, or alternative model parameters as needed.