Introduction to OpenAI
Text Generation
Chat Completions
Unlock richer conversational experiences by switching from the legacy completions endpoints to the modern chat.completions.create interface. This guide walks you through migrating your code, customizing your client, leveraging message roles, and fine-tuning parameters for GPT-3.5-Turbo, GPT-4, and beyond.
Why Migrate from Legacy Endpoints
OpenAI’s older completions.create and Completion.create endpoints stopped receiving updates as of July 2023. The chat completions API supports structured conversations with roles, function calling, and more control over model behavior.
Deprecated Endpoints
Avoid using legacy calls: they no longer receive feature updates and may be removed in future releases.
Legacy Usage Examples
# Legacy completions endpoint via the current SDK (no longer recommended)
from openai import OpenAI

client = OpenAI()
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Write a tagline for an ice cream shop."
)
# Legacy openai-python (pre-1.0 interface, also deprecated)
import openai

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Write a tagline for an ice cream shop."
)
Modern Chat Completions
Switch to the chat API for richer, role-based interactions:
from openai import OpenAI

openai = OpenAI(api_key="sk-...")
response = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a tagline for an ice cream shop."}]
)
print(response.choices[0].message.content)
Naming Your OpenAI Client
Feel free to name your client object whatever you like. Here’s a standard pattern:
from openai import OpenAI

openai = OpenAI(api_key="sk-...")

def chat_comp(prompt):
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=250
    )
    return response.choices[0].message.content

print(chat_comp("How can I make more money?"))
Or with a custom client name:
from openai import OpenAI

RobotBestFriend = OpenAI(api_key="sk-...")

def chat_comp(prompt):
    response = RobotBestFriend.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=250
    )
    return response.choices[0].message.content

print(chat_comp("How can I make more money?"))
Console output:
There are several strategies you can consider to increase your income. Here are some ideas:
1. **Negotiate Your Salary:** ...
2. **Acquire New Skills:** ...
3. **Side Gigs:** ...
4. **Investing:** ...
5. **Start a Business:** ...
6. **Monetize Hobbies:** ...
Secure Your API Key
Never commit your api_key
to public repositories. Use environment variables or secret management tools.
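As a minimal sketch of that advice, you can read the key from an environment variable at runtime (load_api_key is a hypothetical helper; OPENAI_API_KEY is the variable the SDK also checks by default):

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fetch the API key from the environment, failing loudly if it is unset,
    so the key never lives in source control."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before running")
    return key

# Usage (assumes the variable has been exported in your shell):
# client = OpenAI(api_key=load_api_key())
```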
Roles in Chat Messages
The messages array defines each turn in a conversation. Supported roles:
Role | Purpose | Example
---|---|---
system | Sets global context, tone, or behavior for the assistant. | "You are a helpful assistant."
user | End user’s input or question. | "What's the weather today?"
assistant | Model’s prior responses in a multi-turn conversation. | "The weather is sunny and 75°F."
function | Returns the result of a function call the model requested. | "function_call": {"name": "get_weather", "arguments": {...}}
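Putting the first three roles together, a multi-turn history is just a list of role/content dictionaries (the weather values below are illustrative, and add_turn is a hypothetical helper, not part of the SDK):

```python
# A sketch of a multi-turn conversation using the roles above.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather today?"},
    {"role": "assistant", "content": "The weather is sunny and 75°F."},
    {"role": "user", "content": "Should I bring a jacket?"},
]

def add_turn(history, role, content):
    """Append one turn; the model sees the full history on every call."""
    history.append({"role": role, "content": content})
    return history
```

Because the API is stateless, you re-send the whole list on each request, appending the model's reply and the next user message as you go.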
JavaScript Example
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: "sk-..." });

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: [
        {
          type: "text",
          text: "You are a friendly coding tutor who uses southern idioms."
        }
      ]
    },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Are semicolons optional in JavaScript?"
        }
      ]
    }
  ]
});

console.log(response.choices[0].message.content);
Adding a System Prompt in Python
Include both system and user messages to guide the model’s tone:
from openai import OpenAI

RobotBestFriend = OpenAI(api_key="sk-...")

def chat_comp(prompt):
    response = RobotBestFriend.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a southern belle."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=250
    )
    return response.choices[0].message.content

print(chat_comp("What are some side hustles I can try?"))
print(chat_comp("What are some side hustles I can try?"))
Console output:
There are several ways to bring a little more income if you set your mind to it, sugar...
Order Matters
Place the system message before the user message so the context is applied first.
Exploring the create Signature
The chat.completions.create method supports numerous parameters for fine-tuning:
def create(
    *,
    model: str,
    messages: Iterable[ChatCompletionMessageParam],
    max_tokens: int | None = None,
    n: int | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    presence_penalty: float | None = None,
    frequency_penalty: float | None = None,
    logit_bias: dict[str, int] | None = None,
    functions: Iterable[Function] | None = None,
    function_call: FunctionCall | None = None,
    stream: bool | None = None,
    ...
):
    ...
Experiment with these options for customized responses:
- temperature: Controls randomness; lower values give more deterministic output, higher values more varied output.
- top_p: Nucleus sampling; restricts choices to the smallest set of tokens whose cumulative probability reaches top_p.
- presence_penalty & frequency_penalty: Discourage repetition and encourage topic variety.
- functions & function_call: Let the model request calls to functions you define.
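As a sketch, a request tuned for more exploratory output might combine several of these parameters (the values below are illustrative, not recommended defaults):

```python
# Illustrative sampling parameters; pass these as keyword arguments
# to chat.completions.create on your client object.
request_kwargs = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Brainstorm five ice cream flavors."}],
    "temperature": 1.2,       # higher = more random sampling
    "top_p": 0.9,             # keep the top 90% of probability mass
    "presence_penalty": 0.6,  # nudge the model toward new topics
    "n": 3,                   # return three candidate completions
}

# response = client.chat.completions.create(**request_kwargs)
# for choice in response.choices:
#     print(choice.message.content)
```

Note that with n=3, the response's choices list contains three completions, each billed for its own output tokens.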