Tutorial guiding users through setting up the OpenAI Python client, making chat API calls, extracting responses, and estimating token usage and costs
Let’s start with the lab files you’ll work through:
```
README.md
task_1_import_setup.py
task_2_client_initialization.py
task_3_api_call_explained.py
task_4_extract_response.py
task_5_tokens_and_costs.py
verify_environment.py
```
In this lesson you’ll learn how to make your first AI API calls with the OpenAI Python client. The goal is practical: verify your environment, connect to the API, make a chat completion request, extract the assistant’s reply, and inspect token usage and cost — all in progressive steps.
Before writing code, verify that your runtime is ready: activate the virtual environment, confirm Python is available, ensure the OpenAI package is installed, and verify your API keys are present. Run these commands in the lab VM:
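The lab ships verify_environment.py for these checks. A plausible command sequence follows; the venv path and activation step are assumptions, so adjust them to your setup:

```shell
cd /root/code
# Activate the lab's virtual environment (path is an assumption)
source venv/bin/activate
# Confirm the interpreter and the OpenAI package
python3 --version
python3 -c "import openai; print(openai.__version__)"
# Confirm the API key is set without printing it
[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set"
# Run the lab's readiness checks
python3 verify_environment.py
```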
If verification succeeds, the script prints its readiness checks and exits. If something fails, re-check your virtual environment and confirm that the required packages (such as the OpenAI Python package) are installed. Environment variables commonly used in these examples:
| Environment Variable | Purpose |
| --- | --- |
| OPENAI_API_KEY | Your secret API key (keep it private) |
| OPENAI_API_BASE | Optional API base URL (use when pointing to a custom endpoint) |
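If either variable is missing, set it in your shell before running the tasks. The key below is a placeholder; the base URL is the lab endpoint shown later in the example output:

```shell
# Substitute your real key — this value is a placeholder
export OPENAI_API_KEY="sk-your-key-here"
# Lab endpoint from the examples; omit to use the default OpenAI API
export OPENAI_API_BASE="https://dev.kk-ai-keys.kodekloud.com/v1"
```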
OpenAI builds ChatGPT and families of large language models (e.g., GPT-4, GPT-4.1 Mini, GPT-3.5). The OpenAI Python client is the bridge between your Python code and the API.
Open task_1_import_setup.py. You need to import the OpenAI client library and the os module to read environment variables. The following file shows the required imports and writes a completion marker for the lab system.
```python
#!/usr/bin/env python3
"""Task 1: Import Required Libraries

Learn what libraries we need for AI API calls.
"""

# Step 1: Import the OpenAI library
# This library helps us talk to AI models
import openai

# Step 2: Import os for environment variables
# This helps us access API keys safely
import os

print("✅ Step 1 Complete: Libraries imported!")
print("- openai: For making API calls")
print("- os: For accessing environment variables")

# Create marker for the automated lab system
os.makedirs("/root/markers", exist_ok=True)
with open("/root/markers/task1_imports_complete.txt", "w") as f:
    f.write("SUCCESS")
```
Run it like this:
```
root@controlplane ~/code via v3.12.3 (venv) ❯ python3 /root/code/task_1_import_setup.py
✅ Step 1 Complete: Libraries imported!
- openai: For making API calls
- os: For accessing environment variables
```
Open task_2_client_initialization.py and initialize the OpenAI client using environment variables. This example creates a client object you can reuse across requests.
```python
#!/usr/bin/env python3
"""Task 2: Initialize the OpenAI client

Set up the client using environment variables.
"""

import openai
import os

# Note: the openai>=1.0 client takes base_url, not api_base
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("OPENAI_API_BASE"),
)

print("✅ Step 2 Complete: Connected to OpenAI!")
api_key = os.getenv("OPENAI_API_KEY")
api_key_preview = api_key[:10] + "..." if api_key else "(not set)"
print(f"- API Key: {api_key_preview}")
print(f"- Base URL: {os.getenv('OPENAI_API_BASE')}")
```
Example run:
```
root@controlplane ~/code via v3.12.3 (venv) ❯ python3 /root/code/task_2_client_initialization.py
✅ Step 2 Complete: Connected to OpenAI!
- API Key: Sk-kKA1-86...
- Base URL: https://dev.kk-ai-keys.kodekloud.com/v1
```
If client initialization fails, confirm your environment variables are set and that the model you request is available for your account.
Chat completions implement conversational interactions: you send an ordered list of messages, each tagged with a role, and the model returns an assistant message.
Open task_3_api_call_explained.py. Configure the model and messages, then make a call where the AI introduces itself.
```python
#!/usr/bin/env python3
"""Task 3: Make your first API call

Send a simple user message and print the full response.
"""

import openai
import os

# Note: the openai>=1.0 client takes base_url, not api_base
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("OPENAI_API_BASE"),
)

response = client.chat.completions.create(
    model="openai/gpt-4.1-mini",
    messages=[
        {"role": "user", "content": "Hello AI, please introduce yourself"}
    ],
)

print("✅ API Call Successful!")
print()
print("🤖 AI said:")
print(response.choices[0].message.content)
print()
print(f"📊 Total tokens used: {response.usage.total_tokens}")
```
Example output:
```
root@controlplane ~/code via v3.12.3 (venv) ❯ python3 /root/code/task_3_api_call_explained.py
✅ API Call Successful!

🤖 AI said:
Hello! I'm ChatGPT, your AI assistant here to help with a wide range of tasks—from answering questions and providing explanations to creative writing and problem-solving. How can I assist you today?

📊 Total tokens used: 53
```
If you receive PermissionDenied or “model not supported” errors, switch to a model available on your account or check API permissions.
Responses contain nested structures. The straightforward path to the assistant’s reply is:
```python
response.choices[0].message.content
```
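To see why the chain is choices → [0] → message → content, here is the response's nesting mocked with SimpleNamespace stand-ins; the real SDK returns typed model objects, but the shape is the same:

```python
from types import SimpleNamespace

# A stand-in mirroring a chat completion response's structure;
# real responses are SDK model objects, not SimpleNamespace.
response = SimpleNamespace(
    choices=[  # a list — requests with n > 1 return several choices
        SimpleNamespace(
            message=SimpleNamespace(role="assistant", content="Hello!"),
            finish_reason="stop",
        )
    ],
    usage=SimpleNamespace(prompt_tokens=5, completion_tokens=2, total_tokens=7),
)

print(response.choices[0].message.content)  # → Hello!
print(response.usage.total_tokens)          # → 7
```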
Open task_4_extract_response.py to extract and print that text.
```python
#!/usr/bin/env python3
"""Task 4: Extract the AI's response

Show how to retrieve the assistant's message from the response object.
"""

import openai
import os

# Note: the openai>=1.0 client takes base_url, not api_base
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("OPENAI_API_BASE"),
)

response = client.chat.completions.create(
    model="openai/gpt-4.1-mini",
    messages=[
        {"role": "user", "content": "What is Python in one sentence?"}
    ],
)

ai_text = response.choices[0].message.content

print("🍬 Successfully extracted the AI's response!")
print("\n" + "=" * 60)
print("Question: What is Python in one sentence?")
print("\nAI's Answer:")
print(ai_text)
print("=" * 60)

# Show the golden path
print("\n🔑 THE GOLDEN PATH - Memorize this:")
print("response.choices[0].message.content")

print("\n✅ Task 4 completed! You now know how to extract AI responses!")
```
Example output:
```
root@controlplane ~/code via v3.12.3 (venv) ❯ python3 /root/code/task_4_extract_response.py
🍬 Successfully extracted the AI's response!

============================================================
Question: What is Python in one sentence?

AI's Answer:
Python is a high-level, interpreted programming language known for its readability, simplicity, and versatility across various applications.
============================================================

🔑 THE GOLDEN PATH - Memorize this:
response.choices[0].message.content

✅ Task 4 completed! You now know how to extract AI responses!
```
Open task_5_tokens_and_costs.py. The response includes a usage object with three fields: prompt_tokens, completion_tokens, and total_tokens. Use these values to compute a simple cost estimate with your per-token pricing.
```python
#!/usr/bin/env python3
"""Task 5: Extract token usage and compute cost

Read usage from the response and print a cost breakdown.
"""

import openai
import os

# Example per-token prices (replace with real prices for accurate cost)
INPUT_TOKEN_PRICE = 0.000000789  # example price per input token in dollars
OUTPUT_TOKEN_PRICE = 0.0000023   # example price per output token in dollars

# Note: the openai>=1.0 client takes base_url, not api_base
client = openai.OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url=os.getenv("OPENAI_API_BASE"),
)

response = client.chat.completions.create(
    model="openai/gpt-4.1-mini",
    messages=[
        {"role": "user", "content": "Explain what an API is in two sentences."}
    ],
)

input_tokens = response.usage.prompt_tokens
output_tokens = response.usage.completion_tokens
total_tokens = response.usage.total_tokens

input_cost = input_tokens * INPUT_TOKEN_PRICE
output_cost = output_tokens * OUTPUT_TOKEN_PRICE
total_cost = input_cost + output_cost

print("📊 Token Usage Report:")
print("=" * 50)
print(f" Your question used: {input_tokens} tokens")
print(f" AI's response used: {output_tokens} tokens")
print(f" Total tokens billed: {total_tokens} tokens")
print("=" * 50)
print("\n🧾 Cost Breakdown for This Call:")
print(f"Input cost: ${input_cost:.6f} ({input_tokens} tokens)")
print(f"Output cost: ${output_cost:.6f} ({output_tokens} tokens)")
print(f"TOTAL COST: ${total_cost:.6f}")
```
Sample output (varies per request and model):
```
root@controlplane ~/code via v3.12.3 (venv) ❯ python3 /root/code/task_5_tokens_and_costs.py
📊 Token Usage Report:
==================================================
 Your question used: 19 tokens
 AI's response used: 301 tokens
 Total tokens billed: 320 tokens
==================================================

🧾 Cost Breakdown for This Call:
Input cost: $0.000015 (19 tokens)
Output cost: $0.000692 (301 tokens)
TOTAL COST: $0.000707
```
Be careful with long model responses or high-frequency calls: costs can add up quickly. Use concise prompts, set max_tokens when needed, and monitor your usage.
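One way to keep spend predictable is to fold the per-token arithmetic from Task 5 into a helper and budget against a max_tokens cap. A sketch using the example prices from above (replace them with your model's real rates):

```python
# Example per-token prices from Task 5 — not real billing rates
INPUT_TOKEN_PRICE = 0.000000789   # dollars per input token
OUTPUT_TOKEN_PRICE = 0.0000023    # dollars per output token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the dollar cost of one chat completion call."""
    return (prompt_tokens * INPUT_TOKEN_PRICE
            + completion_tokens * OUTPUT_TOKEN_PRICE)

# Passing max_tokens=60 in chat.completions.create bounds the completion
# side, so the worst-case cost of a call is easy to budget up front:
worst_case = estimate_cost(prompt_tokens=50, completion_tokens=60)
print(f"Worst-case cost with a 60-token cap: ${worst_case:.6f}")
```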
In this lesson you:
- Verified your environment and initialized the OpenAI client
- Made your first chat completion request
- Extracted assistant replies via response.choices[0].message.content
- Read token usage and estimated costs

Next steps: experiment with system messages to control behavior, try longer multi-turn conversations, and test different models to compare quality and cost.