After completing your fine-tuning job, you can immediately evaluate your custom model either from the command line or within a Python script. This guide walks you through both methods.

1. Test Your Fine-Tuned Model via CLI

Use the openai api completions.create command and specify your fine-tuned model’s ID, which you can copy from the fine-tuning job output:
openai api completions.create \
  -m "davinci:ft-janakiram-associates:sotu-qna-2023-08-05-17-12-17" \
  -p "When was the State of the Union presented?\n\n###\n\n" \
  --stop "['END','***']"
Example response:
When was the State of the Union presented?

### 

The State of the Union was presented on February 5, 2023.
Replace the model ID with your own fine-tuned model name. You can find it in the CLI output or in your OpenAI Dashboard.
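If you no longer have the job output handy, you can also look the ID up programmatically. Here is a minimal sketch using the legacy openai Python SDK (v0.x, the same version used throughout this guide):

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# List recent fine-tune jobs; fine_tuned_model is None until a job succeeds
for job in openai.FineTune.list()["data"]:
    print(job["id"], job["status"], job["fine_tuned_model"])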

2. Fine-Tuning Workflow Overview

Here’s a quick summary of the end-to-end fine-tuning process:
| Step | Description | CLI Example |
|------|-------------|-------------|
| 1 | Prepare the dataset (clean & format JSONL) | openai tools fine_tunes.prepare_data -f data.jsonl |
| 2 | Upload and preprocess | Handled automatically by the API |
| 3 | Create and monitor the fine-tune job | openai api fine_tunes.create -t data_prepared.jsonl -m davinci |
| 4 | Test your deployed custom model | Use CLI (completions.create) or integrate via code |
For detailed instructions, see the OpenAI Fine-Tuning Guide.
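If you prefer to drive the same workflow from Python instead of the CLI, the legacy openai SDK exposes equivalent calls. A rough sketch, assuming data_prepared.jsonl already exists:

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Step 2: upload the prepared JSONL file for fine-tuning
upload = openai.File.create(file=open("data_prepared.jsonl", "rb"), purpose="fine-tune")

# Step 3: create the fine-tune job against the base davinci model
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print("Job ID:", job["id"])

# Check on the job later; fine_tuned_model is populated once it succeeds
status = openai.FineTune.retrieve(job["id"])
print(status["status"], status["fine_tuned_model"])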

3. Test Your Model in Python

This Python example demonstrates:
  • Configuring your API key
  • Adding a suffix to control responses
  • Looping through multiple prompts
  • Printing questions with answers
import os
import openai

# 1. Set the API key (keep it secure; never commit it to source control)
openai.api_key = os.getenv("OPENAI_API_KEY")

# 2. Suffix to enforce confident answers or "I don't know"
suffix = " answer only if you know it. Otherwise say 'I don't know'\n\n###\n\n"

# 3. Replace with your fine-tuned model ID
model_id = "davinci:ft-janakiram-associates:sotu-qna-2023-08-05-17-12-17"

# 4. Define prompts
questions = [
    "Who presented the State of the Union?",
    "When was the State of the Union presented?",
    "What is the key takeaway from the domestic policy?",
    "What positive trends in the US economy did President Biden highlight?",
    "What message did President Biden send to Republicans in his 2023 SOTU?"
]

# 5. Invoke the model for each question
for prompt in questions:
    response = openai.Completion.create(
        model=model_id,
        prompt=prompt + suffix,
        max_tokens=500,
        temperature=0,
        frequency_penalty=2.0,
        stop=["END", "***"]
    )
    answer = response.choices[0].text.strip()
    print(f"Q: {prompt}\nA: {answer}\n")

Key Parameters

| Parameter | Purpose | Example Value |
|-----------|---------|---------------|
| max_tokens | Max length of the generated answer | 500 |
| temperature | Controls randomness (0 = deterministic) | 0 |
| frequency_penalty | Reduces repeated phrases | 2.0 |
| stop | Tokens where generation halts | ["END", "***"] |
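When tuning these values, it helps to wrap the call in a small helper so you can vary one parameter at a time. A minimal sketch that reuses model_id and suffix from the script above (the ask helper is purely illustrative):

def ask(prompt, temperature=0, max_tokens=500):
    """Query the fine-tuned model with adjustable sampling parameters."""
    response = openai.Completion.create(
        model=model_id,
        prompt=prompt + suffix,
        max_tokens=max_tokens,
        temperature=temperature,
        frequency_penalty=2.0,
        stop=["END", "***"]
    )
    return response.choices[0].text.strip()

# Compare deterministic output against slightly more varied phrasing
print(ask("Who presented the State of the Union?"))
print(ask("Who presented the State of the Union?", temperature=0.7))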

4. Why This Approach Works

  • Self-contained inference: The model relies solely on its fine-tuned parameters; no external context is injected at query time.
  • Controlled output: The suffix nudges the model to admit uncertainty, reducing (though not eliminating) hallucinations.
  • Batchable prompts: Easily loop through multiple questions without managing conversational state.
With these examples, you can seamlessly integrate your fine-tuned OpenAI model into command-line tools, Jupyter notebooks, or production services. Apply the same pattern to tasks like summarization or classification by adjusting the training dataset and prompts.
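For example, a fine-tuned classifier can be queried with the same pattern but tighter decoding settings. The sketch below is illustrative only: the model ID is a placeholder, and it assumes a hypothetical sentiment model trained to emit single-token labels:

# Hypothetical classifier fine-tune; the model ID below is a placeholder
classifier_id = "davinci:ft-your-org:sentiment-2023-08-05"

response = openai.Completion.create(
    model=classifier_id,
    prompt="The keynote was inspiring and well paced.\n\n###\n\n",
    max_tokens=1,     # single-token labels, so one token is enough
    temperature=0,    # deterministic: always return the top label
    logprobs=2        # optionally inspect confidence over the top labels
)
print(response.choices[0].text.strip())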