- **Retaining previous messages preserves context and style.**
  Conversation history lets the model remember the topic, user preferences, and previous instructions. This continuity helps the model avoid repetition, maintain the intended tone, and make context-aware decisions (for example, applying a previously set role or following a long-running task).
- **Few-shot learning with example exchanges guides behavior.**
  Including a few representative user–assistant pairs in the history teaches the model the pattern you expect it to follow. Few-shot examples show the mapping from input to output, enabling the model to generalize to new, similar inputs without retraining.
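The two ideas above can be sketched together: a prompt assembled from a system role, a few example exchanges, and the retained history. This is a minimal illustration using the common chat-message format (`{"role": ..., "content": ...}`); the function name and the sample messages are assumptions for the example, not part of any real API call.

```python
def build_messages(system, few_shot_pairs, history, new_input):
    """Assemble the message list sent to a chat model."""
    messages = [{"role": "system", "content": system}]
    # Few-shot examples come first so they establish the expected pattern.
    for user_text, assistant_text in few_shot_pairs:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    # Retained history preserves topic, tone, and prior instructions.
    messages.extend(history)
    messages.append({"role": "user", "content": new_input})
    return messages

msgs = build_messages(
    system="You are a concise travel assistant.",
    few_shot_pairs=[("Best month for Kyoto?", "April, for cherry blossoms.")],
    history=[{"role": "user", "content": "I prefer budget trips."},
             {"role": "assistant", "content": "Noted: budget-friendly options only."}],
    new_input="Suggest a weekend destination.",
)
```

Keeping the few-shot pairs ahead of the live history makes it easy to swap examples without disturbing the ongoing conversation.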

Including many past messages helps preserve context, but models have token limits. For long histories, summarize earlier turns or select representative examples to keep the prompt concise and relevant.
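One way to stay within a token limit is to keep the system message and only the most recent turns that fit a budget. The sketch below uses a rough four-characters-per-token estimate as an assumption; real code would measure with an actual tokenizer (for example, a library such as tiktoken).

```python
def estimate_tokens(text):
    # Crude heuristic: roughly 4 characters per token. Replace with a
    # real tokenizer in production.
    return max(1, len(text) // 4)

def trim_history(messages, budget):
    """Keep the system message plus the most recent turns that fit."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept = []
    used = sum(estimate_tokens(m["content"]) for m in system)
    for msg in reversed(rest):          # walk newest turns first
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break                        # older turns no longer fit
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))
```

Walking the history newest-first means the turns most relevant to the current exchange are the last to be dropped.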
Avoid storing or sending sensitive personal data in conversation history. If you must retain private information, use secure storage and only include minimal, necessary context when calling the model.
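A simple redaction pass can be applied before a message is stored or sent. This is only an illustrative sketch: the two patterns below cover emails and one common phone-number shape, and a real system would use a dedicated PII-detection service rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text):
    """Replace obvious emails and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

Redacting before storage, not just before the model call, keeps the sensitive values out of the retained history entirely.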
In this classification setup, the model labels each incoming message as Work, Personal, or Spam. The final user message is a new item to classify; the model should follow the system instruction and the examples to pick the correct label.
- System instruction: defines the task and the allowed outputs (Work, Personal, Spam).
- Few-shot examples: three user→assistant pairs show the expected mapping from message text to label.
- Final input: the model uses the system instruction plus the examples to infer the correct category for the new message. In this case, “Client requested a project update by Friday.” is best classified as Work.
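The classification setup described above can be sketched as a message list: one system instruction, three few-shot pairs, and the new message to label. The specific example texts are assumptions chosen to match the three categories; no API is called here.

```python
SYSTEM = "Classify each message as exactly one of: Work, Personal, Spam."

# One illustrative example per category.
FEW_SHOT = [
    ("Team standup moved to 10am tomorrow.", "Work"),
    ("Happy birthday! Cake at my place Saturday?", "Personal"),
    ("You won a free vacation. Click here to claim.", "Spam"),
]

def classification_prompt(new_message):
    """Build the full message list for one classification request."""
    messages = [{"role": "system", "content": SYSTEM}]
    for text, label in FEW_SHOT:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": new_message})
    return messages

prompt = classification_prompt("Client requested a project update by Friday.")
```

Because the assistant turns in the examples contain only the bare label, the model is nudged to answer with a single word rather than a full sentence.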
| Category | Typical signals | Example |
|---|---|---|
| Work | Mentions projects, clients, deadlines, meetings | Client requested a project update by Friday. |
| Personal | Invitations, social plans, family messages, casual tone | Want to grab dinner tonight? |
| Spam | Promotional language, click-to-claim offers, suspicious links | You won a free vacation. Click here to claim. |
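The “typical signals” column above could also drive a toy keyword baseline, useful as a sanity check or fallback alongside the model's few-shot classification. The keyword lists here are assumptions drawn loosely from the table, not a tested classifier.

```python
# Toy keyword heuristic mirroring the table's "typical signals" column.
SIGNALS = {
    "Work": ["project", "client", "deadline", "meeting"],
    "Personal": ["dinner", "birthday", "family", "tonight"],
    "Spam": ["free", "click here", "claim", "winner"],
}

def heuristic_label(message):
    """Return the first category whose keywords appear in the message."""
    text = message.lower()
    for label, keywords in SIGNALS.items():
        if any(k in text for k in keywords):
            return label
    return "Unknown"
```

A heuristic like this is brittle, which is exactly why the few-shot prompt is the primary classifier; it can still serve as a cheap cross-check.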
- Start with a clear system instruction to define role, tone, and expected output format.
- Include 2–5 high-quality few-shot examples that represent the variety of inputs you expect.
- Keep examples concise and consistent in formatting to avoid introducing ambiguity.
- For long sessions, periodically summarize earlier turns to reduce token usage while preserving context.
- Always mask or omit sensitive data unless explicitly required and securely handled.
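The periodic-summarization tip above can be sketched as collapsing the oldest turns into a single summary message once the history grows past a limit. The `summarize` helper here is a naive stand-in that joins first sentences; in practice you would ask the model itself to produce the summary.

```python
def summarize(messages):
    # Naive stand-in: keep each turn's first sentence. A real system would
    # send the old turns to the model and ask for a summary.
    points = [m["content"].split(".")[0] for m in messages]
    return "Summary of earlier conversation: " + "; ".join(points) + "."

def compact_history(history, max_turns=4):
    """Replace all but the most recent turns with one summary message."""
    if len(history) <= max_turns:
        return history
    old, recent = history[:-max_turns], history[-max_turns:]
    return [{"role": "system", "content": summarize(old)}] + recent
```

Running this compaction every few turns keeps token usage roughly constant while the summary carries the earlier context forward.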