Improve the accuracy and coverage of a question-answering solution built with Custom Question Answering in Azure Language Studio. This guide explains practical techniques—implicit learning, explicit learning (user feedback), and synonyms—and shows where to make these adjustments in Language Studio to produce faster, more relevant answers.

Overview

Fine-tuning a QnA knowledge base combines three complementary approaches:
  • Implicit learning: automatic alternate phrasing detection.
  • Explicit learning: using user feedback to reinforce correct answers.
  • Synonyms: mapping equivalent terms to improve intent matching.
These techniques work together to reduce ambiguous matches and increase the hit rate for real user queries.

Implicit learning (automatic alternate phrasing)

Implicit learning runs behind the scenes to detect alternate phrasings users might use for the same question. For instance, when a user asks, “How do I change my flight?”, the system may propose variations such as “Can I modify my booking?” or “I need to update my reservation.” Language Studio surfaces these suggested alternates so you can review and accept them into your knowledge base. Example of the answer entry these suggestions attach to (as returned by the service):
{
  "answers": [
    {
      "questions": ["How do I change my flight?"],
      "answer": "You can modify your flight booking by visiting our airline portal or calling 888-555-7890.",
      "score": 76.55,
      "id": 2
    }
  ]
}
Accepting implicit suggestions reduces manual work and helps the system generalize to real user language patterns without you adding every alternate phrasing.
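A client consuming a response shaped like the JSON above typically checks the confidence score before trusting the top match. The sketch below assumes a parsed `response` dict in that shape; the threshold value is illustrative, not a service default:

```python
# Minimal sketch: parse an answer payload shaped like the example above
# and apply an illustrative confidence threshold before trusting the match.
CONFIDENCE_THRESHOLD = 50.0  # assumption: tune per project

def pick_answer(response):
    """Return the top answer's text if its score clears the threshold."""
    answers = response.get("answers", [])
    if not answers:
        return None
    top = max(answers, key=lambda a: a.get("score", 0))
    if top.get("score", 0) >= CONFIDENCE_THRESHOLD:
        return top["answer"]
    return None  # low confidence: fall back to a default or clarifying prompt

response = {
    "answers": [
        {
            "questions": ["How do I change my flight?"],
            "answer": "You can modify your flight booking by visiting our airline portal or calling 888-555-7890.",
            "score": 76.55,
            "id": 2,
        }
    ]
}
print(pick_answer(response))
```

Falling back on low confidence is what creates the clarification opportunities that feed explicit learning, described next.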

Explicit learning (user feedback)

Explicit learning collects confirmatory signals from users. When the system returns multiple candidate answers and a user selects one, that selection is stored as feedback. Over time, this feedback helps the model rank the correct answer higher for similar queries by linking each record to the matched answer ID. Example: the answer entry returned to the user:
{
  "answers": [
    {
      "questions": ["How do I change my flight?"],
      "answer": "You can modify your flight booking by visiting our airline portal or calling 888-555-7890.",
      "score": 76.55,
      "id": 2
    }
  ]
}
Corresponding feedback record sent back to the system:
{
  "feedbackRecords": [
    {
      "userId": "user1",
      "userQuestion": "I need to reschedule my flight",
      "matchedId": 2
    }
  ]
}
Collecting these feedback records and submitting them to the service incrementally trains the matching behavior so the selected answer is favored for similar future queries.
Collecting user feedback can involve personal data. Ensure you follow your organization’s privacy policy and any applicable legal requirements before storing or sending identifiable feedback.
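Client-side, assembling such a record can be sketched as follows (anonymize or pseudonymize `userId` per the privacy guidance above; the exact feedback endpoint URL and API version vary by service release, so verify them against your resource's documentation):

```python
# Sketch: build a feedbackRecords payload matching the shape shown above.
def build_feedback_payload(user_id, user_question, matched_id):
    """Link a user's question to the answer ID they selected."""
    return {
        "feedbackRecords": [
            {
                "userId": user_id,
                "userQuestion": user_question,
                "matchedId": matched_id,
            }
        ]
    }

payload = build_feedback_payload("user1", "I need to reschedule my flight", 2)
# This dict is sent as the JSON body of a POST to the service's
# active-learning feedback endpoint (URL and api-version are
# deployment-specific; check the current REST API reference).
```

Batching several records into one `feedbackRecords` array is also possible with this shape; the service accepts a list.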

Synonyms for better matching

Define synonyms to treat different words or phrases as equivalent for intent matching. This is especially useful for domain-specific vocabulary (e.g., “reschedule”, “modify”, “change” for flight updates). Example synonyms configuration:
{
  "synonyms": {
    "alterations": ["reschedule", "modify", "change"]
  }
}
You can add synonyms via the API or directly in Language Studio. When used together with implicit and explicit learning, synonyms increase the system’s robustness to vocabulary variations.
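To illustrate the effect, here is a purely local sketch (not the service's internal implementation) that normalizes query terms with a synonym map like the configuration above, so that variants collapse to one canonical token before matching:

```python
# Illustrative only: the service applies synonyms internally.
# This sketch shows the normalization idea with a local map
# shaped like the synonyms configuration above.
SYNONYMS = {
    "alterations": ["reschedule", "modify", "change"],
}

# Invert the map: each variant points to its canonical term.
CANONICAL = {
    variant: canon
    for canon, variants in SYNONYMS.items()
    for variant in variants
}

def normalize(query):
    """Replace known synonym variants with their canonical term."""
    return " ".join(
        CANONICAL.get(word.lower(), word.lower()) for word in query.split()
    )

# Both phrasings normalize to the same token sequence,
# so a downstream matcher treats them as equivalent.
print(normalize("reschedule my flight"))
print(normalize("change my flight"))
```

The key property is that "reschedule my flight" and "change my flight" normalize identically, which is exactly what the service-side synonym mapping buys you.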

Where to fine-tune in Language Studio

After deploying your knowledge base, use Language Studio to review and refine content. The primary areas to manage fine-tuning are:
  • Review Suggestions — inspect implicit alternate phrasing proposals; accept, edit, or reject suggested alternates.
  • Edit Knowledge Base — manually curate Q&A pairs; add alternate questions, edit answers, pin or remove alternates.
  • Feedback / Telemetry — submit user selections for explicit learning; send feedbackRecords to link user selections to answer IDs.
  • Synonyms — normalize vocabulary across questions; add synonyms to map equivalent terms to a canonical form.
In the Review Suggestions (or similar) section, alternate phrasing suggestions appear once the system has observed enough interactions. Initially you may see no suggestions; they populate as user traffic and feedback increase. Where to perform manual edits in the editor view:
  • Add alternate questions for an existing answer (e.g., add “Define Cognitive Services” as an alternate phrasing for “What is Cognitive Services?”).
  • Remove or pin alternate questions to control which variants are preferred.
  • Configure follow-up prompts or multi-turn dialog behavior to handle compound queries.
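As a toy illustration of why adding alternate questions widens coverage (a local sketch, not how the service matches internally; the answer text is illustrative):

```python
# Toy sketch: every alternate question maps to the same answer entry,
# so each accepted alternate widens exact-match coverage.
knowledge_base = [
    {
        "id": 1,
        "questions": [
            "What is Cognitive Services?",
            "Define Cognitive Services",  # alternate added via the editor
        ],
        "answer": "Azure Cognitive Services is a suite of AI APIs.",  # illustrative
    }
]

def lookup(query):
    """Return the answer whose question list contains the query (case-insensitive)."""
    q = query.strip().lower()
    for entry in knowledge_base:
        if any(q == alt.lower() for alt in entry["questions"]):
            return entry["answer"]
    return None

print(lookup("Define Cognitive Services"))  # matches via the alternate question
```

Removing or pinning alternates in the editor is the curation step that keeps this mapping clean as suggestions accumulate.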
[Screenshot: the Custom Question Answering editor in Azure AI Language Studio, with Q&A pairs listed on the left and a selected answer's alternate questions shown on the right.]
The editor displays each Q&A entry with its alternate questions and controls to accept, edit, remove, or pin alternates. Use these fine-tuning actions—accepting implicit suggestions, sending explicit feedback, and defining synonyms—to improve accuracy and reduce response ambiguity.

Best practices

  • Start with a focused set of high-confidence Q&A pairs and grow coverage iteratively.
  • Combine synonyms with alternate questions to capture both word-level and phrase-level variants.
  • Regularly review suggestion history and feedback telemetry to find gaps or misclassifications.
  • Automate feedback submission where appropriate, but always respect privacy and consent.
With these steps, you can systematically fine-tune a Custom Question Answering knowledge base to deliver more accurate and relevant responses to your users.
