All right — now that we understand how tools work and how to create them, let’s implement two practical tools for our helpdesk agent: looking up users and checking service status. For clarity, and to make the example easy to run, we keep everything in a single file (agent.py). Later, you can break tools into separate modules if you prefer.
Slide: "Implementing our Tools" (Demo).
Overview
  • Build two tiny in-memory services:
    • a fake user directory
    • a fake service-status registry
  • Implement two strongly-typed tools that return structured dictionaries so the LLM knows what to expect:
    • lookup_user(email: str) -> Dict[str, Any]
    • check_service_status(service_name: str) -> Dict[str, Any]
  • Register these functions with an ADK Agent so the LLM can call them.
Comments and docstrings are visible to the LLM and can affect tool usage. Keep them accurate, concise, and machine-friendly.
Tool return schemas
Use clear, predictable return shapes so the agent can consume results without guessing. The summary below covers the two tools and their expected structured outputs.
  • lookup_user(email: str) -> Dict[str, Any]
    • on success: status: "success", user
    • on error: status: "error", error_message
  • check_service_status(service_name: str) -> Dict[str, Any]
    • on success: status: "success", service, status_text
    • on error: status: "error", error_message
Concise example: agent.py
Below is a concise implementation that demonstrates typed imports, small in-memory stores, the two tool functions, and the Agent registration that exposes the tools to the LLM. Keep the code in a single file for now to simplify running and testing.
# agent.py
from typing import Dict, Any
from google.adk.agents.llm_agent import Agent

# Tiny in-memory "directory" of users.
_FAKE_USER_DIRECTORY: Dict[str, Dict[str, Any]] = {
    "alice@example.com": {
        "name": "Alice Johnson",
        "department": "Engineering",
        "status": "active",
    },
    "bob@example.com": {
        "name": "Bob Smith",
        "department": "Finance",
        "status": "active",
    },
    "carol@example.com": {
        "name": "Carol Lee",
        "department": "HR",
        "status": "locked",  # This account is locked.
    },
}

# Tiny in-memory "service status" registry.
_FAKE_SERVICE_STATUS: Dict[str, str] = {
    "vpn": "degraded",
    "gitlab": "outage",
    "wifi": "operational",
}


def lookup_user(email: str) -> Dict[str, Any]:
    """
    Look up a user in the fake directory.

    Returns a structured dict:
      - status: "success" or "error"
      - user: { email, name, department, status }  # only on success
      - error_message: str  # only on error
    """
    if not isinstance(email, str) or not email.strip():
        return {"status": "error", "error_message": "Invalid email provided."}

    normalized = email.strip().lower()
    user = _FAKE_USER_DIRECTORY.get(normalized)
    if not user:
        return {
            "status": "error",
            "error_message": f"No user found for email '{email}'.",
        }

    return {
        "status": "success",
        "user": {
            "email": normalized,
            "name": user["name"],
            "department": user["department"],
            "status": user["status"],
        },
    }


def check_service_status(service_name: str) -> Dict[str, Any]:
    """
    Check the status of a named IT service.

    For now this just looks up a value in an in-memory dict.

    Returns a structured dict:
      - status: "success" or "error"
      - service: normalized service name (on success)
      - status_text: operational state like "operational", "degraded", or "outage" (on success)
      - error_message: str (on error)
    """
    if not isinstance(service_name, str) or not service_name.strip():
        return {"status": "error", "error_message": "Invalid service name provided."}

    normalized = service_name.strip().lower()
    status_text = _FAKE_SERVICE_STATUS.get(normalized)
    if not status_text:
        return {
            "status": "error",
            "error_message": (
                f"Unknown service '{service_name}'. "
                f"Known services: {', '.join(sorted(_FAKE_SERVICE_STATUS.keys()))}."
            ),
        }

    return {
        "status": "success",
        "service": normalized,
        "status_text": status_text,
    }


# Register the agent and attach the tools.
root_agent = Agent(
    model="gemini-2.5-flash",
    name="helpdesk_root_agent",
    description="Smart IT Helpdesk assistant that helps troubleshoot basic IT issues.",
    instruction=(
        "You are a friendly but efficient IT helpdesk assistant for an internal company.\n"
        "\n"
        "Goals:\n"
        "1. Quickly understand the user's problem.\n"
        "2. Ask one or two clarifying questions if needed.\n"
        "3. Give clear, step-by-step instructions they can follow.\n"
        "4. Keep answers concise and practical.\n"
        "\n"
        "Constraints:\n"
        "- Use the available tools (lookup_user, check_service_status) when appropriate.\n"
        "- Do not claim to check real systems outside these tools.\n"
        "- When uncertain, use phrases like 'Based on common IT practice...' instead of pretending.\n"
    ),
    tools=[lookup_user, check_service_status],
)
Why normalize inputs
  • Make lookups case-insensitive and tolerant of leading/trailing whitespace by lower-casing and stripping inputs.
  • Returning a clear structure (status + payload or error_message) prevents the LLM from guessing shapes and reduces hallucinations.
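To see the effect of normalization, here is a small standalone check. It inlines a trimmed copy of lookup_user and a one-entry directory from agent.py above so the snippet runs on its own:

```python
from typing import Any, Dict

# Trimmed copy of the directory and lookup_user from agent.py,
# inlined so this snippet runs on its own.
_FAKE_USER_DIRECTORY: Dict[str, Dict[str, Any]] = {
    "alice@example.com": {
        "name": "Alice Johnson",
        "department": "Engineering",
        "status": "active",
    },
}

def lookup_user(email: str) -> Dict[str, Any]:
    if not isinstance(email, str) or not email.strip():
        return {"status": "error", "error_message": "Invalid email provided."}
    normalized = email.strip().lower()
    user = _FAKE_USER_DIRECTORY.get(normalized)
    if not user:
        return {"status": "error", "error_message": f"No user found for email '{email}'."}
    return {"status": "success", "user": {"email": normalized, **user}}

# Mixed case and stray whitespace resolve to the same record.
messy = lookup_user("  Alice@Example.COM  ")
clean = lookup_user("alice@example.com")
assert messy == clean
assert messy["status"] == "success"
```

Because the LLM fills in tool arguments from conversational text, inputs like "Alice@Example.COM " are common; normalizing inside the tool keeps that robustness out of the prompt.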
Best practice
  • Explicitly list each tool in Agent.tools (e.g., tools=[lookup_user, check_service_status]) — this keeps tool availability explicit and discoverable by the agent.
  • Keep docstrings short, factual, and up-to-date: the LLM relies on these to decide when and how to call a tool.
Running and testing (example terminal session)
Start your agent with the ADK CLI:
(.venv) $ adk run helpdesk_root_agent
Running agent helpdesk_root_agent, type exit to exit.
[user]: My email address is bob@example.com. But I have forgotten my name. What is it?
[helpdesk_root_agent]: Your name is Bob Smith.
[user]: My email address is jeremy@example.com, but I have forgotten my name. What is it?
[helpdesk_root_agent]: I couldn't find a user with the email address jeremy@example.com. Could you please double-check the email address for any typos?
[user]: Is there something wrong with the VPN?
[helpdesk_root_agent]: Based on common IT practice, the VPN service is currently degraded. Are you having trouble connecting to the VPN, or are you experiencing slow speeds?
Notes about ADK and how the LLM uses tools
  • ADK can auto-wrap plain Python functions as callable tools with structured I/O.
  • The agent coordinates between the LLM and your functions: the LLM decides which tool to call and prepares inputs; the function returns structured data; the LLM then translates that structured data into natural language for the user.
  • Explicit, structured tool outputs help avoid hallucination and allow concrete, checkable responses.
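Conceptually, that coordination loop looks like the sketch below. This is an illustration of the pattern, not ADK's actual internals: the dispatch table, run_tool_call, and the stub lookup_user are all hypothetical stand-ins for what the framework does for you.

```python
from typing import Any, Callable, Dict

# Hypothetical stub standing in for the real tool, to keep the sketch short.
def lookup_user(email: str) -> Dict[str, Any]:
    return {"status": "success", "user": {"email": email, "name": "Alice Johnson"}}

# A dispatch table mapping tool names to functions, analogous to Agent.tools.
TOOLS: Dict[str, Callable[..., Dict[str, Any]]] = {"lookup_user": lookup_user}

def run_tool_call(name: str, args: Dict[str, Any]) -> Dict[str, Any]:
    """What the agent does once the LLM has chosen a tool and its arguments."""
    tool = TOOLS.get(name)
    if tool is None:
        return {"status": "error", "error_message": f"Unknown tool '{name}'."}
    return tool(**args)

# 1. The LLM emits a structured call; 2. the agent executes it;
# 3. the structured result goes back to the LLM to phrase for the user.
result = run_tool_call("lookup_user", {"email": "alice@example.com"})
assert result["status"] == "success"
```

The key property is that the function only ever sees structured arguments and only ever returns structured data; natural language stays on the LLM's side of the loop.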
Common pitfalls and debugging
  • NameError or import errors often indicate a missing import or incorrectly registered tool. Verify your imports and that you included tools in Agent.tools.
  • If the agent returns unexpected output, confirm your tool’s return schema matches its docstring and the LLM’s expectations.
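A quick way to catch schema drift is to call the tool directly, outside the agent, and assert that its keys match the docstring. The check below inlines check_service_status from agent.py so it runs standalone:

```python
from typing import Any, Dict

# Inlined copy of the registry and check_service_status from agent.py.
_FAKE_SERVICE_STATUS: Dict[str, str] = {
    "vpn": "degraded",
    "gitlab": "outage",
    "wifi": "operational",
}

def check_service_status(service_name: str) -> Dict[str, Any]:
    if not isinstance(service_name, str) or not service_name.strip():
        return {"status": "error", "error_message": "Invalid service name provided."}
    normalized = service_name.strip().lower()
    status_text = _FAKE_SERVICE_STATUS.get(normalized)
    if not status_text:
        return {"status": "error", "error_message": f"Unknown service '{service_name}'."}
    return {"status": "success", "service": normalized, "status_text": status_text}

# Success and error paths should expose exactly the documented keys.
ok = check_service_status("VPN")
assert set(ok) == {"status", "service", "status_text"}
err = check_service_status("email")
assert set(err) == {"status", "error_message"}
```

Checks like these also make a natural seed for the per-tool unit tests suggested under "Next steps".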
Next steps
  • Implement stateful troubleshooting flows so the agent can remember context across multiple turns.
  • Add authentication and access controls when you move from fake in-memory stores to real data sources.
  • Split tools into modules for larger projects and add unit tests for each tool’s structured outputs.
Further reading