In this lesson you’ll scaffold and run a minimal Google ADK agent — a “hello world” style example that shows how to:
  • scaffold an agent,
  • register simple tools,
  • and let the LLM decide which tool to call based on natural language.
This step-by-step walkthrough covers creating a Python virtual environment, installing the Google ADK package, scaffolding a starter application, implementing two deterministic tools (get_current_time and get_current_weather), and running the agent interactively.
[Slide: "Build Your First Agent" / Demo]
What we’ll do
  • Create and activate a Python virtual environment
  • Install google-adk
  • Scaffold an ADK app
  • Add two simple tools
  • Run the agent and interact with it
Create and activate a virtual environment (macOS / Linux shown):
# Create venv
python3 -m venv .venv

# Activate venv (macOS / Linux)
source .venv/bin/activate

# Your prompt should indicate the venv is active, e.g.:
# (.venv) jeremy@MACSTUDIO hello-world %
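If you are on Windows rather than macOS/Linux, the equivalent commands differ slightly (PowerShell shown; in cmd.exe the activation script is .venv\Scripts\activate.bat):

```shell
# Create venv (Windows)
py -m venv .venv

# Activate venv (PowerShell)
.venv\Scripts\Activate.ps1
```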
Install the Google ADK package inside the virtual environment:
(.venv) jeremy@MACSTUDIO hello-world % pip install google-adk
The package pulls in several dependencies, so expect a download of several megabytes.
Scaffold a new ADK application from your project root:
(.venv) jeremy@MACSTUDIO hello-world % adk create my_agent
Follow the interactive prompts. For this lesson choose the Gemini model and Google AI backend:
Choose a model for the root agent:
1. gemini-2.5-flash
2. Other models (fill later)
Choose model (1, 2): 1
1. Google AI
2. Vertex AI
Choose a backend (1, 2): 1

Don't have API Key? Create one in AI Studio: https://aistudio.google.com/apikey

Enter Google API key:
After entering your API key, the scaffold creates a basic layout (files such as __init__.py and agent.py). The generated agent.py is the canonical place to register your agent and its tools.
Example agent.py
Below is a minimal agent.py that defines two deterministic tools and registers them with the root agent. The tools are intentionally hard-coded for clarity; replace them with real API calls in production.
# agent.py
from google.adk.agents.llm_agent import Agent

def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city."""
    # Hard-coded for demonstration
    return {"status": "success", "city": city, "time": "10:30 AM"}

def get_current_weather(city: str) -> dict:
    """Returns the current weather in a specified city."""
    # Hard-coded for demonstration
    return {"status": "success", "city": city, "weather": "Sunny, 75°F"}

root_agent = Agent(
    model="gemini-2.5-flash",
    name="root_agent",
    description="Tells the current time or weather in a specific city.",
    instruction=(
        "You are a helpful assistant that can provide the current time and the current weather in cities. "
        "Use the 'get_current_time' tool to get the time and the 'get_current_weather' tool to get the weather."
    ),
    tools=[get_current_time, get_current_weather],
)
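In production, get_current_time should return real data. As one minimal sketch, the hard-coded stub could be replaced with a lookup based on Python's standard-library zoneinfo; the CITY_TIMEZONES mapping here is a hypothetical illustration (a real tool might call a geocoding or timezone API instead):

```python
# Sketch of a production-style get_current_time using the standard library.
# CITY_TIMEZONES is a hypothetical lookup table for illustration only.
from datetime import datetime
from zoneinfo import ZoneInfo

CITY_TIMEZONES = {
    "new york": "America/New_York",
    "london": "Europe/London",
    "tokyo": "Asia/Tokyo",
}

def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city."""
    tz_name = CITY_TIMEZONES.get(city.strip().lower())
    if tz_name is None:
        # A structured error lets the LLM explain the failure in natural language.
        return {"status": "error", "city": city, "message": f"Unknown city: {city}"}
    now = datetime.now(ZoneInfo(tz_name))
    return {"status": "success", "city": city, "time": now.strftime("%I:%M %p")}
```

Note that the tool keeps the same signature and return shape as the stub, so the agent registration does not change.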
Why register tools this way?
  • instruction: The text given to the LLM that defines the agent’s behavior and available tools.
  • tools: A list of Python callables the model may invoke. The LLM chooses which tool to call based on the user query.
Run the agent from the project root:
(.venv) jeremy@MACSTUDIO hello-world % adk run my_agent
A typical interactive session:
Log setup complete: /tmp/agents_log/agent.log
/Users/jeremy/.../.venv/lib/python3.14/site-packages/google/adk/cli/cli.py:155: UserWarning: [EXPERIMENTAL] InMemoryCredentialService: This feature is experimental and may change or be removed in future versions without notice.
  credential_service = InMemoryCredentialService()
Running agent root_agent, type exit to exit.
[user]: What do you do?
[root_agent]: I can tell you the current time or weather in a specific city. Just ask me something like "What time is it in New York?" or "What's the weather like in London?"
[user]: What time is it in New York City?
[root_agent]: The current time in New York is 10:30 AM.
[user]: What is the weather in Tokyo?
[root_agent]: The weather in Tokyo is currently Sunny, 75°F.
[user]:
Key concepts and best practices
  • The LLM acts as a router: the instruction plus the user query determines which tool (if any) gets invoked.
  • Tools are plain Python callables and can wrap HTTP APIs, SDKs, or complex business logic.
  • For production, use real services (timezone APIs, weather APIs), robust error handling, timeouts, and non-blocking I/O when appropriate.
  • Keep tool signatures simple and well-documented (type hints and short docstrings improve the LLM’s ability to choose correctly).
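The last point is worth making concrete: the model's view of a tool is essentially its name, parameter names, type hints, and docstring. This sketch uses the standard-library inspect module to show what a framework can introspect from a well-documented tool (the tool itself is the stub from earlier):

```python
# What a framework can see when it introspects a tool: name, parameters, docstring.
import inspect

def get_current_weather(city: str) -> dict:
    """Returns the current weather in a specified city."""
    return {"status": "success", "city": city, "weather": "Sunny, 75°F"}

sig = inspect.signature(get_current_weather)
print(get_current_weather.__name__)         # the tool name the model sees
print(list(sig.parameters))                 # parameter names, e.g. ['city']
print(inspect.getdoc(get_current_weather))  # the docstring describing the tool
```

If the name or docstring is vague ("helper1", "does stuff"), the model has little to route on; descriptive naming is not cosmetic.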
Tooling at a glance
  • Agent instruction: tells the model what tools are available and how to behave. Use clear, concise language describing each tool and when to use it.
  • tools parameter: registers the Python callables the agent may invoke. These can be functions, API wrappers, or async callables (if supported).
  • Model selection: chooses the LLM that acts as the router and responder. This tutorial uses gemini-2.5-flash.
  • Backend: the underlying runtime for the model. Either Google AI (AI Studio) or Vertex AI.
Tip: In production, prefer non-blocking calls, timeouts, and robust error handling inside tools so a slow or failing external service cannot stall the agent.
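A minimal sketch of a non-blocking tool, assuming the runtime accepts async callables (listed above as "if supported"); the asyncio.sleep stands in for an awaited HTTP request:

```python
# An async tool: awaiting I/O keeps the event loop free for other work.
import asyncio

async def get_current_weather(city: str) -> dict:
    """Returns the current weather in a specified city."""
    await asyncio.sleep(0.01)  # stand-in for an awaited HTTP call to a weather API
    return {"status": "success", "city": city, "weather": "Sunny, 75°F"}

result = asyncio.run(get_current_weather("Tokyo"))
print(result["weather"])
```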
Warning: Some ADK features are experimental and may emit warnings at runtime. Pay attention to those messages and review the ADK changelog when upgrading.
Why this structure matters
  • The model performs semantic routing: given the instruction and a user prompt, it decides which tool to call and how to format the call.
  • This approach separates decision-making (LLM) from execution (tools/APIs), enabling safer, auditable, and extensible agents.
  • It mirrors retrieval-augmented patterns: the LLM identifies the appropriate capability or data source and returns the result in natural language.
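To make the routing idea tangible, here is a toy stand-in: a keyword matcher that maps a query to a tool name. A real agent delegates this decision to the LLM, which handles paraphrases and ambiguity far better; this sketch only illustrates the separation of routing (deciding which capability to use) from execution (the tool itself):

```python
# Toy router: maps a user query to a tool name. In the real agent, the LLM
# performs this step semantically; keyword matching is only an illustration.
def route(query: str) -> str:
    q = query.lower()
    if "time" in q:
        return "get_current_time"
    if "weather" in q:
        return "get_current_weather"
    return "no_tool"

print(route("What time is it in New York?"))   # get_current_time
print(route("What is the weather in Tokyo?"))  # get_current_weather
```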
Summary
  • We created a Python virtual environment, installed google-adk, scaffolded a project, added two deterministic tools, and ran an interactive agent that uses the LLM to choose tools based on natural language.
  • To build a production-ready agent, replace the stub tools with real API integrations (timezone, weather, or other services), add robust error handling, and monitor agent behavior.
Links and references