We go deeper into MCP (Model Context Protocol) and demonstrate how to extend LangGraph agents with external tools. MCP acts like a universal port (think USB) that standardizes how AI systems connect to tools, databases, and APIs. With MCP, LangGraph agents can call out to external services and receive structured responses. This lesson covers:
  • Environment setup for the lab
  • Conceptual MCP architecture and how it maps to agents
  • Task 1: Run a simple MCP server (Calculator)
  • Task 2: Connect an agent to MCP tools
  • Task 3: Orchestrate multiple MCP servers and aggregate tools
  • Next steps and references

Environment — create the virtual environment and install dependencies

Create or activate your virtual environment, then install the required packages: LangGraph (workflow framework), LangChain (core model abstractions), and langchain-mcp-adapters, which exposes MCP server tools as LangChain-compatible tools.

Environment setup (bash):
cd /root && source /root/venv/bin/activate

pip install langgraph langchain langchain-openai langchain-mcp-adapters
You may see dependency messages during installation similar to:
Requirement already satisfied: sse-starlette>=1.6.1 in ./venv/lib/python3.12/site-packages (from mcp>=1.9.2->langchain-mcp-adapters) (3.0.2)
Requirement already satisfied: starlette>=0.27 in ./venv/lib/python3.12/site-packages (from mcp>=1.9.2->langchain-mcp-adapters) (0.48.0)
Requirement already satisfied: uvicorn>=0.31.1 in ./venv/lib/python3.12/site-packages (from mcp>=1.9.2->langchain-mcp-adapters) (0.37.0)
(additional "Requirement already satisfied" lines omitted)
Run the verification script:
python3 /root/code/task_1_mcp_basics.py

MCP architecture — conceptual overview

MCP bridges an AI assistant built with LangGraph to external tools and services. The high-level flow:
  • The MCP server registers tools and publishes their schemas.
  • A client connects to the server and fetches tool definitions.
  • A LangGraph (or LangChain-style) agent receives those tools and decides when to call them.
  • When invoked, the MCP client routes the tool call to the server and returns a structured response.
[Slide: "Understanding MCP Architecture" — an AI Assistant linked to an MCP Server via the MCP protocol, with four panels outlining MCP Server, Tools, Integration, and Naming.]
Analogy: MCP is the USB port — the protocol is the port, the server is a device, and tools are the device’s functions. LangGraph is the host computer using those functions.
[Slide: "Simple Example" — MCP compared to USB, with a numbered list: USB Port, Device, Functions, Computer.]
Key MCP concepts at a glance:
Concept      | Purpose                               | Notes
MCP Server   | Hosts tools and exposes their schemas | Tools annotated with type hints produce structured schemas
MCP Client   | Discovers and calls tools             | client.get_tools() returns tools for the agent
Agent        | Uses tools to extend capabilities     | create_react_agent(model, tools) builds the agent
Transports   | How server and client communicate     | stdio (stdin/stdout), SSE, and HTTP are supported
Multi-server | Aggregate tools across servers        | Use a multi-server client to merge toolsets

Task 1 — MCP basics: build a Calculator server

Create a simple MCP server named “Calculator” that exposes calculator tools (add, multiply). Servers can be run using stdin/stdout transport for local testing or via SSE/HTTP for networked deployments. Example server script (completed):
# task_1_mcp_basics.py — a minimal Calculator MCP server
from mcp.server.fastmcp import FastMCP

# Initialize the MCP server
mcp = FastMCP("Calculator")

# Create calculator tools using FastMCP decorators
@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers together"""
    result = a + b
    print(f"🔧 Tool 'add' called with a={a}, b={b}")
    print(f"➕ Result: {result}")
    return result

# Create the multiply tool
@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers"""
    result = a * b
    print(f"🔧 Tool 'multiply' called with a={a}, b={b}")
    print(f"✖ Result: {result}")
    return result

if __name__ == "__main__":
    # stdio transport: the server communicates over stdin/stdout
    mcp.run(transport="stdio")
Tips for tool implementations:
  • Use Python type hints for parameters and return types; they generate structured schemas consumed by clients.
  • Keep logs inside tools for easier debugging (print statements or structured logging).
  • Choose the appropriate transport: stdin/stdout is easiest for local tests; SSE/HTTP is suitable for distributed clients.
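To see why the first tip matters, here is a stdlib-only sketch of how a framework like FastMCP can derive a machine-readable parameter schema from a tool function's type hints. The `tool_schema` helper and `PY_TO_JSON` mapping are illustrative inventions, not part of any MCP library:

```python
# Conceptual sketch (stdlib only): deriving a tool schema from type hints.
import inspect
from typing import get_type_hints

def add(a: float, b: float) -> float:
    """Add two numbers together"""
    return a + b

# Map Python annotations to JSON Schema type names
PY_TO_JSON = {float: "number", int: "integer", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Build a simple schema dict from a function's signature and docstring."""
    hints = get_type_hints(fn)
    params = {
        name: {"type": PY_TO_JSON[hints[name]]}
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": fn.__doc__,
        "parameters": params,
        "returns": PY_TO_JSON[hints["return"]],
    }

print(tool_schema(add))
```

Without the `float` annotations, no schema could be generated, and a client would have no way to validate or describe the tool's inputs.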
Expected console output when the server starts (illustrative):
✅ Task 1 complete! MCP tools tested successfully.
------------------------------------------------------------
🚀 STARTING MCP SERVER
------------------------------------------------------------
The calculator MCP server is now starting...
Keep this terminal open - the server will run continuously.
Use Ctrl-C to stop the server when you're done.

Server ready! Waiting for client connections...
Keep the server terminal open while clients connect. If you stop the server, the agent will no longer be able to reach the tools.

Task 2 — Integrate MCP tools with a LangGraph agent

Connect the Calculator server to a LangGraph (or LangChain-style) agent. The client obtains tools via client.get_tools(), and the agent is created with those tools so it can choose when to call them (for example, using a ReAct-style agent). Example async integration (completed):
# Assumes `client` (an MCP client, e.g. MultiServerMCPClient) and a chat
# `model` have been created earlier in the script.
from langgraph.prebuilt import create_react_agent

async def run_agent_with_mcp():
    """Create and run an agent with MCP tools"""

    # Fetch tool definitions from the MCP client
    tools = await client.get_tools()

    # Create a ReAct-style agent with the model and tools
    agent = create_react_agent(model, tools)

    print("✅ Agent created with MCP tools!\n")
    print("=" * 60)
    print("TESTING MCP-INTEGRATED AGENT:")
    print("=" * 60)

    # Test 1: Math query (should use MCP tools)
    print("\nTest 1: Math Query")
    math_response = await agent.ainvoke({
        "messages": "What is 25 plus 17?"
    })
    print(f"Response: {math_response['messages'][-1].content}")
Representative debug output:
Processing request of type ListToolsRequest
Processing request of type CallToolRequest

Response: 25 plus 17 is 42.

Test 2: Non-math Query
Response: The capital of France is Paris.
Notes:
  • The agent uses tool schemas to decide whether invoking a tool is appropriate.
  • Non-math queries that require general knowledge should be handled by the model directly without tool calls.
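The `client` object assumed in the example must be created before the agent runs. A minimal configuration sketch following langchain-mcp-adapters' MultiServerMCPClient, which launches the Task 1 server as a subprocess over stdio (the interpreter name and exact script path are illustrative assumptions):

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

# Launch the Calculator server from Task 1 as a stdio subprocess
client = MultiServerMCPClient({
    "calculator": {
        "command": "python",                          # illustrative interpreter
        "args": ["/root/code/task_1_mcp_basics.py"],  # Task 1 server script
        "transport": "stdio",
    },
})
```

The same client class handles one server or many, which is why Task 3 can reuse this pattern unchanged.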

Task 3 — Multi-server orchestration (Calculator + Weather)

Scale the system by connecting multiple MCP servers (for example, Calculator and Weather). A MultiServerMCPClient or equivalent gathers tools from all servers; the agent is then built with the aggregated toolset so it can route requests to the right service. Example multi-server orchestration (cleaned and completed):
# Assumes `client` is a multi-server MCP client connected to both the
# Calculator and Weather servers, and `model` is a chat model.
async def run_multi_server_agent():
    """Create and run an agent with tools from multiple MCP servers"""

    print("📦 Loading tools from multiple servers...")

    # Get all tools from both servers (get_tools returns a flat list)
    tools = await client.get_tools()

    print(f"✅ Loaded {len(tools)} tools from servers")

    # Create react agent with model and tools
    agent = create_react_agent(model, tools)

    print("\n" + "=" * 60)
    print("TESTING MULTI-SERVER ORCHESTRATION:")
    print("=" * 60)

    # Example queries
    print("\nTest 1: Calculator query")
    calc_response = await agent.ainvoke({"messages": "What is 8 times 9?"})
    print(f"Response: {calc_response['messages'][-1].content}")

    print("\nTest 2: Weather comparison query")
    weather_response = await agent.ainvoke({
        "messages": "Compare current weather in New York and Tokyo."
    })
    print(f"Response: {weather_response['messages'][-1].content}")
Representative outputs when multiple servers are used:
Processing request of type ListToolsRequest
Processing request of type CallToolRequest
Response: 8 times 9 is 72.

Processing request of type CallToolRequest
Processing request of type ListToolsRequest
Response: The current weather comparison between New York and Tokyo is as follows:

New York:
- Temperature: 17°C
- Condition: Clear
- Humidity: 58%
- Wind: 14 km/h

Tokyo:
- Temperature: 18°C
- Condition: Clear
- Humidity: 51%
- Wind: 16 km/h

Both cities have clear weather with similar temperatures, but New York has slightly higher humidity while Tokyo has a bit stronger wind.
Best practices for multi-server setups:
  • Use a consistent naming convention (for example, prefix tools with the server name) so tools from different servers do not collide.
  • Monitor ListToolsRequest and CallToolRequest logs to trace cross-server calls.
  • Start with read-only tools when exposing external systems (APIs, DBs) and gradually add write capabilities with proper access control.
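A configuration sketch for the multi-server `client` assumed in the Task 3 example, again following langchain-mcp-adapters' MultiServerMCPClient. The Calculator script path and the Weather server's SSE endpoint are illustrative assumptions:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

# Each key names one server; pick distinct tool names per server
# so the aggregated toolset has no collisions.
client = MultiServerMCPClient({
    "calculator": {
        "command": "python",
        "args": ["/root/code/task_1_mcp_basics.py"],  # illustrative path
        "transport": "stdio",
    },
    "weather": {
        "url": "http://localhost:8000/sse",  # illustrative SSE endpoint
        "transport": "sse",
    },
})
```

Mixing transports like this is common: local helper servers over stdio, shared network services over SSE or HTTP.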
[Screenshot: completed lab UI — four task cards (MCP Basics, Integration, Multi-Server, Ready For) and a "Key Takeaways" panel on MCP, naming, routing, and extensibility, with a file explorer of Python files in a sidebar.]

Deeper explorations and next steps

Once you are comfortable with MCP basics and multi-server orchestration, extend MCP to expose:
  • Databases (query/update operations)
  • External REST APIs (wrapped as typed tools)
  • File systems (search, read, write)
  • Human-in-the-loop endpoints (approval workflows)
The pattern stays the same:
  1. Expose structured tools on an MCP server.
  2. Fetch tools from the client (client.get_tools()).
  3. Build an agent (create_react_agent or similar) that orchestrates tool calls as needed.
Key reminders:
Topic         | Recommendation
Tool schemas  | Use type hints for strong, machine-readable schemas
Transports    | Start with stdio for local tests; use HTTP/SSE for production
Security      | Protect write operations and external integrations with authentication
Observability | Log ListToolsRequest and CallToolRequest for troubleshooting
This concludes the lesson. Experiment with exposing new resources and creating safe, auditable human-in-the-loop flows.
Further reading:
  • LangChain course (overview)
  • LangGraph documentation (refer to your project docs or README for LangGraph usage)
  • MCP adapters (installed via pip as part of this lab)
Happy experimenting — extend your agents with real-world services using MCP and LangGraph.
