Guide to using MCP with LangGraph: exposing external tools to agents and orchestrating multiple servers, including environment setup, a calculator server, tool schemas, and worked examples.
We go deeper into MCP (Model Context Protocol) and demonstrate how to extend LangGraph agents with external tools. MCP acts like a universal port (think USB) that standardizes how AI systems connect to tools, databases, and APIs. With MCP, LangGraph agents can call out to external services and receive structured responses. This lesson covers:
Environment setup for the lab
Conceptual MCP architecture and how it maps to agents
Task 1: Run a simple MCP server (Calculator)
Task 2: Connect an agent to MCP tools
Task 3: Orchestrate multiple MCP servers and aggregate tools
Environment — create the virtual environment and install dependencies
Create or activate your virtual environment, then install the required packages: LangGraph (workflow framework), LangChain (core model abstractions), and the MCP adapters for model integration and servers.
Environment setup (bash)
cd /root && source /root/venv/bin/activate
pip install langgraph langchain langchain-openai langchain-mcp-adapters
You may see dependency messages during installation similar to:
Requirement already satisfied: sse-starlette>=1.6.1 in ./venv/lib/python3.12/site-packages (from mcp>=1.9.2->langchain-mcp-adapters) (3.0.2)
Requirement already satisfied: starlette>=0.27 in ./venv/lib/python3.12/site-packages (from mcp>=1.9.2->langchain-mcp-adapters) (0.48.0)
Requirement already satisfied: uvicorn>=0.31.1 in ./venv/lib/python3.12/site-packages (from mcp>=1.9.2->langchain-mcp-adapters) (0.37.0)
Requirement already satisfied: attrs>=22.2.0 in ./venv/lib/python3.12/site-packages (from jsonschema>=4.20.0->mcp>=1.9.2->langchain-mcp-adapters) (25.4.0)
Requirement already satisfied: jsonschema-specifications>=2023.03.6 in ./venv/lib/python3.12/site-packages (from jsonschema>=4.20.0->mcp>=1.9.2->langchain-mcp-adapters) (2025.9.1)
Requirement already satisfied: referencing>=0.28.4 in ./venv/lib/python3.12/site-packages (from jsonschema>=4.20.0->mcp>=1.9.2->langchain-mcp-adapters) (0.36.2)
Requirement already satisfied: rpds-py>=0.7.1 in ./venv/lib/python3.12/site-packages (from jsonschema>=4.20.0->mcp>=1.9.2->langchain-mcp-adapters) (0.27.1)
Requirement already satisfied: python-dotenv>=0.21.0 in ./venv/lib/python3.12/site-packages (from pydantic-settings>=2.5.2->mcp>=1.9.2->langchain-mcp-adapters) (1.1.1)
MCP bridges an AI assistant built with LangGraph to external tools and services. The high-level flow:
The MCP server registers tools and publishes their schemas.
A client connects to the server and fetches tool definitions.
A LangGraph (or LangChain-style) agent receives those tools and decides when to call them.
When invoked, the MCP client routes the tool call to the server and returns a structured response.
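The four-step flow above can be sketched with plain Python objects. This is a toy stand-in, not a real MCP library; every class and method name here is illustrative only:

```python
# 1. The "server" registers tools and publishes their schemas.
class ToyServer:
    def __init__(self):
        self.tools = {}

    def register(self, fn):
        """Register a tool function and derive a minimal schema from it."""
        self.tools[fn.__name__] = {
            "fn": fn,
            "schema": {"name": fn.__name__, "doc": fn.__doc__},
        }
        return fn

server = ToyServer()

@server.register
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

# 2. The "client" connects and fetches tool definitions.
class ToyClient:
    def __init__(self, server):
        self.server = server

    def get_tools(self):
        return [t["schema"] for t in self.server.tools.values()]

    # 4. When invoked, the client routes the call to the server
    #    and returns a structured response.
    def call_tool(self, name, **kwargs):
        result = self.server.tools[name]["fn"](**kwargs)
        return {"tool": name, "result": result}

client = ToyClient(server)

# 3. An agent would receive these schemas and decide when to call a tool.
print(client.get_tools())                 # [{'name': 'add', 'doc': 'Add two numbers.'}]
print(client.call_tool("add", a=2, b=3))  # {'tool': 'add', 'result': 5}
```

A real MCP client does the same dance over a transport (stdio or HTTP), with JSON schemas instead of bare docstrings.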
Analogy: MCP is the USB port — the protocol is the port, the server is a device, and tools are the device’s functions. LangGraph is the host computer using those functions.
Key MCP concepts at a glance:

| Concept | Purpose | Notes |
| --- | --- | --- |
| MCP Server | Hosts tools and exposes their schemas | Tools annotated with type hints produce structured schemas |
| MCP Client | Connects to servers and fetches tool definitions | Routes tool calls and returns structured responses |
| Transport | How client and server communicate | stdio for local testing; SSE/HTTP for networked deployments |
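The "type hints produce structured schemas" point can be demonstrated with the standard library alone. This is a simplified illustration of the kind of schema a FastMCP-style server derives from a function signature, not the exact MCP schema format:

```python
from typing import get_type_hints

# Simplified mapping from Python annotations to JSON Schema type names.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Derive a rough JSON-Schema-like description from a function's type hints."""
    hints = get_type_hints(fn)
    params = {
        name: {"type": PY_TO_JSON.get(hint, "object")}
        for name, hint in hints.items()
        if name != "return"
    }
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def add(a: float, b: float) -> float:
    """Add two numbers together"""
    return a + b

print(tool_schema(add))
# {'name': 'add', 'description': 'Add two numbers together',
#  'parameters': {'type': 'object',
#                 'properties': {'a': {'type': 'number'}, 'b': {'type': 'number'}},
#                 'required': ['a', 'b']}}
```

This is why accurate type hints and docstrings matter: they are all the model sees when deciding whether and how to call a tool.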
Task 1 — Run a simple MCP server (Calculator)
Create a simple MCP server named “Calculator” that exposes calculator tools (add, multiply). Servers can be run using stdin/stdout transport for local testing or via SSE/HTTP for networked deployments.
Example server script (completed):
from mcp.server.fastmcp import FastMCP

# Initialize the MCP server
mcp = FastMCP("Calculator")

# Create calculator tools using FastMCP decorators
@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers together"""
    result = a + b
    print(f"🔧 Tool 'add' called with a={a}, b={b}")
    print(f"➕ Result: {result}")
    return result

# Create the multiply tool
@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers"""
    result = a * b
    print(f"🔧 Tool 'multiply' called with a={a}, b={b}")
    print(f"✖ Result: {result}")
    return result
Tips for tool implementations:
Use Python type hints for parameters and return types; they generate structured schemas consumed by clients.
Keep logs inside tools for easier debugging (print statements or structured logging).
Choose the appropriate transport: stdin/stdout is easiest for local tests; SSE/HTTP is suitable for distributed clients.
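For intuition about the stdio transport: MCP exchanges JSON-RPC 2.0 messages over the server process's stdin and stdout. The hand-built example below shows roughly what a tool call looks like on the wire; the field values are illustrative, and real clients handle message framing and ids for you:

```python
import json

# A JSON-RPC 2.0 request a client might write to the server's stdin
# to invoke the calculator's "add" tool (values are illustrative).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 25, "b": 17}},
}
wire = json.dumps(request)

# The server parses the line, runs the tool, and writes a response back.
parsed = json.loads(wire)
args = parsed["params"]["arguments"]
result = args["a"] + args["b"]
response = {"jsonrpc": "2.0", "id": parsed["id"], "result": {"content": result}}
print(json.dumps(response))
```

Because the protocol is just structured messages, the same tools can later be served over SSE/HTTP without changing the tool code.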
Expected console output when the server starts (illustrative):
✅ Task 1 complete! MCP tools tested successfully.
------------------------------------------------------------
🚀 STARTING MCP SERVER
------------------------------------------------------------
The calculator MCP server is now starting...
Keep this terminal open - the server will run continuously.
Use Ctrl-C to stop the server when you're done.
Server ready! Waiting for client connections...
Keep the server terminal open while clients connect. If you stop the server, the agent will no longer be able to reach the tools.
Task 2 — Integrate MCP tools with a LangGraph agent
Connect the Calculator server to a LangGraph (or LangChain-style) agent. The client obtains tools via client.get_tools(), and the agent is created with those tools so it can choose when to call them (for example, using a ReAct-style agent).
Example async integration (completed):
from langgraph.prebuilt import create_react_agent

# `client` (the MCP client) and `model` (the chat model) are assumed
# to be defined earlier in the script.
async def run_agent_with_mcp():
    """Create and run agent with MCP tools"""
    # Get tools from MCP client
    tools = await client.get_tools()

    # Create react agent with model and tools
    agent = create_react_agent(model, tools)
    print("✅ Agent created with MCP tools!\n")
    print("=" * 60)
    print("TESTING MCP-INTEGRATED AGENT:")
    print("=" * 60)

    # Test 1: Math query (should use MCP tools)
    print("\nTest 1: Math Query")
    math_response = await agent.ainvoke({
        "messages": "What is 25 plus 17?"
    })
    print(f"Response: {math_response['messages'][-1].content}")
Representative debug output:
Processing request of type ListToolsRequest
Processing request of type CallToolRequest
Response: 25 plus 17 is 42.

Test 2: Non-math Query
Response: The capital of France is Paris.
Notes:
The agent uses tool schemas to decide whether invoking a tool is appropriate.
Non-math queries that require general knowledge should be handled by the model directly without tool calls.
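The decision described in these notes can be pictured as a loop: the model sees the tool schemas alongside the user message and either emits a tool call or answers directly. Below is a deterministic stand-in for that loop; no LLM is involved, and the `decide` rule is only a placeholder for the model's judgment:

```python
import re

# Toy tool registry standing in for the MCP-provided tools.
TOOLS = {"add": lambda a, b: a + b}

def decide(query):
    """Placeholder for the model's choice: emit a tool call or answer directly."""
    nums = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", query)]
    if "plus" in query and len(nums) == 2:
        return {"tool": "add", "arguments": {"a": nums[0], "b": nums[1]}}
    return {"answer": None}  # model would answer from its own knowledge

def agent_step(query):
    """One iteration of the loop: decide, maybe call a tool, produce a reply."""
    choice = decide(query)
    if "tool" in choice:
        result = TOOLS[choice["tool"]](**choice["arguments"])
        return f"{query} -> {result}"
    return "answered directly"

print(agent_step("What is 25 plus 17?"))            # What is 25 plus 17? -> 42.0
print(agent_step("What is the capital of France?")) # answered directly
```

In the real agent, the quality of this decision rests entirely on the tool's name, description, and parameter schema, which is why the docstrings in Task 1 matter.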
Task 3 — Orchestrate multiple MCP servers and aggregate tools
Scale the system by connecting multiple MCP servers (for example, Calculator and Weather). A MultiServerMCPClient or equivalent gathers tools from all servers; the agent is then built with the aggregated toolset so it can route requests to the right service.
Example multi-server orchestration (cleaned and completed):
from langgraph.prebuilt import create_react_agent

# `client` (a multi-server MCP client) and `model` are assumed
# to be defined earlier in the script.
async def run_multi_server_agent():
    """Create and run agent with tools from multiple MCP servers"""
    print("📦 Loading tools from multiple servers...")

    # Get all tools from both servers
    tools = await client.get_tools()
    print(f"✅ Loaded {len(tools)} tools from servers")

    # Create react agent with model and tools
    agent = create_react_agent(model, tools)

    print("\n" + "=" * 60)
    print("TESTING MULTI-SERVER ORCHESTRATION:")
    print("=" * 60)

    # Example queries
    print("\nTest 1: Calculator query")
    calc_response = await agent.ainvoke({"messages": "What is 8 times 9?"})
    print(f"Response: {calc_response['messages'][-1].content}")

    print("\nTest 2: Weather comparison query")
    weather_response = await agent.ainvoke({
        "messages": "Compare current weather in New York and Tokyo."
    })
    print(f"Response: {weather_response['messages'][-1].content}")
Representative outputs when multiple servers are used:
Processing request of type ListToolsRequest
Processing request of type CallToolRequest
Response: 8 times 9 is 72.

Processing request of type CallToolRequest
Processing request of type ListToolsRequest
Response: The current weather comparison between New York and Tokyo is as follows:

New York:
- Temperature: 17°C
- Condition: Clear
- Humidity: 58%
- Wind: 14 km/h

Tokyo:
- Temperature: 18°C
- Condition: Clear
- Humidity: 51%
- Wind: 16 km/h

Both cities have clear weather with similar temperatures, but New York has slightly higher humidity while Tokyo has a bit stronger wind.
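The multi-server `client` used above is created from a configuration that names each server and its transport. A sketch of such a configuration follows; the server names, script path, and weather URL are assumptions for this lab, and the dict would be passed to `MultiServerMCPClient` from `langchain_mcp_adapters.client`:

```python
# Configuration dict for a multi-server MCP client. Each key names a server;
# transports can differ per server (stdio locally, SSE for a networked server).
# Paths and URLs below are assumed values, not fixed by the lab.
server_config = {
    "calculator": {
        "command": "python",
        "args": ["calculator_server.py"],  # assumed path to the Task 1 server
        "transport": "stdio",
    },
    "weather": {
        "url": "http://localhost:8000/sse",  # assumed SSE endpoint
        "transport": "sse",
    },
}

print(sorted(server_config))  # ['calculator', 'weather']
```

With this in place, `client = MultiServerMCPClient(server_config)` followed by `tools = await client.get_tools()` yields the aggregated toolset used in `run_multi_server_agent`.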
Best practices for multi-server setups:
Use a consistent naming convention (for example, prefix tools with the server name) so tools from different servers do not collide.
Monitor ListToolsRequest and CallToolRequest logs to trace cross-server calls.
Start with read-only tools when exposing external systems (APIs, DBs) and gradually add write capabilities with proper access control.
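The prefixing convention from the first bullet can be applied when aggregating tools. The sketch below uses plain dicts for tool definitions to show the idea; real adapters may offer similar options, and the helper name is hypothetical:

```python
def prefix_tools(server_name, tools):
    """Prefix each tool's name with its server so aggregated names stay unique."""
    return [{**t, "name": f"{server_name}_{t['name']}"} for t in tools]

# Tool definitions as they might arrive from two servers (illustrative).
calc_tools = [{"name": "add"}, {"name": "multiply"}]
weather_tools = [{"name": "get_weather"}]

aggregated = prefix_tools("calculator", calc_tools) + prefix_tools("weather", weather_tools)
names = [t["name"] for t in aggregated]
print(names)  # ['calculator_add', 'calculator_multiply', 'weather_get_weather']

# Collision check before handing the aggregated tools to the agent.
assert len(names) == len(set(names)), "duplicate tool names across servers"
```

Prefixed names also make the CallToolRequest log lines easier to attribute to a specific server when tracing cross-server calls.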