In this guide, you’ll learn how to build a session-aware agent that uses the Tavily Search API to answer questions step by step. We’ll set up message history, define a rich prompt template, instantiate an LLM and tools, and finally run the agent in a persistent session.
Table of Contents
- Install Dependencies
- Import Libraries & Configure API Key
- Define the Prompt Template
- Initialize LLM, Tools, and History
- Create Agent & Executor
- Enable Session Management
- Invoke the Agent
- Component Reference Table
- Next Steps & Observations
- Links and References
1. Install Dependencies
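One way to do this is shown below; the package names are inferred from the tools used later in this guide (ChatOpenAI lives in langchain-openai, TavilySearchResults in langchain-community), and exact versions are left unpinned:

```shell
# Install the agent framework, the OpenAI chat model bindings,
# and the community package that provides the Tavily search tool.
pip install -U langchain langchain-openai langchain-community
```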
Install the core LangChain packages and the community search tool.

2. Import Libraries & Configure API Key
Load required modules and read your Tavily API key from the environment. Ensure TAVILY_API_KEY is set in your environment (export TAVILY_API_KEY=your_key_here) before running the code.

3. Define the Prompt Template
We use a chat prompt that includes:
- A system instruction
- A placeholder for chat history
- The user’s input
- A placeholder for the agent scratchpad
4. Initialize LLM, Tools, and History
Instantiate the chat model, register the search tool, and prepare in-memory history.

5. Create Agent & Executor
Build the tool-calling agent and wrap it in an executor for streamlined invocation.

6. Enable Session Management
To maintain context across calls, wrap the executor with RunnableWithMessageHistory. The in-memory store used here is fine for demos; for scalable session storage in production, consider a Redis-backed message history.
7. Invoke the Agent
Pass a session_id with every call to preserve the dialogue history and agent scratchpad.
The LLM’s arithmetic can be inaccurate. In the next section, we’ll add a Python REPL tool for exact calculations.
8. Component Reference Table
| Component | Purpose | Instantiation |
|---|---|---|
| ChatOpenAI | Core chat-based LLM | ChatOpenAI() |
| TavilySearchResults | External search integration | TavilySearchResults(api_key=...) |
| AgentExecutor | Executes the tool-calling agent | AgentExecutor(agent=…, tools=…) |
| RunnableWithMessageHistory | Manages per-session message history | RunnableWithMessageHistory(agent_executor, …) |