LangChain

Using Tools

Understanding and Using Tools

In this lesson, we’ll dive into LangChain tools—modular components that let your LLM applications fetch real-time data or perform advanced processing beyond what a static knowledge base or prompt alone can handle. By the end, you’ll understand when to use retrieval-augmented generation (RAG) versus real-time tools and how they fit into production workflows.

What Is a Tool?

A tool in LangChain is a configurable wrapper around external functionality—APIs, custom functions, or runtimes like Python REPL. Instead of relying solely on indexed documents or prompts, tools enable:

  • Real-time data retrieval (e.g., stock quotes, weather forecasts)
  • Dynamic computation or code execution
  • Integration with proprietary or third-party services
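Conceptually, a tool is just a named, described function the model can invoke. The sketch below illustrates that idea in plain Python (no LangChain imports); the `SimpleTool` class and `fake_weather` function are hypothetical stand-ins, not LangChain's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleTool:
    """Minimal sketch of what a tool wraps: a name, a description
    the LLM uses to decide when to call it, and the underlying function."""
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, query: str) -> str:
        return self.func(query)

def fake_weather(city: str) -> str:
    # Placeholder for a real weather-API call
    return f"Weather in {city}: 22C, clear"

weather_tool = SimpleTool(
    name="weather",
    description="Get the current weather for a city.",
    func=fake_weather,
)

print(weather_tool.run("Paris"))
```

In LangChain proper, the framework supplies this wrapper for you and exposes the name and description to the model so it can decide when to call the tool.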

Real-World Example: Airline Chatbot

Imagine an airline support chatbot:

  1. Baggage Policy
    The policy is stored in a PDF, already indexed in your vector database. You use RAG to retrieve and inject the policy text into the prompt, and the LLM answers.

  2. Flight Arrival Time
    The expected arrival isn’t in static documents—you must call a flight-tracking API for live data. This is exactly where tools come in.
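The flight-status case can be sketched as a function the chatbot calls for live data. The endpoint shown in the comment and the response shape are assumptions for illustration; the function returns canned data so the example stays self-contained.

```python
import json

def get_flight_status(flight_number: str) -> str:
    """Stub for a real flight-tracking API call.

    In production this would do something like:
        response = requests.get(f"https://api.example.com/flights/{flight_number}")
    Here we return canned data instead of making a network request.
    """
    live_data = {"flight": flight_number, "status": "en route", "eta": "14:32 UTC"}
    return json.dumps(live_data)

print(get_flight_status("KL1234"))
```

A tool wrapping this function would let the LLM answer "When does my flight land?" with current data rather than stale document text.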

The image illustrates an airline use case involving a user asking about flight arrival times, with a flight tracking API and a PDF document icon.

Fetching Live Data with Tools

By plugging a flight-tracking API into your chain, the LLM can request and display up-to-the-minute flight status. You can also stream updates or combine multiple APIs in one workflow.

The image illustrates an airline use case where a user inquires about flight arrival times using a flight tracking API and tools.

Built-In Tools in LangChain

LangChain ships with a variety of ready-to-use tools:

  • Wikipedia: Fetch article summaries or full pages
  • Search: Perform live web searches
  • YouTube: Retrieve and summarize video transcripts
  • Python REPL: Run snippets of code for computation
  • Custom Functions: Wrap any proprietary or third-party API
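These built-in tools share a common shape: each has a name and a callable the framework can dispatch to. The registry below is a plain-Python sketch of that pattern (the `wikipedia` and `python_repl` functions are stubs, not LangChain's implementations).

```python
def wikipedia(query: str) -> str:
    # Stub: a real Wikipedia tool would fetch an article summary.
    return f"Summary of Wikipedia article on {query!r} (stub)"

def python_repl(code: str) -> str:
    # Evaluate a simple expression; real REPL tools sandbox execution.
    return str(eval(code, {"__builtins__": {}}, {}))

# Tools registered by name, so a caller (or agent) can pick one.
TOOLS = {
    "wikipedia": wikipedia,
    "python_repl": python_repl,
}

def run_tool(name: str, tool_input: str) -> str:
    return TOOLS[name](tool_input)

print(run_tool("python_repl", "2 + 3"))  # → 5
```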

Note

You can extend these tools with your own wrappers or SDKs to integrate specialized services.

Tools become most powerful when orchestrated by agents, which automatically decide which tool to call and when. We’ll cover agents in a future lesson.

When to Use RAG vs. Tools

| Capability | RAG (Pre-Indexed) | Tools (Real-Time) |
| --- | --- | --- |
| Data Source | Static documents (PDFs, articles) | Live APIs, streaming endpoints, custom runtimes |
| Latency | Batch-driven (indexing over hours/days) | Synchronous, real-time |
| Use Cases | FAQs, policy lookup, historical data | Flight tracking, stock quotes, on-the-fly compute |
| Integration Complexity | Vector database + retriever + LLM | API client + tool wrapper + LLM |
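The decision in the table above can be sketched as a naive keyword router. This is a simplification for illustration: in practice an agent (or the LLM itself) decides whether to retrieve or to call a tool, and the keyword list here is an assumption.

```python
# Keywords that suggest the user needs live data rather than
# indexed documents (illustrative, not exhaustive).
LIVE_KEYWORDS = ("arriv", "status", "delay", "live")

def route(question: str) -> str:
    """Return 'tool' for live-data questions, 'rag' for static lookups."""
    q = question.lower()
    if any(word in q for word in LIVE_KEYWORDS):
        return "tool"   # real-time API call
    return "rag"        # retrieve from indexed documents

print(route("What is the baggage policy?"))      # → rag
print(route("When does flight KL1234 arrive?"))  # → tool
```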

Warning

Don’t use RAG for live data—indexed documents can’t provide up-to-the-minute information. Instead, plug in a real-time tool.

The image compares "RAG" and "Tools," highlighting that RAG is for pre-processed and indexed data and happens in batches, while Tools are for real-time information.

Putting It All Together

Returning to our airline chatbot:

  • Baggage Policy → RAG (static, indexed)
  • Flight Status → Tools (real-time API calls)

The image illustrates use cases for an airline chatbot, featuring categories like RAG, baggage policy, tools, and flight tracking. Each category is represented with an icon and text.

In upcoming demos, we’ll walk through several built-in LangChain tools and show how agents leverage them to build robust, interactive applications.
