# Introduction to LCEL

## Overview of LCEL
Welcome back! In this lesson, we’ll dive into the LangChain Expression Language (LCEL), a declarative, domain-specific language that makes it easy to compose, visualize, and maintain LangChain pipelines using a simple pipe (`|`) operator.
LCEL treats each LangChain component (prompts, LLMs, retrievers, parsers, etc.) as a stage in a logical pipeline. The output of one stage seamlessly becomes the input of the next, resulting in clean, modular code that’s straightforward to extend and debug.
Every component in LangChain implements a clear input → processing → output contract. When you wire these components together with `|`, LCEL builds an executable graph under the hood.
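For example, here is a minimal sketch using `RunnableLambda`, which wraps a plain Python function as a pipeline stage:

```python
from langchain_core.runnables import RunnableLambda

# Each stage is a Runnable; `|` chains them into an executable sequence
double = RunnableLambda(lambda x: x * 2)
describe = RunnableLambda(lambda x: f"The result is {x}")

pipeline = double | describe
print(pipeline.invoke(21))  # The result is 42
```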
This approach offers:
- Modularity: Swap, reorder, or reuse stages without rewriting the entire chain
- Readability: A visual, left-to-right flow similar to shell pipelines
- Extensibility: Integrate custom `Runnable` components, retrievers, or parsers
At the core of LCEL is the `Runnable` base class. By subclassing `Runnable[InputType, OutputType]`, you define how a component:
- Receives input
- Processes its logic (e.g., calls an LLM or parses text)
- Emits output
Implementing `Runnable` ensures your custom components plug into LCEL pipelines without extra boilerplate.
Note
Define your component by extending `Runnable` and overriding the `invoke` method:
```python
from typing import Optional

from langchain_core.runnables import Runnable, RunnableConfig

class MyCustomProcessor(Runnable[str, str]):
    def invoke(self, input: str, config: Optional[RunnableConfig] = None) -> str:
        # Transform the input text and return the result
        return input.upper()
```
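Once defined, the processor can be invoked directly or dropped into a chain like any built-in stage:

```python
shout = MyCustomProcessor()
print(shout.invoke("hello lcel"))  # HELLO LCEL
```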
LCEL is production-ready and powers systems with hundreds of chained components running reliably at scale.
If you’ve used Unix or Linux, you know how `|` passes the output of one command to another:
```bash
$ cat file.txt | grep "error" | wc -l
```

- `cat file.txt` streams the file content
- `grep "error"` filters for “error” lines
- `wc -l` counts matching lines
You can achieve the same clarity in LCEL:
```python
from langchain.output_parsers import RegexParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template("Q: {question}\nA:")
llm = ChatOpenAI(model="gpt-4")
# Capture the entire response into an "answer" key
parser = RegexParser(regex=r"(?s)(.*)", output_keys=["answer"])

chain = prompt | llm | parser

result = chain.invoke({"question": "Tell me about the Godfather movie"})
print(result)
```
- Prompt formats the question
- LLM generates the answer
- Parser extracts the relevant output
Warning
Long, monolithic pipelines can be hard to debug. Break complex workflows into smaller sub-chains and reassemble them for clarity.
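For example, the earlier pipeline could be split into named sub-chains and recomposed (the names here are illustrative):

```python
# Illustrative split of the earlier pipeline into two sub-chains
answer_chain = prompt | llm      # format the question, generate the answer
extract_chain = parser           # pull out the structured result

full_chain = answer_chain | extract_chain
result = full_chain.invoke({"question": "Tell me about the Godfather movie"})
```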
| Component | Role | Example |
| --- | --- | --- |
| Prompt | Template for input formatting | `PromptTemplate.from_template("Q: {text}")` |
| LLM | Language model invocation | `ChatOpenAI(model="gpt-4")` |
| Retriever | External data lookup (e.g., docs) | `PDFRetriever(file_path="report.pdf")` |
| OutputParser | Structured output extraction | `RegexParser(regex=r"Answer: (.*)", output_keys=["answer"])` |
| CustomRunnable | User-defined processing logic | `MyCustomProcessor()` |
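Retrievers compose the same way as the other stages. Below is a minimal sketch with a stand-in retriever built from `RunnableLambda` (`PDFRetriever` in the table is illustrative; a real application would use a vector-store retriever):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI

# Stand-in retriever for illustration; a real one would query a document store
retriever = RunnableLambda(lambda q: f"[documents relevant to: {q}]")

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | PromptTemplate.from_template("Context: {context}\nQ: {question}\nA:")
    | ChatOpenAI(model="gpt-4")
    | StrOutputParser()
)
print(rag_chain.invoke("What does the report conclude?"))
```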
LCEL also supports meta-chains, where one chain’s output feeds another chain. This unlocks powerful workflows, such as multi-step reasoning or data-enrichment pipelines.
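Here is a minimal sketch of a meta-chain (the prompts and model are illustrative):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

# Chain 1: summarize a passage
summarize = (
    PromptTemplate.from_template("Summarize in one sentence: {text}")
    | llm
    | StrOutputParser()
)

# Chain 2: enrich the summary with follow-up questions
enrich = (
    PromptTemplate.from_template("Suggest two follow-up questions about: {summary}")
    | llm
    | StrOutputParser()
)

# The first chain's output becomes the second chain's input
meta_chain = {"summary": summarize} | enrich
print(meta_chain.invoke({"text": "LCEL composes LangChain components with the | operator."}))
```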
Most of our upcoming demos and labs will leverage LCEL to build expressive, maintainable LangChain applications. To deepen your understanding:
- Read the official LCEL documentation
- Explore the LangChain GitHub examples
- Try extending `Runnable` with your own custom processors
Let’s switch to the demo and see LCEL in action!