This article covers advanced LCEL (LangChain Expression Language) patterns, including custom logic injection, metrics computation, and chain introspection using RunnableLambda and grandalf.
In this lesson, we dive into advanced LCEL patterns by injecting custom logic into your pipelines and inspecting their internal structure. You’ll learn how to:
- Build a basic LCEL chain.
- Wrap Python functions as RunnableLambda components.
- Debug and compute additional metrics.
- Visualize and explore your chain graph with grandalf.
Start by defining a simple chain that generates a one-line description for any topic using OpenAI’s Chat API:
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Give me a one-line description of {topic}")
model = ChatOpenAI()
output_parser = StrOutputParser()

chain = prompt | model | output_parser
result = chain.invoke({"topic": "AI"})
print(result)
# 'AI is the simulation of human intelligence processes by machines, especially computer systems.'
You can seamlessly insert your own Python functions into the chain as RunnableLambda components.
from langchain_core.runnables import RunnableLambda

def to_titlecase(text: str) -> str:
    return text.title()

chain = prompt | model | output_parser | RunnableLambda(to_titlecase)
titlecased = chain.invoke({"topic": "AI"})
print(titlecased)
# 'Ai Is The Simulation Of Human Intelligence Processes By Machines, Especially Computer Systems.'
Use RunnableLambda to wrap any pure Python function and inject it at any stage of your chain; the function only needs to accept the output type of the preceding step.
Every LCEL chain can render an ASCII diagram of its data flow, from prompt generation through model invocation, parsing, and any custom transformations, via `chain.get_graph().print_ascii()` (which requires the grandalf package).
With custom runnables and introspection in your toolkit, you can combine multiple chains, integrate external APIs, and build complex, maintainable workflows in LCEL.