LangChain enhances stateless LLMs by introducing two memory modules—short-term and long-term—so your applications can remember past interactions. By default, a large language model treats each prompt independently, forgetting previous exchanges. LangChain’s memory abstractions fix this, enabling more dynamic and context-aware agents.

Memory Types

| Memory Type | Description | Storage Examples |
| --- | --- | --- |
| Short-term | Tracks conversation history during a single session by appending each exchange. | In-memory buffer |
| Long-term | Persists select interactions across sessions for later recall. | SQLite, Redis, text files |
Figure: a memory system connecting a user, memory storage, a language model, and external databases such as SQLite and Redis.

Short-term Memory

Short-term memory keeps track of user inputs and LLM responses only while the session is active. Each exchange is stored in an in-memory buffer, preserving conversational context within a single runtime.
Use short-term memory for interactive applications like chatbots or live Q&A, where context only matters during the current session.
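To make the idea concrete, here is a minimal plain-Python sketch of a session-scoped buffer that appends each exchange and flattens the history into prompt context. This illustrates the pattern, not LangChain's actual classes; the `ConversationBuffer` name and its methods are illustrative.

```python
class ConversationBuffer:
    """Minimal short-term memory: an in-memory buffer that lives only
    for the duration of a single session."""

    def __init__(self):
        self.exchanges = []  # list of (user_input, llm_response) tuples

    def save_context(self, user_input: str, llm_response: str) -> None:
        # Append one exchange to the session history.
        self.exchanges.append((user_input, llm_response))

    def as_prompt_context(self) -> str:
        # Flatten the history into text to prepend to the next prompt.
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.exchanges)


buffer = ConversationBuffer()
buffer.save_context("What is LangChain?", "A framework for building LLM apps.")
buffer.save_context("Does it have memory?", "Yes, short-term and long-term modules.")
print(buffer.as_prompt_context())
```

Because the buffer is held in process memory, the history disappears when the session ends, which is exactly the behavior you want for chatbots and live Q&A.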

Long-term Memory

Long-term memory saves important conversation snippets to an external store, allowing your agent to recall them across sessions. You can back this with any supported database or file system, such as SQLite, Redis, or a plain text file.
When storing sensitive data, implement encryption and proper access controls to protect user privacy.
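As a sketch of the persistence side, the snippet below backs memory with SQLite (one of the stores mentioned above) using Python's standard `sqlite3` module. The `LongTermMemory` class, its table schema, and its method names are illustrative assumptions, not LangChain's API.

```python
import sqlite3


class LongTermMemory:
    """Sketch of long-term memory persisted to SQLite, keyed by session
    so interactions can be recalled across sessions."""

    def __init__(self, path: str = ":memory:"):
        # Pass a file path (e.g. "memory.db") to persist across runs;
        # ":memory:" is used here only to keep the example self-contained.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "session_id TEXT, user_input TEXT, llm_response TEXT)"
        )

    def save(self, session_id: str, user_input: str, llm_response: str) -> None:
        # Persist one exchange for later recall.
        self.conn.execute(
            "INSERT INTO memories VALUES (?, ?, ?)",
            (session_id, user_input, llm_response),
        )
        self.conn.commit()

    def recall(self, session_id: str) -> list[tuple[str, str]]:
        # Retrieve all stored exchanges for a given session.
        return self.conn.execute(
            "SELECT user_input, llm_response FROM memories "
            "WHERE session_id = ?",
            (session_id,),
        ).fetchall()


mem = LongTermMemory("memory.db")
mem.save("user-42", "My name is Ada.", "Nice to meet you, Ada!")
# In a later session, the agent can recall this exchange by session ID.
```

Swapping SQLite for Redis or a text file only changes the storage layer; the save/recall interface stays the same. Remember the note above: encrypt and access-control any sensitive data you persist.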

What’s Next

In the following sections, we’ll walk through configuring and using both short-term and long-term memory in LangChain, complete with code examples and best practices.