LangChain enhances stateless LLMs by introducing two memory modules, short-term and long-term, so your applications can remember past interactions. By default, a large language model treats each prompt independently and forgets previous exchanges. LangChain's memory abstractions address this limitation, enabling more dynamic, context-aware agents.
Memory Types
| Memory Type | Description | Storage Examples |
|---|---|---|
| Short-term | Tracks conversation history during a single session by appending each exchange. | In-memory buffer |
| Long-term | Persists select interactions across sessions for later recall. | SQLite, Redis, text files |

Short-term Memory
Short-term memory keeps track of user inputs and LLM responses only while the session is active. Each exchange is appended to an in-memory buffer, preserving conversational context within a single runtime. Use short-term memory for interactive applications such as chatbots or live Q&A, where context only matters during the current session.
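The append-to-a-buffer pattern above can be pictured with a minimal, stdlib-only sketch. The `ShortTermMemory` class and its method names here are hypothetical illustrations of the concept, not LangChain's actual API:

```python
# Illustrative sketch of short-term (in-session) memory.
# Hypothetical class -- not LangChain's API.
class ShortTermMemory:
    def __init__(self):
        self.buffer = []  # in-memory list of (role, text) pairs; lost when the process exits

    def add_exchange(self, user_input, llm_response):
        # Append each user/LLM exchange to the session buffer
        self.buffer.append(("user", user_input))
        self.buffer.append(("assistant", llm_response))

    def as_context(self):
        # Render the accumulated history as a prompt prefix for the next LLM call
        return "\n".join(f"{role}: {text}" for role, text in self.buffer)


memory = ShortTermMemory()
memory.add_exchange("What is LangChain?", "A framework for building LLM apps.")
memory.add_exchange("Does it support memory?", "Yes, short-term and long-term.")
print(memory.as_context())
```

Because the buffer lives only in process memory, the history disappears when the session ends, which is exactly the behavior described above.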
Long-term Memory
Long-term memory saves important conversation snippets to an external store, allowing your agent to recall them across sessions. You can back this with any supported database or file system, such as SQLite, Redis, or a plain text file. When storing sensitive data, implement encryption and proper access controls to protect user privacy.
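Since SQLite is one of the stores mentioned above, persistence can be sketched with Python's built-in `sqlite3` module. The `LongTermMemory` class and its schema are hypothetical illustrations, not LangChain's actual API:

```python
import sqlite3

# Illustrative sketch of long-term memory backed by SQLite.
# Hypothetical class and schema -- not LangChain's API.
class LongTermMemory:
    def __init__(self, path=":memory:"):
        # Pass a file path (e.g. "memory.db") to persist across sessions;
        # ":memory:" is used here so the example is self-contained.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memories (session TEXT, role TEXT, text TEXT)"
        )

    def save(self, session, role, text):
        # Persist a single snippet, keyed by session id
        self.conn.execute(
            "INSERT INTO memories VALUES (?, ?, ?)", (session, role, text)
        )
        self.conn.commit()

    def recall(self, session):
        # Retrieve every snippet stored for a given session
        return self.conn.execute(
            "SELECT role, text FROM memories WHERE session = ?", (session,)
        ).fetchall()


store = LongTermMemory()
store.save("session-1", "user", "My name is Ada.")
print(store.recall("session-1"))
```

With a real file path instead of `:memory:`, the rows survive process restarts, so a new session can recall what an earlier one stored. Note that this sketch stores plaintext; as the section warns, sensitive data would additionally need encryption and access controls.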