Agent Memory
Definition
The mechanism by which an agent persists and retrieves information across interactions — enabling it to learn from past conversations, maintain context over long tasks, and build knowledge over time. Memory systems vary in scope: working memory (current conversation context), short-term memory (recent interactions within a session), long-term memory (persistent across sessions), and episodic memory (specific past events). Implementation approaches include: conversation history (simplest), vector stores (semantic retrieval), structured databases (relational queries), and knowledge graphs (entity relationships).
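The scopes above can be sketched as a toy class. This is an illustrative sketch with hypothetical names (`AgentMemory`, `add_turn`, etc.), not any particular library's API: working memory is a bounded window of turns, long-term memory a persistent key-value store, and episodic memory a log of events.

```python
from collections import deque

class AgentMemory:
    """Toy sketch of memory scopes (hypothetical names, not a real library)."""

    def __init__(self, window_size=4):
        # Working memory: only the last N conversation turns stay in context.
        self.working = deque(maxlen=window_size)
        # Long-term memory: facts that persist across sessions.
        self.long_term = {}
        # Episodic memory: records of specific past events.
        self.episodes = []

    def add_turn(self, role, text):
        self.working.append((role, text))

    def remember_fact(self, key, value):
        self.long_term[key] = value

    def log_episode(self, description):
        self.episodes.append(description)

memory = AgentMemory(window_size=2)
memory.add_turn("user", "My name is Ada.")
memory.add_turn("agent", "Nice to meet you, Ada.")
memory.add_turn("user", "What's the weather?")
# The window now holds only the last 2 turns; the introduction has fallen out.
memory.remember_fact("user_name", "Ada")  # persists regardless of the window
```

The point of the split: information that must outlive the context window (the user's name) is promoted from working memory into a longer-lived store rather than relying on it staying in the window.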
Builder Context
The right memory architecture depends on what the agent needs to remember and for how long. For most agents, start with: (1) a sliding conversation window for working memory (last N turns), (2) a vector store for long-term semantic memory (past interactions, learned facts), and (3) a structured store for user preferences and state. The most common memory failure: storing everything without relevance filtering. An agent that retrieves irrelevant memories performs worse than one with no memory at all. Implement memory with a write filter (is this worth remembering?) and a read filter (is this relevant right now?).
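The write-filter/read-filter idea can be sketched in a few lines. This is a minimal illustration under stated assumptions: the importance score is supplied by the caller, and relevance is approximated by word overlap rather than the vector-store similarity a production system would use. All names (`FilteredMemory`, `write_threshold`, `read_threshold`) are hypothetical.

```python
class FilteredMemory:
    """Memory with a write filter (worth remembering?) and a read filter
    (relevant right now?). Scoring here is naive word overlap, standing in
    for embedding similarity in a real vector store."""

    def __init__(self, write_threshold=2, read_threshold=2):
        self.store = []
        self.write_threshold = write_threshold  # minimum importance to persist
        self.read_threshold = read_threshold    # minimum relevance to retrieve

    def write(self, text, importance):
        # Write filter: drop low-importance items instead of storing everything.
        if importance >= self.write_threshold:
            self.store.append(text)

    def read(self, query, top_k=2):
        # Read filter: score stored items against the query, keep only those
        # above the relevance threshold, best-first.
        query_words = set(query.lower().split())
        scored = []
        for text in self.store:
            overlap = len(query_words & set(text.lower().split()))
            if overlap >= self.read_threshold:
                scored.append((overlap, text))
        scored.sort(reverse=True)
        return [text for _, text in scored[:top_k]]

mem = FilteredMemory()
mem.write("user prefers dark mode", importance=3)
mem.write("user said hello", importance=1)        # rejected by the write filter
mem.write("user timezone is UTC+2", importance=3)
mem.read("what display mode does the user prefer")
# Only the dark-mode preference clears the read filter; the timezone fact,
# though stored, is not relevant enough to this query.
```

Both filters address the failure mode described above: the write filter keeps noise out of the store, and the read filter keeps stored-but-irrelevant memories out of the prompt.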