Memory MCP Server

Anthropic · MCP
8.0

Anthropic's official Memory MCP Server gives Claude persistent cross-session memory using a local knowledge graph. Entities, relations, and observations are stored in a JSON file on disk.

stable memory updated 2026-01
install
npx -y @modelcontextprotocol/server-memory
npm: @modelcontextprotocol/server-memory
↗ GitHub
capabilities
  • Persistent cross-session memory via a local knowledge graph
  • Store and retrieve entities (people, projects, concepts)
  • Define relationships between entities
  • Add observations to entities over time
  • Search and query the knowledge graph
  • Survives Claude Desktop restarts
compatible with
Claude Desktop · Claude Code · VS Code · Cursor · Any MCP-compatible client

Claude has no memory between conversations by default. Every session starts from scratch. The Memory MCP Server solves this by maintaining a persistent knowledge graph on your local machine — Claude can add facts, relationships, and observations that survive session restarts.

This is Anthropic’s reference implementation of persistent memory for the MCP ecosystem. It’s designed for personal knowledge management rather than production-scale storage — it writes to a local JSON file, not a database.

What the Knowledge Graph Contains

The memory system is built around three concepts:

Entities — Named nodes in the graph. These can be people (“Alice”), projects (“Project Phoenix”), organizations (“Qwibit”), or any concept you want to track. Each entity has a type and a list of observations.

Relations — Directed connections between entities, stored in active voice: “Alice works at Qwibit”, “Project Phoenix uses NanoClaw”. Relations give the graph its structure.

Observations — Discrete facts attached to an entity: “Alice prefers dark mode”, “Project Phoenix ships on Fridays”, “Bob’s preferred language is TypeScript”. These accumulate over time as Claude learns more about the entity.
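The three concepts above map directly to the on-disk format: the memory file is newline-delimited JSON, one entity or relation record per line. A minimal Python sketch (record shapes approximated from the reference implementation; exact field names may differ):

```python
import json

# Each line of the memory file is one JSON record: an entity or a relation.
# (Shape approximated from the reference server; field names are assumptions.)
records = [
    {"type": "entity", "name": "Alice", "entityType": "person",
     "observations": ["prefers dark mode"]},
    {"type": "entity", "name": "Qwibit", "entityType": "organization",
     "observations": []},
    {"type": "relation", "from": "Alice", "to": "Qwibit",
     "relationType": "works at"},
]

# Serialized as newline-delimited JSON, one record per line.
memory_file = "\n".join(json.dumps(r) for r in records)

def search_entities(raw: str, query: str):
    """Plain string matching over entity names -- no embeddings involved."""
    entities = [r for r in (json.loads(line) for line in raw.splitlines())
                if r["type"] == "entity"]
    return [e for e in entities if query.lower() in e["name"].lower()]

matches = search_entities(memory_file, "alice")  # finds the Alice entity
```

Because the whole graph is one flat file, every read or write round-trips the entire structure, which is why this design stays simple but does not scale.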

When you ask Claude a question, it can search this graph and surface relevant context from previous conversations before generating a response.

Installation

Prerequisites: Node.js 18+

Claude Desktop configuration (claude_desktop_config.json):

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {
        "MEMORY_FILE_PATH": "/Users/yourname/.claude-memory/memory.json"
      }
    }
  }
}

If you skip MEMORY_FILE_PATH, the server stores memory in the default npm cache directory — which can get wiped on package updates. Set an explicit path to a permanent location.

Getting Claude to actually use memory requires prompting. Add something like this to your Claude.ai Project instructions or system prompt:

You have access to a persistent memory system via MCP tools.
Before responding, search memory for relevant context about this user and task.
After learning new important facts, store them as observations on relevant entities.

Without this, Claude will have access to the tools but won’t use them proactively.

What It’s Good For

Personal assistant context: Store your preferences, projects, and recurring people so Claude doesn’t ask the same questions repeatedly. After one session establishing context, subsequent sessions start with real situational awareness.

Project continuity: Store a software project's key decisions, architecture choices, and team members as entities with observations. Claude picks up context across sessions without you repeating it.

Research accumulation: Build a knowledge graph of a topic over multiple research sessions. Entities for concepts, relations between them, observations for specific findings.
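Seeding this kind of context comes down to a few tool calls. The reference server exposes tools such as `create_entities`, `create_relations`, and `add_observations`; the payload shapes below are a sketch, so check the server's published tool schema before relying on them:

```python
# Hypothetical argument payloads for the reference server's memory tools.
# Tool names come from the reference implementation; field shapes are assumptions.
create_entities_args = {
    "entities": [
        {"name": "Project Phoenix", "entityType": "project",
         "observations": ["ships on Fridays"]},
        {"name": "Bob", "entityType": "person",
         "observations": ["preferred language is TypeScript"]},
    ]
}

create_relations_args = {
    "relations": [
        # Relations are stored in active voice: "Bob works on Project Phoenix".
        {"from": "Bob", "to": "Project Phoenix", "relationType": "works on"}
    ]
}
```

In practice you rarely construct these by hand: Claude issues the calls itself once the system prompt tells it to store what it learns.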

Limitations

  • Local JSON file, not a database. There's no vector search, no semantic similarity — searches are plain string matching, so a query only finds what it literally names. On large graphs, search quality degrades.
  • No cloud sync. Memory lives on your machine. Switch machines and the memory doesn’t follow. Back up the JSON file manually.
  • No automatic memory. Claude doesn’t add to memory unless you configure the system prompt to instruct it to. Default behavior is tool access without automatic use.
  • No access controls. Any MCP-compatible client connecting to this server has full read/write access to your memory graph.
  • Performance at scale. Not designed for graphs with thousands of entities — it reads and writes the entire JSON on each operation.
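Since there is no cloud sync, backing up the memory file is on you. A minimal sketch of a timestamped backup helper (paths and naming are illustrative, not part of the server):

```python
import shutil
import time
from pathlib import Path

def backup_memory(memory_path: str, backup_dir: str) -> Path:
    """Copy the memory JSON file into backup_dir with a timestamp suffix."""
    src = Path(memory_path)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves file metadata
    return dest
```

Run it from cron or a login hook pointed at whatever `MEMORY_FILE_PATH` you configured.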

Alternatives

mem0 and Zep are hosted memory platforms with semantic search, auto-summarization, and multi-session context compression. Better for production agent systems. More complex to set up and not free.

NanoClaw’s built-in memory — if you’re running NanoClaw as your agent runtime, it maintains per-agent memory natively without a separate MCP server.

For most personal Claude Desktop setups, this reference implementation is sufficient. For production agent deployments, look at purpose-built memory platforms.

Our Take

The Memory MCP Server is a useful building block, not a complete solution. It works well for personal assistant workflows where you want Claude to remember a few hundred facts across sessions. The JSON-based storage and lack of semantic search are real constraints for anything larger.

What it does, it does reliably. The setup is straightforward, the cost is zero, and it’s the standard reference implementation — meaning better alternatives will integrate against the same tool interface as they mature.

Best for: Claude Desktop power users who want cross-session context without running additional infrastructure.

Skip if: You’re building a production agent system with thousands of facts — use mem0, Zep, or a dedicated vector database instead.

Rating: 8.0/10