🐚

Conch

Biological memory for AI agents.

Semantic search + decay. No API keys. Just a SQLite file.

$ cargo install conch

The Problem

Most AI agents use a flat memory.md file. It doesn't scale.

Loads everything into context

Every prompt gets the whole file. Tokens wasted on irrelevant history.

No semantic recall

grep finds keywords, not meaning. "Where does Jared work?" won't match "Jared is employed at Microsoft".

No decay

A fact from 6 months ago is weighted the same as one from today.

No deduplication

The same thing stored 10 times in slightly different words.

// memory.md after 6 months:
4,000 lines, loaded every prompt
// Conch after 6 months:
10,000 memories → 5 relevant ones returned per recall
📉

Biological Decay

Memories strengthen with use and fade over time — just like biological memory. Facts decay slowly (λ=0.02/day), episodes faster (λ=0.06/day). Anything below 0.01 strength is pruned forever.

Jared works at Microsoft (strength: 0.94)

Recalled 3 days ago — reinforced

Prefers dark mode (strength: 0.31)

Last recalled 40 days ago — fading

Had lunch at Chipotle (strength: 0.008)

Episode from 90 days ago — pruned ☠️
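The decay model above fits in a few lines. This sketch assumes simple exponential decay, s(t) = s₀·e^(−λ·Δt); the λ values and the 0.01 pruning threshold come from this page, while the function and constant names are illustrative, not Conch's actual API:

```rust
// Hypothetical sketch of Conch-style exponential decay.
const LAMBDA_FACT: f64 = 0.02;    // facts fade slowly (per day)
const LAMBDA_EPISODE: f64 = 0.06; // episodes fade faster (per day)
const PRUNE_THRESHOLD: f64 = 0.01;

fn decayed_strength(strength: f64, lambda: f64, days_since_recall: f64) -> f64 {
    strength * (-lambda * days_since_recall).exp()
}

fn main() {
    // A fact recalled 3 days ago barely fades...
    let fact = decayed_strength(1.0, LAMBDA_FACT, 3.0);
    // ...while an untouched 90-day-old episode falls below the threshold.
    let episode = decayed_strength(1.0, LAMBDA_EPISODE, 90.0);
    println!("fact: {fact:.3}");       // ~0.942
    println!("episode: {episode:.4}"); // ~0.0045 — pruned
}
```

Recalling a memory resets its clock (and can bump its strength), which is why a frequently used fact stays near 1.0 indefinitely.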

🔍

Semantic Search

Hybrid BM25 + vector recall fused via Reciprocal Rank Fusion. It finds meaning, not keywords. Ask a question in natural language and get relevant memories — even when the phrasing is completely different.

# Stored as a fact:
$ conch remember "Jared" "is employed at" "Microsoft"
# Recalled with different phrasing:
$ conch recall "where does Jared work?"
→ [fact] Jared is employed at Microsoft
score: 0.847 | strength: 0.94
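Reciprocal Rank Fusion itself is small enough to sketch: each memory scores 1/(k + rank) in every ranking it appears in, and the scores are summed. k = 60 is the conventional constant from the original RRF formulation; Conch's actual constant and internals aren't documented here:

```rust
use std::collections::HashMap;

// Illustrative RRF over two rankings (e.g. BM25 and vector search).
fn rrf(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for ranking in rankings {
        for (rank, doc) in ranking.iter().enumerate() {
            // enumerate() is 0-based; RRF uses 1-based ranks.
            *scores.entry(doc.to_string()).or_insert(0.0) += 1.0 / (k + (rank + 1) as f64);
        }
    }
    let mut fused: Vec<_> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap()); // highest first
    fused
}

fn main() {
    let bm25 = vec!["m1", "m2", "m3"];   // keyword ranking
    let vector = vec!["m1", "m4", "m2"]; // semantic ranking
    // m1 tops both lists, so it fuses highest; m2 appears in both and beats
    // the memories found by only one ranker.
    println!("{:?}", rrf(&[bm25, vector], 60.0));
}
```

The appeal of RRF is that it needs no score calibration between the two rankers — only ranks matter, so BM25's unbounded scores and cosine similarities in [−1, 1] fuse cleanly.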
🕸️

Graph Traversal

Spreading activation through shared subjects and objects. Recalling one memory surfaces related ones — building a web of associated knowledge, just like your brain does.

Jared→ works at →Microsoft
β”‚
Microsoft→ builds →M365 Copilot
β”‚
Jared→ works on →M365 Copilot

Asking about Jared surfaces Microsoft, which surfaces Copilot
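A toy version of that traversal: activation spreads from the queried entity across any (subject, predicate, object) triple that touches it, up to a hop limit. This is a sketch of the idea, not Conch's internal representation:

```rust
use std::collections::{HashSet, VecDeque};

// Hypothetical hop-limited spreading activation over fact triples.
fn spread(triples: &[(&str, &str, &str)], start: &str, hops: usize) -> HashSet<String> {
    // Seed with the start node so we never re-activate it.
    let mut activated: HashSet<String> = HashSet::from([start.to_string()]);
    let mut frontier = VecDeque::from([(start.to_string(), 0usize)]);
    while let Some((node, depth)) = frontier.pop_front() {
        if depth >= hops { continue; }
        for &(s, _p, o) in triples {
            // Activation flows both ways across a triple that touches the node.
            let neighbor = if s == node { Some(o) } else if o == node { Some(s) } else { None };
            if let Some(n) = neighbor {
                if activated.insert(n.to_string()) {
                    frontier.push_back((n.to_string(), depth + 1));
                }
            }
        }
    }
    activated.remove(start); // return only the *related* entities
    activated
}

fn main() {
    let triples = [
        ("Jared", "works at", "Microsoft"),
        ("Microsoft", "builds", "M365 Copilot"),
        ("Jared", "works on", "M365 Copilot"),
    ];
    // Two hops from Jared reach Microsoft and, through it, M365 Copilot.
    println!("{:?}", spread(&triples, "Jared", 2));
}
```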

🚫

Zero Infrastructure

One SQLite file. Local embeddings via FastEmbed — no API key, no internet, no config. Install and go. Your memories stay on your machine.

Conch

✅ SQLite
✅ Local embeddings
✅ No API keys
✅ Works offline
✅ Zero config

Others

❌ Redis / Postgres
❌ OpenAI API key
❌ Docker compose
❌ Cloud dependency
❌ YAML config files
🔌

MCP Support

Built-in Model Context Protocol server. Drop the config into any MCP-compatible client and your LLM gets direct access to remember and recall — no glue code needed.

{
  "mcpServers": {
    "conch": {
      "command": "conch-mcp",
      "env": {
        "CONCH_DB": "~/.conch/default.db"
      }
    }
  }
}

How It Works

Store a fact

$ conch remember "Jared" "works at" "Microsoft"

Store an episode

$ conch remember-episode "Deployed v2.0 to production"

Recall by meaning

$ conch recall "where does Jared work?"
→ [fact] Jared works at Microsoft (score: 0.847)

Run decay maintenance

$ conch decay
→ Decayed 847 memories, pruned 12 below threshold

Scoring formula

score = RRF(BM25, vector) × recency × access_weight × strength
Recency — 7-day half-life
Access — log-normalized 1.0–2.0×
Decay — facts λ=0.02/day, episodes λ=0.06/day
Death — pruned below 0.01 strength
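The formula reads as a product of independent multipliers. A minimal Rust sketch, where the half-life recency curve and the log-normalized access weight are assumed shapes consistent with the bullets above (Conch's exact curves aren't given here):

```rust
// Recency halves every 7 days (assumed curve shape).
fn recency(days_since_recall: f64) -> f64 {
    0.5_f64.powf(days_since_recall / 7.0)
}

// Access count log-normalized into the 1.0–2.0× range (assumed shape).
fn access_weight(access_count: u32, max_count: u32) -> f64 {
    1.0 + ((1 + access_count) as f64).ln() / ((1 + max_count) as f64).ln()
}

// score = RRF(BM25, vector) × recency × access_weight × strength
fn score(rrf: f64, days: f64, accesses: u32, max_accesses: u32, strength: f64) -> f64 {
    rrf * recency(days) * access_weight(accesses, max_accesses) * strength
}

fn main() {
    // A strong fact recalled 3 days ago vs. one untouched for 40 days:
    // the recency and strength terms dominate, not the raw retrieval score.
    let fresh = score(0.033, 3.0, 10, 100, 0.94);
    let stale = score(0.033, 40.0, 1, 100, 0.31);
    println!("fresh: {fresh:.4}, stale: {stale:.4}");
}
```

Because the terms multiply, any one of them going to zero buries the memory — a perfect semantic match that has decayed to near-zero strength still won't be returned.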

Get Started

$ cargo install conch

No Cargo? Check the installation guide for prebuilt binaries.

MCP Server Config

{
  "mcpServers": {
    "conch": {
      "command": "conch-mcp",
      "env": { "CONCH_DB": "~/.conch/default.db" }
    }
  }
}