Memory that belongs to your code, not your model
Long-term synthetic memory for developers who don't want to be locked in.
Store architectural decisions in your project. Your connected AI tools remember them—across models, IDEs, and teams.
After installing, ask your AI this question:
Your AI should confirm it can access log_decision() and search_decisions(), then summarize the current handoff context when native tools are mounted.
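As a rough illustration of that tool contract, here is a minimal in-memory sketch. Only the tool names log_decision() and search_decisions() come from the docs above; the record fields and the keyword matching are stand-ins for the real implementation, which you should not rely on:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionStore:
    """Toy stand-in for Continuity's memory tools (fields are illustrative)."""
    decisions: list = field(default_factory=list)

    def log_decision(self, title: str, rationale: str) -> dict:
        # Store the decision with a simple incrementing id.
        entry = {"id": len(self.decisions) + 1, "title": title, "rationale": rationale}
        self.decisions.append(entry)
        return entry

    def search_decisions(self, query: str) -> list:
        # Naive keyword match stands in for the real semantic search.
        q = query.lower()
        return [d for d in self.decisions
                if q in d["title"].lower() or q in d["rationale"].lower()]

store = DecisionStore()
store.log_decision("Use Postgres", "Better transaction support than MongoDB")
print(store.search_decisions("postgres")[0]["title"])  # → Use Postgres
```

If your AI can call the real tools, a logged decision should round-trip the same way: log it once, then find it by a related query in a later session.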
What to Expect Next
- Handoff context: Supported clients can surface repo-local handoff context automatically; degraded mode falls back to the checked-in Continuity files
- No manual re-explaining: Once the client is mounted, you can rely on Continuity to surface the relevant project context instead of pasting notes each time
- Persistent synthetic memory: Decisions you log today remain available to your AI tomorrow, next week, and next month
See the Math Behind Continuity
Don't just take our word for it. Read the formal proofs, analyses, and comparisons.
All documentation is open source and available for peer review.
Common Problems Continuity Solves
If you use AI coding assistants, you've probably experienced these frustrations
Why does Cursor keep forgetting my project structure?
Every time you start a new chat, Cursor loses context about your architecture, design patterns, and past decisions. You waste 15-30 minutes re-explaining how your codebase works.
How Continuity fixes this:
Automatically captures architectural decisions through git commits and file saves. Cursor (and other supported AI tools) can query this synthetic memory via the MCP protocol.
Claude/Cline loses context between sessions
AI coding assistants have temporary context windows that reset. Your project's history, conventions, and rationale disappear every time.
How Continuity fixes this:
Embedding-based retrieval keeps architectural decisions persistently accessible. 768-dimensional embeddings enable semantic search so relevant context is surfaced automatically.
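To make "embedding-based retrieval" concrete, here is the mechanics of semantic search in miniature. Continuity uses 768-dimensional embeddings; the 3-dimensional vectors and the query below are made up purely to show how cosine similarity picks the most relevant decision:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each logged decision gets an embedding vector (toy values here).
decisions = {
    "Use Postgres for transactions": [0.9, 0.1, 0.2],
    "Adopt React over Vue":          [0.1, 0.8, 0.3],
}
query_vec = [0.85, 0.15, 0.25]  # pretend embedding of "which database did we pick?"

best = max(decisions, key=lambda k: cosine(decisions[k], query_vec))
print(best)  # → Use Postgres for transactions
```

The point of the embedding step is that the query never has to contain the word "Postgres": vectors that point in similar directions score high, so a question about "the database" still retrieves the database decision.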
I'm tired of re-explaining my codebase architecture
Onboarding new AI tools or starting fresh conversations means repeating yourself constantly about project structure, naming conventions, and design choices.
How Continuity fixes this:
A triple-detection system (git commits + file saves + AI conversations) builds a living knowledge graph of your architecture. All connected AI tools access the same synthetic memory.
My AI coding assistant doesn't remember past conversations
Each coding session starts from zero. The AI doesn't learn from previous interactions or remember solutions you've already discussed.
How Continuity fixes this:
Stores every architectural decision with relationships and timestamps. AI tools see the evolution of your codebase—not just the current state.
Many developers lose hours every week to context re-entry.
What is Synthetic Memory?
Context windows are temporary buffers. Synthetic memory is permanent storage for AI.
The Problem: Context Windows Reset
"We're using PostgreSQL, not MongoDB."
"Remember, we chose React over Vue for this."
"Like I said yesterday, our auth uses JWT tokens."
Every new chat starts from zero. You re-explain the same architectural decisions repeatedly.
The Solution: Synthetic Memory
Synthetic memory is permanent storage that lives outside the context window. When you log a decision ("Use Postgres for better transaction support"), it's stored in your project folder. Any supported AI tool you connect via Continuity can access it persistently.
- Stored in .continuity/ as plain JSON
- Works across supported MCP clients like Claude Code, Cursor, Cline / Roo Code, GitHub Copilot, and Google Gemini
- Decision storage stays local-first in your repo
- Commit to git, share with your team
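Because the storage is plain JSON in your repo, a decision record is easy to inspect and diff. The sketch below shows what such a record might look like; the field names are assumptions for illustration, not Continuity's documented schema:

```python
import json

# Hypothetical shape of one decision record in .continuity/ (field
# names are illustrative, not the product's actual schema).
decision = {
    "id": "2024-06-01-postgres",
    "decision": "Use Postgres instead of MongoDB",
    "rationale": "Better transaction support",
    "timestamp": "2024-06-01T12:00:00Z",
    "related": ["2024-05-20-orm-choice"],
}

serialized = json.dumps(decision, indent=2)
restored = json.loads(serialized)
assert restored == decision  # plain JSON round-trips losslessly
```

Plain JSON is what makes the "commit to git, share with your team" bullet work: the memory diffs, merges, and reviews like any other file in the repo.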
How It Changes Your Workflow
Open any AI chat and it already knows why you picked Postgres, how your auth works, and which patterns you use. No more context-setting. No more re-explaining. Just start coding.
Start with a 14-day free trial — full Pro access, no credit card required. After trial: read/search stays available and logging is capped at 10 decisions (15 with email verification), or upgrade to Pro at $9/month, $89/year, or $199 lifetime.
Context windows are temporary RAM. Synthetic memory is permanent storage.
Why context windows aren't enough
Context windows are temporary buffers. You need permanent storage—synthetic memory.
Automated Decision Capture
Multiple detection methods work together to capture architectural decisions automatically. From file monitoring to AI conversation analysis, these layers ensure high capture rates without manual logging.
5 Detection Layers, 19 Detection Points
14 filesystem patterns, 5 MCP middleware types, git hooks, and AI conversation analysis—capturing decisions automatically so you don't have to.
- File monitoring: Watches 14 architectural file patterns (package.json, tsconfig, Docker, CI/CD) with git-aware diff calculation
- Git hooks: Pre-commit hooks detect architectural changes and prompt you to log a decision or add it to the debt tracker; commits are blocked if more than 5 decisions are unlogged
- MCP middleware: Intercepts AI tool calls to detect 5 decision patterns: research-based, continuity-informed, iterative, config, and dependency
- Conversation analysis: AI-powered extraction using Claude analyzes conversation logs for missed decisions, surfacing only those with a confidence score above 60%
- MCP reminders: Contextual reminders and accountability metrics shown to AI tools via the MCP protocol
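The "block the commit if more than 5 decisions are unlogged" rule can be sketched as a pre-commit check. Everything here beyond the threshold itself is an assumption: the debt-tracker path, file format, and function name are hypothetical, not Continuity's actual hook:

```python
import json
import tempfile
from pathlib import Path

# Threshold from the docs; the debt file location/format is assumed.
MAX_UNLOGGED = 5

def check_debt(debt_file: Path) -> int:
    """Pre-commit-style check: return 0 to allow the commit, 1 to block it."""
    if not debt_file.exists():
        return 0  # no debt tracker yet, nothing to enforce
    unlogged = json.loads(debt_file.read_text())
    if len(unlogged) > MAX_UNLOGGED:
        print(f"{len(unlogged)} unlogged decisions; log them before committing.")
        return 1
    return 0

# Demo with a throwaway debt file containing six unlogged decisions.
demo = Path(tempfile.mkdtemp()) / "debt.json"
demo.write_text(json.dumps([{"change": f"edit-{i}"} for i in range(6)]))
print(check_debt(demo))  # → 1 (commit would be blocked)
```

A real hook would exit with this return value from .git/hooks/pre-commit, so git aborts the commit on a nonzero status.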
How Synthetic Memory Works
Permanent storage that works across all your AI tools. Log once, remember forever.
Make a decision once, every AI tool remembers it forever. Context windows reset. Synthetic memory doesn't.