How Continuity Works
Install the extension and your AI tools have persistent memory. Zero configuration, zero friction — it just works.
Install — zero-click setup
Install from the VS Code Marketplace and open a project. Continuity auto-detects supported AI clients and configures MCP automatically where the client allows it. No manual JSON editing for the common setup path.
- Auto-detects Claude Desktop, Claude Code, Cursor, Cline, Roo Code, GitHub Copilot, and Gemini CLI
- Smart Node.js path resolution for nvm, fnm, volta, and Homebrew
- 14-day Pro trial starts immediately — all features unlocked, no credit card required
# Install Continuity from VS Code Marketplace
1. Open VS Code
2. Install "Continuity" extension
3. Open any project folder
# What happens automatically:
✅ Detects your AI tools (Cursor, Claude, Cline...)
✅ Auto-configures supported MCP clients when the client allows it
✅ Resolves Node.js path (nvm, fnm, volta, Homebrew)
✅ Ready in seconds
For supported clients, setup is zero-click. Others may still need a quick reload.
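For clients that can't be auto-configured, the manual path is usually a single small entry in the client's MCP settings file. The server name and command path below are illustrative placeholders, not Continuity's actual values:

```json
{
  "mcpServers": {
    "continuity": {
      "command": "node",
      "args": ["/path/to/continuity-mcp-server.js"]
    }
  }
}
```

Check your client's documentation for where this file lives; the top-level key and exact shape vary slightly between clients.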
# First launch — your project is instantly understood
Project Scanner found:
✅ Tech stack: Next.js 15, TypeScript, Tailwind CSS
✅ Database: Prisma + PostgreSQL
✅ Testing: Vitest + Playwright
✅ Deployment: Docker + GitHub Actions
✅ API style: REST with Zod validation
Git seeding found:
✅ 23 architectural decisions from recent commits
✅ Draft decisions created — review and confirm
.continuity/
├── decisions.json # All your decisions
├── SESSION_HANDOFF.md # Generated handoff context
└── embeddings.json # Semantic search (Pro)
Everything is local JSON. Commit to git, share with team.
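Because the store is plain JSON, any script or CI job can read it without the extension. A minimal sketch, assuming a hypothetical record shape (field names here are illustrative; the real schema may differ):

```typescript
// Hypothetical shape of one entry in .continuity/decisions.json.
// Field names are illustrative, not the extension's actual schema.
interface Decision {
  id: string;
  title: string;
  status: "draft" | "active" | "outdated" | "deprecated" | "superseded";
  rationale?: string;
  createdAt: string; // ISO-8601 timestamp
}

// Plain-JSON storage means parsing is trivial anywhere Node runs.
function parseDecisions(json: string): Decision[] {
  return JSON.parse(json) as Decision[];
}

// Example consumer: list only the decisions that are currently in force.
function activeDecisions(all: Decision[]): Decision[] {
  return all.filter((d) => d.status === "active");
}
```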
Your project is instantly understood
On first launch, the Project Scanner analyzes your codebase and creates draft decisions for your tech stack, dependencies, database, testing, deployment, API style, and project structure. Git seeding pulls decisions from your recent commit history.
- Project Scanner creates draft decisions automatically
- Git seeding pre-populates from recent history — sidebar is never empty
- Everything stored locally as plain JSON in .continuity/
- Commit to git and share with your team
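The scanner's stack detection can be sketched roughly like this. This is a simplification under assumed signals (the real Project Scanner reads far more than package.json, including lockfiles, configs, and CI definitions):

```typescript
// Illustrative sketch: infer a tech stack from package.json dependencies.
// The mapping below is an assumption for demonstration, not Continuity's
// actual detection logic.
function detectStack(pkg: {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}): string[] {
  // Merge runtime and dev dependencies; spreading undefined is a no-op.
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  const found: string[] = [];
  if (deps["next"]) found.push("Next.js");
  if (deps["typescript"]) found.push("TypeScript");
  if (deps["tailwindcss"]) found.push("Tailwind CSS");
  if (deps["@prisma/client"] || deps["prisma"]) found.push("Prisma");
  if (deps["vitest"]) found.push("Vitest");
  return found;
}
```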
Decisions are captured automatically
Five automated detection layers with 19 detection points work in the background. File changes, git commits, AI conversations, and MCP tool calls are all monitored. You code — Continuity remembers.
- File system monitoring — 14 architectural file patterns with git-aware diffs
- Git hooks — pre-commit audit blocks commits with more than 5 unlogged decisions; post-commit auto-logs stub decisions
- MCP middleware — detects 5 decision patterns from AI tool calls
- AI conversation analysis — extracts missed decisions at 60%+ confidence
- Tool availability depends on the client and the active MCP profile
# 5 detection layers capture decisions automatically
Layer 1 — File System Detection
Monitors 14 architectural file patterns
(package.json, tsconfig, Docker, CI/CD)
Git-aware diffs with 5-second debouncing
Layer 2 — Git Hook Integration
Pre-commit: blocks if >5 unlogged decisions
Post-commit: auto-logs stub decisions
Layer 3 — MCP Middleware
Intercepts AI tool calls
Detects 5 decision patterns automatically
Layer 4 — Conversation Analysis
AI-powered extraction with 60%+ confidence
Finds decisions you forgot to log
Layer 5 — Prompt + Context Handoff
Generated handoff files and repo-local context
Honest fallback when a client is in degraded mode
Result: 19 detection points working in the background.
You focus on coding. Continuity remembers everything.
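Layer 1's 5-second debouncing can be sketched as follows. This is a minimal illustration of the batching idea, with made-up names; it omits the git-aware diffing the real layer performs:

```typescript
// Sketch of debounced file-change detection with a 5-second quiet window,
// as Layer 1 describes. Names are illustrative, not the extension's API.
type Handler = (paths: string[]) => void;

function makeDebouncedWatcher(onFlush: Handler, delayMs = 5000) {
  let pending = new Set<string>();
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (path: string) => {
    pending.add(path); // coalesce rapid saves to the same files
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      onFlush([...pending]); // emit one batch per quiet period
      pending = new Set();
    }, delayMs);
  };
}
```

A burst of saves to package.json and tsconfig.json therefore produces one analysis pass, not one per keystroke.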
# Verify it works — ask your AI:
You: "Do you have access to Continuity? Which memory
tools or MCP profile are available here?"
AI: "Continuity is connected in this client.
I can see tools like:
• log_decision — save architectural choices
• search_decisions — semantic + keyword search
• get_quick_context — load project history
• update_session_notes — keep session context current
• read_session_notes — recover the active thread
Tool availability depends on the client and the
active MCP profile. If Continuity is not mounted
here, I should say so and fall back to repo-local
context instead of pretending the tools are loaded."
✅ Working. Your AI either reports connected tools or
honestly tells you it's in degraded mode.
If AI says "I don't have those tools yet":
→ Reload VS Code (Cmd+Shift+P → Reload Window)
→ Check Output panel (View → Output → Continuity)
→ Confirm the client supports workspace MCP
Verify it in 30 seconds
Ask your AI: "Do you have access to Continuity?" A healthy client should report the tools or profile it can see. If the MCP connection is degraded, it should say that and fall back to repo-local context instead of bluffing.
- AI confirms access to key memory tools and may report the active MCP profile
- If the client is degraded, it should say so and use repo-local Continuity files instead
- Decision lifecycle: active, draft, outdated, deprecated, or superseded
- If it doesn't work, reload VS Code or check the Output panel
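The five lifecycle statuses above can be modeled as a small state machine. The transition table here is an assumption for illustration; Continuity's actual rules may differ:

```typescript
type Status = "draft" | "active" | "outdated" | "deprecated" | "superseded";

// Hypothetical transition table: which statuses a decision may move to next.
// This is an illustrative guess, not the extension's documented behavior.
const transitions: Record<Status, Status[]> = {
  draft: ["active"], // confirm a scanner- or git-seeded draft
  active: ["outdated", "deprecated", "superseded"],
  outdated: ["active", "deprecated"], // refresh it or retire it
  deprecated: [], // terminal
  superseded: [], // terminal
};

function canTransition(from: Status, to: Status): boolean {
  return transitions[from].includes(to);
}
```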
Works with Your Favorite AI Tools
Cursor IDE
Stop Cursor from forgetting your architecture every session. Continuity can wire in persistent repo memory through MCP when the client exposes it.
- ✓ Auto-configured on supported installs
- ✓ Works in both chat and composer
- ✓ No .cursorrules needed — Continuity is better
Claude Desktop
Claude Desktop forgets past conversations. Continuity gives it synthetic memory across all sessions.
- ✓ Remembers decisions from last week
- ✓ Auto-configured on supported installs
- ✓ Works with all Claude models
Claude Code (CLI)
Terminal-based Claude with full Continuity integration. Perfect for developers who live in the command line.
- ✓ Auto-configured on supported installs
- ✓ Built-in MCP support
- ✓ Zero configuration required
Cline (VS Code)
Cline users get automatic context loading. No more manually pasting architectural notes into chat.
- ✓ Works in VS Code sidebar
- ✓ Shares same .continuity folder
- ✓ Multi-file edits remember context
Roo Code
Roo Code + Continuity = powerful pair programming with memory. Your decisions persist across refactoring sessions.
- ✓ MCP support built-in
- ✓ Perfect for large refactors
- ✓ Team decision sharing via git
GitHub Copilot
Enhance Copilot with architectural context. Your decisions guide suggestions automatically.
- ✓ Better code suggestions
- ✓ Respects your tech choices
- ✓ Works in VS Code chat panel
Gemini CLI
Google's Gemini CLI with full synthetic memory. Same decisions, different model.
- ✓ Auto-configured via MCP
- ✓ Shares memory with all other tools
- ✓ Switch models without losing context
One extension. Profile-based MCP tooling. Supported AI clients can share the same repo-local decision history through Continuity. That's the power of the Model Context Protocol when the client mounts it.
Ready to Keep Going?
Your 14-day Pro trial includes everything — unlimited decisions and the full feature set. After the trial, your decision history stays preserved, and you can keep limited read/search access plus capped logging or upgrade to Pro for unlimited use.
Pro Annual
$89/yr
- Unlimited decisions
- 3 device activations
- Semantic search
- Knowledge graph (1,500 nodes)
Best value for solo developers
Lifetime
$199
- All Pro features forever
- All future updates included
- One-time payment
- $16.58/year amortized
Pay once, use forever
Monthly
$9/mo
- All Pro features
- 3 device activations
- Cancel anytime
- No commitment
Flexible month-to-month