Introducing Open Context: Your AI Memory, Portable and Private
Every time you switch AI assistants, you lose everything — your preferences, your history, your context. Open Context fixes that with a 100% local, open-source solution that makes your AI memory truly yours.
Aviskaar Team
Aviskaar AI Research
TL;DR
Open Context is a free, MIT-licensed CLI + dashboard + MCP server that imports your ChatGPT history, analyzes it locally with Ollama, and generates portable memory files — preferences.md, memory.md, user-profile.md — that work with Claude, ChatGPT, and Gemini. Zero cloud. Zero API calls. Your data stays on your machine.
Why does AI feel so forgetful?
According to the Stack Overflow Developer Survey 2025, 84% of developers now use or plan to use AI tools — up from 76% the year before. Yet 46% actively distrust AI output, and 66% cite "solutions that are almost right, but not quite" as their biggest frustration. The models aren't getting worse. The problem is context.
Every conversation starts from zero. Claude doesn't know how you prefer to receive feedback. ChatGPT doesn't remember that you're building a Rust monorepo. Gemini has never heard of the team conventions you spent months explaining to another assistant. When you switch tools — or even start a new session — all of that accumulated understanding disappears.
This isn't a model quality problem. It's a memory architecture problem.
"Context engineering is the delicate art and science of filling the context window with just the right information for the next step." — Andrej Karpathy, via Simon Willison, 2025. The insight is simple but underappreciated: what the model knows at inference time matters as much as how the model was trained. Structured, portable context is the missing layer between AI tools and truly productive workflows.
What is Open Context?
Open Context is an open-source tool that solves the AI memory problem at the infrastructure level. It imports your full conversation history from ChatGPT, runs local AI analysis using Ollama, and produces a set of structured, portable context files that any AI assistant can consume immediately. The entire pipeline runs on your hardware — no external API calls, no cloud uploads, no subscriptions.
Open Context is a free, MIT-licensed CLI, dashboard, and MCP server that imports ChatGPT conversation history, analyzes it locally with Ollama, and generates three portable memory files — preferences.md, memory.md, and user-profile.md — compatible with Claude, ChatGPT, and Gemini. No API keys or cloud accounts required.
Import from ChatGPT today. Export to Claude, ChatGPT, or Gemini. Your memory follows you.
Ollama runs the analysis on your machine. No data leaves your hardware — ever.
Persistent memory for Claude via Model Context Protocol. Works with Claude Code and Claude Desktop.
Dashboard privacy toggle blurs personal data. JSON store lives at ~/.opencontext/.
How does it work?
The pipeline has three stages. Getting started takes one command.
# Clone and run (Docker recommended)
$ git clone https://github.com/aviskaar/open-context
$ cd open-context
$ docker-compose up
# Or add persistent memory to Claude via MCP
$ open-context mcp start
✓ Dashboard running at localhost:3000
✓ MCP server ready for Claude Code & Claude Desktop
Stage 1: Import
Export your conversation history from ChatGPT (Settings → Data Controls → Export Data). Open Context ingests the resulting conversations.json, parses complex conversation trees including attachments, images, and multi-turn threads, and converts everything into readable markdown files.
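The export's conversations.json stores each conversation as a tree of nodes rather than a flat transcript, which is why the parsing step matters. The sketch below shows one way to flatten such a tree into markdown; the field names follow ChatGPT's export format, but the function is an illustration, not Open Context's actual parser.

```typescript
// Illustrative sketch: flatten a ChatGPT-export conversation tree
// (the "mapping" object in conversations.json) into ordered markdown.
interface ExportNode {
  id: string;
  parent: string | null;
  children: string[];
  message?: { author: { role: string }; content: { parts: string[] } };
}

function conversationToMarkdown(mapping: Record<string, ExportNode>): string {
  // Start at the root node (no parent), then follow first-child links in order.
  let node = Object.values(mapping).find((n) => n.parent === null);
  const lines: string[] = [];
  while (node) {
    const msg = node.message;
    if (msg && msg.content.parts.some((p) => p.length > 0)) {
      lines.push(`**${msg.author.role}:** ${msg.content.parts.join("\n")}`);
    }
    node = node.children.length ? mapping[node.children[0]] : undefined;
  }
  return lines.join("\n\n");
}

// Tiny hypothetical export fragment for demonstration:
const sample: Record<string, ExportNode> = {
  root: { id: "root", parent: null, children: ["a"] },
  a: { id: "a", parent: "root", children: ["b"], message: { author: { role: "user" }, content: { parts: ["Hi"] } } },
  b: { id: "b", parent: "a", children: [], message: { author: { role: "assistant" }, content: { parts: ["Hello"] } } },
};
console.log(conversationToMarkdown(sample));
```

A real parser additionally has to handle branched edits (nodes with multiple children), attachments, and image references, which this sketch skips.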
Stage 2: Analyze
Ollama runs the analysis locally — no cloud inference, no API keys. The default model is gpt-oss:20b (13 GB). It reads your conversation patterns and generates three structured outputs:
preferences.md: Your communication style and AI interaction preferences, ready to paste into Claude system settings.
memory.md: Factual context about you — work background, current projects, expertise areas.
user-profile.md: Account metadata and key topics of focus, identified from conversation patterns.
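Mechanically, local analysis comes down to POST requests against Ollama's HTTP API on localhost:11434. The request shape and the /api/generate endpoint below come from Ollama itself; the prompt wording and helper function are hypothetical, not Open Context's actual code.

```typescript
// Build a request body for Ollama's local /api/generate endpoint.
// The prompt text here is an illustrative assumption.
interface OllamaRequest {
  model: string;
  prompt: string;
  stream: boolean;
}

function buildAnalysisRequest(model: string, conversations: string[]): OllamaRequest {
  return {
    model,
    prompt:
      "Summarize this user's communication preferences as markdown.\n\n" +
      conversations.join("\n---\n"),
    stream: false, // ask for one JSON response instead of a token stream
  };
}

// The actual call requires a running Ollama daemon:
// const res = await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(buildAnalysisRequest("gpt-oss:20b", chats)),
// });
```

Because the endpoint is plain HTTP on localhost, swapping the default model for a smaller one is just a different `model` string.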
Stage 3: Export
The generated files are formatted for your target AI. For Claude, they map directly to system prompt and memory files. For ChatGPT, they populate custom instructions. For Gemini, they form the system prompt. The MCP server goes further — it gives Claude a persistent, queryable memory store across all future sessions.
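For Claude Desktop, registering an MCP server means adding an entry to claude_desktop_config.json. The snippet below is a plausible sketch of that entry; the exact command and arguments are assumptions, so follow the configuration guide in the README for the project's actual values.

```json
{
  "mcpServers": {
    "open-context": {
      "command": "open-context",
      "args": ["mcp", "start"]
    }
  }
}
```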
How does MCP give Claude persistent, local memory?
The Model Context Protocol has reached serious ecosystem momentum — over 97 million monthly SDK downloads and more than 5,500 registered servers as of October 2025 (MCP Manager, 2025). OpenAI adopted it across ChatGPT desktop and the Agents SDK in March 2025. It's becoming the standard interface between AI assistants and external data sources.
Open Context's built-in MCP server uses this standard to give Claude something it doesn't have natively: long-term memory that persists across sessions, is stored locally, and can be searched, tagged, and updated at any time. Unlike custom API integrations, MCP tools are exposed to the model directly — which means Claude can save a note, recall a preference, or search your memory store in the middle of a conversation without any extra tooling on your end.
MCP Memory Tools
save_memory
Persist a note or context for future sessions
recall_memory
Search and retrieve stored context by query
tag_memory
Organize memories by project, topic, or date
list_memories
Browse everything Claude has saved locally
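Conceptually, all four tools wrap a small JSON store on disk. The sketch below is a simplified, hypothetical model of such a store: the class and method names are illustrative rather than Open Context's API, and it writes to a temp directory here instead of ~/.opencontext/.

```typescript
import { mkdtempSync, readFileSync, writeFileSync, existsSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

interface Memory { id: number; text: string; tags: string[]; savedAt: string }

// Hypothetical local JSON store backing save/recall/tag/list tools.
class MemoryStore {
  constructor(private file: string) {}
  private load(): Memory[] {
    return existsSync(this.file) ? JSON.parse(readFileSync(this.file, "utf8")) : [];
  }
  private persist(all: Memory[]) { writeFileSync(this.file, JSON.stringify(all, null, 2)); }

  saveMemory(text: string, tags: string[] = []): Memory {
    const all = this.load();
    const m = { id: all.length + 1, text, tags, savedAt: new Date().toISOString() };
    all.push(m);
    this.persist(all);
    return m;
  }
  recallMemory(query: string): Memory[] {
    // Simple substring search; a real store might rank or embed.
    return this.load().filter((m) => m.text.toLowerCase().includes(query.toLowerCase()));
  }
  tagMemory(id: number, tag: string): void {
    const all = this.load();
    const m = all.find((x) => x.id === id);
    if (m && !m.tags.includes(tag)) m.tags.push(tag);
    this.persist(all);
  }
  listMemories(): Memory[] { return this.load(); }
}

// Demo against a temp directory (the real store would live in ~/.opencontext/):
const store = new MemoryStore(join(mkdtempSync(join(tmpdir(), "octx-")), "memories.json"));
store.saveMemory("Prefers concise answers with code examples", ["preferences"]);
store.saveMemory("Currently building a Rust monorepo", ["projects"]);
console.log(store.recallMemory("rust").length); // → 1
```

Because everything bottoms out in a readable JSON file, the "inspect, back up, or delete at any time" guarantee falls out of the design rather than a policy.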
Why does 100% local processing matter for your data?
"Privacy-first" is a phrase every AI product uses. Open Context makes it structural rather than a policy. There are no external API calls in the analysis pipeline. No telemetry. No account required. Ollama runs entirely on your hardware, which means your conversations, your inferred preferences, and your memory files never touch a remote server.
The data store lives at ~/.opencontext/ — a plain JSON directory you can inspect, back up, or delete at any time. The dashboard includes a privacy toggle that blurs all personal data in the UI, useful for screen sharing or public demos.
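Since the store is ordinary files, ordinary shell tools are the management interface. For example (mkdir -p stands in here for the first run that would normally create the directory):

```shell
# Plain JSON on disk: inspect, back up, or delete it freely.
mkdir -p ~/.opencontext                                        # created on first run
ls ~/.opencontext                                              # inspect the raw files
tar czf /tmp/opencontext-backup.tgz -C "$HOME" .opencontext    # back it up
# rm -rf ~/.opencontext                                        # delete everything, no trace left
```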
This matters for a specific audience: developers who work with proprietary code, healthcare professionals, lawyers, and anyone whose conversations contain information they wouldn't want leaving their machine. The local-first design isn't a limitation — it's the point.
Open Context's local processing model means no telemetry, no remote inference, and no account required. Data lives in ~/.opencontext/ — a plain JSON directory you own entirely. According to the Stack Overflow Developer Survey 2025, 46% of developers actively distrust AI output, with data privacy a leading concern. Open Context is structurally designed for that 46%.
Where does Open Context fit in the AI ecosystem?
The AI assistant market has fragmented in the best possible way. Different tools are genuinely better at different tasks — Claude for reasoning and long-form writing, ChatGPT for broad general use, Gemini for Google Workspace integration. Power users don't pick one and stop there. They use all of them, and they're constantly rebuilding context from scratch.
Open Context is for that user. It's not a replacement for any AI assistant. It's the infrastructure layer between them — the place where your preferences, history, and working context live, independent of any single platform.
Open Context is the portable memory layer between AI assistants — not a replacement for Claude, ChatGPT, or Gemini, but the infrastructure that carries your preferences and history across all of them. With the Model Context Protocol now exceeding 97 million monthly SDK downloads and 5,500+ registered servers (MCP Manager, 2025), the standard for AI interoperability is already forming. Open Context is built for that world.
Our perspective
The AI tool fatigue developers are experiencing isn't about the models themselves. It's about the lack of a portable identity layer. The same way SSH keys and dotfiles follow you across machines, your AI context should follow you across assistants. Open Context is an early, practical implementation of that idea.
The project is MIT-licensed and built on a stack that experienced developers will find familiar: Node.js, TypeScript, Express, React + Vite, and Commander.js for the CLI. Contributions are welcome. Gemini import support is on the roadmap. If you're looking to go further — running entire organizational functions with AI or validating agent reliability before shipping — see our posts on Open Org and ARA.
Frequently Asked Questions
Do I need a paid OpenAI or Anthropic account to use Open Context?
No. Open Context uses Ollama to run models locally on your hardware — no API keys or paid subscriptions required. The import step uses your ChatGPT conversation export, which any free or paid ChatGPT account can generate.
Which local models are supported?
The default is gpt-oss:20b (13 GB). You can also use qwen2.5:32b (20 GB), llama3:70b (40 GB), or llama3:8b (5 GB, fastest, lowest memory). Any Ollama-compatible model works — pick based on your hardware.
Does it work with Claude Code or just Claude Desktop?
Both. The MCP server connects to either Claude Code or Claude Desktop via the standard MCP stdio transport. Run open-context mcp start and follow the configuration instructions in the README.
What happens to my data if I uninstall?
Everything is stored in ~/.opencontext/ — a plain directory on your machine. Delete that folder and Open Context leaves no trace. No cloud account to close, no data deletion request to submit.
Is Gemini import supported?
Not yet — it's on the roadmap. Currently you can import from ChatGPT and export to Claude, ChatGPT, or Gemini. Watch the GitHub repository for updates.
Getting started with Open Context
Open Context is live on GitHub and available now. The quickest path is Docker — clone the repo, run docker-compose up, and the dashboard is running at localhost:3000 within minutes.
If you want persistent memory for Claude specifically, start with open-context mcp start and follow the MCP configuration guide in the README. The context migration pipeline — import, analyze, export — takes about ten minutes to run end-to-end on a modern machine.
Try Open Context
Free, open source, and runs entirely on your hardware. Bring your AI memory with you — across assistants, across sessions, always stored locally.