Linggen is a local-first memory layer that gives AI persistent context across repos, docs, and time. It integrates with Cursor / Zed via MCP and keeps everything on-device.

The key difference is that it works across projects. While working on project A, I can ask: “How does project B send messages?” and have that context retrieved and applied, without manually opening or loading docs.

I built this because I kept re-explaining the same context to AI across multiple projects. Happy to answer any questions.
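For concreteness, here is a minimal sketch of what such a cross-project query looks like at the MCP level, written with the Python MCP SDK. The localhost URL, the port, and the `search_memory` tool name are assumptions for illustration, not Linggen's documented API:

```python
# Minimal sketch of a cross-project context query over MCP.
# Assumptions (not Linggen's documented API): the server listens on
# http://localhost:8123/sse and exposes a tool called "search_memory".
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Connect to the MCP server running on this machine; nothing here
    # touches the network beyond localhost.
    async with sse_client("http://localhost:8123/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Pull context indexed from another project.
            result = await session.call_tool(
                "search_memory",
                arguments={"query": "How does project B send messages?"},
            )
            print(result.content)


asyncio.run(main())
```

The point is that the retrieval round-trip never leaves the machine; the remote LLM only sees what the client chooses to include in its prompt.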
When using Claude Desktop, it connects to Linggen via a local MCP server on localhost, so indexing and memory stay on-device. The LLM can query that local context, but Linggen doesn’t push your data to the cloud.
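For anyone wondering how the Claude Desktop side is wired up: local MCP servers are registered under the `mcpServers` key in `claude_desktop_config.json`. The `linggen` command and `mcp` argument below are placeholders for however Linggen is actually launched:

```json
{
  "mcpServers": {
    "linggen": {
      "command": "linggen",
      "args": ["mcp"]
    }
  }
}
```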
Claude’s web UI doesn’t support local MCP servers today; if it ever does, connecting would just be a matter of pointing it at a localhost URL.
The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.
In particular, I don't know which parts of my data might get sent to Claude, so while I hope it's only a small fraction, anything could in principle be transmitted.