Deep code indexing MCP server for AI agents. 25 tools: hybrid FTS5 + embedding search, call graphs, git blame/hotspots, build system analysis. Multi-repo workspaces, GPU-accelerated semantic search, 10 languages via tree-sitter. Fully local, zero cloud dependencies.
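To make the "hybrid FTS5 + embedding search" phrase concrete, here is a minimal sketch of the two-stage pattern: SQLite FTS5 supplies cheap, high-recall keyword candidates, and embeddings rerank them by semantic similarity. The `chunks` table, the `embed()` stub, and the scoring are hypothetical stand-ins, not this project's actual schema or models.

```python
# Minimal hybrid-search sketch: FTS5 keyword recall, then embedding rerank.
# embed() is a hypothetical placeholder for a real local embedding model.
import sqlite3
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: deterministic pseudo-embedding; a real server would
    # call its embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE chunks USING fts5(path, body)")
con.executemany(
    "INSERT INTO chunks VALUES (?, ?)",
    [("src/auth.py", "def login(user): validate token and session"),
     ("src/db.py", "def connect(): open sqlite connection pool")],
)

def hybrid_search(query: str, k: int = 5):
    # Stage 1: FTS5 keyword match narrows the candidate set.
    rows = con.execute(
        "SELECT path, body FROM chunks WHERE chunks MATCH ? LIMIT 50",
        (query,),
    ).fetchall()
    # Stage 2: rerank candidates by cosine similarity to the query.
    q = embed(query)
    scored = [(float(q @ embed(body)), path) for path, body in rows]
    return sorted(scored, reverse=True)[:k]

print(hybrid_search("login token"))
```

The appeal of the split is that the expensive embedding comparisons only run over the small FTS5 candidate set, which is how such servers stay fast on large repos.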
MCP Server for persistent code indexing. Gives AI assistants (Claude, Gemini, Copilot, Cursor) instant access to your codebase. 50x less context than grep.
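For readers unfamiliar with what an "MCP server" exposes: the Model Context Protocol lets a server register tools that the assistant can invoke. A minimal sketch using the official `mcp` Python SDK's FastMCP helper; the `search_code` tool and its body are hypothetical, not this repo's actual interface.

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The search_code tool is a hypothetical placeholder for an index-backed
# implementation.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("code-index")

@mcp.tool()
def search_code(query: str, limit: int = 10) -> list[str]:
    """Return paths of indexed files matching the query."""
    # A real server would consult its persistent index here.
    return [f"stub result for {query!r}"][:limit]

if __name__ == "__main__":
    # Serves over stdio so clients like Claude Desktop can attach.
    mcp.run()
```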
Memory-aware context engine for AI coding agents — up to 91% fewer tokens, 17/18 rank-1 across 6 OSS projects. MCP-native, multi-repo, with persistent observations & decisions.
Python application that indexes code locally and runs a server over the indexed repos. Works with VoyageAI to power semantic search over large codebases, enabling AI-optimized code navigation. Supports FTS search and git log indexing, with experimental support for SCIP indexing.
Claude re-reads your code every session. Make it stop. Save 70%+ on tokens. Local MCP server with AST indexing, hybrid search, and cross-session memory.
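"AST indexing" in blurbs like this one usually means parsing source into a syntax tree and recording symbol definition sites so the agent can jump to them without re-reading files. A minimal sketch using Python's stdlib `ast` module; real servers typically use tree-sitter for multi-language support, and this index layout is illustrative only.

```python
# Minimal AST-indexing sketch with Python's stdlib ast module.
import ast

def index_symbols(source: str, path: str) -> dict[str, tuple[str, int]]:
    """Map each function/class name to its (file, line) definition site."""
    index = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            index[node.name] = (path, node.lineno)
    return index

src = "class Cache:\n    def get(self, key):\n        return None\n"
print(index_symbols(src, "cache.py"))
# {'Cache': ('cache.py', 1), 'get': ('cache.py', 2)}
```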
Offline-first coding agent for local LLMs (LM Studio + MCP). Project-aware context, memory, and filesystem tooling for real coding workflows directly in your codebase.
An AI-powered system for intelligent code search that moves beyond keywords to semantic understanding. It offers multi-dimensional search across files, classes/interfaces, and methods, each with its own optimized AI-generated embeddings. Get precise, context-aware results for natural-language queries, quickly and efficiently.
Pack 40+ files at 5 depth levels into any LLM context window. Keyword, semantic, and graph resolution. 100% recall at 1% of repo. Drop-in for any AI agent.
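"Graph resolution" at fixed depth levels, as described above, usually means expanding a seed file's neighborhood in the import/call graph before packing. A minimal sketch of depth-limited breadth-first expansion; the adjacency map is a hand-written stand-in for a real dependency graph.

```python
# Minimal graph-resolution sketch: BFS from seed files over a dependency
# graph, capped at max_depth hops.
from collections import deque

def resolve(graph: dict[str, list[str]], seeds: list[str], max_depth: int) -> set[str]:
    """Collect every file reachable from the seeds within max_depth hops."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

graph = {"app.py": ["db.py", "auth.py"], "auth.py": ["tokens.py"]}
print(resolve(graph, ["app.py"], max_depth=2))
# {'app.py', 'db.py', 'auth.py', 'tokens.py'}
```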