Your machine,
mapped.

AI tools navigate your work blind — no knowledge of your projects, no map of how files connect, no understanding of what anything means. Veld changes that. Index everything. Query anything. Your AI finally knows where to look.

you: "pull up everything from the auth project"
veld: ~/projects/auth/ · 18 files · handler, middleware, config · 3 linked docs · last active 4 days ago
Works with every AI tool
Claude Desktop
Claude Code
Cursor
Windsurf
VS Code Copilot
ChatGPT
Zed AI
Aider
Continue.dev
Codeium
OpenAI API
Any MCP Client
#02 — the problem

Your AI is
navigating
blind.

You've got projects, docs, code, and notes scattered across your machine. Your AI has none of that context. Every query starts from zero. Every session, you're the map.

  • 01
    No project awareness
    "Pull up the auth project" means nothing without a map. Your AI has no index of what projects exist, where they live, or what they contain. You paste paths manually. Every time.
  • 02
    No semantic map
    Files that belong together aren't linked. The RFC that spawned an implementation, the config that governs six modules, the note that explains a decision — your AI can't see any of it.
  • 03
    No understanding of intent
    A planning doc and the code it describes look identical to your AI. It can't distinguish purpose. Ask for "the architecture decision" and it returns files that contain those words — not the one you meant.
#03 — architecture
How it works

A cortex for
your machine.

Veld runs as a silent daemon, building a three-layer map of everything on your machine. Any AI tool queries the index in natural language and gets back instant, precise, structured context.

L2 — Context
Knowledge graph
Projects, topics, and entities that span multiple files. "The auth project" resolves instantly — not from a search, from a pre-built map. Typed edges connect every file to everything it relates to.
L1 — Chunk
Semantic index
Every paragraph, function, and code block has its own embedding vector. Queries match on meaning, not keywords. The right content surfaces even when the exact words don't match.
L0 — File
Intent + metadata
Every file tagged with what it is and why it exists. Planning doc, implementation, reference, config. Ask for the architecture decision and get the RFC — not the code it generated.
#04 — features
What changes

Stop searching.
Start knowing.

01
project lookup
Instant project resolution
Name a project in natural language. Veld resolves it — directory, files, linked docs, last active, what it connects to. No paths, no manual context.
// name it, get everything
get_project({ name: "auth system" })
// path · files · docs · graph
02
natural language
Query in plain English
Search by meaning, not filename. "The function that handles retries", "the planning doc for the DB layer", "everything we touched last week" — all resolved precisely.
// meaning-based search
search({ q: "error handling in the API layer", intent: "implementation" })
03
knowledge graph
See what connects
Every file linked to everything it relates to — imports, co-accessed files, semantic neighbors, docs that describe it. Full context around any file in one query.
// full context, one call
get_graph({ path: "./src/handler.rs", depth: 2 })
// typed subgraph returned
04
temporal context
Time-aware queries
"What were we building last Tuesday" returns files grouped by project, sorted by how long you worked on them. Context isn't just spatial — it's temporal.
// time is context too
query_timeline({ when: "last Tuesday", group_by: "project" })
// sorted by activity
05
privacy
Zero egress. Always.
Every model runs on-device. No outbound network — ever. SSH keys, API tokens, wallet seeds are blocked before they enter the pipeline. Your machine stays yours.
# blocked by default
paths = ["~/.ssh", "~/.aws", "*.pem", "*.wallet"]
env_files = "keys_only"
06
universal
One index. Every tool.
MCP server for Claude, Cursor, Windsurf. REST API for anything else. CLI for scripting. VS Code extension. Same index, accessible from everything you use.
// one config, everywhere
{
  "mcpServers": {
    "veld": { "command": "veld", "args": ["mcp"] }
  }
}
#05 — cli
Terminal-first

Fast from the
command line.

Every operation available as a CLI command. Natural language search, project lookup, graph traversal, daemon control. Pipe it, script it, hook it into anything.

$ veld search "error handling in payments"
$ veld project auth
$ veld graph ./src/handler.rs
$ veld summary ./
$ veld daemon status
$ veld search "error handling in the API layer"

  SCORE   INTENT      PATH

  0.96   impl        ~/projects/api/src/errors/handler.rs
         "fn handle_error(err: ApiError) -> Response..."
         modified 2 days ago · imports: types.rs, logger.rs

  0.89   reference   ~/projects/api/docs/error-codes.md
         "all 4xx responses include a structured body..."
         linked to: handler.rs, middleware.rs

  0.83   impl        ~/projects/api/src/middleware/retry.rs
         "const MAX_RETRIES: u32 = 3; fn should_retry..."
         co-accessed with: handler.rs · 6 sessions

  0.77   planning    ~/projects/api/docs/error-strategy.md
         "we decided on structured errors over panic..."
         modified 3 weeks ago · spawned: handler.rs

  4 results · 41ms · 87,342 files indexed
#06 — compatibility

One index.
Every tool.

Claude
Desktop, Code, Cowork — MCP
Cursor
MCP + REST, zero config
VS Code
Native extension + Copilot
Windsurf
MCP server, drop-in
Aider
REST API + shell hooks
Continue.dev
Context provider plugin
OpenAI API
Any client via REST
Any MCP Client
Open protocol, self-host
#07 — performance
By the numbers

Fast enough to
disappear.

Built in Rust on SQLite. No separate processes, no network stack, no config. Just a binary that runs and stays out of your way.

43ms
Semantic query latency
p95 across 100k files on Apple M-series. Hybrid keyword + vector in a single SQLite query.
<0.1%
CPU at idle
FSEvents-driven. Zero polling. The daemon costs nothing when nothing is changing.
~500MB
Index size for 100k files
Full metadata + chunk embeddings (384-dim, INT8 quantized). One SQLite file.
800ms
Debounce re-index time
60 autosaves = 1 index operation. Hot files indexed in under a second.
Capability comparison — Veld vs. MCP Filesystem vs. no index:
  • Natural language project lookup
  • Semantic search across all files
  • Cross-file knowledge graph
  • Intent detection per file
  • Temporal / timeline queries
  • Works with any MCP client
  • 100% local, zero egress
#08 — faq
Common questions

Everything you
want to know.

Does it slow down my machine? +
No. Veld uses FSEvents — Apple's kernel-level file system event stream. There's zero polling. At idle, CPU usage is under 0.1%. Indexing only kicks in when files actually change, and it throttles automatically when your battery is low or CPU is under load.
Is my data safe? Does anything leave my machine? +
Nothing leaves your machine. Ever. The embedding models run locally via ONNX Runtime. The REST API binds to 127.0.0.1 only. SSH keys, wallet seeds, .env values, and private keys are blocked at the watcher level — they never enter the indexing pipeline at all.
What file types does it index? +
Everything. Source code (with tree-sitter AST parsing for 40+ languages), Markdown, plain text, PDFs, JSON, TOML, YAML, and a fallback splitter for everything else. Images get basic metadata now — visual embeddings via CLIP are on the roadmap.
How big is the index? Will it fill my disk? +
Around 500MB for a 100,000 file machine. That includes full metadata, FTS5 keyword index, and 384-dim INT8 quantized chunk embeddings — all in a single SQLite file at ~/.veld/. Content-addressed deduplication means identical files are only indexed once.
What happens when I close my laptop or the machine sleeps? +
Veld registers for IOKit sleep/wake notifications. On sleep it checkpoints and pauses. On wake it runs a differential mtime scan to catch anything that changed — including hard shutdowns and external drives reconnecting. You never need to manually re-index.
How do I connect it to Claude / Cursor / my AI tool? +
Add veld to your MCP config — one JSON block, shown in the docs. For REST-based tools, Veld runs a local server at 127.0.0.1:9842. The VS Code extension installs in one click. That's it — no API keys, no accounts, no cloud setup.
Is it open source? Can I audit the code? +
Fully open source under MIT. The entire codebase is on GitHub — daemon, MCP server, REST API, CLI, everything. No telemetry, no phone-home, no licensing server. Fork it, audit it, run it on air-gapped machines.
#09 — get started

Your AI knows
where to look.

Open source. Runs locally. Built in Rust.
Star the repo and follow the build.