Search & recall

The vault has two ways to find things, with different tradeoffs.

Full-text search (FTS5)

Every conversation is indexed in SQLite's FTS5 module the moment it's written to the vault. The desktop app's search bar (top of the window, also reachable with ⌘K / Ctrl+K) hits this index.

Query syntax

FTS5 supports:

postgres rls             # any document with both terms (AND is implicit)
"row-level security"     # exact phrase
postgres OR mysql        # union
postgres NOT mysql       # exclude
NEAR(postgres rls, 10)   # tokens within 10 positions (FTS5 spells NEAR as a function, not infix)
plat*                    # prefix (matches "platform", "plating", etc.)
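These query forms map directly onto SQLite's MATCH operator. A minimal sketch, assuming a SQLite build with FTS5 enabled (the default in official Python builds); the `messages` table here is illustrative, not the vault's actual schema:

```python
import sqlite3

# Build a tiny in-memory FTS5 index and run the query forms above.
# Table and column names are invented for the demo.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE messages USING fts5(body)")
db.executemany(
    "INSERT INTO messages(body) VALUES (?)",
    [
        ("Enable row-level security in Postgres with RLS policies.",),
        ("MySQL replication notes for the platform team.",),
    ],
)

def search(query):
    rows = db.execute("SELECT body FROM messages WHERE messages MATCH ?", (query,))
    return [r[0] for r in rows]

search('postgres rls')            # both terms; AND is implicit
search('"row-level security"')    # exact phrase
search('NEAR(postgres rls, 10)')  # FTS5's function-form NEAR
search('plat*')                   # prefix query, matches "platform"
```

Note that FTS5 tokenizes on punctuation, which is why the phrase query matches "row-level security" even though the source text contains a hyphen.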

Filters stack on top of the query (each is a UI control, not a syntax token):

  • Platform — limit to chatgpt / claude / gemini / grok / kimi
  • Date range — last week, last month, custom
  • Message count — >20, >50, etc. (filter out brief throwaway threads)
  • Has attachments — only conversations with PDFs or images
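Under the hood, stacked filters plausibly become ordinary SQL predicates alongside the FTS5 MATCH. A hedged sketch with an invented two-table layout (conversation metadata plus an FTS index); the vault's real schema may differ:

```python
import sqlite3

# Invented demo schema: conversation metadata in one table, message
# bodies in an FTS5 index that references it.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE conversations(
        id INTEGER PRIMARY KEY, platform TEXT, message_count INTEGER);
    CREATE VIRTUAL TABLE messages_fts USING fts5(body, conversation_id UNINDEXED);
""")
db.execute("INSERT INTO conversations VALUES (1, 'claude', 64)")
db.execute("INSERT INTO conversations VALUES (2, 'chatgpt', 8)")
db.execute("INSERT INTO messages_fts VALUES ('postgres rls rollout', 1)")
db.execute("INSERT INTO messages_fts VALUES ('postgres basics', 2)")

hits = db.execute("""
    SELECT c.id
    FROM messages_fts
    JOIN conversations c ON c.id = messages_fts.conversation_id
    WHERE messages_fts MATCH ?     -- the search query
      AND c.platform = ?           -- Platform filter
      AND c.message_count > ?      -- Message count filter
""", ("postgres", "claude", 20)).fetchall()
# Only the long Claude conversation survives the filters.
```

Because the filters are plain predicates, they compose freely: any combination of platform, date, and message-count constraints narrows the same MATCH.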

Searches come back in under a second, even on archives of tens of thousands of conversations. The conversation reader opens with the matched terms highlighted in context.

When FTS is the right tool

  • You remember an exact phrase or a specific symbol/identifier
  • You want to scope by platform or date and grep
  • You want results fast and don't need fuzzy understanding

Smart Recall (semantic)

When you don't remember exact words but you remember what the conversation was about, Smart Recall is the right surface. It's a plain-English query bar that returns the most relevant passages with citations back to the source conversation and message.

Examples

What did I decide about auth last quarter?
Where did I land on the Postgres RLS rollout strategy?
Show me everything related to the Tauri migration.
What was that prompt I used for code review?

Smart Recall builds an embedding index over message-level chunks (not whole conversations) and ranks against your query. Each answer cites:

  • The source conversation file
  • The specific message offset within it
  • The platform it came from

Click a citation to open the conversation at that message.
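The ranking step can be sketched in a few lines: each message-level chunk carries an embedding plus its citation metadata, and the query embedding is compared by cosine similarity. The 3-dimensional vectors, file paths, and offsets below are stand-ins for a real model and real data:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

chunks = [
    # (conversation file, message offset, platform, embedding) -- all invented
    ("claude/2025-05-01-rls.md", 12, "claude", [0.9, 0.1, 0.0]),
    ("chatgpt/2025-04-18-auth.md", 3, "chatgpt", [0.2, 0.8, 0.1]),
]

def recall(query_vec, k=1):
    # Rank chunks by similarity and return the top-k citations.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[3]), reverse=True)
    return [(c[0], c[1], c[2]) for c in ranked[:k]]

# A query vector near the RLS chunk surfaces that chunk's citation.
recall([1.0, 0.0, 0.0])
```

The citation tuple travels with the chunk through ranking, which is what lets the UI jump straight to the source message.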

What runs where

  • The embedding model runs against whichever provider you've configured (BYOK — see Use the agent). With OpenAI, Anthropic, or OpenRouter as the provider, queries leave your machine for that vendor — same trust boundary as a normal API call you'd already make. With Ollama, everything runs on-device.
  • The embedding store is local: CozoDB, in ~/.kept/kg.db/. It never leaves your machine.
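With Ollama, the embedding call is a POST to a localhost endpoint. A hedged sketch that only builds the request (the endpoint is Ollama's documented embeddings API; the model name is a common choice, not necessarily what the app configures):

```python
import json
import urllib.request

def embed_request(text, model="nomic-embed-text"):
    # Construct (but do not send) an Ollama embeddings request.
    # Everything stays on localhost; no third-party vendor is involved.
    payload = json.dumps({"model": model, "prompt": text}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = embed_request("postgres rls rollout")
```

Swapping the provider changes only where this request goes and whose key signs it; the local CozoDB store that holds the resulting vectors is the same either way.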

When to use which

  • "Did I ever use the term idempotency in a Stripe context?" — FTS
  • "What did I figure out about webhook deduplication?" — Smart Recall
  • "Find the conversation from last Tuesday." — FTS + date filter
  • "What's the consensus across all my chats about JWT vs sessions?" — Smart Recall
  • "Which conversations had >50 messages and mention Postgres?" — FTS + filter
  • "Summarize what I learned about caching." — Smart Recall (or use Digests)

You can also pipe FTS results into Smart Recall: filter to a platform + date range, then ask a semantic question scoped to that subset.
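That pipeline amounts to a filter pass followed by a ranking pass. A toy sketch with invented data and 2-dimensional stand-in embeddings:

```python
import math

# Each chunk: (platform, date, text, embedding) -- all invented demo data.
chunks = [
    ("claude",  "2025-05-01", "rls rollout plan", [0.9, 0.1]),
    ("chatgpt", "2025-03-02", "rls basics",       [0.8, 0.2]),
    ("claude",  "2025-05-03", "tauri migration",  [0.1, 0.9]),
]

def fts_pass(platform, after):
    # Stand-in for the FTS/metadata filter: platform + date range.
    return [c for c in chunks if c[0] == platform and c[1] >= after]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def scoped_recall(query_vec, platform, after):
    # Rank only the filtered subset, not the whole vault.
    subset = fts_pass(platform, after)
    return max(subset, key=lambda c: cosine(query_vec, c[3]))[2]

scoped_recall([1.0, 0.0], "claude", "2025-04-01")  # -> "rls rollout plan"
```

Narrowing first keeps the semantic pass cheap and keeps answers grounded in the slice of history you actually meant.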

From the command line

ripgrep just works on the vault directly because it's plain markdown. Useful for one-offs:

rg -l 'postgres' ~/.kept/vault/
rg -l --type md '\bRLS\b' ~/.kept/vault/claude/

For semantic search outside the desktop app, the MCP server exposes a search tool that any MCP-aware client can call.