# code-search

Local, natural language search in your editor.

code-search provides an API and an accompanying VS Code extension for natural language search over your codebase.
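At its core, natural-language code search embeds both the query and the indexed code chunks, then ranks chunks by vector similarity. A minimal, stdlib-only sketch of that ranking step (the real backend uses an actual embedding model; the toy vectors and helper names below are purely illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec: list[float], index: dict[str, list[float]], top_k: int = 2) -> list[str]:
    """Rank indexed chunks by similarity to the query embedding."""
    scored = [(cosine_similarity(query_vec, vec), path) for path, vec in index.items()]
    scored.sort(reverse=True)
    return [path for _, path in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output
index = {
    "auth.py":  [0.9, 0.1, 0.0],
    "db.py":    [0.1, 0.9, 0.2],
    "utils.py": [0.2, 0.2, 0.9],
}
print(search([1.0, 0.0, 0.1], index))  # auth.py ranks first
```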
You need both the API and the VS Code extension (this repository):

- Extract the backend
- Create a venv
- Install backend dependencies (`pip install -r requirements.txt`)
- Start the server with `python main.py`
- Open the extracted frontend in VS Code
- Install dependencies (`npm ci`)
- Press F5 to start a debugging window
Note that the model will be downloaded after the extension is started in a workspace.
The VS Code extension automatically indexes your workspace once started. It will also automatically update the index on saves, renames, and deletions.
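Conceptually, keeping the index in sync amounts to applying save, rename, and delete events to a path-to-embedding map. A simplified, in-memory sketch (in reality the extension forwards these events to the backend; `embed` is a stand-in for the actual model call):

```python
def embed(text: str) -> list[float]:
    # Stand-in for the real embedding model: a toy vector from character codes.
    return [float(ord(c) % 7) for c in text[:3]]

class Index:
    def __init__(self) -> None:
        self.vectors: dict[str, list[float]] = {}

    def on_save(self, path: str, contents: str) -> None:
        # Re-embed the file so stale vectors never linger.
        self.vectors[path] = embed(contents)

    def on_rename(self, old: str, new: str) -> None:
        # Move the existing embedding instead of recomputing it.
        if old in self.vectors:
            self.vectors[new] = self.vectors.pop(old)

    def on_delete(self, path: str) -> None:
        self.vectors.pop(path, None)

idx = Index()
idx.on_save("a.py", "def f(): ...")
idx.on_rename("a.py", "b.py")
idx.on_delete("missing.py")  # deletions of unknown paths are ignored
print(sorted(idx.vectors))   # ['b.py']
```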
If you notice that the index has gotten out of sync, you can reset it by running "Reset indices and embeddings" from the command palette.
Initial indexing may be slow, especially on large codebases. You can adjust the included/excluded files in the VS Code settings.
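Include/exclude filtering can be thought of as glob matching over workspace file names. A stdlib sketch using `fnmatch` (the extension's actual setting names and matching rules may differ; these patterns are just examples):

```python
from fnmatch import fnmatch

def should_index(path: str, include: list[str], exclude: list[str]) -> bool:
    """Index a file only if it matches an include pattern and no exclude pattern."""
    included = any(fnmatch(path, pat) for pat in include)
    excluded = any(fnmatch(path, pat) for pat in exclude)
    return included and not excluded

include = ["*.py", "*.ts"]
exclude = ["*test*"]
print(should_index("src_main.py", include, exclude))   # True
print(should_index("main_test.py", include, exclude))  # False
```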
The ONNX Runtime is compatible with multiple NPU backends (execution providers); however, these have not yet been implemented.
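When NPU/GPU support lands, the backend would presumably pick the best available ONNX Runtime execution provider and fall back to CPU. A sketch of that selection logic (the provider names are real ONNX Runtime identifiers, but how this project will wire them up is an assumption):

```python
# Preference order: NPU-style providers first, then GPU, then CPU fallback.
PREFERRED = [
    "QNNExecutionProvider",   # Qualcomm NPUs
    "CUDAExecutionProvider",  # NVIDIA GPUs
    "CPUExecutionProvider",   # always available
]

def pick_providers(available: list[str]) -> list[str]:
    """Return providers to pass to onnxruntime.InferenceSession, best first."""
    chosen = [p for p in PREFERRED if p in available]
    return chosen or ["CPUExecutionProvider"]

# In a real backend, `available` would come from
# onnxruntime.get_available_providers().
print(pick_providers(["CPUExecutionProvider", "CUDAExecutionProvider"]))
# ['CUDAExecutionProvider', 'CPUExecutionProvider']
```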
Planned work:
- Package frontend/backend and upload to the VS Code marketplace
- NPU/GPU backends
- Add more models (currently only supports the Jina model)
- Improve UX of search sidebar
- Re-implement caching for newly saved files
- Consider `transformers.js` to remove the need for a backend (currently facing VS Code's RAM limit on extensions)
- Implement the `findFiles2` API to share VS Code's built-in file inclusion/exclusion policies