- Read the documentation to see what SubQuery AI can do
- Install dependencies
- `manifest.ts` - This file defines key configuration options for your app. Note: This is converted to JSON when publishing.
- `project.ts` - This is where the code for your app lives. It registers any tools and your system prompt.
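To picture what "registers any tools and your system prompt" means, here is a rough sketch. This is illustrative only: the `AppSketch` interface and its field names are stand-ins invented for this example, not the framework's actual API — see the generated `project.ts` for the real shape.

```typescript
// Illustrative only: this interface is a stand-in invented for the sketch,
// not the real SubQuery AI framework API.
interface AppSketch {
  systemPrompt: string;
  tools: Array<{ name: string; run: (input: string) => string }>;
}

// Conceptually, project.ts registers the system prompt and any tools
// the model is allowed to call.
const app: AppSketch = {
  systemPrompt: "You are a helpful assistant for SubQuery documentation.",
  tools: [
    {
      name: "echo", // hypothetical tool for demonstration
      run: (input) => `You said: ${input}`,
    },
  ],
};

console.log(app.tools[0].run("hello")); // → You said: hello
```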
There are two ways to run your app locally. Both require access to an Ollama RPC; if you have a sufficiently powerful computer, you can run Ollama locally.
To start your app:

```shell
subql-ai -p ./manifest.ts
```
To chat with your app using a CLI, run the following in another terminal:

```shell
subql-ai repl
```
To run your project in Docker, a `docker-compose.yml` file is provided. This will start your app as well as a simple chat web UI.

To start everything:

```shell
docker compose up
```
To use the web UI, head to http://localhost:8080 and create a new chat. From the list of models, select `subql-ai` and begin chatting.
Once your app is ready, you can publish it to IPFS to distribute it. This bundles up the code and any vector data and uploads it to IPFS. The app can then be run directly from IPFS:

```shell
subql-ai publish -p ./manifest.ts
```
Because the docs change regularly, you can rebuild the vector database with the following command. Just update the input path to point to a locally checked-out copy of https://github.com/subquery/documentation:

```shell
subql-ai embed-mdx -i /path/to/subql/documentation -o db -t subql-docs --overwrite=true
```