Clone the repo:
git clone https://github.com/pratapyash/local-rag-qa-engine
cd local-rag-qa-engine
Install the dependencies (requires Poetry):
poetry install
Fetch your LLM (llama3.2:1b by default):
ollama pull llama3.2:1b
Run the Ollama server:
ollama serve
Start RagBase:
poetry run streamlit run app.py
Extracts text from PDF documents and creates chunks (using semantic and character splitters) that are stored in a vector database
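A character splitter produces overlapping chunks so context is not lost at chunk boundaries. A minimal, dependency-free sketch of that idea (the app itself uses library splitters; `split_text` and its parameters are illustrative, not the app's API):

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 50):
    """Split text into overlapping character chunks."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        # Stop once the current chunk reaches the end of the text.
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk shares its last `overlap` characters with the start of the next chunk; the chunks (or rather their embeddings) are what get stored in the vector database.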
Given a query, searches for similar documents, reranks the results, and applies an LLM chain filter before returning the response.
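The retrieval step can be pictured as: score every stored chunk against the query, rerank by score, and drop weak matches before they reach the LLM. A self-contained sketch with cosine similarity standing in for the vector search, and a score threshold standing in for the LLM chain filter (function names and parameters here are illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, docs, k=2, min_score=0.1):
    # First pass: similarity search over (text, vector) pairs.
    scored = [(cosine(query_vec, vec), text) for text, vec in docs]
    # Rerank by score, keep the top k, and filter out weak matches
    # (a crude stand-in for the LLM chain filter).
    scored.sort(reverse=True)
    return [text for score, text in scored[:k] if score >= min_score]
```

In the real pipeline the reranker and filter are model-based rather than a fixed threshold, but the shape of the flow is the same.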
Combines the LLM with the retriever to answer a given user question.
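Combining the retriever with the LLM amounts to stuffing the retrieved chunks into the prompt and asking the model to answer from that context only. A hypothetical sketch (`answer`, `retriever`, and `llm` are illustrative callables, not the app's actual interfaces):

```python
def answer(question, retriever, llm):
    # Retrieve supporting chunks and join them into a context block.
    context = "\n\n".join(retriever(question))
    # Ask the LLM to answer strictly from the retrieved context.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```

Here `llm` could be any callable that takes a prompt string and returns a completion, such as a thin wrapper around the local Ollama model pulled above.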