A secure document question-answering system leveraging local LLMs for private, efficient, and scalable knowledge retrieval

Local RAG-Based Secure Document QA Engine

Installation

Clone the repo:

git clone https://github.com/pratapyash/local-rag-qa-engine
cd local-rag-qa-engine

Install the dependencies (requires Poetry):

poetry install

Fetch your LLM (llama3.2:1b by default):

ollama pull llama3.2:1b

Run the Ollama server:

ollama serve

Start RagBase:

poetry run streamlit run app.py

Ingestor

Extracts text from PDF documents and splits it into chunks (using semantic and character splitters) that are stored in a vector database.
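As a rough illustration of the chunking step, here is a minimal character splitter with overlap in plain Python. This is a sketch only: the actual Ingestor uses semantic and character splitters (likely from a library such as LangChain) plus PDF extraction, and the function name and parameters below are hypothetical.

```python
def split_into_chunks(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks, with each chunk
    overlapping the previous one so context is not cut mid-thought.
    Illustrative stand-in for the project's real splitters."""
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

In a real ingestion pipeline, each chunk would then be embedded and written to the vector database together with its source metadata.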

Retriever

Given a query, searches for similar documents, reranks the results, and applies an LLM chain filter before returning the response.
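The search-then-rerank-then-filter flow can be sketched as follows. This is a simplified stand-in: the real Retriever works over vector embeddings and uses an LLM to filter candidates, whereas here a word-overlap score and a score threshold play both roles, and all names (`score`, `retrieve`, `min_score`) are illustrative.

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words present in the document.
    Stands in for vector similarity / reranker scores."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 3, min_score: float = 0.2) -> list[str]:
    """Rank documents by relevance, keep the top-k, then filter out
    weak matches (the stand-in for the LLM chain filter stage)."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    candidates = ranked[:top_k]
    return [d for d in candidates if score(query, d) >= min_score]
```

The key design point is the staged narrowing: a cheap similarity search produces candidates, reranking orders them, and a final filter discards results the model judges irrelevant.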

QA Chain

Combines the LLM with the retriever to answer a given user question.
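The composition is essentially: retrieve context, build a prompt, ask the model. A minimal sketch, assuming a retriever function and an LLM callable are injected (both names and the prompt template are hypothetical; the real chain talks to the local Ollama model):

```python
def answer(question: str, docs: list[str], retrieve_fn, llm_fn) -> str:
    """Answer a user question by stuffing retrieved context into a prompt.
    retrieve_fn(question, docs) -> list of relevant text chunks
    llm_fn(prompt) -> model completion (e.g. a call to a local Ollama model)
    """
    context = "\n".join(retrieve_fn(question, docs))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm_fn(prompt)
```

Keeping the retriever and LLM as injected callables makes the chain easy to test with stubs and to rewire when the default model (llama3.2:1b) is swapped out.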

Tech Stack

Ollama (serving llama3.2:1b locally), Streamlit (web UI), and Poetry (dependency management).
