This is a simple summarization tool for evaluating the Llama 3 model's performance through CLI commands.
- Clone the repository:
git clone https://github.com/pereira90-ai/llama3-cli-summarizor
cd llama3-cli-summarizor
- Install the conda environment from the `yaml` file:
conda env create -f environment.yaml -n summarizor
conda activate summarizor
- Install CUDA-enabled PyTorch in the conda environment:
pip uninstall torch -y
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
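To confirm that the CUDA-enabled build was installed correctly, a quick sanity check like the one below should report that a GPU is visible (this snippet is just a check, not part of the repository):

```python
# Sanity check: verify that the CUDA-enabled PyTorch build can see a GPU.
import torch

print("Torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```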
First of all, make sure you are in the project directory: `llama3-cli-summarizor`. Then run:
python download_model.py
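The download script fetches the Llama 3 weights for you. As a rough, hypothetical sketch of what such a step typically looks like (the `repo_id` and target directory below are assumptions, not taken from the project), it boils down to something like:

```python
# Hypothetical sketch of a weight-download step, not the repository's actual script.
# Assumes the checkpoint is pulled from the Hugging Face Hub; repo_id is a guess.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed checkpoint name
    local_dir="models/llama3",                      # assumed local target directory
)
```

Note that the official Meta Llama 3 checkpoints on the Hub are gated, so you may need to accept the license and provide a Hugging Face access token before any download can succeed.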
You can use a text or PDF file as input:
python app.py <your_pdf.txt> <output_file.txt> <max_token_len>
Sample usage:
python app.py meeting_summary.pdf out.txt 256
Here, `your_pdf.txt` is the path of the meeting log file to summarize, `output_file.txt` is the path where the resulting summary is written, and `max_token_len` is an optional parameter that controls the maximum length of the summary.
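If you need to summarize several logs in one go, the CLI can also be driven from a short script. Below is a minimal sketch that assumes the argument order shown above; the `logs/` and `summaries/` directories and the `256` token limit are just example values:

```python
# Batch-run the summarizer CLI over every .pdf and .txt file in a folder.
# Assumes app.py takes <input_file> <output_file> <max_token_len>, as documented above.
import subprocess
from pathlib import Path

input_dir = Path("logs")        # example input folder (assumption)
output_dir = Path("summaries")  # example output folder (assumption)
output_dir.mkdir(exist_ok=True)

for src in sorted(input_dir.glob("*")):
    if src.suffix.lower() not in {".pdf", ".txt"}:
        continue
    dst = output_dir / f"{src.stem}_summary.txt"
    subprocess.run(["python", "app.py", str(src), str(dst), "256"], check=True)
    print(f"Summarized {src} -> {dst}")
```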