diff --git a/demo_apps/README.md b/demo_apps/README.md
index bea21d682..104123a30 100644
--- a/demo_apps/README.md
+++ b/demo_apps/README.md
@@ -1,4 +1,4 @@
-# Llama 2 Demo Apps 
+# Llama 2 Demo Apps

 This folder contains a series of Llama 2-powered apps:
 * Quickstart Llama deployments and basic interactions with Llama
@@ -29,7 +29,7 @@ conda activate llama-demo-apps
 pip install jupyter
 cd
 git clone https://github.com/facebookresearch/llama-recipes
-cd llama-recipes/llama-demo-apps
+cd llama-recipes/demo-apps
 jupyter notebook
 ```

@@ -40,7 +40,7 @@ You can also upload the notebooks to Google Colab.
 The first three demo apps show:
 * how to run Llama2 locally on a Mac, in the Google Colab notebook, and in the cloud using Replicate;
 * how to use [LangChain](https://github.com/langchain-ai/langchain), an open-source framework for building LLM apps, to ask Llama general questions in different ways;
-* how to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and ask questions about it. This is the well known RAG method to let LLM such as Llama2 be able to answer questions about the data not publicly available when Llama2 was trained, or about your own data. RAG is one way to prevent LLM's hallucination. 
+* how to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and ask questions about it. This is the well-known RAG (Retrieval Augmented Generation) method, which lets an LLM such as Llama2 answer questions about data that was not publicly available when Llama2 was trained, or about your own data. RAG is one way to reduce LLM hallucination.
 * how to ask follow-up questions to Llama by sending previous questions and answers as the context along with the new question, hence performing a multi-turn chat or conversation with Llama.
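The multi-turn chat bullet above (sending previous questions and answers as context along with the new question) can be sketched in a few lines of Python. The prompt format here is illustrative only, not the exact template the notebooks use:

```python
# Minimal sketch of multi-turn chat: fold prior (question, answer) pairs
# into the prompt sent with each new question. The "User:/Assistant:"
# format is an assumption for illustration, not Llama's chat template.
def build_chat_prompt(history, new_question):
    """history: list of (question, answer) tuples from earlier turns."""
    turns = []
    for q, a in history:
        turns.append(f"User: {q}\nAssistant: {a}")
    # The new question goes last, with an open Assistant slot to complete.
    turns.append(f"User: {new_question}\nAssistant:")
    return "\n".join(turns)

history = [("Who released Llama 2?", "Meta.")]
print(build_chat_prompt(history, "When was it released?"))
```

Each call sends the whole conversation so far, which is what lets a stateless LLM answer follow-up questions coherently.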
 ### [Running Llama2 Locally on Mac](HelloLlamaLocal.ipynb)
@@ -56,7 +56,7 @@ python convert.py
 ### [Running Llama2 Hosted in the Cloud](HelloLlamaCloud.ipynb)
 The HelloLlama cloud version uses LangChain with Llama2 hosted in the cloud on [Replicate](https://replicate.com). The demo shows how to ask Llama general questions and follow-up questions, and how to use LangChain to ask Llama2 questions about **unstructured** data stored in a PDF.

-**Note on using Replicate** 
+**Note on using Replicate**
 To run some of the demo apps here, you'll need to first sign in to Replicate with your GitHub account, then create a free API token [here](https://replicate.com/account/api-tokens) that you can use for a while. After the free trial ends, you'll need to enter billing info to continue to use Llama2 hosted on Replicate - according to Replicate's [Run time and cost](https://replicate.com/meta/llama-2-13b-chat) for the Llama2-13b-chat model used in our demo apps, the model "costs $0.000725 per second. Predictions typically complete within 10 seconds." This means each call to the Llama2-13b-chat model costs less than $0.01 if the call completes within 10 seconds. If you want absolutely no costs, you can refer to the section "Running Llama2 Locally on Mac" above or the section "Running Llama2 in Google Colab" below.

 ### [Running Llama2 in Google Colab](https://colab.research.google.com/drive/1-uBXt4L-6HNS2D8Iny2DwUpVS4Ub7jnk?usp=sharing)
@@ -71,7 +71,7 @@ This tutorial shows how to use Llama 2 with [vLLM](https://github.com/vllm-project/vllm)
 This demo app uses Llama2 to return a text summary of a YouTube video. It shows how to retrieve the caption of a YouTube video and how to ask Llama to summarize the content in four different ways, from the simplest naive way that works for short text to more advanced methods using LangChain's map_reduce and refine chains to overcome the 4096-token limit on Llama's input size.
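The map_reduce idea mentioned above (getting around the 4096-token input limit) can be sketched without any LLM at all: split the long text into chunks, summarize each chunk ("map"), then summarize the combined chunk summaries ("reduce"). The `fake_llm` below is a stand-in for a real Llama call so the flow runs offline; all names here are illustrative, not LangChain's API:

```python
# Sketch of map_reduce summarization for text longer than the model's
# context window. A real implementation would call Llama per chunk;
# here a stub "LLM" just truncates its prompt so the flow is runnable.
def split_into_chunks(text, chunk_size):
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def summarize(text, llm):
    return llm(f"Summarize: {text}")

def map_reduce_summary(transcript, llm, chunk_size=3500):
    # Map: summarize each chunk independently (each fits the context window).
    chunk_summaries = [summarize(c, llm) for c in split_into_chunks(transcript, chunk_size)]
    # Reduce: summarize the concatenated chunk summaries.
    return summarize(" ".join(chunk_summaries), llm)

fake_llm = lambda prompt: prompt[:60]  # stub, NOT a real model call
print(map_reduce_summary("caption text " * 1000, fake_llm))
```

The refine chain differs in that it processes chunks sequentially, revising a running summary with each new chunk instead of summarizing chunks independently.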
 ## [NBA2023-24](StructuredLlama.ipynb): Ask Llama2 about Structured Data
-This demo app shows how to use LangChain and Llama2 to let users ask questions about **structured** data stored in a SQL DB. As the 2023-24 NBA season is around the corner, we use the NBA roster info saved in a SQLite DB to show you how to ask Llama2 questions about your favorite teams or players. 
+This demo app shows how to use LangChain and Llama2 to let users ask questions about **structured** data stored in a SQL DB. As the 2023-24 NBA season is around the corner, we use the NBA roster info saved in a SQLite DB to show you how to ask Llama2 questions about your favorite teams or players.

 ## [LiveData](LiveData.ipynb): Ask Llama2 about Live Data
 This demo app shows how to perform live data augmented generation tasks with Llama2 and [LlamaIndex](https://github.com/run-llama/llama_index), another leading open-source framework for building LLM apps: it uses the [You.com search API](https://documentation.you.com/quickstart) to get live search results and ask Llama2 about them.
@@ -106,4 +106,4 @@ Then enter your question, click Submit. You'll see in the notebook or a browser
 ![](llama2-gradio.png)

 ### [RAG Chatbot Example](RAG_Chatbot_example/RAG_Chatbot_Example.ipynb)
-A complete example of how to build a Llama 2 chatbot hosted on your browser that can answer questions based on your own data.
\ No newline at end of file
+A complete example of how to build a Llama 2 chatbot hosted in your browser that can answer questions based on your own data.
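The structured-data flow described in the NBA2023-24 section boils down to: the LLM translates a natural-language question into SQL, and the answer comes from running that SQL against the DB. The roster table below is a made-up miniature of the kind of SQLite data the notebook queries; the table and column names are assumptions for this sketch, and the SQL is hand-written here rather than generated by Llama:

```python
# Illustrative stand-in for the structured-data demo: a tiny in-memory
# SQLite roster table, plus the kind of SQL a text-to-SQL chain might
# generate for "Who plays for the Lakers?". Schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE roster (team TEXT, player TEXT, position TEXT)")
conn.executemany(
    "INSERT INTO roster VALUES (?, ?, ?)",
    [("Lakers", "LeBron James", "F"), ("Warriors", "Stephen Curry", "G")],
)

# In the demo, Llama2 would produce a query like this from the user's question.
rows = conn.execute("SELECT player FROM roster WHERE team = 'Lakers'").fetchall()
print(rows)  # [('LeBron James',)]
```

In the notebook, LangChain wires these two steps together so the user only ever sees the natural-language question and answer.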