diff --git a/3p-integrations/togetherai/README.md b/3p-integrations/togetherai/README.md
index d0e6373a4..296dc6c95 100644
--- a/3p-integrations/togetherai/README.md
+++ b/3p-integrations/togetherai/README.md
@@ -14,7 +14,7 @@ While the code examples are primarily written in Python/JS, the concepts can be
 | -------- | ----------- | ---- |
 | [MultiModal RAG with Nvidia Investor Slide Deck](https://github.com/meta-llama/llama-recipes/blob/main/recipes/3p_integrations/togetherai/multimodal_RAG_with_nvidia_investor_slide_deck.ipynb) | Multimodal RAG using Nvidia investor slides. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/multimodal_RAG_with_nvidia_investor_slide_deck.ipynb) [![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/youtubebadge.svg)](https://youtu.be/IluARWPYAUc?si=gG90hqpboQgNOAYG)|
 | [Llama Contextual RAG](./llama_contextual_RAG.ipynb) | Implementation of Contextual Retrieval using Llama models. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/llama_contextual_RAG.ipynb) |
-| [Llama PDF to podcast](./pdf_to_podcast_using_llama_on_together.ipynb) | Generate a podcast from PDF content using Llama. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.comgithub/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/pdf_to_podcast_using_llama_on_together.ipynb) |
+| [Llama PDF to podcast](./pdf_to_podcast_using_llama_on_together.ipynb) | Generate a podcast from PDF content using Llama. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/pdf_to_podcast_using_llama_on_together.ipynb) |
 | [Knowledge Graphs with Structured Outputs](./knowledge_graphs_with_structured_outputs.ipynb) | Get Llama to generate knowledge graphs. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/knowledge_graphs_with_structured_outputs.ipynb) |
 | [Structured Text Extraction from Images](./structured_text_extraction_from_images.ipynb) | Extract structured text from images. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/structured_text_extraction_from_images.ipynb) |
 | [Text RAG](./text_RAG_using_llama_on_together.ipynb) | Implement text-based Retrieval-Augmented Generation. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/text_RAG_using_llama_on_together.ipynb) |
diff --git a/end-to-end-use-cases/Multi-Modal-RAG/README.md b/end-to-end-use-cases/Multi-Modal-RAG/README.md
index cc4a0dcd5..c941ebf4b 100644
--- a/end-to-end-use-cases/Multi-Modal-RAG/README.md
+++ b/end-to-end-use-cases/Multi-Modal-RAG/README.md
@@ -13,7 +13,7 @@ This is a complete workshop on how to label images using the new Llama 3.2-Visio
 Before we start:
 
 1. Please grab your HF CLI Token from [here](https://huggingface.co/settings/tokens)
-2. 
Git clone [this dataset](https://huggingface.co/datasets/Sanyam/MM-Demo) inside the Multi-Modal-RAG folder: `git clone https://huggingface.co/datasets/Sanyam/MM-Demo`
+2. Git clone [this dataset](https://huggingface.co/datasets/Sanyam/MM-Demo) inside the Multi-Modal-RAG folder: `git clone https://huggingface.co/datasets/Sanyam/MM-Demo` (Remember to thank the original author by upvoting [Kaggle Dataset](https://www.kaggle.com/datasets/agrigorev/clothing-dataset-full))
 3. Make sure you grab a together.ai token [here](https://www.together.ai)
 
 ## Detailed Outline for running:
@@ -32,7 +32,7 @@ Here's the detailed outline:
 
 In this step we start with an unlabeled dataset and use the image captioning capability of the model to write a description of the image and categorize it.
 
-[Notebook for Step 1](./notebooks/Part_1_Data_Preperation.ipynb) and [Script for Step 1](./scripts/label_script.py)
+[Notebook for Step 1](./notebooks/Part_1_Data_Preparation.ipynb) and [Script for Step 1](./scripts/label_script.py)
 
 To run the script (remember to set n):
 ```
diff --git a/end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preperation.ipynb b/end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preparation.ipynb
similarity index 99%
rename from end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preperation.ipynb
rename to end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preparation.ipynb
index 2d346d6e2..80b22139a 100644
--- a/end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preperation.ipynb
+++ b/end-to-end-use-cases/Multi-Modal-RAG/notebooks/Part_1_Data_Preparation.ipynb
@@ -5,9 +5,9 @@
    "id": "01af3b74-b3b9-4c1f-b41d-2911e7f19ffe",
    "metadata": {},
    "source": [
-    "## Data Preperation Notebook\n",
+    "## Data Preparation Notebook\n",
     "\n",
-    "To make the experience consistent, we will use [this link]() for getting access to our dataset. To credit, thanks to the author [here]() for making it available. \n",
+    "To make the experience consistent, we will use [this link](https://huggingface.co/datasets/Sanyam/MM-Demo) for getting access to our dataset. To credit, thanks to the author [here](https://www.kaggle.com/datasets/agrigorev/clothing-dataset-full) for making it available. \n",
     "\n",
     "As thanks to original author-Please upvote the dataset version on Kaggle if you enjoy this course."
   ]
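The Step 1 hunk above describes labeling images by asking a Llama 3.2-Vision model, served via together.ai, to describe and categorize each picture. The real logic lives in `scripts/label_script.py` and the Part 1 notebook; the snippet below is only a minimal sketch of what such a captioning call might look like with the Together Python SDK. The model id, prompt, and image path are assumptions for illustration, not values taken from the repository.

```python
# Illustrative sketch only: caption one image with a Llama vision model on Together.
# Assumes `pip install together`, a TOGETHER_API_KEY environment variable, and an
# example model id / image path that may differ from what label_script.py uses.
import base64
import os

from together import Together

MODEL_ID = "meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo"  # assumed model id


def caption_image(image_path: str) -> str:
    """Ask the vision model for a description and a one-word category."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    client = Together(api_key=os.environ["TOGETHER_API_KEY"])
    response = client.chat.completions.create(
        model=MODEL_ID,
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe this clothing item and give it a one-word category."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical path inside the cloned MM-Demo dataset.
    print(caption_image("MM-Demo/sample.jpg"))
```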