From 785f213abdfb8f55b7b5a97b7cb5e0251e2ebdd7 Mon Sep 17 00:00:00 2001
From: Monireh2
Date: Thu, 31 Oct 2024 10:01:44 -0700
Subject: [PATCH 1/3] Update README.md

Updated quickstart README with an intro to the Llama 3.2 notebook
---
 recipes/quickstart/README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/recipes/quickstart/README.md b/recipes/quickstart/README.md
index 326cbdb29..04d378ee8 100644
--- a/recipes/quickstart/README.md
+++ b/recipes/quickstart/README.md
@@ -2,6 +2,7 @@
 
 If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
 
+* The [Build_with_Llama 3.2](./build_with_Llama_3_2.ipynb) notebook showcases a comprehensive walkthrough of the new capabilities of LLaMA 3.2 models, including multimodal use cases, function/tool calling, LLaMA Stack, and LLaMA on edge.
 * The [Running_Llama_Anywhere](./Running_Llama3_Anywhere/) notebooks demonstrate how to run Llama inference across Linux, Mac and Windows platforms using the appropriate tooling.
 * The [Prompt_Engineering_with_Llama](./Prompt_Engineering_with_Llama_3.ipynb) notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
 * The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integrations/vllm](../3p_integrations/vllm/) and [3p_integrations/tgi](../3p_integrations/tgi/) for hosting Llama on open-source model servers.
From 9591f21a41460dad0fddf01ca8d5b981b7ff5aaf Mon Sep 17 00:00:00 2001
From: Monireh2
Date: Thu, 31 Oct 2024 13:38:49 -0700
Subject: [PATCH 2/3] Update README.md

fixed Llama spelling
---
 recipes/quickstart/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes/quickstart/README.md b/recipes/quickstart/README.md
index 04d378ee8..8eadf42d4 100644
--- a/recipes/quickstart/README.md
+++ b/recipes/quickstart/README.md
@@ -2,7 +2,7 @@
 
 If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
 
-* The [Build_with_Llama 3.2](./build_with_Llama_3_2.ipynb) notebook showcases a comprehensive walkthrough of the new capabilities of LLaMA 3.2 models, including multimodal use cases, function/tool calling, LLaMA Stack, and LLaMA on edge.
+* The [Build_with_Llama 3.2](./build_with_Llama_3_2.ipynb) notebook showcases a comprehensive walkthrough of the new capabilities of LlaMA 3.2 models, including multimodal use cases, function/tool calling, LlaMA Stack, and LlaMA on edge.
 * The [Running_Llama_Anywhere](./Running_Llama3_Anywhere/) notebooks demonstrate how to run Llama inference across Linux, Mac and Windows platforms using the appropriate tooling.
 * The [Prompt_Engineering_with_Llama](./Prompt_Engineering_with_Llama_3.ipynb) notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
 * The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integrations/vllm](../3p_integrations/vllm/) and [3p_integrations/tgi](../3p_integrations/tgi/) for hosting Llama on open-source model servers.
From e814d7d672626b69653e88d066399305c1c1c0e5 Mon Sep 17 00:00:00 2001
From: Monireh2
Date: Thu, 31 Oct 2024 14:13:10 -0700
Subject: [PATCH 3/3] Update README.md

fixed spelling
---
 recipes/quickstart/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes/quickstart/README.md b/recipes/quickstart/README.md
index 8eadf42d4..a48c63436 100644
--- a/recipes/quickstart/README.md
+++ b/recipes/quickstart/README.md
@@ -2,7 +2,7 @@
 
 If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
 
-* The [Build_with_Llama 3.2](./build_with_Llama_3_2.ipynb) notebook showcases a comprehensive walkthrough of the new capabilities of LlaMA 3.2 models, including multimodal use cases, function/tool calling, LlaMA Stack, and LlaMA on edge.
+* The [Build_with_Llama 3.2](./build_with_Llama_3_2.ipynb) notebook showcases a comprehensive walkthrough of the new capabilities of Llama 3.2 models, including multimodal use cases, function/tool calling, Llama Stack, and Llama on edge.
 * The [Running_Llama_Anywhere](./Running_Llama3_Anywhere/) notebooks demonstrate how to run Llama inference across Linux, Mac and Windows platforms using the appropriate tooling.
 * The [Prompt_Engineering_with_Llama](./Prompt_Engineering_with_Llama_3.ipynb) notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
 * The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integrations/vllm](../3p_integrations/vllm/) and [3p_integrations/tgi](../3p_integrations/tgi/) for hosting Llama on open-source model servers.