From 7de46e40f96a3aaf7e000d80b19acccaf8ff3272 Mon Sep 17 00:00:00 2001
From: Bakunga Bronson <51344005+BakungaBronson@users.noreply.github.com>
Date: Sat, 25 Jan 2025 06:45:43 +0800
Subject: [PATCH] Fixed multiple typos (#878)

# What does this PR do?

In short, provide a summary of what this PR does and why. Usually, the
relevant context should be present in a linked issue.

- [ ] Addresses issue (#issue)

## Test Plan

Please describe:
- tests you ran to verify your changes with result summaries.
- provide instructions so it can be reproduced.

## Sources

Please link relevant resources if necessary.

## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the
      other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
      guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
---
 docs/source/building_applications/index.md | 2 +-
 docs/source/building_applications/tools.md | 2 +-
 docs/source/distributions/selection.md     | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/source/building_applications/index.md b/docs/source/building_applications/index.md
index 55485ddbc8..45dca5a1cc 100644
--- a/docs/source/building_applications/index.md
+++ b/docs/source/building_applications/index.md
@@ -4,7 +4,7 @@ Llama Stack provides all the building blocks needed to create sophisticated AI a
 
 The best way to get started is to look at this notebook which walks through the various APIs (from basic inference, to RAG agents) and how to use them.
 
-**Notebook**: [Building AI Applications](docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb)
+**Notebook**: [Building AI Applications](https://github.com/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb)
 
 Here are some key topics that will help you build effective agents:
 
diff --git a/docs/source/building_applications/tools.md b/docs/source/building_applications/tools.md
index 81b4ab68e4..c4229b64db 100644
--- a/docs/source/building_applications/tools.md
+++ b/docs/source/building_applications/tools.md
@@ -142,7 +142,7 @@ config = AgentConfig(
 )
 ```
 
-Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_custom_tools.py) for an example of how to use client provided tools.
+Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_client_tools.py) for an example of how to use client provided tools.
 
 ## Tool Structure
 
diff --git a/docs/source/distributions/selection.md b/docs/source/distributions/selection.md
index 08c3e985a5..aaaf246eec 100644
--- a/docs/source/distributions/selection.md
+++ b/docs/source/distributions/selection.md
@@ -16,7 +16,7 @@ Which templates / distributions to choose depends on the hardware you have for r
 - {dockerhub}`distribution-tgi` ([Guide](self_hosted_distro/tgi))
 - {dockerhub}`distribution-nvidia` ([Guide](self_hosted_distro/nvidia))
 
-- **Are you running on a "regular" desktop or laptop ?** We suggest using the ollama templte for quick prototyping and get started without having to worry about needing GPUs.
+- **Are you running on a "regular" desktop or laptop ?** We suggest using the ollama template for quick prototyping and get started without having to worry about needing GPUs.
 - {dockerhub}`distribution-ollama` ([link](self_hosted_distro/ollama))
 
 - **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest: