
Update ExecuTorch tutorial link and change Llama spelling
jakmro committed Nov 7, 2024
1 parent 3cc0159 commit 02b1ca8
Showing 2 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -17,7 +17,7 @@ https://docs.swmansion.com/react-native-executorch

## Examples 📲

- We currently host a single example demonstrating a chat app built with the latest **LLaMa 3.2 1B/3B** model. If you'd like to run it, navigate to `examples/llama` from the repository root and install the dependencies with:
+ We currently host a single example demonstrating a chat app built with the latest **Llama 3.2 1B/3B** model. If you'd like to run it, navigate to `examples/llama` from the repository root and install the dependencies with:

```bash
yarn
```
6 changes: 3 additions & 3 deletions docs/docs/guides/running-llms.md
@@ -3,11 +3,11 @@ title: Running LLMs
sidebar_position: 1
---

- React Native ExecuTorch supports LLaMa 3.2 models, including quantized versions. Before getting started, you’ll need to obtain the .pte binary—a serialized model—and the tokenizer. There are various ways to accomplish this:
+ React Native ExecuTorch supports Llama 3.2 models, including quantized versions. Before getting started, you’ll need to obtain the .pte binary—a serialized model—and the tokenizer. There are various ways to accomplish this:

  - For your convenience, it's best to use the models we've exported; you can get them from our Hugging Face repository. You can also use [constants](https://github.com/software-mansion/react-native-executorch/tree/main/src/modelUrls.ts) shipped with our library.
- - If you want to export the model yourself, you can use a Docker image that we've prepared. To see how it works, check out [exporting LLaMa](./exporting-llama.mdx)
- - Follow the official [tutorial](https://github.com/pytorch/executorch/blob/cbfdf78f8/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md) made by the ExecuTorch team to build the model and tokenizer yourself
+ - If you want to export the model yourself, you can use a Docker image that we've prepared. To see how it works, check out [exporting Llama](./exporting-llama.mdx)
+ - Follow the official [tutorial](https://github.com/pytorch/executorch/blob/fe20be98c/examples/demo-apps/android/LlamaDemo/docs/delegates/xnnpack_README.md) made by the ExecuTorch team to build the model and tokenizer yourself

## Initializing

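For context, the guide edited above continues into an "Initializing" section about loading the model and tokenizer in an app. A rough sketch of that initialization, assuming a `useLLM` hook and a `LLAMA3_2_1B` URL constant exported by the library (names taken from the repository's `src/modelUrls.ts` reference above; the exact API surface may differ):

```typescript
// Hedged sketch, not part of this commit: hook name, option names, and the
// LLAMA3_2_1B constant are assumptions based on the library's docs.
import { useLLM, LLAMA3_2_1B } from 'react-native-executorch';

function ChatScreen() {
  // The hook is expected to fetch the .pte binary and tokenizer on first
  // use, load them, and expose the model's state and generation methods.
  const llama = useLLM({
    modelSource: LLAMA3_2_1B,
    // Tokenizer bundled as an app asset; path is illustrative only.
    tokenizerSource: require('../assets/tokenizer.bin'),
  });

  // Once loading finishes, the returned object can be used for generation.
  return null;
}
```

Treat this purely as orientation for what the "Initializing" section covers; consult the linked docs site for the actual signatures.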
