Update runtime tutorial to promote Module APIs in the beginning. #6198

Status: Closed · wants to merge 1 commit
`docs/source/extension-module.md` (2 changes: 1 addition & 1 deletion)
````diff
@@ -240,6 +240,6 @@ if (auto* etdump = dynamic_cast<ETDumpGen*>(module.event_tracer())) {
 }
 ```

-# Conclusion
+## Conclusion

 The `Module` APIs provide a simplified interface for running ExecuTorch models in C++, closely resembling the experience of PyTorch's eager mode. By abstracting away the complexities of the lower-level runtime APIs, developers can focus on model execution without worrying about the underlying details.
````
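For readers skimming the diff, the simplified flow that the `Module` conclusion above describes looks roughly like the following sketch. It is not part of this PR: the model path, input shape, and use of `from_blob` are illustrative assumptions, and building it requires the ExecuTorch SDK and a real `.pte` file.

```cpp
// Sketch of the Module API flow, assuming the ExecuTorch extension headers
// are available. Path and input shape below are hypothetical.
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

using namespace ::executorch::extension;

int main() {
  // Construction is cheap; the program is loaded lazily on first use.
  Module module("/path/to/model.pte");

  // Wrap existing memory in a tensor without copying (shape is an assumption;
  // match your model's expected input in practice).
  float input[1 * 3 * 256 * 256] = {};
  auto tensor = from_blob(input, {1, 3, 256, 256});

  // Run the "forward" method; the Result<> carries outputs or an error.
  const auto result = module.forward(tensor);
  if (result.ok()) {
    const auto output = result->at(0).toTensor();
    (void)output; // consume the first output tensor
  }
  return 0;
}
```

Compare this with the multi-step low-level flow in the second file of this PR: the memory planning and method loading are handled inside `Module`.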
`docs/source/running-a-model-cpp-tutorial.md` (6 changes: 2 additions & 4 deletions)
```diff
@@ -2,8 +2,7 @@

 **Author:** [Jacob Szwejbka](https://github.com/JacobSzwejbka)

-In this tutorial, we will cover the APIs to load an ExecuTorch model,
-prepare the MemoryManager, set inputs, execute the model, and retrieve outputs.
+In this tutorial, we will cover how to run an ExecuTorch model in C++ using the more detailed, lower-level APIs: prepare the `MemoryManager`, set inputs, execute the model, and retrieve outputs. However, if you’re looking for a simpler interface that works out of the box, consider trying the [Module Extension Tutorial](extension-module.md).

 For a high level overview of the ExecuTorch Runtime please see [Runtime Overview](runtime-overview.md), and for more in-depth documentation on
 each API please see the [Runtime API Reference](executorch-runtime-api-reference.rst).
```
```diff
@@ -153,5 +152,4 @@ assert(output.isTensor());

 ## Conclusion

-In this tutorial, we went over the APIs and steps required to load and perform an inference with an ExecuTorch model in C++.
-Also, check out the [Simplified Runtime APIs Tutorial](extension-module.md).
+This tutorial demonstrated how to run an ExecuTorch model using low-level runtime APIs, which offer granular control over memory management and execution. However, for most use cases, we recommend using the Module APIs, which provide a more streamlined experience without sacrificing flexibility. For more details, check out the [Module Extension Tutorial](extension-module.md).
```
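The low-level flow that this conclusion refers to (and that the tutorial walks through step by step) can be condensed into a sketch like the one below. The file name, the `"forward"` method name, and the omission of input setup and error checking are assumptions for brevity; it requires the ExecuTorch runtime to compile.

```cpp
// Condensed sketch of the lower-level runtime flow: load the program, size
// and hand over the memory-planned buffers via a MemoryManager, load the
// method, then execute. Namespaces follow the ExecuTorch runtime layout.
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

#include <executorch/extension/data_loader/file_data_loader.h>
#include <executorch/extension/memory_allocator/malloc_memory_allocator.h>
#include <executorch/runtime/executor/program.h>

using executorch::extension::FileDataLoader;
using executorch::extension::MallocMemoryAllocator;
using executorch::runtime::HierarchicalAllocator;
using executorch::runtime::MemoryManager;
using executorch::runtime::Method;
using executorch::runtime::MethodMeta;
using executorch::runtime::Program;
using executorch::runtime::Result;
using executorch::runtime::Span;

int main() {
  // Load the .pte program through a data loader ("model.pte" is hypothetical).
  Result<FileDataLoader> loader = FileDataLoader::from("model.pte");
  Result<Program> program = Program::load(&loader.get());

  // Inspect the method's metadata to size the memory-planned buffers.
  Result<MethodMeta> method_meta = program->method_meta("forward");

  MallocMemoryAllocator method_allocator; // runtime structures (non-planned)
  std::vector<std::unique_ptr<uint8_t[]>> planned_buffers;
  std::vector<Span<uint8_t>> planned_spans;
  for (size_t id = 0; id < method_meta->num_memory_planned_buffers(); ++id) {
    const size_t size =
        static_cast<size_t>(method_meta->memory_planned_buffer_size(id).get());
    planned_buffers.push_back(std::make_unique<uint8_t[]>(size));
    planned_spans.push_back({planned_buffers.back().get(), size});
  }
  HierarchicalAllocator planned_memory(
      {planned_spans.data(), planned_spans.size()});
  MemoryManager memory_manager(&method_allocator, &planned_memory);

  // Load and run the method; setting inputs and reading the output EValues
  // is model-specific and omitted here (see the tutorial body).
  Result<Method> method = program->load_method("forward", &memory_manager);
  method->execute();
  return 0;
}
```

Every line of this bookkeeping disappears behind `Module::forward()` in the Module API that this PR now recommends first.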