Commit

more updates for pip package

driedler committed Jun 25, 2021
1 parent 3d181ad commit 6388a81
Showing 19 changed files with 208 additions and 98 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -6,4 +6,5 @@
rpi-toolchain.tar.gz
/build
/.venv
__pycache__
__pycache__
*.whl
69 changes: 37 additions & 32 deletions README.md
@@ -1,45 +1,50 @@
<!--ts-->
* [TensorFlow Lite for Microcontrollers](#tensorflow-lite-for-microcontrollers)
* [Build Status](#build-status)
* [Official Builds](#official-builds)
* [Community Supported Builds](#community-supported-builds)
* [Additional Documentation](#additional-documentation)
tflite_micro_runtime
========================

<!-- Added by: advaitjain, at: Thu 29 Apr 2021 12:53:08 PM PDT -->
This allows running TF-Lite models on a Raspberry Pi Zero using the TensorFlow Lite Micro (TFLM) interpreter.

<!--te-->
This provides the Python package `tflite_micro_runtime`, which uses the same API as `tflite_runtime`.
The main difference is that `tflite_micro_runtime` uses the TensorFlow Lite Micro interpreter instead of the
TensorFlow Lite interpreter.
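
Since the APIs match, porting existing `tflite_runtime` code is typically a one-line import change. Below is a minimal usage sketch; the module layout is assumed to mirror `tflite_runtime.interpreter`, and `model.tflite` is a placeholder path:

```python
import numpy as np

# Assumed drop-in import, mirroring tflite_runtime.interpreter
from tflite_micro_runtime.interpreter import Interpreter

# 'model.tflite' is a placeholder path for illustration
interpreter = Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run one inference on zero-filled input of the expected shape/dtype
data = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))
```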

# TensorFlow Lite for Microcontrollers
Using the TensorFlow Lite Micro interpreter provides an __~8x improvement__ in inference time.

The TFLM code is currently in the process of being refactored out of the
[Tensorflow github
repository](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro)
into a standalone repo. This refactoring is currently in the initial stages and
is expected to be completed towards the end of June 2021.

# Contributing
See our [contribution documentation](CONTRIBUTING.md).
More details on the `tflite_runtime` Python package can be found here:
https://www.tensorflow.org/lite/guide/python

# Build Status

* [GitHub Status](https://www.githubstatus.com/)
More details on the TensorFlow Lite Micro interpreter can be found here:
https://github.com/tensorflow/tflite-micro

## Official Builds
__NOTE:__ This repo is a fork of the `tflite-micro` repo.

Build Type | Status |
----------- | --------------|
CI (Linux) | [![CI](https://github.com/tensorflow/tflite-micro/actions/workflows/ci.yml/badge.svg?event=schedule)](https://github.com/tensorflow/tflite-micro/actions/workflows/ci.yml?query=event%3Aschedule) |
Code Sync | [![Sync from Upstream TF](https://github.com/tensorflow/tflite-micro/actions/workflows/sync.yml/badge.svg)](https://github.com/tensorflow/tflite-micro/actions/workflows/sync.yml) |

## Community Supported Builds
Build Type | Status |
----------- | --------------|
Arduino | [![Arduino](https://github.com/tensorflow/tflite-micro/actions/workflows/arduino.yml/badge.svg)](https://github.com/tensorflow/tflite-micro/actions/workflows/arduino.yml) [![Antmicro](https://github.com/antmicro/tensorflow-arduino-examples/actions/workflows/test_examples.yml/badge.svg)](https://github.com/antmicro/tensorflow-arduino-examples/actions/workflows/test_examples.yml) |
Cortex-M | [![Cortex-M](https://github.com/tensorflow/tflite-micro/actions/workflows/cortex_m.yml/badge.svg)](https://github.com/tensorflow/tflite-micro/actions/workflows/cortex_m.yml) |
Sparkfun Edge | [![Sparkfun Edge](https://github.com/tensorflow/tflite-micro/actions/workflows/sparkfun_edge.yml/badge.svg)](https://github.com/tensorflow/tflite-micro/actions/workflows/sparkfun_edge.yml) |
Xtensa | [![Xtensa](https://github.com/tensorflow/tflite-micro/actions/workflows/xtensa.yml/badge.svg?event=schedule)](https://github.com/tensorflow/tflite-micro/actions/workflows/xtensa.yml?query=event%3Aschedule) [![Xtensa](https://raw.githubusercontent.com/advaitjain/tflite-micro/local-continuous-builds/tensorflow/lite/micro/docs/local_continuous_builds/xtensa-build-status.svg)](https://github.com/advaitjain/tflite-micro/tree/local-continuous-builds/tensorflow/lite/micro/docs/local_continuous_builds/xtensa.md#summary) |

# Install

# Additional Documentation
To install the `tflite_micro_runtime` Python package, run the following pip command on the Raspberry Pi Zero:

* [Continuous Integration](docs/continuous_integration.md)
```bash
pip3 install https://github.com/driedler/tflite-micro-rpi0/releases/download/1.0.0/tflite_micro_runtime-1.0.0-cp37-cp37m-linux_armv6l.whl
```
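
A quick way to verify the installation (assuming the package exposes an `interpreter` module, mirroring `tflite_runtime`):

```python
# Sanity check: confirm the package and its native wrapper import cleanly
from tflite_micro_runtime.interpreter import Interpreter
print('tflite_micro_runtime imported OK:', Interpreter)
```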

# Build

To build the `tflite_micro_runtime` Python package, run the following bash scripts in a Linux environment:

```bash
# Install Python 3.7, numpy, and pybind11
./tensorflow/lite/micro/tools/rpi0_pip_package/install_python.sh

# Build tflite_micro_runtime wheel
./tensorflow/lite/micro/tools/rpi0_pip_package/build_pip_package.sh
```


# Runtime Comparison Script

A runtime comparison script that compares the `tflite_micro_runtime` and `tflite_runtime` packages
is available at: [./tensorflow/lite/micro/python/runtime_comparison.py](./tensorflow/lite/micro/python/runtime_comparison.py)

Refer to the comments at the top of the script for more details.
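
For a rough sense of what the comparison measures, here is a minimal timing sketch. It is not the repository's script; it assumes both packages are installed and uses a placeholder `model.tflite`:

```python
import time
import numpy as np


def time_inference(interpreter_cls, model_path, iterations=50):
    """Return the average per-inference time in seconds over `iterations` runs."""
    interpreter = interpreter_cls(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    data = np.zeros(inp['shape'], dtype=inp['dtype'])
    start = time.perf_counter()
    for _ in range(iterations):
        interpreter.set_tensor(inp['index'], data)
        interpreter.invoke()
    return (time.perf_counter() - start) / iterations


from tflite_runtime.interpreter import Interpreter as LiteInterpreter
from tflite_micro_runtime.interpreter import Interpreter as MicroInterpreter

print('tflite_runtime      :', time_inference(LiteInterpreter, 'model.tflite'))
print('tflite_micro_runtime:', time_inference(MicroInterpreter, 'model.tflite'))
```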
4 changes: 0 additions & 4 deletions tensorflow/lite/micro/python/__init__.py

This file was deleted.

4 changes: 2 additions & 2 deletions tensorflow/lite/micro/python/image_transform.py
@@ -3,9 +3,9 @@


try:
from tflite_micro_runtime import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper
from tflite_micro_runtime import _pywrap_tflm_interpreter_wrapper as _interpreter_wrapper
except:
import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper
import _pywrap_tflm_interpreter_wrapper as _interpreter_wrapper



4 changes: 2 additions & 2 deletions tensorflow/lite/micro/python/interpreter.py
@@ -26,9 +26,9 @@

# This file is part of tflite_runtime package.
try:
from tflite_micro_runtime import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper
from tflite_micro_runtime import _pywrap_tflm_interpreter_wrapper as _interpreter_wrapper
except:
import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper
import _pywrap_tflm_interpreter_wrapper as _interpreter_wrapper



10 changes: 5 additions & 5 deletions tensorflow/lite/micro/python/interpreter_wrapper/CMakeLists.txt
@@ -16,8 +16,8 @@ message(STATUS "NUMPY_INCLUDE=${NUMPY_INCLUDE}")
add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/../../../experimental/image_transform ${CMAKE_CURRENT_BINARY_DIR}/image_transform)


add_library(_pywrap_tensorflow_interpreter_wrapper SHARED EXCLUDE_FROM_ALL)
target_sources(_pywrap_tensorflow_interpreter_wrapper
add_library(_pywrap_tflm_interpreter_wrapper SHARED EXCLUDE_FROM_ALL)
target_sources(_pywrap_tflm_interpreter_wrapper
PUBLIC
${CMAKE_CURRENT_LIST_DIR}/interpreter_wrapper.cc
${CMAKE_CURRENT_LIST_DIR}/interpreter_wrapper_pybind11.cc
@@ -30,17 +30,17 @@ PUBLIC
)

# # To remove "lib" prefix.
set_target_properties(_pywrap_tensorflow_interpreter_wrapper PROPERTIES PREFIX "")
set_target_properties(_pywrap_tflm_interpreter_wrapper PROPERTIES PREFIX "")

target_include_directories(_pywrap_tensorflow_interpreter_wrapper
target_include_directories(_pywrap_tflm_interpreter_wrapper
PUBLIC
${TENSORFLOW_SOURCE_DIR}
${PYTHON_INCLUDE}
${PYBIND11_INCLUDE}
${NUMPY_INCLUDE}
)

target_link_libraries(_pywrap_tensorflow_interpreter_wrapper
target_link_libraries(_pywrap_tflm_interpreter_wrapper
PUBLIC
tensorflow-lite-micro
image_transform
tensorflow/lite/micro/python/interpreter_wrapper/interpreter_wrapper.cc
@@ -140,7 +140,7 @@ TfLiteStatus GetSizeOfType(TfLiteContext* context, const TfLiteType type,
} // namespace


InterpreterWrapper::InterpreterWrapper(std::unique_ptr<Model> model,
MicroInterpreterWrapper::MicroInterpreterWrapper(std::unique_ptr<Model> model,
std::unique_ptr<PythonErrorReporter> error_reporter,
MicroInterpreter *interpreter,
AllOpsResolver* resolver,
@@ -151,17 +151,17 @@ InterpreterWrapper::InterpreterWrapper(std::unique_ptr<Model> model,
interpreter_(interpreter),
tensor_arena_(tensor_arena) {}

InterpreterWrapper::~InterpreterWrapper() {
MicroInterpreterWrapper::~MicroInterpreterWrapper() {
delete interpreter_;
}

PyObject* InterpreterWrapper::AllocateTensors() {
PyObject* MicroInterpreterWrapper::AllocateTensors() {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
TFLITE_PY_CHECK(interpreter_->AllocateTensors());
Py_RETURN_NONE;
}

PyObject* InterpreterWrapper::Invoke() {
PyObject* MicroInterpreterWrapper::Invoke() {
TFLITE_PY_ENSURE_VALID_INTERPRETER();

// Release the GIL so that we can run multiple interpreters in parallel
@@ -176,29 +176,29 @@ PyObject* InterpreterWrapper::Invoke() {
Py_RETURN_NONE;
}

PyObject* InterpreterWrapper::InputIndices() const {
PyObject* MicroInterpreterWrapper::InputIndices() const {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
PyObject* np_array = PyArrayFromIntVector(interpreter_->inputs().data(),
interpreter_->inputs().size());

return PyArray_Return(reinterpret_cast<PyArrayObject*>(np_array));
}

PyObject* InterpreterWrapper::OutputIndices() const {
PyObject* MicroInterpreterWrapper::OutputIndices() const {
PyObject* np_array = PyArrayFromIntVector(interpreter_->outputs().data(),
interpreter_->outputs().size());

return PyArray_Return(reinterpret_cast<PyArrayObject*>(np_array));
}

int InterpreterWrapper::NumTensors() const {
int MicroInterpreterWrapper::NumTensors() const {
if (!interpreter_) {
return 0;
}
return interpreter_->tensors_size();
}

std::string InterpreterWrapper::TensorName(int i) const {
std::string MicroInterpreterWrapper::TensorName(int i) const {
if (!interpreter_ || i >= this->NumTensors() || i < 0) {
return "";
}
@@ -207,7 +207,7 @@ std::string InterpreterWrapper::TensorName(int i) const {
return tensor->name ? tensor->name : "";
}

PyObject* InterpreterWrapper::TensorType(int i) const {
PyObject* MicroInterpreterWrapper::TensorType(int i) const {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
TFLITE_PY_TENSOR_BOUNDS_CHECK(i);

@@ -225,7 +225,7 @@ PyObject* InterpreterWrapper::TensorType(int i) const {
return PyArray_TypeObjectFromType(code);
}

PyObject* InterpreterWrapper::TensorSize(int i) const {
PyObject* MicroInterpreterWrapper::TensorSize(int i) const {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
TFLITE_PY_TENSOR_BOUNDS_CHECK(i);

@@ -240,14 +240,14 @@
return PyArray_Return(reinterpret_cast<PyArrayObject*>(np_array));
}

PyObject* InterpreterWrapper::TensorQuantization(int i) const {
PyObject* MicroInterpreterWrapper::TensorQuantization(int i) const {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
TFLITE_PY_TENSOR_BOUNDS_CHECK(i);
const TfLiteTensor* tensor = interpreter_->tensor(i);
return PyTupleFromQuantizationParam(tensor->params);
}

PyObject* InterpreterWrapper::TensorQuantizationParameters(int i) const {
PyObject* MicroInterpreterWrapper::TensorQuantizationParameters(int i) const {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
TFLITE_PY_TENSOR_BOUNDS_CHECK(i);
const TfLiteTensor* tensor = interpreter_->tensor(i);
@@ -281,7 +281,7 @@ PyObject* InterpreterWrapper::TensorQuantizationParameters(int i) const {
return result;
}

PyObject* InterpreterWrapper::SetTensor(int i, PyObject* value) {
PyObject* MicroInterpreterWrapper::SetTensor(int i, PyObject* value) {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
TFLITE_PY_TENSOR_BOUNDS_CHECK(i);

@@ -384,7 +384,7 @@ PyObject* CheckGetTensorArgs(MicroInterpreter* interpreter_, int tensor_index,

} // namespace

PyObject* InterpreterWrapper::GetTensor(int i) const {
PyObject* MicroInterpreterWrapper::GetTensor(int i) const {
// Sanity check accessor
TfLiteTensor* tensor = nullptr;
int type_num = 0;
@@ -432,7 +432,7 @@ PyObject* InterpreterWrapper::GetTensor(int i) const {
}
}

PyObject* InterpreterWrapper::tensor(PyObject* base_object, int i) {
PyObject* MicroInterpreterWrapper::tensor(PyObject* base_object, int i) {
// Sanity check accessor
TfLiteTensor* tensor = nullptr;
int type_num = 0;
@@ -452,10 +452,10 @@ PyObject* InterpreterWrapper::tensor(PyObject* base_object, int i) {
return PyArray_Return(np_array);
}

InterpreterWrapper* InterpreterWrapper::CreateWrapperCPPFromFile(
MicroInterpreterWrapper* MicroInterpreterWrapper::CreateWrapperCPPFromFile(
const char* model_path, size_t tensor_arena_size, std::string* error_msg) {
std::unique_ptr<PythonErrorReporter> error_reporter(new PythonErrorReporter);
std::unique_ptr<InterpreterWrapper::Model> model =
std::unique_ptr<MicroInterpreterWrapper::Model> model =
Model::BuildFromFile(model_path, error_reporter.get());
if(tensor_arena_size == 0 || tensor_arena_size > 64*1024*1024) {
*error_msg = "Invalid tensor_arena_size";
@@ -479,15 +479,15 @@ InterpreterWrapper* InterpreterWrapper::CreateWrapperCPPFromFile(
error_reporter.get()
);

auto wrapper = new InterpreterWrapper(
auto wrapper = new MicroInterpreterWrapper(
std::move(model), std::move(error_reporter),
interpreter, resolver, tensor_arena);

return wrapper;
}


PyObject* InterpreterWrapper::ResetVariableTensors() {
PyObject* MicroInterpreterWrapper::ResetVariableTensors() {
TFLITE_PY_ENSURE_VALID_INTERPRETER();
TFLITE_PY_CHECK(interpreter_->ResetVariableTensors());
Py_RETURN_NONE;
tensorflow/lite/micro/python/interpreter_wrapper/interpreter_wrapper.h
@@ -41,15 +41,15 @@ namespace interpreter_wrapper {

class PythonErrorReporter;

class InterpreterWrapper {
class MicroInterpreterWrapper {
public:
using Model = FlatBufferModel;

// SWIG caller takes ownership of pointer.
static InterpreterWrapper* CreateWrapperCPPFromFile(
static MicroInterpreterWrapper* CreateWrapperCPPFromFile(
const char* model_path, size_t tensor_arena_size, std::string* error_msg);

~InterpreterWrapper();
~MicroInterpreterWrapper();
PyObject* AllocateTensors();
PyObject* Invoke();

@@ -80,18 +80,18 @@ class InterpreterWrapper {

private:

InterpreterWrapper(std::unique_ptr<Model> model,
MicroInterpreterWrapper(std::unique_ptr<Model> model,
std::unique_ptr<PythonErrorReporter> error_reporter,
MicroInterpreter *interpreter,
AllOpsResolver* resolver,
uint8_t* tensor_arena);

// InterpreterWrapper is not copyable or assignable.
InterpreterWrapper() = delete;
InterpreterWrapper(const InterpreterWrapper& rhs) = delete;
// MicroInterpreterWrapper is not copyable or assignable.
MicroInterpreterWrapper() = delete;
MicroInterpreterWrapper(const MicroInterpreterWrapper& rhs) = delete;


// The public functions which creates `InterpreterWrapper` should ensure all
// The public functions which create `MicroInterpreterWrapper` should ensure all
// these member variables are initialized successfully. Otherwise it should
// report the error and return `nullptr`.
MicroInterpreter* interpreter_;