release 0.5.0
grimoire committed Aug 29, 2021
1 parent 6659898 commit 7a48317
Showing 2 changed files with 6 additions and 8 deletions.
12 changes: 5 additions & 7 deletions README.md
@@ -2,13 +2,11 @@
 
 This is a branch of [torch2trt](https://github.com/NVIDIA-AI-IOT/torch2trt) with dynamic input support
 
-Not all layers support dynamic input such as `torch.split()` etc...
-
-You can create a custom layer from nvinfer1::IPluginV2DynamicExt to implement it.
+Note that not all layers support dynamic input such as `torch.split()` etc...
 
 ## Usage
 
-Below are some usage examples
+Here are some examples
 
 ### Convert
 
@@ -37,7 +35,7 @@ model_trt = torch2trt_dynamic(model, [x], fp16_mode=False, opt_shape_param=opt_s
 
 ### Execute
 
-We can execute the returned ``TRTModule`` just like the original PyTorch model
+We can execute the returned `TRTModule` just like the original PyTorch model
 
 ```python
 x = torch.rand(1,3,256,256).cuda()
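For context, the `opt_shape_param` argument that appears truncated in the hunk header above takes, per input tensor, a `[min, opt, max]` list of shapes in torch2trt_dynamic's convention. A minimal sketch of that structure (the concrete sizes are illustrative assumptions, not taken from this commit):

```python
# Sketch of the opt_shape_param structure expected by torch2trt_dynamic.
# Each input tensor gets a [min_shape, opt_shape, max_shape] triple;
# the sizes below are illustrative assumptions, not from this diff.
opt_shape_param = [
    [
        [1, 3, 128, 128],  # min shape
        [1, 3, 256, 256],  # opt shape (the size TensorRT tunes for)
        [1, 3, 512, 512],  # max shape
    ]
]

# Sanity check: every opt dimension lies between min and max.
min_s, opt_s, max_s = opt_shape_param[0]
assert all(lo <= o <= hi for lo, o, hi in zip(min_s, opt_s, max_s))

# The spec would then be passed to the call shown in the hunk header, e.g.:
# model_trt = torch2trt_dynamic(model, [x], fp16_mode=False,
#                               opt_shape_param=opt_shape_param)
```

The resulting engine can then accept any input size between the min and max shapes at execution time.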
@@ -79,11 +77,11 @@ python setup.py develop
 
 ### Set plugins(optional)
 
-Some layers such as `GN` and `repeat` need c++ plugins. Install the plugin project below
+Some layers such as `GN` need c++ plugins. Install the plugin project below
 
 [amirstan_plugin](https://github.com/grimoire/amirstan_plugin)
 
-remember to export the environment variable AMIRSTAN_LIBRARY_PATH
+**DO NOT FORGET** to export the environment variable `AMIRSTAN_LIBRARY_PATH`
 
 ## How to add (or override) a converter
 
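The `AMIRSTAN_LIBRARY_PATH` export mentioned in the hunk above can also be set from Python before the plugin library is loaded. A hedged sketch; the build path below is a hypothetical install location, adjust it to wherever you built amirstan_plugin:

```python
import os

# Hypothetical build location of amirstan_plugin; adjust to your own checkout.
plugin_lib = os.path.expanduser("~/amirstan_plugin/build/lib")

# Equivalent to `export AMIRSTAN_LIBRARY_PATH=...` in the shell; this must
# happen before torch2trt_dynamic attempts to load the plugin library.
os.environ.setdefault("AMIRSTAN_LIBRARY_PATH", plugin_lib)
```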
2 changes: 1 addition & 1 deletion setup.py
@@ -130,7 +130,7 @@ def run(self):
 
 setup(
     name='torch2trt_dynamic',
-    version='0.4.1',
+    version='0.5.0',
     description='An easy to use PyTorch to TensorRT converter' +
     ' with dynamic shape support',
     cmdclass={
