Update README and docs post-20.07 release
dzier authored and deadeyegoodwin committed Jul 31, 2020
1 parent 6bc9149 commit 8359b74
Showing 7 changed files with 19 additions and 19 deletions.
16 changes: 8 additions & 8 deletions README.rst
@@ -32,16 +32,16 @@ NVIDIA Triton Inference Server
 
 **LATEST RELEASE: You are currently on the master branch which
 tracks under-development progress towards the next release. The
-latest release of the Triton Inference Server is 2.0.0 and
-is available on branch** `r20.06
-<https://github.com/NVIDIA/triton-inference-server/tree/r20.06>`_.
+latest release of the Triton Inference Server is 2.1.0 and
+is available on branch** `r20.07
+<https://github.com/NVIDIA/triton-inference-server/tree/r20.07>`_.
 
 **Triton V2: Starting with the 20.06 release, Triton moves to
 version 2. The master branch currently tracks V2 development and
 is likely to be more unstable than usual due to the significant
 changes during the transition from V1 to V2. A legacy V1 version
 of Triton will be released from the master-v1 branch. The V1
-version of Triton is deprecated and no releases beyond 20.06 are
+version of Triton is deprecated and no releases beyond 20.07 are
 planned. More information on the V1 and V2 transition is available
 in** `Roadmap
 <https://github.com/NVIDIA/triton-inference-server/blob/master/README.rst#roadmap>`_.
@@ -131,11 +131,11 @@ features:
 
 .. overview-end-marker-do-not-remove
 
-The current release of the Triton Inference Server is 2.0.0 and
-corresponds to the 20.06 release of the tensorrtserver container on
+The current release of the Triton Inference Server is 2.1.0 and
+corresponds to the 20.07 release of the tensorrtserver container on
 `NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com>`_. The branch for
-this release is `r20.06
-<https://github.com/NVIDIA/triton-inference-server/tree/r20.06>`_.
+this release is `r20.07
+<https://github.com/NVIDIA/triton-inference-server/tree/r20.07>`_.
 
 Backwards Compatibility
 -----------------------
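For reference, a minimal sketch of getting this release from a fresh clone, using the r20.07 branch named above (the -b flag simply selects that branch at clone time)::

  $ git clone -b r20.07 https://github.com/NVIDIA/triton-inference-server.git
  $ cd triton-inference-server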
2 changes: 1 addition & 1 deletion deploy/single_server/values.yaml
@@ -27,7 +27,7 @@
 replicaCount: 1
 
 image:
-  imageName: nvcr.io/nvidia/tritonserver:20.06-py3
+  imageName: nvcr.io/nvidia/tritonserver:20.07-py3
   pullPolicy: IfNotPresent
   modelRepositoryPath: gs://triton-inference-server-repository/model_repository
   numGpus: 1
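As a usage note for the value changed above: a minimal sketch of deploying the chart, assuming Helm 3 and the chart directory deploy/single_server (the release name triton is an arbitrary choice); the updated image tag can equally be overridden at install time::

  $ helm install triton deploy/single_server \
      --set image.imageName=nvcr.io/nvidia/tritonserver:20.07-py3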
4 changes: 2 additions & 2 deletions docs/build.rst
@@ -57,7 +57,7 @@ to the root of the repo and checkout the release version of the branch
 that you want to build (or the master branch if you want to build the
 under-development version)::
 
-  $ git checkout r20.06
+  $ git checkout r20.07
 
 Then use docker to build::
 
@@ -104,7 +104,7 @@ the root of the repo and checkout the release version of the branch
 that you want to build (or the master branch if you want to build the
 under-development version)::
 
-  $ git checkout r20.06
+  $ git checkout r20.07
 
 Next you must build or install each framework backend you want to
 enable in Triton, configure the build to enable the desired features,
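The backend configuration step mentioned above is driven by CMake; the sketch below shows only the general shape, and the option name used is an assumption for illustration, not a documented flag::

  $ mkdir -p builddir && cd builddir
  $ # TRITON_ENABLE_TENSORRT below is an illustrative assumption
  $ cmake -DTRITON_ENABLE_TENSORRT=ON ..
  $ make -j$(nproc)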
4 changes: 2 additions & 2 deletions docs/client_library.rst
@@ -66,7 +66,7 @@ you want to build (or the master branch if you want to build the
 under-development version). The branch you use for the client build
 should match the version of Triton you are using::
 
-  $ git checkout r20.06
+  $ git checkout r20.07
 
 Then, issue the following command to build the C++ client library and
 the Python wheel files for the Python client library::
@@ -104,7 +104,7 @@ of the repo and checkout the release version of the branch that you
 want to build (or the master branch if you want to build the
 under-development version)::
 
-  $ git checkout r20.06
+  $ git checkout r20.07
 
 Ubuntu 18.04
 ............
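Once the client build on either platform completes, installing the resulting Python wheel follows the usual pattern (a sketch; the wheel's path and name depend on where the build places its output)::

  $ pip install --upgrade <path-to-client-wheel>.whl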
4 changes: 2 additions & 2 deletions docs/install.rst
@@ -48,7 +48,7 @@ the most recent version of CUDA, Docker, and nvidia-docker.
 After performing the above setup, you can pull the Triton container
 using the following command::
 
-  docker pull nvcr.io/nvidia/tritonserver:20.06-py3
+  docker pull nvcr.io/nvidia/tritonserver:20.07-py3
 
-Replace *20.06* with the version of inference server that you want to
+Replace *20.07* with the version of inference server that you want to
 pull.
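After the pull completes, the tag can be confirmed locally with the standard Docker CLI (shown here as a quick sanity check)::

  $ docker images nvcr.io/nvidia/tritonserver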
4 changes: 2 additions & 2 deletions docs/quickstart.rst
@@ -57,7 +57,7 @@ following prerequisite steps:
   be sure to select the r<xx.yy> release branch that corresponds to
   the version of Triton you want to use::
 
-    $ git checkout r20.06
+    $ git checkout r20.07
 
 * Create a model repository containing one or more models that you
   want Triton to serve. An example model repository is included in the
@@ -109,7 +109,7 @@ the GitHub repo and checkout the release version of the branch that
 you want to build (or the master branch if you want to build the
 under-development version)::
 
-  $ git checkout r20.06
+  $ git checkout r20.07
 
 Then use docker to build::
 
4 changes: 2 additions & 2 deletions docs/run.rst
@@ -64,7 +64,7 @@ sure to checkout the release version of the branch that corresponds to
 the server you are using (or the master branch if you are using a
 server build from master)::
 
-  $ git checkout r20.06
+  $ git checkout r20.07
   $ cd docs/examples
   $ ./fetch_models.sh
 
@@ -105,7 +105,7 @@ you pulled from NGC or built locally::
   $ docker run --gpus=1 --rm -p8000:8000 -p8001:8001 -p8002:8002 -v/path/to/model/repository:/models <tritonserver image name> tritonserver --model-repository=/models
 
 Where *<tritonserver image name>* will be something like
-**nvcr.io/nvidia/tritonserver:20.06-py3** if you :ref:`pulled the
+**nvcr.io/nvidia/tritonserver:20.07-py3** if you :ref:`pulled the
 container from the NGC registry <section-installing-triton>`, or
 **tritonserver** if you :ref:`built it from source
 <section-building>`.
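Once the container from the command above is running, a quick readiness check is the V2 HTTP health endpoint (a sketch, assuming the default port mapping shown in the run command)::

  $ curl -v localhost:8000/v2/health/ready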
