The Inception model is a TensorFlow model for image recognition. It lets you automatically categorize images based on trained data. For more information, check this link: https://www.tensorflow.org/tutorials/image_recognition
The TensorFlow Inception docker image makes it easy to export Inception data models and query a TensorFlow Serving server that serves the Inception model. For example, you can quickly start using the already trained data from the ImageNet image database.
Before running the docker image, you first need to download the Inception model training checkpoint so that it is available to the TensorFlow Serving server.
$ mkdir /tmp/model-data
$ curl -o '/tmp/model-data/inception-v3-2016-03-01.tar.gz' 'http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz'
$ cd /tmp/model-data
$ tar xzf inception-v3-2016-03-01.tar.gz
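The archive is expected to unpack into a folder matching the default model name, inception-v3 (see the configuration variables later in this document); you can quickly verify the extraction with:
$ ls /tmp/model-data
$ ls /tmp/model-data/inception-v3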
$ curl -LO https://raw.githubusercontent.com/bitnami/bitnami-docker-tensorflow-inception/master/docker-compose.yml
$ docker-compose up
WARNING: This is a beta configuration, currently unsupported.
Get the raw URL pointing to the kubernetes.yml manifest and use kubectl to create the resources on your Kubernetes cluster like so:
$ kubectl create -f https://raw.githubusercontent.com/bitnami/bitnami-docker-tensorflow-inception/master/kubernetes.yml
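After the manifest is created, you can watch the resources come up with the usual kubectl commands (the resource names depend on how the manifest defines them):
$ kubectl get pods
$ kubectl get services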
- Bitnami closely tracks upstream source changes and promptly publishes new versions of this image using our automated systems.
- With Bitnami images the latest bug fixes and features are available as soon as possible.
- Bitnami containers, virtual machines and cloud images use the same components and configuration approach - making it easy to switch between formats based on your project needs.
- Bitnami images are built on CircleCI and automatically pushed to the Docker Hub.
- All our images are based on minideb, a minimalist Debian-based container image that gives you a small base container image and the familiarity of a leading Linux distribution.
To run this application you need Docker Engine 1.10.0 or later. Docker Compose is recommended, version 1.6.0 or later.
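You can check the versions installed on your host with the standard commands:
$ docker version
$ docker-compose version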
The recommended way to run the TensorFlow Inception client is together with the TensorFlow Serving server. You can either use docker-compose or run the containers manually.
Using docker-compose is the recommended approach. You can use the following docker-compose.yml template:
version: '2'
services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    ports:
      - '9000:9000'
    volumes:
      - 'tensorflow_serving_data:/bitnami/tensorflow-serving'
      - '/tmp/model-data/:/bitnami/model-data'
  tensorflow-inception:
    image: 'bitnami/tensorflow-inception:latest'
    volumes:
      - 'tensorflow_inception_data:/bitnami/tensorflow-inception'
      - '/tmp/model-data/:/bitnami/model-data'
    depends_on:
      - tensorflow-serving
volumes:
  tensorflow_serving_data:
    driver: local
  tensorflow_inception_data:
    driver: local
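With this template saved as docker-compose.yml in the current directory, a typical session looks like this (standard docker-compose usage; the service names match the template above):
$ docker-compose up -d
$ docker-compose logs -f tensorflow-inception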
If you want to run the application manually instead of using docker-compose, these are the basic steps you need to run:
- Create a new network for the application containers:
$ docker network create tensorflow-tier
- Start a TensorFlow Serving server on the newly created network:
$ docker run -d -v /tmp/model-data:/bitnami/model-data -p 9000:9000 --name tensorflow-serving --net tensorflow-tier bitnami/tensorflow-serving:latest
Note: You need to give the container a name in order for the TensorFlow Inception client to resolve the host.
- Run the TensorFlow Inception client container:
$ docker run -d -v /tmp/model-data:/bitnami/model-data --name tensorflow-inception --net tensorflow-tier bitnami/tensorflow-inception:latest
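Once both containers are running, you can confirm that they started correctly and that the client found the tensorflow-serving host (standard Docker commands; the container names match those used above):
$ docker ps --filter name=tensorflow
$ docker logs tensorflow-inception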
If you remove every container and volume all your data will be lost, and the next time you run the image the application will be reinitialized. To avoid this loss of data, you should mount a volume that will persist even after the container is removed.
For persistence of the TensorFlow Inception client deployment, the above examples define docker volumes named tensorflow_serving_data and tensorflow_inception_data. The TensorFlow Inception client application state will persist as long as these volumes are not removed.
To avoid inadvertent removal of these volumes you can mount host directories as data volumes. Alternatively you can make use of volume plugins to host the volume data.
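For example, to switch to host directories, first create them on the host (the paths below are placeholders and match the ones used in the template that follows):
$ mkdir -p /path/to/tensorflow-serving-persistence
$ mkdir -p /path/to/tensorflow-inception-persistence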
Note! If you have already started using your application, follow the steps on backing up to pull the data from your running container down to your host.
This requires a minor change to the docker-compose.yml template previously shown:
version: '2'
services:
  tensorflow-serving:
    image: 'bitnami/tensorflow-serving:latest'
    ports:
      - '9000:9000'
    volumes:
      - '/path/to/tensorflow-serving-persistence:/bitnami/tensorflow-serving'
  tensorflow-inception:
    image: 'bitnami/tensorflow-inception:latest'
    depends_on:
      - tensorflow-serving
    volumes:
      - '/path/to/tensorflow-inception-persistence:/bitnami/tensorflow-inception'
- Create a network (if it does not exist):
$ docker network create tensorflow-tier
- Create a TensorFlow Serving container with host volumes:
$ docker run -d --name tensorflow-serving -p 9000:9000 \
--net tensorflow-tier \
--volume /path/to/tensorflow-serving-persistence:/bitnami/tensorflow-serving \
--volume /path/to/model_data:/bitnami/model-data \
bitnami/tensorflow-serving:latest
Note: You need to give the container a name in order for the TensorFlow Inception client to resolve the host.
- Create the TensorFlow Inception client container with host volumes:
$ docker run -d --name tensorflow-inception \
--net tensorflow-tier \
--volume /path/to/tensorflow-inception-persistence:/bitnami/tensorflow-inception \
--volume /path/to/model_data:/bitnami/model-data \
bitnami/tensorflow-inception:latest
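To double-check that the host directories were mounted as expected, you can inspect the running containers (standard docker inspect usage):
$ docker inspect --format '{{ json .Mounts }}' tensorflow-serving
$ docker inspect --format '{{ json .Mounts }}' tensorflow-inception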
Bitnami provides up-to-date versions of TensorFlow Serving and the TensorFlow Inception client, including security patches, soon after they are made available upstream. We recommend that you follow these steps to upgrade your container. Here we cover the upgrade of the TensorFlow Inception client container; for the TensorFlow Serving upgrade, see https://github.com/bitnami/bitnami-docker-tensorflow-serving/blob/master/README.md#upgrade-this-image
- Get the updated image:
$ docker pull bitnami/tensorflow-inception:latest
- Stop your container
- For docker-compose:
$ docker-compose stop tensorflow-inception
- For manual execution:
$ docker stop tensorflow-inception
- (For non-compose execution only) Create a backup if you have not mounted the tensorflow-inception folder on the host.
- Remove the currently running container:
- For docker-compose:
$ docker-compose rm tensorflow-inception
- For manual execution:
$ docker rm tensorflow-inception
- Run the new image
- For docker-compose:
$ docker-compose up -d tensorflow-inception
- For manual execution (mount the directories if needed):
$ docker run --name tensorflow-inception bitnami/tensorflow-inception:latest
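If your previous container used host volumes or a custom network, pass the same options again when running the new image; for example, reusing the placeholder paths from the persistence section above:
$ docker run -d --name tensorflow-inception \
  --net tensorflow-tier \
  --volume /path/to/tensorflow-inception-persistence:/bitnami/tensorflow-inception \
  --volume /path/to/model_data:/bitnami/model-data \
  bitnami/tensorflow-inception:latest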
When you start the tensorflow-inception image, you can adjust the configuration of the instance by passing one or more environment variables, either in the docker-compose file or on the docker run command line. If you want to add a new environment variable:
- For docker-compose add the variable name and value under the application section:
tensorflow-inception:
  image: bitnami/tensorflow-inception:latest
  environment:
    - TENSORFLOW_INCEPTION_MODEL_INPUT_DATA_NAME=my_custom_data
  volumes_from:
    - tensorflow_inception_data
- For manual execution, add a -e option with each variable and value:
$ docker run -d --name tensorflow-inception \
  --net tensorflow-tier \
  --volume /path/to/tensorflow-inception-persistence:/bitnami/tensorflow-inception \
  -e TENSORFLOW_INCEPTION_MODEL_INPUT_DATA_NAME=my_custom_data \
  bitnami/tensorflow-inception:latest
Available variables:
- TENSORFLOW_SERVING_HOST: Hostname of the TensorFlow Serving server. Default: tensorflow-serving
- TENSORFLOW_SERVING_PORT_NUMBER: Port used by the TensorFlow Serving server. Default: 9000
- TENSORFLOW_INCEPTION_MODEL_INPUT_DATA_NAME: Folder containing the data model to export. Default: inception-v3
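For example, if your TensorFlow Serving server is reachable under a different hostname or port, you could point the client at it when starting the container (the values below are placeholders):
$ docker run -d --name tensorflow-inception \
  --net tensorflow-tier \
  -e TENSORFLOW_SERVING_HOST=my-tensorflow-serving \
  -e TENSORFLOW_SERVING_PORT_NUMBER=9500 \
  bitnami/tensorflow-inception:latest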
To back up your application data, follow these steps:
- Stop the running container:
- For docker-compose:
$ docker-compose stop tensorflow-inception
- For manual execution:
$ docker stop tensorflow-inception
- Copy the TensorFlow Inception client data folder to the host:
$ docker cp tensorflow-inception:/bitnami/tensorflow-inception /path/to/tensorflow-inception-persistence
To restore your application using backed up data, simply mount the folder with the TensorFlow Inception client data in the container. See the persisting your application section for more info.
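For example, assuming the backup lives in /path/to/tensorflow-inception-persistence on the host, restoring is simply a matter of mounting that folder at the same container path when you run the image again:
$ docker run -d --name tensorflow-inception \
  --net tensorflow-tier \
  --volume /path/to/tensorflow-inception-persistence:/bitnami/tensorflow-inception \
  bitnami/tensorflow-inception:latest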
We'd love for you to contribute to this container. You can request new features by creating an issue, or submit a pull request with your contribution.
If you encountered a problem running this container, you can file an issue. For us to provide better support, be sure to include the following information in your issue:
- Host OS and version
- Docker version ($ docker version)
- Output of $ docker info
- Version of this container ($ echo $BITNAMI_IMAGE_VERSION inside the container)
- The command you used to run the container, and any relevant output you saw (masking any sensitive information)
Most real-time communication happens in the #containers channel at bitnami-oss.slack.com; you can sign up at slack.oss.bitnami.com. Discussions are archived at bitnami-oss.slackarchive.io.
Copyright (c) 2017 Bitnami
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.