A Helm chart to deploy Watson NLP services to Kubernetes for embedding into another application.
This chart deploys the container images that provide the embedded Watson NLP libraries, which can be called from another application.
At a minimum, a Watson NLP runtime image is required; the runtime container runs in the Watson NLP pod. Additional "model images" provide the various functions offered by Watson NLP. There are two types of model images:
- Pretrained models provided by IBM
- Custom models provided by consumers
The model containers run as Kubernetes init containers, which are triggered when pods are created. Their purpose is to put the model artifacts onto pod storage so that the Watson NLP runtime container can access them. Once they have done this, these containers terminate.
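As a sketch of this pattern, the fragment below shows how an init container and the runtime container might share model artifacts through a pod volume. All names, images, and paths here are illustrative assumptions, not the chart's actual rendered values:

```yaml
# Illustrative sketch only: names, images, and mount paths are assumptions,
# not the manifests rendered by this chart.
apiVersion: v1
kind: Pod
metadata:
  name: watson-nlp-example
spec:
  volumes:
    - name: models            # shared pod storage for model artifacts
      emptyDir: {}
  initContainers:
    - name: syntax-model      # model image: copies its artifacts, then terminates
      image: example-registry/watson-nlp-model-syntax:tag
      volumeMounts:
        - name: models
          mountPath: /app/models
  containers:
    - name: watson-nlp-runtime
      image: example-registry/watson-nlp-runtime:tag
      volumeMounts:
        - name: models        # runtime reads the artifacts written by the init container
          mountPath: /app/models
```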
This chart deploys a Watson NLP runtime and one pretrained model image, the Syntax model.
An OpenShift or Kubernetes cluster is required.
Install instructions for Helm can be found here.
Container images for both the runtime and the models are located in the IBM Entitled Registry, which requires an entitlement key.
See the instructions for obtaining your IBM entitlement API key.
The chart depends on a pull secret for the registry or registries containing the Watson NLP runtime and model images.
Explanation of the values.yaml:
- "componentName" - The Deployment and Services are named using a combination of the Helm release name and this property.
- "serviceType" - The type of Kubernetes Service used to expose the watson-runtime container. Valid values are those defined by the Kubernetes specification.
- "registries" - A list of all registries associated with the Deployment. At a minimum, there will be a registry from which to pull the watson-runtime container and the IBM-provided pretrained models. Additionally, there could be a separate registry containing custom models.
- "imagePullSecrets" - A list of pull secret names that the Deployment will reference. At a minimum, the pull secret for the IBM Entitled Registry/Artifactory should be provided. Additional pull secrets can be specified if there is a separate registry for custom models.
- "runtime" - Specifies which of the defined registries should be used to pull the watson-runtime container, along with its image name and tag.
- "models" - A list of models to include in the Deployment; each entry specifies which of the defined registries should be used and the image name and tag.
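To make these properties concrete, a minimal values.yaml might look like the sketch below. The registry URL, image names, tags, and exact key layout are placeholders based on the descriptions above, not authoritative chart values; consult the provided sample values.yaml in the watson-automation repo for the real schema:

```yaml
# Illustrative sketch; keys follow the descriptions above, and all
# registry/image/tag values are placeholders.
componentName: watson-nlp
serviceType: ClusterIP
registries:
  - name: watson
    url: cp.icr.io
imagePullSecrets:
  - ibm-entitlement-key
runtime:
  registry: watson
  image: watson-nlp-runtime:tag
models:
  - registry: watson
    image: watson-nlp-model-syntax:tag
```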
- Login and create a namespace:

```shell
oc login --token=sha256~xxx --server=https://xxx
oc new-project watson-demo
```
- Create a Secret with credentials to the entitled registry:

```shell
oc create secret docker-registry \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<your IBM Entitlement Key> \
  ibm-entitlement-key
```
- Install the chart. Begin by adding the repo with the chart to your Helm client:

```shell
helm repo add toolkit-charts https://charts.cloudnativetoolkit.dev
helm repo update
```
Clone the watson-automation repo to use a provided sample values.yaml:

```shell
git clone https://github.com/IBM/watson-automation
```
You must edit the values.yaml file to accept the Watson NLP license by setting the following property:

```yaml
acceptLicense: true
```
Copy your sample values.yaml and install the chart:

```shell
cd terraform-gitops-watson-nlp/chart/watson-nlp
helm install watson-embedded -f values.yaml toolkit-charts/watson-nlp
```
Verify that the following components were created:

```shell
oc get deployment/watson-embedded-watson-nlp
oc get svc/watson-embedded-watson-nlp
```
Wait for the pod to become ready; this can take approximately 5 minutes.
Follow these instructions for usage testing.
Delete all resources with:

```shell
helm delete watson-embedded
```
For further information about Watson NLP, refer to the Watson Runtime documentation. TO BE DONE