This repository has been archived by the owner on Jan 12, 2021. It is now read-only.

Use Secrets to store Provider API access keys and Update docs (#9)
* Update Readme instructions and yaml files for cluster-manager deployment

* Update Readme instructions and yaml files for cluster-manager deployment

* Update format of Wercker instruction readme doc

* Update deploy readme and wercker readme for some corrections

* Update deploy readme and wercker readme for some corrections

* Store Provider API access keys in a Secret to secure the data

* Update docs
klustria authored and vitaliyzinchenko committed Dec 14, 2017
1 parent f1bf014 commit 65174cf
Showing 8 changed files with 154 additions and 75 deletions.
130 changes: 74 additions & 56 deletions deploy/README.md
@@ -14,34 +14,54 @@ These instructions will get you a copy of the project up and running on your loc
- If helm is being used, the project must be cloned so that helm templates are available for the relevant commands.


## Deployment using ClusterManager.yaml
## Deployment using ClusterManagerTemplate.yaml

1. Use `kubectl` to create a deployment using [`ClusterManager.yaml`](../examples/ClusterManager.yaml). Set the required
environment variables.
1. Create a YAML deployment file using [`ClusterManagerTemplate.yaml`](../examples/ClusterManagerTemplate.yaml). For example,
you can use a bash script to set the required environment variables and generate a new file called `ClusterManager.yaml`.

```
#!/bin/bash

# Set environment variables
export FEDERATION_HOST=fedhost
export FEDERATION_CONTEXT=akube
export FEDERATION_NAMESPACE=federation-system
export CLUSTER_MANAGER_IMAGE=/docker.io/somewhere/cluster-manager:tagversion
export IMAGE_REPOSITORY=someregistry.io/somewhere
export IMAGE_VERSION=tagversion
export DOMAIN=something.net
export KOPS_STATE_STORE=s3://state-store.something.net
export AWS_ACCESS_KEY_ID=awsaccesskeyid
export AWS_SECRET_ACCESS_KEY=awssecretaccesskey
export OKE_BEARER_TOKEN=werckerclustersbearertoken
export OKE_AUTH_GROUP=werckerclustersauthgroup
export OKE_CLOUD_AUTH_ID=werckerclusterscloudauthid
export DOMAIN=something.net
export KOPS_STATE_STORE=state-store.something.net
# Convert API access keys to base64 for Kubernetes secret storage.
export AWS_ACCESS_KEY_ID_BASE64=$(echo -n "$AWS_ACCESS_KEY_ID" | base64)
export AWS_SECRET_ACCESS_KEY_BASE64=$(echo -n "$AWS_SECRET_ACCESS_KEY" | base64)
export OKE_BEARER_TOKEN_BASE64=$(echo -n "$OKE_BEARER_TOKEN" | base64)
export OKE_AUTH_GROUP_BASE64=$(echo -n "$OKE_AUTH_GROUP" | base64)
export OKE_CLOUD_AUTH_ID_BASE64=$(echo -n "$OKE_CLOUD_AUTH_ID" | base64)
# Generate a new ClusterManager.yaml by using ClusterManagerTemplate.yaml and environment variables.
eval "cat <<EOF
$(<ClusterManagerTemplate.yaml)
EOF" > ClusterManager.yaml
```

1. Use `kubectl` to deploy the generated `ClusterManager.yaml` file.

```
kubectl --context $FEDERATION_HOST create -f ClusterManager.yaml
```
2. Verify if Cluster Manager is installed.
1. Verify if Cluster Manager is installed.
```
kubectl --context $FEDERATION_HOST get pods --all-namespaces | grep cluster-manager
```
3. (Optional) Uninstall Cluster Manager.
1. (Optional) Uninstall Cluster Manager.
```
kubectl --context $FEDERATION_HOST delete -f ClusterManager.yaml
@@ -74,29 +74,29 @@ deploy directory or may refer to the helm chart in the deploy directory.
--set federationContext="$FEDERATION_CONTEXT" \
--set federationNamespace="$FEDERATION_NAMESPACE" \
--set domain="something.fed.net" \
--set image.repository="docker.io/somewhere/" \
--set image.repository="someregistry.io/somewhere" \
--set image.tag="v1" \
--set okeApiHost="api.cluster.us-ashburn-1.oracledx.com" \
--set statestore="s3://clusters-state" \
--kube-context "$FEDERATION_HOST"
```
Where,
* `FEDERATION_NAMESPACE` - Namespace where federation was installed via kubefed init
* `FEDERATION_HOST` - Federation host name
* `FEDERATION_CONTEXT` - Federation context name
* `awsAccessKeyId` - AWS access key ID
* `awsSecretAccessKey` - AWS secret access ID
* `okeBearerToken`- Wercker Clusters bearer token
* `okeAuthGroup` - Wercker Clusters auth group
* `okeCloudAuthId` - Wercker Clusters cloud auth ID
* `federationEndpoint` - Federation API server endpoint
* `federationContext` - Federation name
* `federationNamespace` - Similar to FEDERATION_NAMESPACE. Defaults to federation-system if not set.
* `domain` - AWS domain name
* `image.repository` - Repository where Cluster Manager is stored
* `image.tag` - Cluster Manager image tag/version no.
* `okeApiHost` - Wercker Clusters API host endpoint
* `statestore` - AWS cluster S3 state store
Where,
* `FEDERATION_NAMESPACE` - Namespace where federation was installed via kubefed init
* `FEDERATION_HOST` - Federation host name
* `FEDERATION_CONTEXT` - Federation context name
* `awsAccessKeyId` - AWS access key ID
* `awsSecretAccessKey` - AWS secret access key
* `okeBearerToken`- Wercker Clusters bearer token
* `okeAuthGroup` - Wercker Clusters auth group
* `okeCloudAuthId` - Wercker Clusters cloud auth ID
* `federationEndpoint` - Federation API server endpoint
* `federationContext` - Federation name
* `federationNamespace` - Similar to FEDERATION_NAMESPACE. Defaults to federation-system if not set.
* `domain` - AWS domain name
* `image.repository` - Repository where Cluster Manager is stored
* `image.tag` - Cluster Manager image tag/version no.
* `okeApiHost` - Wercker Clusters API host endpoint
* `statestore` - AWS cluster S3 state store
3. Verify if Cluster Manager is installed.
@@ -122,11 +122,11 @@ examples.
```
This command deploys the cluster in an offline state. To customize the Wercker cluster, change the parameters in the `cluster-manager.n6s.io/cluster.config` annotation of the cluster. Before deploying, you can modify *ClusterOke.yaml* or use the `kubectl annotate` command. The supported parameters are:
- K8Version - Kubernetes Version. Valid values are 1.7.4, 1.7.9, or 1.8.0. Default value is 1.7.4.
- nodeZones - Available Domains. Valid values are AD-1, AD-2, AD-3. This field can be set to any combination of the set of values.
- shape - Valid values are VM.Standard1.1, VM.Standard1.2, VM.Standard1.4, VM.Standard1.8, VM.Standard1.16, VM.DenseIO1.4, VM.DenseIO1.8, VM.DenseIO1.16, BM.Standard1.36, BM.HighIO1.36 and BM.DenseIO1.36.
- workersPerAD - No. of worker nodes per availability domain.
- compartment - Wercker Cluster Compartment ID where worker nodes will be instantiated.
- K8Version - Kubernetes Version. Valid values are 1.7.4, 1.7.9, or 1.8.0. Default value is 1.7.4.
- nodeZones - Availability Domains. Valid values are AD-1, AD-2, AD-3. This field can be set to any combination of these values.
- shape - Valid values are VM.Standard1.1, VM.Standard1.2, VM.Standard1.4, VM.Standard1.8, VM.Standard1.16, VM.DenseIO1.4, VM.DenseIO1.8, VM.DenseIO1.16, BM.Standard1.36, BM.HighIO1.36 and BM.DenseIO1.36.
- workersPerAD - No. of worker nodes per availability domain.
- compartment - Wercker Cluster Compartment ID where worker nodes will be instantiated.
2. Initiate provisioning. **Note:** Do not perform these steps if you are using Navarkos. Navarkos determines on which cluster to perform the operation depending on demand and supply.
- Update annotation `n6s.io/cluster.lifecycle.state` with value `pending-provision`.
@@ -150,16 +150,16 @@ from examples.
`cluster-manager.n6s.io/cluster.config` which is used to customize the cluster, please refer to the
[AWS Documentation](https://aws.amazon.com/documentation):
- region - Region where the cluster will be instantiated.
- masterZones - List of zones where master nodes will be instantiated.
- nodeZones - List of zones where worker nodes will be instantiated.
- masterSize - Master node(s) instance size
- nodeSize - Worker node(s) instance size.
- numberOfMasters - No. of master nodes to instantiate.
- numberOfNodes - No. of worker nodes to instantiate.
- sshkey - SSH public access key to the cluster nodes.
2. Initiate provisioning. **Note:** Do not perform this step if you are using Navarkos. Navarkos determines on which cluster to perform the operation depending on demand and supply.
- region - Region where the cluster will be instantiated.
- masterZones - List of zones where master nodes will be instantiated.
- nodeZones - List of zones where worker nodes will be instantiated.
- masterSize - Master node(s) instance size
- nodeSize - Worker node(s) instance size.
- numberOfMasters - No. of master nodes to instantiate.
- numberOfNodes - No. of worker nodes to instantiate.
- sshkey - SSH public access key to the cluster nodes.
1. Initiate provisioning. **Note:** Do not perform this step if you are using Navarkos. Navarkos determines on which cluster to perform the operation depending on demand and supply.
- Update annotation `n6s.io/cluster.lifecycle.state` with value `pending-provision`.
- At the end of the provisioning, the value of `n6s.io/cluster.lifecycle.state` will be changed either to `ready` if successful or `failed-up` if failed.
@@ -170,17 +170,16 @@ from examples.
### Scaling up an already provisioned cluster
**Note:** Do not perform these steps if you are using Navarkos. Navarkos determines on which cluster to perform the operation depending on demand and supply.
- If there is more demand, you can scale up an already provisioned cluster to support more load.
Update the annotation `n6s.io/cluster.lifecycle.state` with value `pending-up`.
- If there is more demand, you can scale up an already provisioned cluster to support more load. Update the annotation `n6s.io/cluster.lifecycle.state` with value `pending-up`.
- You can configure the scale up size using the annotation `cluster-manager.n6s.io/cluster.scale-up-size`, otherwise it uses the default value 1.
- At the end of the provisioning, the value of `n6s.io/cluster.lifecycle.state` will be changed either to `ready` if successful or `failed-up` if failed.
Here is an example of scaling up a previously provisioned `akube-us-east-2` AWS cluster by 5
```
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 cluster-manager.n6s.io/cluster.scale-up-size=5 --overwrite && \
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 n6s.io/cluster.lifecycle.state=pending-up --overwrite
```
```
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 cluster-manager.n6s.io/cluster.scale-up-size=5 --overwrite && \
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 n6s.io/cluster.lifecycle.state=pending-up --overwrite
```
### Scaling down an already provisioned cluster
**Note:** Do not perform these steps if you are using Navarkos. Navarkos determines on which cluster to perform the operation depending on demand and supply.
@@ -192,23 +192,22 @@ Here is an example of scaling up a previously provisioned `akube-us-east-2` AWS
Here is an example of scaling down a previously provisioned `akube-us-east-2` AWS cluster by 5
```
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 cluster-manager.n6s.io/cluster.scale-down-size=5 --overwrite && \
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 n6s.io/cluster.lifecycle.state=pending-down --overwrite
```
```
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 cluster-manager.n6s.io/cluster.scale-down-size=5 --overwrite && \
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 n6s.io/cluster.lifecycle.state=pending-down --overwrite
```
### Shutting down a provisioned cluster
**Note:** Do not perform these steps if you are using Navarkos. Navarkos determines on which cluster to perform the operation depending on the demand and supply.
- You can shut down a provisioned cluster when it is not in use.
Update annotation `n6s.io/cluster.lifecycle.state` with value `pending-shutdown`.
- You can shut down a provisioned cluster when it is not in use. Update annotation `n6s.io/cluster.lifecycle.state` with value `pending-shutdown`.
- At the end of the provisioning, the value of `n6s.io/cluster.lifecycle.state` will be changed to `offline`.
Here is an example of shutting down a previously provisioned `akube-us-east-2` AWS cluster
```
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 n6s.io/cluster.lifecycle.state=pending-shutdown --overwrite
```
```
kubectl --context=$FEDERATION_CONTEXT annotate cluster akube-us-east-2 n6s.io/cluster.lifecycle.state=pending-shutdown --overwrite
```
### Checking cluster status
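The README changes above replace plain-text provider credentials with base64-encoded values that land in a Kubernetes Secret rendered from `ClusterManagerTemplate.yaml`. Before running `kubectl create`, it can be worth confirming that the substituted values in the generated file decode back to the originals — a minimal sketch, assuming the generation script shown earlier produced `ClusterManager.yaml` in the current directory (GNU `base64` syntax):
```
# Sketch: pull one Secret value out of the generated file and decode it.
# The key name (awsAccessKeyId) matches the Secret data keys in the template.
grep 'awsAccessKeyId:' ClusterManager.yaml | awk '{print $2}' | tr -d '"' | base64 --decode
echo  # the decoded value should equal $AWS_ACCESS_KEY_ID
```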
2 changes: 1 addition & 1 deletion deploy/WerckerClustersParameters.md
@@ -5,7 +5,7 @@ If you are using [Wercker Clusters](http://devcenter.wercker.com/docs/getting-st
1. When you use Wercker Clusters to create a new Kubernetes cluster, you provide Cloud Credentials that specify where the cluster should be created. If you already have a *Cloud Auth ID* and OKE_AUTH_GROUP in the **Clusters > Cloud Credentials** page in the Wercker cluster, you can use those credentials. If you need to create those IDs, follow these steps:
<ol type="a">
<li>Go to https://app.wercker.com/clusters/cloud-credentials and select the Organization.</li>
<li>Click <B>New Cloud Credential Button</B>. </li>
<li>Click <B>New Cloud Credential</B>. </li>
<li>Enter a name.</li>
<li>Enter all your Oracle Cloud Infrastructure (OCI) tenancy-specific information (User OCID, Tenancy OCID). You can get this information by logging in to [OCI](https://console.us-phoenix-1.oraclecloud.com/) and then changing the region, for example, to *us-ashburn-1*. Navigate to Identity and note down the User OCID, and copy the Tenancy ID, which you will find at the bottom of any page in OCI.</li>
<li>For <B>Key Fingerprint</B> and <B>API Private Key (PEM Format)</B>, follow the instructions in https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm.</li>
27 changes: 21 additions & 6 deletions deploy/helm/cluster-manager/templates/deployment.yaml
@@ -12,19 +12,34 @@ spec:
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}cluster-manager:{{ .Values.image.tag }}"
image: "{{ .Values.image.repository }}/cluster-manager:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: AWS_ACCESS_KEY_ID
value: "{{ .Values.awsAccessKeyId }}"
valueFrom:
secretKeyRef:
name: {{ template "name" . }}-provider-access-keys
key: awsAccessKeyId
- name: AWS_SECRET_ACCESS_KEY
value: "{{ .Values.awsSecretAccessKey }}"
valueFrom:
secretKeyRef:
name: {{ template "name" . }}-provider-access-keys
key: awsSecretAccessKey
- name: OKE_BEARER_TOKEN
value: "{{ .Values.okeBearerToken }}"
valueFrom:
secretKeyRef:
name: {{ template "name" . }}-provider-access-keys
key: okeBearerToken
- name: OKE_AUTH_GROUP
value: "{{ .Values.okeAuthGroup }}"
valueFrom:
secretKeyRef:
name: {{ template "name" . }}-provider-access-keys
key: okeAuthGroup
- name: OKE_CLOUD_AUTH_ID
value: "{{ .Values.okeCloudAuthId }}"
valueFrom:
secretKeyRef:
name: {{ template "name" . }}-provider-access-keys
key: okeCloudAuthId
command:
- /cluster-manager
{{- if (not (empty .Values.glogLevel)) }}
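The deployment template change above swaps the inline `value:` entries for `valueFrom.secretKeyRef`, so the rendered Deployment no longer carries the raw credentials. A minimal sketch for confirming the wiring on a live install — the deployment name `cluster-manager` and the use of the federation host context are assumptions, adjust them to your release:
```
# Sketch: list each provider env var and the Secret/key it is sourced from.
kubectl --context "$FEDERATION_HOST" -n "$FEDERATION_NAMESPACE" \
  get deployment cluster-manager \
  -o jsonpath='{range .spec.template.spec.containers[0].env[*]}{.name}{" <- "}{.valueFrom.secretKeyRef.name}{"/"}{.valueFrom.secretKeyRef.key}{"\n"}{end}'
```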
22 changes: 22 additions & 0 deletions deploy/helm/cluster-manager/templates/secret.yaml
@@ -0,0 +1,22 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ template "name" . }}-provider-access-keys
namespace: {{ .Values.federationNamespace }}
type: Opaque
data:
{{ if .Values.awsAccessKeyId }}
awsAccessKeyId: {{ .Values.awsAccessKeyId | b64enc | quote }}
{{ end }}
{{ if .Values.awsSecretAccessKey }}
awsSecretAccessKey: {{ .Values.awsSecretAccessKey | b64enc | quote }}
{{ end }}
{{ if .Values.okeBearerToken }}
okeBearerToken: {{ .Values.okeBearerToken | b64enc | quote }}
{{ end }}
{{ if .Values.okeAuthGroup }}
okeAuthGroup: {{ .Values.okeAuthGroup | b64enc | quote }}
{{ end }}
{{ if .Values.okeCloudAuthId }}
okeCloudAuthId: {{ .Values.okeCloudAuthId | b64enc | quote }}
{{ end }}
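Each entry in this Secret template is wrapped in an `{{ if }}` guard, so only the credentials actually passed to Helm are emitted. A sketch of inspecting the rendered output locally with `helm template` before installing — the chart path matches this commit, and the values are placeholders:
```
# Sketch: render the chart and print just the generated Secret manifest.
helm template deploy/helm/cluster-manager \
  --set awsAccessKeyId="awsaccesskeyid" \
  --set awsSecretAccessKey="awssecretaccesskey" \
  --set federationNamespace="federation-system" \
  | sed -n '/kind: Secret/,/^---/p'
```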
2 changes: 1 addition & 1 deletion deploy/helm/cluster-manager/values.yaml
@@ -21,7 +21,7 @@ federationNamespace:
domain: domain.change.me
statestore:

# OKE connection paramters
# OKE connection parameters
okeApiHost:

# Log Level
examples/ClusterManagerTemplate.yaml
@@ -1,3 +1,16 @@
apiVersion: v1
kind: Secret
metadata:
name: cluster-manager-provider-access-keys
namespace: $FEDERATION_NAMESPACE
type: Opaque
data:
awsAccessKeyId: "$AWS_ACCESS_KEY_ID_BASE64"
awsSecretAccessKey: "$AWS_SECRET_ACCESS_KEY_BASE64"
okeBearerToken: "$OKE_BEARER_TOKEN_BASE64"
okeAuthGroup: "$OKE_AUTH_GROUP_BASE64"
okeCloudAuthId: "$OKE_CLOUD_AUTH_ID_BASE64"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
@@ -12,19 +25,34 @@ spec:
spec:
containers:
- name: cluster-manager
image: $CLUSTER_MANAGER_IMAGE
image: ${IMAGE_REPOSITORY}/cluster-manager:${IMAGE_VERSION}
imagePullPolicy: Always
env:
- name: AWS_ACCESS_KEY_ID
value: "$AWS_ACCESS_KEY_ID"
valueFrom:
secretKeyRef:
name: cluster-manager-provider-access-keys
key: awsAccessKeyId
- name: AWS_SECRET_ACCESS_KEY
value: "$AWS_SECRET_ACCESS_KEY"
valueFrom:
secretKeyRef:
name: cluster-manager-provider-access-keys
key: awsSecretAccessKey
- name: OKE_BEARER_TOKEN
value: "$OKE_BEARER_TOKEN"
valueFrom:
secretKeyRef:
name: cluster-manager-provider-access-keys
key: okeBearerToken
- name: OKE_AUTH_GROUP
value: "$OKE_AUTH_GROUP"
valueFrom:
secretKeyRef:
name: cluster-manager-provider-access-keys
key: okeAuthGroup
- name: OKE_CLOUD_AUTH_ID
value: "$OKE_CLOUD_AUTH_ID"
valueFrom:
secretKeyRef:
name: cluster-manager-provider-access-keys
key: okeCloudAuthId
command:
- /cluster-manager
- --v=2
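With the example manifest above, the raw keys exist only inside the `cluster-manager-provider-access-keys` Secret; the container reads them through `secretKeyRef`. A sketch for reading a stored value back out of a running federation host and decoding it for comparison (GNU `base64` syntax assumed):
```
# Sketch: fetch one key from the deployed Secret and decode it.
kubectl --context "$FEDERATION_HOST" -n "$FEDERATION_NAMESPACE" \
  get secret cluster-manager-provider-access-keys \
  -o jsonpath='{.data.awsAccessKeyId}' | base64 --decode && echo
```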
3 changes: 1 addition & 2 deletions pkg/controller/cluster/provider/kopsAws/kops.go
@@ -143,10 +143,9 @@ func init() {
}

func osLookup(keyId string) error {
if value, ok := os.LookupEnv(keyId); !ok {
if _, ok := os.LookupEnv(keyId); !ok {
return errors.Errorf("%s env is not set, kops provider will be disabled", keyId)
} else {
glog.Infof("Using %s=%v", keyId, value)
return nil
}
}
3 changes: 0 additions & 3 deletions pkg/controller/cluster/provider/oke/okebmc.go
@@ -119,20 +119,17 @@ func NewOke(config *options.ClusterControllerOptions) (*Oke, error) {

if bearer, ok := os.LookupEnv(OkeBearerToken); ok {
oke.okeBearer = OkeBearerPrefix + bearer
glog.Infof("Using OKE_BEARER_TOKEN=%v", oke.okeBearer)
} else {
return nil, errors.Errorf("Env var %v is required by OKE provider, OKE clusters will not be provisioned", OkeBearerToken)
}

if authGroup, ok := os.LookupEnv(OkeAuthGroup); ok {
oke.okeAuthGroup = authGroup
glog.Infof("Using OKE_AUTH_GROUP=%v", oke.okeAuthGroup)
} else {
return nil, errors.Errorf("Env var %v is required by OKE provider, OKE clusters will not be provisioned", OkeAuthGroup)
}
if defaultCloudAuthId, ok := os.LookupEnv(OkeDefaultCloudAuthId); ok {
oke.okeDefaultCloudAuthId = defaultCloudAuthId
glog.Infof("Using OKE_CLOUD_AUTH_ID=%v", defaultCloudAuthId)
}

return oke, nil