Merge pull request #2 from jramsdale/jenkinsify
Jenkinsify
chrislovecnm authored Aug 24, 2018
2 parents af9b92c + c4df1f4 commit e4cb5dd
Showing 15 changed files with 347 additions and 159 deletions.
6 changes: 4 additions & 2 deletions .gitignore
@@ -1,4 +1,6 @@
account.json
__pycache__/
venv/
.vscode/
.terraform/
terraform.tfvars
.terraform.tfstate*
terraform.tfstate*
45 changes: 35 additions & 10 deletions Jenkinsfile
@@ -33,13 +33,13 @@ metadata:
spec:
containers:
- name: k8s-node
image: gcr.io/pso-helmsman-cicd/jenkins-k8s-node:1.0.0
image: gcr.io/pso-helmsman-cicd/jenkins-k8s-node:1.0.1
imagePullPolicy: Always
command:
- cat
tty: true
volumeMounts:
# Mount the docker.sock file so we can communicate wth the local docker
# Mount the docker.sock file so we can communicate with the local docker
# daemon
- name: docker-sock-volume
mountPath: /var/run/docker.sock
@@ -64,27 +64,51 @@ spec:
}
}

environment {
GOOGLE_APPLICATION_CREDENTIALS = '/home/jenkins/dev/jenkins-deploy-dev-infra.json'
}


stages {
stage('Setup access') {
stage('Lint') {
steps {
container('k8s-node') {
sh "make lint"
}
}
}

stage('Setup') {
steps {
container('k8s-node') {
script {
env.KEYFILE = "/home/jenkins/dev/jenkins-deploy-dev-infra.json"
}
env.ZONE = "${ZONE}"
env.PROJECT_ID = "${PROJECT_ID}"
env.REGION = "${REGION}"
env.KEYFILE = GOOGLE_APPLICATION_CREDENTIALS
}
// Setup gcloud service account access
sh "gcloud auth activate-service-account --key-file=${env.KEYFILE}"
sh "gcloud config set compute/zone ${env.ZONE}"
sh "gcloud config set core/project ${env.PROJECT_ID}"
sh "gcloud config set compute/region ${env.REGION}"

}
}
}

stage('makeall') {
stage('Create') {
steps {
container('k8s-node') {
// Checkout code from repository
checkout scm
sh "make create"
}
}
}

sh "make all"
stage('Validate') {
steps {
container('k8s-node') {
sh "make validate"
}
}
}
@@ -93,7 +117,8 @@ spec:
post {
always {
container('k8s-node') {
sh 'gcloud auth revoke'
sh "make teardown"
sh "gcloud auth revoke"
}
}
}
22 changes: 19 additions & 3 deletions Makefile
@@ -15,8 +15,25 @@
# Make will use bash instead of sh
SHELL := /usr/bin/env bash

# All is the first target in the file so it will get picked up when you just run 'make' on its own
all: check_shell check_python check_golang check_terraform check_docker check_base_files check_headers check_trailing_whitespace
ROOT := ${CURDIR}

# lint is the first target in the file so it will get picked up when you just
# run 'make' on its own
lint: check_shell check_shebangs check_python check_golang check_terraform \
check_docker check_base_files check_headers check_trailing_whitespace

# create/validate/teardown targets are for CICD
.PHONY: create
create:
@source scripts/create.sh

.PHONY: validate
validate:
@source scripts/validate.sh

.PHONY: teardown
teardown:
@source scripts/teardown.sh

# The .PHONY directive tells make that this isn't a real target and so
# the presence of a file named 'check_shell' won't cause this target to stop
@@ -57,4 +74,3 @@ check_trailing_whitespace:
check_headers:
@echo "Checking file headers"
@python test/verify_boilerplate.py

121 changes: 70 additions & 51 deletions README.md
@@ -3,10 +3,10 @@
* [Introduction](#introduction)
* [Architecture](#architecture)
* [Prerequisites](#prerequisites)
* [Enable GCP APIs](#enable-gcp-apis)
* [Install Cloud SDK](#install-cloud-sdk)
* [Install Terraform](#install-terraform)
* [Configure Authentication](#configure-authentication)
* [Enable GCP APIs](#enable-gcp-apis)
* [Deployment](#deployment)
* [Introduction to Terraform](#introduction-to-terraform)
* [Running Terraform](#running-terraform)
@@ -81,24 +81,9 @@ available in the Stackdriver Trace Console.
## Prerequisites

The steps described in this document require the installation of several tools
and the proper configuration of authentication to allow them to access your
and the proper configuration of authentication and APIs to allow access to your
GCP resources.

### Enable GCP APIs

The following APIs need to be enabled:
* Kubernetes Engine API
* Stackdriver Trace API

A script is provided in the /scripts folder named **enable-apis.sh** that will
enable these three API's. Follow these steps to execute the script:
1. In the GCP console, change to the project you want to enable the API's for.
2. Click on the **Activate Cloud Shell Console** Visit the **APIs & Services**
section of the GCP Console.
3. Upload the **enable-apis.sh** script in the **Cloud Shell** window.
4. Execute the script.


### Install Cloud SDK

The Google Cloud SDK is used to interact with your GCP resources.
@@ -130,6 +115,18 @@ In order to interact with GCP from your system you will need to authenticate:
```console
gcloud auth application-default login
```
### Enable GCP APIs

The following APIs need to be enabled:
* Kubernetes Engine API
* Stackdriver Trace API

The following commands will enable these APIs:

```console
gcloud services enable container.googleapis.com
gcloud services enable cloudtrace.googleapis.com
```
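
As an optional sanity check (not part of the original steps), you can confirm the services are active by listing the enabled APIs:

```console
gcloud services list --enabled | grep -E 'container|cloudtrace'
```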


## Deployment
@@ -159,51 +156,71 @@ demo app will produce trace events that are visible in the

### Running Terraform

There are three Terraform files provided with this example. The first one,
`main.tf`, is the starting point for Terraform. It describes the features that
will be used, the resources that will be manipulated, and the outputs that will
result. The second file is `provider.tf`, which indicates which cloud provider
and version will be the target of the Terraform commands--in this case GCP. The
final file is `variables.tf`, which contains a list of variables that are used
as inputs into Terraform. Any variables referenced in the `main.tf` that do not
have defaults configured in `variables.tf` will result in prompts to the user
at runtime.
There are three Terraform files provided with this example, located in the `/terraform` subdirectory of the project. The first one, `main.tf`, is the starting point for Terraform. It describes the features that will be used, the resources that will be manipulated, and the outputs that will result. The second file is `provider.tf`, which indicates which cloud provider and version will be the target of the Terraform commands--in this case GCP. The final file is `variables.tf`, which contains a list of variables that are used as inputs into Terraform. Any variables referenced in the `main.tf` that do not have defaults configured in `variables.tf` will result in prompts to the user at runtime.
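
For orientation, listing the directory should show these three files; the exact contents of your checkout may vary slightly (for example, state files appear after Terraform runs):

```console
ls terraform/
main.tf  provider.tf  variables.tf
```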

#### Initialization
Given that authentication was [configured](#configure-authentication) above, we
are now ready to deploy the infrastructure. Run the following command to do the
deploy:
are now ready to deploy the infrastructure. Run the following command from the root directory of the project:

```console
cd terraform
```

Once there, Terraform needs to be initialized. This will download the dependencies that Terraform requires to function. Enter:
```console
terraform init
```

For this demo, Terraform needs two pieces of information in order to run: the GCP _project_ and the GCP _zone_ to which the demo application should be deployed. Terraform will prompt for these values if it does not know them already. By default, it will look for a file called `terraform.tfvars` or files with a suffix of `.auto.tfvars` in the current directory to obtain those values. This demo provides a convenience script to prompt for project and zone and persist them in a `terraform.tfvars` file. Run:

```console
../scripts/generate-tfvars.sh
```

If the file already exists you will receive an error.
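
The generated file is a plain set of key/value assignments. The variable names and values below are illustrative only; the authoritative names are whatever `variables.tf` and `generate-tfvars.sh` define:

```console
cat terraform.tfvars
project="my-project-id"
zone="us-west1-a"
```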

The script uses previously-configured values from the `gcloud` command. If they have not been configured, the error message will indicate how they should be set. The existing values can be viewed with the following command:

```console
gcloud config list
```
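
If these values have not been set yet, they can be configured directly; the project ID, zone, and region below are placeholders:

```console
gcloud config set core/project my-project-id
gcloud config set compute/zone us-west1-a
gcloud config set compute/region us-west1
```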

If the displayed values don't correspond to where you intend to run the demo application, change the values in `terraform.tfvars` to your preferred values.

#### Deployment

Having initialized Terraform you can see the work that Terraform will perform with the following command:

```console
terraform plan
```

This command can be used to visually verify that settings are correct and Terraform will inform you if it detects any errors. While not necessary, it is a good practice to run it every time prior to changing infrastructure using Terraform.

After verification, tell Terraform to set up the necessary infrastructure:

```console
./deploy.sh
terraform apply
```

This script will manage the deployment by doing the following things:
It displays the changes that will be made and asks you to confirm with `yes`.

1. Generate the Terraform variable values using the `generate-tfvars.sh` script
2. Ensure Terraform is initialized using `terraform init`
3. Deploy the Terraform resources using `terraform apply`
4. Deploy the Kubernetes resources using
`kubectl apply -f ./tracing-demo-deployment.yaml`
After a few minutes you should find your [Kubernetes Cluster](https://console.cloud.google.com/kubernetes) and [Pub/Sub Topic and Subscription](https://console.cloud.google.com/cloudpubsub) in the GCP console.
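
If you prefer the command line to the console, the same resources can be listed with `gcloud`; this is an optional check, not a required step:

```console
gcloud container clusters list
gcloud pubsub topics list
gcloud pubsub subscriptions list
```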

You will need to enter any variables again that don't have defaults provided.
If no errors are displayed then after a few minutes you should see your
Kubernetes Engine cluster in the
[GCP Console](https://console.cloud.google.com/kubernetes) with the sample
application deployed.
Now, deploy the demo application using the Kubernetes `kubectl` command:

At this point you should find a Kubernetes cluster has been deployed to GCP and
a Pub/Sub topic will have be been created as well a subscription to the topic.
The final thing you should see is a Kubernetes deployment in the
[Workload](https://console.cloud.google.com/kubernetes/workload) tab of the
Kubernetes Engine Console
```console
kubectl apply -f tracing-demo-deployment.yaml
```

Once the app has been deployed, it can be viewed in the [Workload](https://console.cloud.google.com/kubernetes/workload) tab of the Kubernetes Engine Console. You can also see the load balancer that was created for the application in the [Services](https://console.cloud.google.com/kubernetes/discovery) section of the console.

Once the app has been deployed, it can be viewed in the
[Workload](https://console.cloud.google.com/kubernetes/workload) tab of the
Kubernetes Engine Console. You can also see the load balancer that was created
for the application in the
[Services](https://console.cloud.google.com/kubernetes/discovery) section of
the console.
Incidentally, the endpoint can be programmatically acquired using the following command:

```console
echo http://$(kubectl get svc tracing-demo -n default \
-ojsonpath='{.status.loadBalancer.ingress[0].ip}')
```
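
As a rough smoke test (assuming the load balancer has finished provisioning an external IP), the endpoint can also be probed directly:

```console
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://$(kubectl get svc tracing-demo -n default \
  -ojsonpath='{.status.loadBalancer.ingress[0].ip}')"
```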

## Validation

@@ -303,6 +320,8 @@ that were created so that you avoid accruing charges:
terraform destroy
```

As with `apply`, Terraform will prompt for a `yes` to confirm your intent.

Since Terraform tracks the resources it created it is able to tear down the
cluster, the Pub/Sub topic, and the Pub/Sub subscription.

29 changes: 0 additions & 29 deletions deploy.sh

This file was deleted.

56 changes: 56 additions & 0 deletions scripts/common.sh
@@ -0,0 +1,56 @@
#!/bin/bash -e

# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# "---------------------------------------------------------"
# "- -"
# "- Common commands for all scripts -"
# "- -"
# "---------------------------------------------------------"

# gcloud and kubectl are required for this POC
command -v gcloud >/dev/null 2>&1 || { \
echo >&2 "I require gcloud but it's not installed. Aborting."; exit 1; }

command -v kubectl >/dev/null 2>&1 || { \
echo >&2 "I require kubectl but it's not installed. Aborting."; exit 1; }

# Get the default zone and use it or die
ZONE=$(gcloud config get-value compute/zone)
if [ -z "${ZONE}" ]; then
echo "gcloud cli must be configured with a default zone." 1>&2
echo "run 'gcloud config set compute/zone ZONE'." 1>&2
echo "replace 'ZONE' with the zone name like us-west1-a." 1>&2
exit 1;
fi

# Get the default region and use it or die
REGION=$(gcloud config get-value compute/region)
if [ -z "${REGION}" ]; then
echo "gcloud cli must be configured with a default region." 1>&2
echo "run 'gcloud config set compute/region REGION'." 1>&2
echo "replace 'REGION' with the region name like us-west1." 1>&2
exit 1;
fi

# Get a comma separated list of zones from the default region
ZONESINREGION=""
for FILTEREDZONE in $(gcloud compute zones list --filter="region:$REGION" \
--format="value(name)" --limit 2)
do
ZONESINREGION+="$FILTEREDZONE,"
done
# Remove the trailing comma
ZONESINREGION=${ZONESINREGION%?}
