Commit 21fe8a7

Merge remote-tracking branch 'upstream/development' into dev-robert

rjzondervan committed May 27, 2020
2 parents d6c419c + 873d1ab
Showing 11 changed files with 317 additions and 243 deletions.
44 changes: 6 additions & 38 deletions INSTALLATION.md
@@ -2,41 +2,15 @@
This document dives a little bit deeper into installing your component on a Kubernetes cluster. Looking for information on setting up your component on a local machine? Take a look at the [tutorial](TUTORIAL.md) instead.

## Setting up helm
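If the helm client itself is not installed yet, the Helm project provides an install script. This is a minimal sketch (an addition, not part of the original steps), assuming a Linux or macOS machine and Helm 3:

```CLI
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ helm version
```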



## Setting up tiller
Create the tiller service account:

```CLI
$ kubectl -n kube-system create serviceaccount tiller --kubeconfig="api/helm/kubeconfig.yaml"
```

Next, bind the tiller service account to the cluster-admin role:
```CLI
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller --kubeconfig="api/helm/kubeconfig.yaml"
```
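To confirm that the service account and role binding were created, a quick verification sketch (not in the original text):

```CLI
$ kubectl -n kube-system get serviceaccount tiller --kubeconfig="api/helm/kubeconfig.yaml"
$ kubectl get clusterrolebinding tiller --kubeconfig="api/helm/kubeconfig.yaml"
```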

Now we can run helm init, which installs Tiller on our cluster, along with some local housekeeping tasks such as downloading the stable repo details:
We first need to make sure the stable repository of helm and kubernetes has been added. We do this using the following command:
```CLI
$ helm init --service-account tiller --kubeconfig="kubeconfig.yaml"
$ helm repo list
```

To verify that Tiller is running, list the pods in the kube-system namespace:
```CLI
$ kubectl get pods --namespace kube-system --kubeconfig="kubeconfig.yaml"
```

The Tiller pod name begins with the prefix tiller-deploy-.
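Since the actual pod name carries a generated suffix, you can filter the list directly (a convenience sketch):

```CLI
$ kubectl get pods --namespace kube-system --kubeconfig="kubeconfig.yaml" | grep tiller-deploy
```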

Now that we've installed both Helm components, we're ready to use helm to install our first application.

Or, all in one go:
If the output does not list a repository named 'stable', we need to add it:

```CLI
$ kubectl -n kube-system create serviceaccount tiller --kubeconfig="kubeconfig.yaml"
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller --kubeconfig="kubeconfig.yaml"
$ helm init --service-account tiller --kubeconfig="kubeconfig.yaml"
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
```
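After adding it, refreshing the local chart cache and searching for the repository confirms it is usable; `helm repo update` and `helm search repo` are standard Helm 3 commands (a hedged addition, not part of the original steps):

```CLI
$ helm repo update
$ helm search repo stable
```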

## Setting up ingress
@@ -56,19 +30,14 @@ $ kubectl describe ingress pc-dev-ingress -n=kube-system --kubeconfig="kubeconfi
After we have installed helm and tiller, we can easily use both to install the Kubernetes dashboard:

```CLI
$ helm install stable/kubernetes-dashboard --name dashboard --kubeconfig="kubeconfig.yaml" --namespace="kube-system"
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml --kubeconfig=kubeconfig.yaml
```

But before we can log in to the dashboard we need a token; we can get one of those through the secrets. Get yourself a secret list by running the following command:
```CLI
$ kubectl -n kube-system get secret --kubeconfig="kubeconfig.yaml"
```

Because we just bound tiller to our admin account and use tiller (through helm) to manage our code deployment, it makes sense to use the tiller token. Let's look at the tiller secret (it should look something like "tiller-token-XXXXX") and ask for the corresponding token.

```CLI
$ kubectl -n kube-system describe secrets tiller-token-xxxxx --kubeconfig="kubeconfig.yaml"
```
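As an alternative, the token can be pulled out directly with jsonpath and base64-decoded (a one-liner sketch, using the same placeholder secret name):

```CLI
$ kubectl -n kube-system get secret tiller-token-xxxxx -o jsonpath='{.data.token}' --kubeconfig="kubeconfig.yaml" | base64 --decode
```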

This should return the token; copy it somewhere safe (just the token, not the other returned information) and start up a dashboard connection.
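The dashboard connection is typically started with kubectl proxy, which serves the cluster API on localhost:8001, matching the URL used below (a sketch assuming the same kubeconfig):

```CLI
$ kubectl proxy --kubeconfig="kubeconfig.yaml"
```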

@@ -86,15 +55,14 @@ http://localhost:8001/api/v1/namespaces/kube-system/services/https:dashboard-kub
https://cert-manager.io/docs/installation/kubernetes/

```CLI
$ kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml --kubeconfig="kubeconfig.yaml"
$ kubectl create namespace cert-manager --kubeconfig="kubeconfig.yaml"
```
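To verify that the custom resource definitions were registered before continuing (a quick check, not in the original instructions):

```CLI
$ kubectl get crd --kubeconfig="kubeconfig.yaml" | grep cert-manager
```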

Then we need to deploy the cert-manager to our cluster:

```CLI
$ helm repo add jetstack https://charts.jetstack.io
$ helm install --name cert-manager --namespace cert-manager --version v0.12.0 jetstack/cert-manager --kubeconfig="kubeconfig.yaml"
$ helm install cert-manager --namespace cert-manager --version v0.15.0 jetstack/cert-manager --set installCRDs=true --kubeconfig="kubeconfig.yaml"
```

Let's check if everything is working:
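Following the cert-manager documentation, the usual check is to list the pods in its namespace and confirm that the cert-manager, cainjector, and webhook pods reach the Running state:

```CLI
$ kubectl get pods --namespace cert-manager --kubeconfig="kubeconfig.yaml"
```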
1 change: 0 additions & 1 deletion api/.dockerignore
@@ -17,4 +17,3 @@ bin/*
!bin/console
docker/db/data/
var/
vendor/
105 changes: 7 additions & 98 deletions api/Dockerfile
@@ -1,91 +1,16 @@
# the different stages of this Dockerfile are meant to be built into separate images
# https://docs.docker.com/develop/develop-images/multistage-build/#stop-at-a-specific-build-stage
# https://docs.docker.com/compose/compose-file/#target


# https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
ARG PHP_VERSION=7.3
ARG NGINX_VERSION=1.17
ARG VARNISH_VERSION=6.2


#############################
# "php" stage #
#############################
# The base stage for all our stages
FROM php:${PHP_VERSION}-fpm-alpine AS api_platform_php

# Note: Latest version of kubectl may be found at:
# https://github.com/kubernetes/kubernetes/releases
ENV KUBE_LATEST_VERSION="v1.17.3"
# Note: Latest version of helm may be found at:
# https://github.com/kubernetes/helm/releases
ENV HELM_VERSION="v2.14.1"

RUN apk add --no-cache ca-certificates bash git openssh curl \
    && wget -q https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl \
    && chmod +x /usr/local/bin/kubectl \
    && wget -q https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz -O - | tar -xzO linux-amd64/helm > /usr/local/bin/helm \
    && chmod +x /usr/local/bin/helm
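# To sanity-check that these binaries made it into the image, one can build this
# stage and run them (hypothetical tag and build context, for illustration only):
#   docker build --target api_platform_php -t pc-php-test ./api
#   docker run --rm pc-php-test kubectl version --client
#   docker run --rm pc-php-test helm version --client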

# persistent / runtime deps
RUN apk add --no-cache \
        acl \
        file \
        gettext \
        git \
    ;

ARG APCU_VERSION=5.1.17
RUN set -eux; \
    apk add --no-cache --virtual .build-deps \
        $PHPIZE_DEPS \
        icu-dev \
        libzip-dev \
        libpng-dev \
        postgresql-dev \
        zlib-dev \
    ; \
    \
    docker-php-ext-configure zip --with-libzip; \
    docker-php-ext-install -j$(nproc) \
        intl \
        pdo_pgsql \
        zip \
        mysqli \
        pdo_mysql \
        pcntl \
        gd \
    ; \
    pecl install \
        apcu-${APCU_VERSION} \
        redis \
    ; \
    rm -rf /tmp/pear; \
    pecl clear-cache; \
    docker-php-ext-enable \
        apcu \
        opcache \
        mysqli \
        redis \
    ; \
    \
    runDeps="$( \
        scanelf --needed --nobanner --format '%n#p' --recursive /usr/local/lib/php/extensions \
            | tr ',' '\n' \
            | sort -u \
            | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
    )"; \
    apk add --no-cache --virtual .api-phpexts-rundeps $runDeps; \
    \
    apk del .build-deps

COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
RUN ln -s $PHP_INI_DIR/php.ini-production $PHP_INI_DIR/php.ini
COPY docker/php/conf.d/api-platform.ini $PHP_INI_DIR/conf.d/api-platform.ini

FROM conduction/pc-php:prod AS api_platform_php


# https://getcomposer.org/doc/03-cli.md#composer-allow-superuser
ENV COMPOSER_ALLOW_SUPERUSER=1

# install Symfony Flex globally to speed up download of Composer packages (parallelized prefetching)
RUN set -eux; \
composer global require "symfony/flex" --prefer-dist --no-progress --no-suggest --classmap-authoritative; \
@@ -104,12 +29,8 @@ RUN set -eux; \
composer install --prefer-dist --no-dev --no-scripts --no-progress --no-suggest; \
composer clear-cache

# do not use .env files in production
COPY .env ./
RUN composer dump-env prod; \
rm .env
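# For reference: `composer dump-env prod` (provided by symfony/flex) compiles the
# .env files into an optimized .env.local.php, which Symfony loads instead of
# parsing .env at runtime, which is why .env can then be removed from the image.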

# copy only specifically what we need
COPY .env ./
COPY helm helm/
COPY bin bin/
COPY config config/
@@ -132,34 +53,22 @@ RUN chmod +x /usr/local/bin/docker-entrypoint
ENTRYPOINT ["docker-entrypoint"]
CMD ["php-fpm"]

# Let's update the docs to show the latest changes
# RUN bin/console api:swagger:export --output=/srv/api/public/schema/openapi.yaml --yaml --spec-version=3
# RUN bin/console app:publiccode:update --location=/srv/api/public/schema/ --spec-version=0.2

#############################
# "nginx" stage #
#############################
# depends on the "php" stage above, with a little bit of help from https://github.com/shiphp/nginx-env
FROM shiphp/nginx-env AS api_platform_nginx
FROM conduction/pc-nginx:prod AS api_platform_nginx

# Due to our config we need a copy of the public folder for serving static content
COPY docker/nginx/conf.d/default.conf.template /etc/nginx/conf.d/default.conf
WORKDIR /srv/api
COPY --from=api_platform_php /srv/api/public public/

# Old code
#FROM nginx:${NGINX_VERSION}-alpine AS api_platform_nginx
#COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
#WORKDIR /srv/api
#COPY --from=api_platform_php /srv/api/public public/

#############################
# "varnish" stage #
#############################
# does not depend on any of the above stages, but placed here to keep everything in one Dockerfile
#FROM cooptilleuls/varnish:${VARNISH_VERSION}-alpine AS api_platform_varnish
FROM eeacms/varnish AS api_platform_varnish
#FROM varnish:6.3 AS api_platform_varnish
FROM conduction/pc-varnish:prod AS api_platform_varnish

COPY docker/varnish/conf/default.vcl /etc/varnish/conf.d/
# Let's install envsubst