We are going to configure HashiCorp Vault as our application secret backend. A single Vault pod instance was deployed as part of the platform-base helm chart we installed earlier. We now need to configure it for our team to use.
-
We configured ArgoCD with a service account name to use when connecting to Vault. Let's create the Service Account now.
oc login --server=https://api.${CLUSTER_DOMAIN##apps.}:6443 -u admin -p ${ADMIN_PASSWORD}
-
To bootstrap ArgoCD, we will manually create a secret for ArgoCD to connect to GitLab; later on we will add this via GitOps and Vault instead.
cat <<EOF | oc -n ${TEAM_NAME}-ci-cd apply -f -
apiVersion: v1
kind: Secret
metadata:
  annotations:
    tekton.dev/git-0: https://${GIT_SERVER}
  name: git-auth
type: kubernetes.io/basic-auth
data:
  password: "$(echo -n ${GITLAB_PAT} | base64)"
  username: "$(echo -n ${GITLAB_USER} | base64)"
EOF
Refresh the list in ArgoCD > Settings > Repositories and we should see our credentials and GitLab repo connection are OK.
-
We need to initialize and unseal HashiCorp Vault before we can start using it. Initialize the Vault first - we only do this once.
oc -n rainforest exec -ti platform-base-vault-0 -- vault operator init -key-threshold=1 -key-shares=1
You should see output like the following - copy these values somewhere safe!
Unseal Key 1: <unseal key>
Initial Root Token: <root token>
Export them in the terminal for now as well.
export UNSEAL_KEY=<unseal key>
export ROOT_TOKEN=<root token>
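If you prefer not to copy the values by hand, `vault operator init` also supports `-format=json`; assuming the jq CLI is available, the values can be extracted directly. A sketch - the sample JSON below only mimics the output shape, with fake values:

```shell
# Sample shape of `vault operator init -format=json` output (values are fake).
# In practice, capture the real output:
#   INIT_JSON=$(oc -n rainforest exec platform-base-vault-0 -- vault operator init -key-threshold=1 -key-shares=1 -format=json)
INIT_JSON='{"unseal_keys_b64":["k1AbCdEf"],"root_token":"hvs.example"}'

# Extract the first unseal key and the root token with jq
UNSEAL_KEY=$(echo "$INIT_JSON" | jq -r '.unseal_keys_b64[0]')
ROOT_TOKEN=$(echo "$INIT_JSON" | jq -r '.root_token')
echo "$UNSEAL_KEY $ROOT_TOKEN"
```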
-
Unseal the vault.
oc -n rainforest exec -ti platform-base-vault-0 -- vault operator unseal $UNSEAL_KEY
⛷️ TIP ⛷️ - If the Vault pod is restarted (for example when your cluster restarts), you will need to run the unseal command again. This can easily be run from a Kubernetes CronJob.
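A minimal sketch of such a CronJob, assuming the unseal key has been stored in a Secret named vault-unseal (the Secret name, key, image, and schedule below are all illustrative):

```shell
cat <<'EOF' | oc -n rainforest apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-unseal
spec:
  schedule: "*/5 * * * *"   # check every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: unseal
            image: hashicorp/vault   # illustrative image
            env:
            - name: VAULT_ADDR
              value: https://platform-base-vault.rainforest.svc:8200
            - name: VAULT_SKIP_VERIFY
              value: "true"
            - name: UNSEAL_KEY
              valueFrom:
                secretKeyRef:
                  name: vault-unseal
                  key: unseal-key
            # `vault status` exits non-zero while sealed, so only unseal when needed
            command: ["/bin/sh", "-c", "vault status > /dev/null 2>&1 || vault operator unseal \"$UNSEAL_KEY\""]
EOF
```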
-
Login to the HashiCorp Vault UI using the ROOT_TOKEN. This is useful for debugging and visually understanding Vault. It uses a self-signed certificate, so accept the TLS warning in your browser.
echo https://$(oc get route platform-base-vault --template='{{ .spec.host }}' -n rainforest)
-
We will integrate Vault with the IPA/LDAP authentication that we set up as part of the cluster installation post-install steps.
export VAULT_ROUTE=vault.${CLUSTER_DOMAIN}
export VAULT_ADDR=https://${VAULT_ROUTE}
export VAULT_SKIP_VERIFY=true
-
Login to Vault from the command line using the root token.
vault login token=${ROOT_TOKEN}
- Enable the LDAP auth method in Vault.
vault auth enable ldap
- Create the auth config using our LDAP admin account credentials.
export LDAP_ADMIN_PASSWORD=<LDAP_ADMIN_PASSWORD>
vault write auth/ldap/config \
url="ldap://ipa.ipa.svc.cluster.local:389" \
binddn="uid=ldap_admin,cn=users,cn=accounts,dc=redhatlabs,dc=dev" \
bindpass="${LDAP_ADMIN_PASSWORD}" \
userdn="cn=users,cn=accounts,dc=redhatlabs,dc=dev" \
userattr="uid" \
groupdn="cn=student,cn=groups,cn=accounts,dc=redhatlabs,dc=dev" \
groupattr="cn"
In the Vault UI you should see the ldap/ entry under Access > Auth Methods.
-
Create a Vault application policy for our <TEAM_NAME>-ci-cd project and student user group. We allow CRUD access to secrets for users in the student group.
export APP_NAME=vault
export TEAM_GROUP=student
export PROJECT_NAME=<TEAM_NAME>-ci-cd
vault policy write $TEAM_GROUP-$PROJECT_NAME -<<EOF
path "kv/data/{{identity.groups.names.$TEAM_GROUP.name}}/$PROJECT_NAME/*" {
  capabilities = [ "create", "update", "read", "delete", "list" ]
}
path "auth/$CLUSTER_DOMAIN-$PROJECT_NAME/*" {
  capabilities = [ "create", "update", "read", "delete", "list" ]
}
EOF
In the Vault UI you should see this created under Policies > ACL Policies.
-
Test that LDAP login works for USER_NAME. Enter your USER_PASSWORD when prompted.
vault login -method=ldap username=${USER_NAME}
-
We want to bind the user entity id mapping to our ACL. Log back in with the root token and grab the user's entity id.
vault login token=${ROOT_TOKEN}
export ENTITY_ID=$(vault list -format json identity/entity/id | jq -r '.[]')
Note that for multiple users in our team, the member_entity_ids entry would contain a list of IDs. We only have one for now.
vault write identity/group name="$TEAM_GROUP" \
  policies="$TEAM_GROUP-$PROJECT_NAME" \
  member_entity_ids=$ENTITY_ID \
  metadata=team="$TEAM_GROUP"
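When more team members log in via LDAP, each gets their own entity ID, and the full list can be joined into a comma-separated value for member_entity_ids. A sketch, assuming jq; the sample IDs are made up and only mimic the shape of `vault list -format json identity/entity/id`:

```shell
# Hypothetical list output for two users (fake IDs)
IDS_JSON='["entity-id-alice","entity-id-bob"]'

# Join all entity IDs into a comma-separated list for member_entity_ids
MEMBER_IDS=$(echo "$IDS_JSON" | jq -r 'join(",")')
echo "$MEMBER_IDS"
```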
-
ArgoCD will connect using a different Vault authentication mechanism. Enable the Kubernetes auth method in Vault.
vault auth enable -path=$CLUSTER_DOMAIN-${PROJECT_NAME} kubernetes
- We need the Vault mount accessor name to create our ACL Policy for Kubernetes auth.
export MOUNT_ACCESSOR=$(vault auth list -format=json | jq -r ".\"$CLUSTER_DOMAIN-$PROJECT_NAME/\".accessor")
Create the policy. Note that ArgoCD only reads and lists secrets; it does not write or update them.
vault policy write $CLUSTER_DOMAIN-$PROJECT_NAME-kv-read -<< EOF
path "kv/data/$TEAM_GROUP/{{identity.entity.aliases.$MOUNT_ACCESSOR.metadata.service_account_namespace}}/*" {
capabilities=["read","list"]
}
EOF
In the Vault UI you should see this created under Policies > ACL Policies.
- We store our application secrets in kv version 2 (kv2) format. Enable the kv2 secrets engine in Vault.
vault secrets enable -path=kv/ -version=2 kv
- The rest of the steps could now be carried out by our data science SRE team in a self-service manner. For now though, we bind the service account as the single admin user, as we have not yet given our data science user access to the ci-cd project.
Bind our k8s auth to the read ACL policy we created earlier.
vault write auth/$CLUSTER_DOMAIN-$PROJECT_NAME/role/$APP_NAME \
bound_service_account_names=$APP_NAME \
bound_service_account_namespaces=$PROJECT_NAME \
policies=$CLUSTER_DOMAIN-$PROJECT_NAME-kv-read \
period=120s
Configure the Kubernetes auth method. It will automatically use the argocd-repo pod's own identity to authenticate with Kubernetes when querying the TokenReview API.
vault write auth/$CLUSTER_DOMAIN-${PROJECT_NAME}/config \
kubernetes_host="$(oc whoami --show-server)"
- Create the team ArgoCD Vault Plugin Secret that is used to find the correct path to the k8s auth we just created.
Log back in as our USER_NAME to provision the ArgoCD Service Account token for Kubernetes auth in Vault.
vault login -method=ldap username=${USER_NAME}
We should see our user now has the correct policies set as well.
export AVP_TYPE=vault
export VAULT_ADDR=https://platform-base-vault.rainforest.svc:8200 # vault url
export AVP_AUTH_TYPE=k8s # kubernetes auth
export AVP_K8S_ROLE=vault # vault role/sa
export VAULT_SKIP_VERIFY=true
export AVP_MOUNT_PATH=auth/$CLUSTER_DOMAIN-$PROJECT_NAME
cat <<EOF | oc apply -n ${PROJECT_NAME} -f-
apiVersion: v1
stringData:
VAULT_ADDR: "${VAULT_ADDR}"
VAULT_SKIP_VERIFY: "${VAULT_SKIP_VERIFY}"
AVP_AUTH_TYPE: "${AVP_AUTH_TYPE}"
AVP_K8S_ROLE: "${AVP_K8S_ROLE}"
AVP_TYPE: "${AVP_TYPE}"
AVP_K8S_MOUNT_PATH: "${AVP_MOUNT_PATH}"
kind: Secret
metadata:
name: team-avp-credentials
namespace: ${PROJECT_NAME}
type: Opaque
EOF
- Create an example kv2 secret to test things out.
export VAULT_HELM_RELEASE=vault
export VAULT_ROUTE=${VAULT_HELM_RELEASE}.$CLUSTER_DOMAIN
export VAULT_ADDR=https://${VAULT_ROUTE}
export VAULT_SKIP_VERIFY=true
export APP_NAME=secret-test
export TEAM_GROUP=student
export PROJECT_NAME=<TEAM_NAME>-ci-cd
vault kv put kv/$TEAM_GROUP/$PROJECT_NAME/$APP_NAME \
app=$APP_NAME \
username=foo \
password=bar
vault kv get kv/$TEAM_GROUP/$PROJECT_NAME/$APP_NAME
In the Vault UI you should see this created under Secrets > kv/ > student/rainforest-ci-cd/secret-test
We have an encrypted file with all of the vault commands pre-baked to create our application secrets. We need to make some quick changes before running the script.
-
Using ansible-vault, decrypt the Data Mesh vault-secrets file. The decryption key will be provided by your instructor.
ansible-vault decrypt /projects/rainforest/gitops/secrets/vault-rainforest
-
In your IDE, globally replace these two matches across ALL files in the Data Mesh code base. This will modify ~70 files in one go.
Use Replace All to swap the placeholder cluster domain in the code for our actual cluster domain (run this echo in your shell to get the value!)
foo.sandbox1234.opentlc.com -> echo ${CLUSTER_DOMAIN##apps.}
Use Replace All to swap the GitHub coordinates for our GitLab ones.
github.com/eformat -> <GIT_SERVER>/<TEAM_NAME>
💥 DO NOT CHECK IN the files just yet !! 💥
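The ${CLUSTER_DOMAIN##apps.} expansion used above is plain bash prefix stripping; a quick local check (the sample domain is illustrative):

```shell
# Sample value only - your real CLUSTER_DOMAIN is set by the workshop environment
CLUSTER_DOMAIN=apps.sandbox1234.opentlc.com

# ##apps. removes the leading "apps." prefix, leaving the base domain
echo "${CLUSTER_DOMAIN##apps.}"
```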
-
Since we have a new cluster, we need to update the Trino truststore secret with our own cluster CA certs.
openssl s_client -showcerts -connect ipa.ipa.svc.cluster.local:636 </dev/null 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/ {print $0}' > /tmp/ipa.pem
openssl s_client -showcerts -connect api.${CLUSTER_DOMAIN##apps.}:6443 </dev/null 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/ {print $0}' > /tmp/oc.pem
cd /projects/rainforest/supply-chain/trino/trino-certs
keytool -import -alias ca -file /projects/rainforest/supply-chain/trino/trino-certs/ca.crt -keystore truststore.jks -storepass password -trustcacerts -noprompt
keytool -import -alias oc -file /tmp/oc.pem -keystore truststore.jks -storepass password -trustcacerts -noprompt
keytool -import -alias ipa -file /tmp/ipa.pem -keystore truststore.jks -storepass password -trustcacerts -noprompt
oc -n <TEAM_NAME>-ci-cd create secret generic truststore --from-file=truststore.jks
Manually update the TRUSTSTORE="" variable in the #trino-truststore section of the vault-rainforest secrets file with the base64-encoded truststore.jks value from:
oc -n <TEAM_NAME>-ci-cd get secret truststore -o=jsonpath='{.data}'
Remove the temporary secret
oc -n <TEAM_NAME>-ci-cd delete secret truststore
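Alternatively, the base64 value for TRUSTSTORE can be produced locally without the temporary secret round-trip. A sketch - the stand-in file is illustrative (use your real truststore.jks), and -w0 assumes GNU coreutils base64:

```shell
# Stand-in file for truststore.jks (illustrative only - use your real file)
printf 'dummy-jks-bytes' > /tmp/truststore-demo.jks

# -w0 (GNU coreutils) disables line wrapping so the value pastes cleanly
TRUSTSTORE=$(base64 -w0 < /tmp/truststore-demo.jks)
echo "$TRUSTSTORE"
```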
-
Manually update the value of LDAP_BIND_PASSWORD in the #trino-truststore section of the vault-rainforest secrets file.
-
(Optional) If your <TEAM_NAME> is not rainforest, use Replace All across all files:
rainforest-ci-cd -> <TEAM_NAME>-ci-cd
-
Create all the application secrets in Vault by running this script.
sh /projects/rainforest/gitops/secrets/vault-rainforest
You should see vault secrets being created.
You can also browse to these in the Vault UI.
-
Encrypt the rainforest vault-secrets file and check all our changes into git.
ansible-vault encrypt /projects/rainforest/gitops/secrets/vault-rainforest
cd /projects/rainforest
git add .
git commit -am "🐙 ADD - cluster rename and vault secrets file 🐙"
git push -u origin --all
🪄🪄 Now, let's carry on and Build and Deploy our Application images ... !🪄🪄