The files within this directory are used to deploy MongoDB. We will define the deployment for all three clusters managed by RHACM.
Shown below is the architecture definition for our MongoDB cluster:
- There is a MongoDB pod running on each OpenShift cluster; each pod has its own storage (provisioned by a StorageClass)
- The MongoDB pods are configured as a MongoDB ReplicaSet and communicate using TLS
- The OCP Routes are configured as Passthrough; the connection must remain plain TLS (no HTTP headers involved)
- The MongoDB pods interact with each other through the OCP Routes (the nodes where these pods run must be able to reach the OCP Routes)
- The applications consuming the MongoDB ReplicaSet have to use the proper MongoDB connection string URI (an example is shown below)
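For reference, a client connection string for this topology would look like the sketch below. The hostnames are the example route names used later in this lab, while the user, database, and `replicaSet` name (`rs0`) are illustrative assumptions, not values defined by the lab files:

```
mongodb://admin:<password>@mongo-cluster1.apps.cluster1.example.com:443,mongo-cluster2.apps.cluster2.example.com:443,mongo-cluster3.apps.cluster3.example.com:443/admin?replicaSet=rs0&tls=true
```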
How is the MongoDB ReplicaSet configured?
Each OpenShift cluster has a MongoDB pod running. There is a service which routes traffic received on port 27017/TCP to the MongoDB pod on port 27017/TCP.
There is an OpenShift Route created on each cluster; the route is configured as passthrough (the HAProxy router won't add HTTP headers to the connections). Each route listens on port 443/TCP and forwards the received traffic to the mongo service on port 27017/TCP.
The MongoDB pods are configured to use TLS, so all connections are made over TLS (the MongoDB pods use a TLS certificate with the route and service DNS names configured as SANs).
Once the three pods are up and running, the MongoDB ReplicaSet is configured using the route hostnames as MongoDB endpoints, e.g.:
- Primary replica: `mongo-cluster1.apps.cluster1.example.com:443`
- Secondary replicas: `mongo-cluster2.apps.cluster2.example.com:443`, `mongo-cluster3.apps.cluster3.example.com:443`
If a MongoDB pod fails or is stopped, the ReplicaSet will reconfigure itself, and once the failed or stopped pod is back, the ReplicaSet will include it again. (MongoDB's quorum algorithm decides whether the ReplicaSet is read-only or read-write: with three members a majority of two is required, so the set stays writable with one member down, but becomes read-only if two members are down.)
NOTE: This configuration doesn't require the different cluster networks to be aware of each other. We could get the same functionality using:
A) LoadBalancer Services (pods would use the service IP instead of the OpenShift Route)
B) A site-to-site VPN-like solution (pods would connect to ClusterIP / NodePort services directly)
This demonstration uses MongoDB with TLS enabled. The example below will create a generic CA, key, and certificate.
Follow the steps below to create the required files in the mongo-yaml directory:
- Change directory to `lab-6-assets`:

```sh
cd rhacmgitopslab/lab-6-assets
```
- Create the file `ca-config.json`:

```sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
```
- Create the file `ca-csr.json`:

```sh
cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Austin",
      "O": "Kubernetes",
      "OU": "TX",
      "ST": "Texas"
    }
  ]
}
EOF
```
- Create the file `mongodb-csr.json`:

```sh
cat > mongodb-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Austin",
      "O": "Kubernetes",
      "OU": "TX",
      "ST": "Texas"
    }
  ]
}
EOF
```
We will use OpenShift Routes to provide connectivity between MongoDB replicas. As mentioned before, MongoDB will be configured to use TLS, so we need to generate certificates with the proper hostnames configured within them.
Follow the instructions below to generate a PEM file which includes:
- MongoDB's Certificate Private Key
- MongoDB's Certificate Public Key
- Generate the CA:

```sh
# Produces ca.pem (CA certificate), ca-key.pem (CA private key), and ca.csr
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```
- Export some variables with information that will be used to generate the certificates:

```sh
# Define the `NAMESPACE` variable
NAMESPACE=mongo
# Define the `SERVICE_NAME` variable
SERVICE_NAME=mongo
# Define the `ROUTE_CLUSTER1` variable
ROUTE_CLUSTER1=mongo-cluster1.$(oc --context=cluster1 get ingresses.config.openshift.io cluster -o jsonpath='{ .spec.domain }')
# Define the `ROUTE_CLUSTER2` variable
ROUTE_CLUSTER2=mongo-cluster2.$(oc --context=cluster2 get ingresses.config.openshift.io cluster -o jsonpath='{ .spec.domain }')
# Define the `ROUTE_CLUSTER3` variable
ROUTE_CLUSTER3=mongo-cluster3.$(oc --context=cluster3 get ingresses.config.openshift.io cluster -o jsonpath='{ .spec.domain }')
# Define the `SANS` variable with every DNS name the certificate must cover
SANS="localhost,localhost.localdomain,127.0.0.1,${ROUTE_CLUSTER1},${ROUTE_CLUSTER2},${ROUTE_CLUSTER3},${SERVICE_NAME},${SERVICE_NAME}.${NAMESPACE},${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local"
```
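- Optionally, sanity-check the variables before generating the certificates (the example hostname in the comment assumes the cluster domains used elsewhere in this lab):

```sh
echo ${ROUTE_CLUSTER1}   # e.g. mongo-cluster1.apps.cluster1.example.com
echo ${SANS}
```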
- Generate the MongoDB certificates:

```sh
# Produces mongodb.pem (certificate), mongodb-key.pem (private key), and mongodb.csr
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -hostname=${SANS} -profile=kubernetes mongodb-csr.json | cfssljson -bare mongodb
```
- Combine key and certificate:

```sh
cat mongodb-key.pem mongodb.pem > mongo.pem
```
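- Optionally, verify that the certificate carries the expected SANs before moving on. This is standard `openssl` usage, nothing specific to the lab files:

```sh
# Inspect the certificate and show the Subject Alternative Name section
openssl x509 -in mongodb.pem -noout -text | grep -A1 'Subject Alternative Name'
```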
The lab content provides the required files for deploying the MongoDB cluster members in YAML format. We will modify specific values, commit the files to our Git repository, and use RHACM to deploy them.
Before deploying MongoDB, the YAML files need to be updated with the certificates that were created and the routing endpoints that will be used.
- Configure MongoDB's PEM:

```sh
cp base/mongo-secret.yaml.backup base/mongo-secret.yaml
# Place the base64-encoded `mongo.pem` into the `mongodb.pem` field of `mongo-secret.yaml`
# (the `|` sed delimiter avoids clashing with `/` characters in the base64 output)
sed -i "s|mongodb.pem: .*$|mongodb.pem: $(openssl base64 -A < mongo.pem)|" base/mongo-secret.yaml
# Place the base64-encoded `ca.pem` into the `ca.pem` field of `mongo-secret.yaml`
sed -i "s|ca.pem: .*$|ca.pem: $(openssl base64 -A < ca.pem)|" base/mongo-secret.yaml
```
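- For reference, after these substitutions the secret carries the two PEM files under the keys targeted by the `sed` patterns. A sketch of the resulting shape (the metadata values are assumptions; only the `data` keys come from the commands above):

```yaml
# Sketch of base/mongo-secret.yaml after substitution (metadata assumed)
apiVersion: v1
kind: Secret
metadata:
  name: mongo
  namespace: mongo
data:
  mongodb.pem: <base64 of mongo.pem>
  ca.pem: <base64 of ca.pem>
```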
- Configure MongoDB's endpoints:

```sh
cp base/mongo-rs-deployment.yaml.backup base/mongo-rs-deployment.yaml
# Place the value of `ROUTE_CLUSTER1` in the `mongo-rs-deployment.yaml` file
sed -i "s/primarynodehere/${ROUTE_CLUSTER1}:443/" base/mongo-rs-deployment.yaml
# Place the values of `ROUTE_CLUSTER1`, `ROUTE_CLUSTER2`, and `ROUTE_CLUSTER3` in the `mongo-rs-deployment.yaml` file
sed -i "s/replicamembershere/${ROUTE_CLUSTER1}:443,${ROUTE_CLUSTER2}:443,${ROUTE_CLUSTER3}:443/" base/mongo-rs-deployment.yaml
```
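- For reference, these endpoints ultimately drive a standard MongoDB replica set initiation. A sketch of what that looks like in the mongo shell (this is standard `rs.initiate()` usage, not the lab's actual automation; the set name `rs0` is an assumption):

```js
// Sketch: initiate a replica set over the three route endpoints (assumed set name)
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-cluster1.apps.cluster1.example.com:443" },
    { _id: 1, host: "mongo-cluster2.apps.cluster2.example.com:443" },
    { _id: 2, host: "mongo-cluster3.apps.cluster3.example.com:443" }
  ]
})
```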
- Configure MongoDB OpenShift Route names:

```sh
cp overlays/cluster1/mongo-route.yaml.backup overlays/cluster1/mongo-route.yaml
cp overlays/cluster2/mongo-route.yaml.backup overlays/cluster2/mongo-route.yaml
cp overlays/cluster3/mongo-route.yaml.backup overlays/cluster3/mongo-route.yaml
# Replace the `mongocluster1route` placeholder with the value of `ROUTE_CLUSTER1`
sed -i "s/mongocluster1route/${ROUTE_CLUSTER1}/" overlays/cluster1/mongo-route.yaml
# Replace the `mongocluster2route` placeholder with the value of `ROUTE_CLUSTER2`
sed -i "s/mongocluster2route/${ROUTE_CLUSTER2}/" overlays/cluster2/mongo-route.yaml
# Replace the `mongocluster3route` placeholder with the value of `ROUTE_CLUSTER3`
sed -i "s/mongocluster3route/${ROUTE_CLUSTER3}/" overlays/cluster3/mongo-route.yaml
```
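- After the substitution, each `mongo-route.yaml` defines a passthrough route along the lines of the sketch below. The field values are assumptions based on the architecture described earlier (passthrough TLS, service `mongo` on 27017), not a copy of the lab file:

```yaml
# Sketch of a passthrough Route for cluster1 (assumed fields, not the lab's exact file)
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mongo
  namespace: mongo
spec:
  host: mongo-cluster1.apps.cluster1.example.com   # value of ROUTE_CLUSTER1
  port:
    targetPort: 27017
  tls:
    termination: passthrough   # no HTTP headers added; plain TLS end to end
  to:
    kind: Service
    name: mongo
```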
- Before committing our changes, we also need to update the file describing the `Channel` resource so it points at your fork of the main GitHub repository:

```sh
sed -i 's/ansonmez/YOUR_GITHUB_USERNAME/g' rhacmgitopslab/lab-6-acm/02_channel.yaml
```
- Commit the changes:

```sh
# Stage and commit the modified files
git commit -am 'MongoDB Certificates and MongoDB Routes'
# Push your commits to the git repository
git push origin master
```
Similar to the previous labs, we need to define the application with RHACM. Be sure to label your clusters in RHACM with `clustername: cluster1`, `clustername: cluster2`, and `clustername: cluster3` (see the example below).
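If the labels are not in place yet, they can be added from the hub. A minimal sketch, assuming the hub context is active and the ManagedCluster resources are named `cluster1`, `cluster2`, and `cluster3`:

```sh
# Label each ManagedCluster so the placement rules can select it
for cluster in cluster1 cluster2 cluster3; do
  oc label managedcluster ${cluster} clustername=${cluster}
done
```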
Run the following on the RHACM hub cluster:

```sh
cd rhacmgitopslab/lab-6-acm
oc config use-context hubcluster
```
1- Create the namespace:

```sh
oc create -f 01_namespace.yaml
```

2- Create the channel:

```sh
oc create -f 02_channel.yaml
```

3- Create the application:

```sh
oc create -f 03_application_mongo.yaml
```

4- Create a placement for each cluster:

```sh
oc create -f 04_placement_cluster1.yaml
oc create -f 04_placement_cluster2.yaml
oc create -f 04_placement_cluster3.yaml
```

5- Create a subscription for each deployment:

```sh
oc create -f 05_subscription_cluster1.yaml
oc create -f 05_subscription_cluster2.yaml
oc create -f 05_subscription_cluster3.yaml
```
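For context, each placement file created in step 4 selects one cluster by the `clustername` label added earlier. A sketch of what such a resource typically looks like (the name and exact contents of the lab's file are assumptions):

```yaml
# Sketch of an RHACM PlacementRule selecting cluster1 (assumed contents)
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: mongo-placement-cluster1
  namespace: mongo
spec:
  clusterSelector:
    matchLabels:
      clustername: cluster1
```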
Validate the namespace exists in the three clusters.
```sh
# The for loop below will show the status of the namespace on the three clusters
for cluster in cluster1 cluster2 cluster3; do oc --context $cluster get namespace mongo; done
```

Expected output:

```
NAME    STATUS   AGE
mongo   Active   6s
NAME    STATUS   AGE
mongo   Active   5s
NAME    STATUS   AGE
mongo   Active   5s
```
Verify the OpenShift Routes were created:

```sh
# The for loop below will display the route on the three clusters
for cluster in cluster1 cluster2 cluster3; do oc --context $cluster -n mongo get route mongo; done
```
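The output should look similar to the line below, once per cluster; the hostname shown assumes the example domains used earlier in this lab:

```
NAME    HOST/PORT                                   PATH   SERVICES   PORT    TERMINATION   WILDCARD
mongo   mongo-cluster1.apps.cluster1.example.com          mongo      27017   passthrough   None
```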
At this point we should have three independent MongoDB instances running, one on each cluster. Let's verify all replicas are up and running.
```sh
# Check that the pods are in the Ready state
for cluster in cluster1 cluster2 cluster3; do oc --context $cluster -n mongo get pods; done
```

Expected output:

```
NAME                     READY   STATUS    RESTARTS   AGE
mongo-56d576cb44-gbz8k   1/1     Running   0          3m26s
NAME                     READY   STATUS    RESTARTS   AGE
mongo-d5f96bb56-5rnkx    1/1     Running   0          3m34s
NAME                     READY   STATUS    RESTARTS   AGE
mongo-85f4ff8f46-h4m29   1/1     Running   0          2m16s
```
Now that all replicas are up and running, we are going to configure a MongoDB ReplicaSet so all three replicas work as a cluster.
This procedure has been automated; you only need to add a label to the MongoDB pod you want to act as the primary replica. We are going to use the pod running on cluster1 as the primary replica.
```sh
# Select the primary MongoDB pod
MONGO_POD=$(oc --context=cluster1 -n mongo get pod --selector="name=mongo" --output=jsonpath='{.items..metadata.name}')
# Label the primary pod
oc --context=cluster1 -n mongo label pod $MONGO_POD replicaset=primary
```
The MongoDB ReplicaSet is being configured now; let's wait for it to finish and check the ReplicaSet status to ensure it has been configured properly.
If you want a more detailed view of the configuration, run the following command; it prints a large JSON document with the status of the ReplicaSet:
```sh
# Select the primary MongoDB pod
MONGO_POD=$(oc --context=cluster1 -n mongo get pod --selector="name=mongo" --output=jsonpath='{.items..metadata.name}')
# Get the ReplicaSet status
oc --context=cluster1 -n mongo exec $MONGO_POD \
  -- bash -c 'mongo --norc --quiet --username=admin --password=$MONGODB_ADMIN_PASSWORD --host localhost admin --tls --tlsCAFile /opt/mongo-ssl/ca.pem --eval "rs.status()"'
```
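If you only want a quick view of each member's state rather than the full JSON, the same `mongo --eval` pattern can be filtered down. A sketch reusing the lab's connection flags; `rs.status().members` with its `name` and `stateStr` fields is standard MongoDB shell output:

```sh
# Print one "host -> state" line per ReplicaSet member (e.g. PRIMARY / SECONDARY)
oc --context=cluster1 -n mongo exec $MONGO_POD \
  -- bash -c 'mongo --norc --quiet --username=admin --password=$MONGODB_ADMIN_PASSWORD --host localhost admin --tls --tlsCAFile /opt/mongo-ssl/ca.pem --eval "rs.status().members.forEach(function(m){ print(m.name + \" -> \" + m.stateStr) })"'
```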
Connecting to the RHACM WebUI, you should see a topology like this: