OPTIONAL: Setting up custom logging

This document covers setting up cluster logging within an OpenShift cluster and is intended for suitably sized clusters running OpenShift version 4.3 or later. This is a possible target for the Fuse integration in the main part of the demo; however, it is not required, as Fuse can connect to other Elasticsearch instances.

Most of the instructions here follow the official documentation on deploying custom logging.

Note

You will need cluster-admin access to run this script successfully.

Install the Elasticsearch Operator (in the prescribed fashion)

We are going to run a script that installs the Elasticsearch Operator for all projects on the cluster and then creates a ClusterLogging instance in a project called "openshift-logging". This gathers log information from all projects and creates a Kibana instance that gives us a UI into the log data.
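For reference, the script broadly automates the operator-subscription steps from the official documentation. The manifest below is only a sketch of those steps; the channel, catalog source, and namespace values are assumptions taken from the OpenShift 4.3 docs rather than from the script itself.

# Namespace and OperatorGroup for the Elasticsearch Operator; no
# targetNamespaces means the operator is available to all projects
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
---
# Subscription to the Elasticsearch Operator; channel and source are assumptions
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat
spec:
  channel: "4.3"
  installPlanApproval: Automatic
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: elasticsearch-operator

A similar Subscription for the Cluster Logging Operator is created in the openshift-logging project, followed by the ClusterLogging instance itself.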

[TIP] If your cluster is small or only used for a demo, you can configure the Cluster Logging CR to require fewer resources, including running Elasticsearch in ephemeral mode (see the appendix) to reduce the need for persistent volumes.

Run the following command to install:

$DEMO_HOME/scripts/05-optional-setup-custom-logging.sh

If the script has completed successfully, you should see this at the end:

Kibana route is:
https://kibana-openshift-logging.apps.debezium.openshifttc.com/
Note
If the script seems to be stuck, quits with an error, or seems to have been running for a long time, check the troubleshooting section of this document.
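If you want to check progress yourself while the script runs, the following standard oc commands work (the route name kibana and the openshift-logging namespace match the defaults used by this setup):

# Watch the logging pods (elasticsearch, fluentd, kibana) come up
oc get pods -n openshift-logging -w

# Print the Kibana route host once it has been created
oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}{"\n"}'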

Click the route printed at the end of the installation script, then enter your OpenShift login details.

(Image: Kibana OAuth login)

After a successful login, you will be asked to grant Kibana certain permissions. Click Allow selected permissions.

(Image: Kibana permissions prompt)

If you accept these prompts, you will be brought to the Kibana dashboard for the cluster's Elasticsearch instance.

(Image: Kibana dashboard)

Happy log splunking!

Appendix

Run Elasticsearch in Ephemeral Mode

You can configure Cluster Logging’s Elasticsearch to not require persistent volumes (with the tradeoff that logs won’t persist). This can be preferable in a demo environment where resources are tight. (See also the instructions here.)

To do this, update the Cluster Logging CR to specify emptyDir storage as follows:

(Image: Cluster Logging spec with emptyDir storage)
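A minimal sketch of such a CR, assuming a single ephemeral Elasticsearch node (the node count, redundancy policy, and curator schedule below are illustrative and may differ from the repo's customlogging-instance.yaml):

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: ZeroRedundancy
      storage: {}          # empty storage block = emptyDir (ephemeral) storage
  visualization:
    type: kibana
    kibana:
      replicas: 1
  curation:
    type: curator
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: fluentd
      fluentd: {}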

Then either run the installation or update the cluster logging instance[1]:

oc apply -f $DEMO_HOME/kube/logging/customlogging-instance.yaml -n openshift-logging

Troubleshooting

Can’t access Kibana

If you can’t access Kibana via the route and the kibana pod is running without error, check the kibana-proxy container’s logs. It’s possible some certificates are out of date or that the OAuth configuration has somehow expired.
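For example, assuming the default kibana Deployment and container names created by the Cluster Logging Operator:

# Kibana runs as a Deployment with a kibana container and a kibana-proxy sidecar
oc logs deployment/kibana -c kibana-proxy -n openshift-logging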

It’s unclear exactly how to fix this; just restarting the kibana pod doesn’t help, so the cause must be something more persistent, such as data in the mounted secrets. However, deleting the ClusterLogging instance and re-running the setup script seems to restore access.

Stalled Install

If the script doesn’t finish after about 5 minutes and you’re continuing to see lines like this:

deployment.extensions/cluster-logging-operator condition met
2 of 5 ready...
2 of 5 ready...
2 of 5 ready...

Then there may be a persistent issue blocking the installation. Here’s an example of one such issue: a failure to mount a persistent volume.

(Image: PVC mount error)
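To dig into this kind of failure, the usual inspection commands help (namespace assumed to be openshift-logging):

# Check whether the Elasticsearch PVCs are Bound or stuck Pending
oc get pvc -n openshift-logging

# Recent events often show the underlying provisioning or mount error
oc get events -n openshift-logging --sort-by='.lastTimestamp'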

If the error looks to be a transient one, you can attempt to reinstall cluster logging by deleting the ClusterLogging CR instance, waiting a little while for all the resources to be cleaned up (leaving only the operator pod in the openshift-logging namespace), and then re-running the script.

oc delete clusterlogging instance -n openshift-logging

Wait for the output of oc get pods to look something like this:

NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-66f77ffccb-fdptp       1/1     Running   0          38m
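A rough sequence for watching the cleanup and then re-running the setup script:

# Watch the cleanup (Ctrl+C once only the operator pod is left)
oc get pods -n openshift-logging -w

# Then re-run the install script
$DEMO_HOME/scripts/05-optional-setup-custom-logging.sh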

1. This has not yet been verified to auto-update a pre-existing installation. You may need to first delete the ClusterLogging CR and then re-apply it.