diff --git a/demo1-xray-pipeline/manual_deployment/deployment/17_xray-source-init.yaml b/demo1-xray-pipeline/manual_deployment/deployment/17_xray-source-init.yaml
index 98fceac..fd284d3 100644
--- a/demo1-xray-pipeline/manual_deployment/deployment/17_xray-source-init.yaml
+++ b/demo1-xray-pipeline/manual_deployment/deployment/17_xray-source-init.yaml
@@ -40,4 +40,4 @@ spec:
             configMapKeyRef:
               name: buckets-config
               key: bucket-source
-      restartPolicy: Never
\ No newline at end of file
+      restartPolicy: Never
diff --git a/demo1-xray-pipeline/manual_deployment/deployment/README.adoc b/demo1-xray-pipeline/manual_deployment/deployment/README.adoc
index dae52ee..e3aa2cb 100644
--- a/demo1-xray-pipeline/manual_deployment/deployment/README.adoc
+++ b/demo1-xray-pipeline/manual_deployment/deployment/README.adoc
@@ -131,7 +131,7 @@ There are 2 parts to this:
 * The *KNative Serving* Service that will spawn the risk-assessment container. Deploy it by appplying the `10_service-risk-assessment.yaml` file.
 +
 Monitor the deployment in the OpenShift console under the Severless->Service menu (don't confuse it with Service under Netwoking!). It must reach the 3/3 or 2/3 state (last one meanining it's ready but scaled down to zero as there is nothing to process).
-* The *KNative Eventing* component that will listen to our Kafka Topic, and call the previously defined Service when there is something to process. 
+* The *KNative Eventing* component that will listen to our Kafka Topic, and call the previously defined Service when there is something to process.
 ** Edit the file: `11_kafkasource-risk-assessment.yaml`.
 ** Subsitute the `REPLACE_ME` part in the bootstrapServers parameter by the name of your namespace. If you have followed the instructions, it should be `xraylab-0001`.
 
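NOTE: For orientation, the KafkaSource edited in the step above is what wires the Kafka topic to the Knative Serving Service. A minimal sketch of such a resource follows; the cluster, topic, and service names are placeholders rather than the values from `11_kafkasource-risk-assessment.yaml`, and only the `REPLACE_ME` namespace substitution mirrors the README instruction.

[source,yaml]
----
# Hypothetical sketch of a Knative Eventing KafkaSource; names are placeholders.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: risk-assessment
spec:
  bootstrapServers:
    # REPLACE_ME is your namespace, e.g. xraylab-0001, as described in the README
    - xray-cluster-kafka-bootstrap.REPLACE_ME.svc:9092
  topics:
    - xray-images                  # placeholder topic name
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: risk-assessment        # the Knative Serving Service deployed just before
----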
diff --git a/demo2-smart-city/deployment/README.adoc b/demo2-smart-city/deployment/README.adoc
index e1100b8..989bc9f 100644
--- a/demo2-smart-city/deployment/README.adoc
+++ b/demo2-smart-city/deployment/README.adoc
@@ -620,4 +620,3 @@ echo "https://$(oc get routes -n smartcity | grep grafana | awk '{ print $2 }')"
 ----
 
 The default login is admin / secret.
-
diff --git a/demo2-smart-city/deployment/grafana/grafana-main-dashboard.yaml b/demo2-smart-city/deployment/grafana/grafana-main-dashboard.yaml
index c11864c..9695f55 100644
--- a/demo2-smart-city/deployment/grafana/grafana-main-dashboard.yaml
+++ b/demo2-smart-city/deployment/grafana/grafana-main-dashboard.yaml
@@ -1629,4 +1629,4 @@ objects:
         "title": "Smart City - ULEV London",
         "uid": "HQZA6drGz",
         "version": 4
-      }
\ No newline at end of file
+      }
diff --git a/demo2-smart-city/deployment/grafana/grafana-pipeline-ram-dashboard.yaml b/demo2-smart-city/deployment/grafana/grafana-pipeline-ram-dashboard.yaml
index 1b81032..9ccfae9 100644
--- a/demo2-smart-city/deployment/grafana/grafana-pipeline-ram-dashboard.yaml
+++ b/demo2-smart-city/deployment/grafana/grafana-pipeline-ram-dashboard.yaml
@@ -236,4 +236,4 @@ spec:
         "title": "Pipeline Ops - RAM",
         "uid": "U5Pe6w6Ml",
         "version": 2
-      }
\ No newline at end of file
+      }
diff --git a/demo2-smart-city/deployment/grafana/grafana-prometheus-datasource.yaml b/demo2-smart-city/deployment/grafana/grafana-prometheus-datasource.yaml
index 78496d1..5d3e065 100644
--- a/demo2-smart-city/deployment/grafana/grafana-prometheus-datasource.yaml
+++ b/demo2-smart-city/deployment/grafana/grafana-prometheus-datasource.yaml
@@ -19,4 +19,3 @@ spec:
        tlsSkipVerify: true
      type: prometheus
   name: grafana-prometheus-datasource.yaml
-
diff --git a/demo2-smart-city/deployment/grafana/grafana.yaml b/demo2-smart-city/deployment/grafana/grafana.yaml
index 46b53bf..c4cb9fb 100644
--- a/demo2-smart-city/deployment/grafana/grafana.yaml
+++ b/demo2-smart-city/deployment/grafana/grafana.yaml
@@ -21,4 +21,4 @@ spec:
   ingress:
     enabled: true
     termination: edge
-    tlsEnabled: true
\ No newline at end of file
+    tlsEnabled: true
diff --git a/demo3-industrial-condition-monitoring/README.adoc b/demo3-industrial-condition-monitoring/README.adoc
index 645b2c7..ac129c7 100644
--- a/demo3-industrial-condition-monitoring/README.adoc
+++ b/demo3-industrial-condition-monitoring/README.adoc
@@ -1,8 +1,8 @@
-= Demo 3 - Industrial condition based monitoring with ML 
+= Demo 3 - Industrial condition based monitoring with ML
 
-This “AI/ML Industrial Edge” demo shows how condition based monitoring can be implemented using AI/ML. Machine inference-based anomaly detection on metric time-series sensor data at the edge, with a central data lake and ML model retraining. It also shows how hybrid deployments (cluster at the edge and in the cloud) can be managed, and how the CI/CD pipelines and Model Training/Execution flows can be implemented. 
+This “AI/ML Industrial Edge” demo shows how condition based monitoring can be implemented using AI/ML. Machine inference-based anomaly detection on metric time-series sensor data at the edge, with a central data lake and ML model retraining. It also shows how hybrid deployments (cluster at the edge and in the cloud) can be managed, and how the CI/CD pipelines and Model Training/Execution flows can be implemented.
 
-This demo is using OpenShift, ACM, AMQ Streams, OpenDataHub, and other products from Red Hat’s portfolio 
+This demo is using OpenShift, ACM, AMQ Streams, OpenDataHub, and other products from Red Hat’s portfolio
 
 This is the frontend at the factory:
 
@@ -34,4 +34,3 @@ https://github.com/redhat-edge-computing/industrial-edge-docs#preparing-for-depl
 == Source Code
 The source code for the various component is located in this github repo:
 https://github.com/redhat-edge-computing/manuela-dev
-
diff --git a/patterns/kafka-edge-to-core/README.adoc b/patterns/kafka-edge-to-core/README.adoc
index 72a8684..681d29d 100644
--- a/patterns/kafka-edge-to-core/README.adoc
+++ b/patterns/kafka-edge-to-core/README.adoc
@@ -2,7 +2,7 @@
 
 === Description
 
-This pattern is about using AMQ Streams (Kafka) MirrorMaker to move data across two kafka clusters geographically separated from each other. 
+This pattern is about using AMQ Streams (Kafka) MirrorMaker to move data across two kafka clusters geographically separated from each other.
 
 ==== Kafka MirrorMaker
 
@@ -13,7 +13,7 @@ Kafka MirrorMaker replicates data across two Kafka clusters, within or across da
 
 image::kafka-edge-to-core.png[Kafka-edge-to-Core]
 
 === Use cases
-**Data collection at the edge and shipping it to the Core** 
+**Data collection at the edge and shipping it to the Core**
 
 Multiple kafka clusters are becoming a norm. There can be many reasons to create more than just one Kafka cluster in your organization. For example, you might have a core Kafka cluster that receives data from multiple edge Kafka clusters. You might also have a core Kafka cluster that receives data from multiple data centers. In this case, you can use MirrorMaker to replicate data from the edge Kafka clusters to the core Kafka cluster. Edge Kafka cluster can help to store the data locally on-site in real-time at scale. Plus, Kafka MirrorMaker can help in replicating the data to the data center or cloud to do further processing and analytics with the aggregated data from different sites/edges.
diff --git a/patterns/kafka-edge-to-core/deployment/README.adoc b/patterns/kafka-edge-to-core/deployment/README.adoc
index bae0752..fdd8c6b 100644
--- a/patterns/kafka-edge-to-core/deployment/README.adoc
+++ b/patterns/kafka-edge-to-core/deployment/README.adoc
@@ -26,7 +26,7 @@ oc get kafkatopics
 
 * To setup mirror maker edit `mirror-maker.yaml` with the folowing parameters as per your enviroment.
 
-- Source kafka bootStrap server endpoint 
+- Source kafka bootStrap server endpoint
 - Target kafka bootstrap server endpoints
 - Topic Name under `topicsPattern`
 
@@ -42,4 +42,4 @@ oc create -f mirror-maker.yaml;
 oc get kafkamirrormaker2
 ----
 
-At this point Kafka Mirror Maker has been configured to asynchronously replicate messages from source kafka cluster to target kafka cluster. Thus providing you an architectural choice to connect various edge locations to a single core cluster with seamless data transfer. 
+At this point Kafka Mirror Maker has been configured to asynchronously replicate messages from source kafka cluster to target kafka cluster. Thus providing you an architectural choice to connect various edge locations to a single core cluster with seamless data transfer.
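NOTE: As a rough reference for the edit described above, a KafkaMirrorMaker2 resource combining the three parameters the README calls out (source and target bootstrap servers, and the topic under `topicsPattern`) could look like the sketch below. The cluster aliases, Kafka version, and replication settings are assumptions; the repository's `mirror-maker.yaml` remains authoritative.

[source,yaml]
----
# Sketch only; align apiVersion and version with your AMQ Streams release.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: mirror-maker
spec:
  version: 2.8.0                        # assumption: match the broker version
  replicas: 1
  connectCluster: "core"                # MirrorMaker 2 runs on Kafka Connect, next to the target
  clusters:
    - alias: "edge"                     # source (edge) cluster
      bootstrapServers: edge-kafka-kafka-bootstrap:9092
    - alias: "core"                     # target (core) cluster
      bootstrapServers: core-kafka-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "edge"
      targetCluster: "core"
      sourceConnector:
        config:
          replication.factor: 1         # assumption: single-replica demo sizing
      topicsPattern: "lpr"              # topic(s) to replicate edge -> core
----

Pointing `connectCluster` at the core alias keeps the underlying Kafka Connect workers next to the target cluster, which is the usual arrangement when many edge sites feed one core.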
diff --git a/patterns/kafka-edge-to-core/examples/README.adoc b/patterns/kafka-edge-to-core/examples/README.adoc
index 6af951f..9a16a78 100644
--- a/patterns/kafka-edge-to-core/examples/README.adoc
+++ b/patterns/kafka-edge-to-core/examples/README.adoc
@@ -24,12 +24,12 @@ oc get kafkamirrormaker2
 oc run kafkacat -i -t --image debezium/tooling --restart=Never
 ----
 
-* From the `kafkacat` container shell, generate a few kafka messages in the source Kafka cluster `lpr` topic(`edge-kafka-kafka-bootstrap`) 
+* From the `kafkacat` container shell, generate a few kafka messages in the source Kafka cluster `lpr` topic(`edge-kafka-kafka-bootstrap`)
 
 [source,bash]
 ----
 for i in {1..50} ; do sleep 5 ; \
-echo '{"message":"Hello Red Hat Message-'$i'"}' | kafkacat -P -b edge-kafka-kafka-bootstrap -t lpr ; done 
+echo '{"message":"Hello Red Hat Message-'$i'"}' | kafkacat -P -b edge-kafka-kafka-bootstrap -t lpr ; done
 ----
 
 * The above command will generate 50 Kafka messages every 5 seconds. The KafkaMirrorMaker2 which is configured to replicate topic messages from source (edge) to target (core) kafka topics, would move each message to the target kafka cluster.
@@ -38,7 +38,7 @@
 [source,bash]
 ----
 oc rsh kafkacat
-kafkacat -b core-kafka-kafka-bootstrap -t lpr 
+kafkacat -b core-kafka-kafka-bootstrap -t lpr
 ----
 
 * On the target (core) Kafka cluster, you should able to consume every message pushed from source (edge) Kafka cluster.
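NOTE: The `lpr` topic produced to and consumed from above is typically declared as a Strimzi-managed KafkaTopic on the edge cluster. The sketch below assumes an edge Kafka cluster named `edge-kafka` (inferred from the `edge-kafka-kafka-bootstrap` bootstrap address) and single-partition demo sizing; it is not copied from the repository's manifests.

[source,yaml]
----
# Hypothetical KafkaTopic for the lpr topic; adjust sizing to your environment.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: lpr
  labels:
    strimzi.io/cluster: edge-kafka   # assumption: name of the edge Kafka cluster CR
spec:
  partitions: 1
  replicas: 1
----

Once the topic exists on the edge cluster, the MirrorMaker 2 configuration from the deployment step makes the same messages available for consumption on the core cluster.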