[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] committed Mar 28, 2023
1 parent c6f89f7 commit 3168046
Showing 11 changed files with 15 additions and 18 deletions.
@@ -40,4 +40,4 @@ spec:
configMapKeyRef:
name: buckets-config
key: bucket-source
restartPolicy: Never
@@ -131,7 +131,7 @@ There are 2 parts to this:

* The *KNative Serving* Service that will spawn the risk-assessment container. Deploy it by applying the `10_service-risk-assessment.yaml` file. +
Monitor the deployment in the OpenShift console under the Serverless->Service menu (don't confuse it with Service under Networking!). It must reach the 3/3 or 2/3 state (the latter meaning it is ready but scaled down to zero because there is nothing to process).
* The *KNative Eventing* component that will listen to our Kafka Topic, and call the previously defined Service when there is something to process.
** Edit the file: `11_kafkasource-risk-assessment.yaml`.
** Substitute the `REPLACE_ME` part in the `bootstrapServers` parameter with the name of your namespace. If you have followed the instructions, it should be `xraylab-0001`. A sketch of both steps follows below.
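
A minimal sketch of these two steps, assuming the file names above and the `xraylab-0001` namespace (the `sed` substitution and the final `oc get ksvc` check are illustrative conveniences, not commands prescribed by this repo):

[source,bash]
----
# Substitute REPLACE_ME with your namespace (assumed here: xraylab-0001)
sed -i 's/REPLACE_ME/xraylab-0001/' 11_kafkasource-risk-assessment.yaml

# Deploy the KNative Serving Service, then the KNative Eventing KafkaSource
oc apply -f 10_service-risk-assessment.yaml
oc apply -f 11_kafkasource-risk-assessment.yaml

# Watch the Serving Service reach 3/3 (or 2/3 once scaled down to zero)
oc get ksvc
----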

1 change: 0 additions & 1 deletion demo2-smart-city/deployment/README.adoc
@@ -620,4 +620,3 @@ echo "https://$(oc get routes -n smartcity | grep grafana | awk '{ print $2 }')"
----

The default login is admin / secret.

@@ -1629,4 +1629,4 @@ objects:
"title": "Smart City - ULEV London",
"uid": "HQZA6drGz",
"version": 4
}
@@ -236,4 +236,4 @@ spec:
"title": "Pipeline Ops - RAM",
"uid": "U5Pe6w6Ml",
"version": 2
}
@@ -19,4 +19,3 @@ spec:
tlsSkipVerify: true
type: prometheus
name: grafana-prometheus-datasource.yaml

2 changes: 1 addition & 1 deletion demo2-smart-city/deployment/grafana/grafana.yaml
@@ -21,4 +21,4 @@ spec:
ingress:
enabled: true
termination: edge
tlsEnabled: true
7 changes: 3 additions & 4 deletions demo3-industrial-condition-monitoring/README.adoc
@@ -1,8 +1,8 @@
= Demo 3 - Industrial condition based monitoring with ML

This “AI/ML Industrial Edge” demo shows how condition-based monitoring can be implemented using AI/ML: machine-inference-based anomaly detection on metric time-series sensor data at the edge, with a central data lake and ML model retraining. It also shows how hybrid deployments (clusters at the edge and in the cloud) can be managed, and how the CI/CD pipelines and Model Training/Execution flows can be implemented.

This demo uses OpenShift, ACM, AMQ Streams, OpenDataHub, and other products from Red Hat’s portfolio.

This is the frontend at the factory:

@@ -34,4 +34,3 @@ https://github.com/redhat-edge-computing/industrial-edge-docs#preparing-for-depl
== Source Code
The source code for the various components is located in this GitHub repo:
https://github.com/redhat-edge-computing/manuela-dev

4 changes: 2 additions & 2 deletions patterns/kafka-edge-to-core/README.adoc
@@ -2,7 +2,7 @@

=== Description

This pattern is about using AMQ Streams (Kafka) MirrorMaker to move data across two Kafka clusters that are geographically separated from each other.

==== Kafka MirrorMaker

@@ -13,7 +13,7 @@ Kafka MirrorMaker replicates data across two Kafka clusters, within or across da
image::kafka-edge-to-core.png[Kafka-edge-to-Core]

=== Use cases
**Data collection at the edge and shipping it to the Core**

Multiple Kafka clusters are becoming the norm, and there can be many reasons to run more than one in your organization. For example, you might have a core Kafka cluster that receives data from multiple edge Kafka clusters, or from multiple data centers. In this case, you can use MirrorMaker to replicate data from the edge Kafka clusters to the core Kafka cluster. An edge Kafka cluster can store data locally on-site, in real time and at scale, while Kafka MirrorMaker replicates it to the data center or cloud for further processing and analytics on the aggregated data from the different sites/edges.

4 changes: 2 additions & 2 deletions patterns/kafka-edge-to-core/deployment/README.adoc
@@ -26,7 +26,7 @@ oc get kafkatopics

* To set up MirrorMaker, edit `mirror-maker.yaml` with the following parameters as per your environment (a hedged sketch of these fields follows the list).

- Source Kafka bootstrap server endpoint
- Target Kafka bootstrap server endpoint
- Topic name under `topicsPattern`
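
A sketch of what those fields might look like, expressed as a ready-to-apply heredoc. The resource name and cluster aliases are assumptions, not the repo's actual values; the bootstrap endpoints match the example clusters used in this pattern's examples.

[source,bash]
----
# Illustrative only -- adjust names, endpoints, and topicsPattern to your environment.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: edge-to-core            # assumed resource name
spec:
  connectCluster: core          # must match one of the aliases below
  clusters:
    - alias: edge
      bootstrapServers: edge-kafka-kafka-bootstrap:9092   # source Kafka endpoint
    - alias: core
      bootstrapServers: core-kafka-kafka-bootstrap:9092   # target Kafka endpoint
  mirrors:
    - sourceCluster: edge
      targetCluster: core
      sourceConnector: {}
      topicsPattern: "lpr"      # topic name(s) to replicate
EOF
----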

@@ -42,4 +42,4 @@ oc create -f mirror-maker.yaml;
oc get kafkamirrormaker2
----

At this point, Kafka MirrorMaker has been configured to asynchronously replicate messages from the source Kafka cluster to the target Kafka cluster, giving you an architectural option for connecting various edge locations to a single core cluster with seamless data transfer.
6 changes: 3 additions & 3 deletions patterns/kafka-edge-to-core/examples/README.adoc
@@ -24,12 +24,12 @@ oc get kafkamirrormaker2
oc run kafkacat -i -t --image debezium/tooling --restart=Never
----

* From the `kafkacat` container shell, generate a few Kafka messages in the source Kafka cluster's `lpr` topic (`edge-kafka-kafka-bootstrap`)

[source,bash]
----
for i in {1..50} ; do sleep 5 ; \
echo '{"message":"Hello Red Hat Message-'$i'"}' | kafkacat -P -b edge-kafka-kafka-bootstrap -t lpr ; done
----
* The above command generates 50 Kafka messages, one every 5 seconds. KafkaMirrorMaker2, which is configured to replicate topic messages from the source (edge) to the target (core) Kafka topics, will move each message to the target Kafka cluster.

@@ -38,7 +38,7 @@ echo '{"message":"Hello Red Hat Message-'$i'"}' | kafkacat -P -b edge-kafka-kafk
[source,bash]
----
oc rsh kafkacat
kafkacat -b core-kafka-kafka-bootstrap -t lpr
----

* On the target (core) Kafka cluster, you should be able to consume every message pushed from the source (edge) Kafka cluster.
