This repo contains the code that implements the analytic worker. The worker simulates a stream processor by acting as a consumer that subscribes to all topics. The worker is planned to have the following features:
- Aggregate the log count based on the context of the logs
- Send alerts when it detects an anomaly in the data, or when the number of error logs breaches the error threshold limit
- Write the aggregated log counts into InfluxDB
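The aggregation and threshold-alert features above can be sketched as follows. This is an illustrative example only: the `logEntry` shape, the `"error"` level string, and the threshold semantics are assumptions, not the repo's actual types.

```go
package main

import "fmt"

// logEntry is a hypothetical shape for an incoming log message;
// the real worker derives these fields from Kafka messages.
type logEntry struct {
	Context string // e.g. the application context of the log
	Level   string // e.g. "error", "info"
}

// aggregate counts logs per context and returns the contexts whose
// error count breached the (assumed) threshold.
func aggregate(logs []logEntry, errorThreshold int) (counts map[string]int, alerts []string) {
	counts = make(map[string]int)
	errors := make(map[string]int)
	for _, l := range logs {
		counts[l.Context]++
		if l.Level == "error" {
			errors[l.Context]++
		}
	}
	for ctx, n := range errors {
		if n > errorThreshold {
			alerts = append(alerts, ctx)
		}
	}
	return counts, alerts
}

func main() {
	logs := []logEntry{
		{"app-a", "info"}, {"app-a", "error"}, {"app-a", "error"},
		{"app-b", "info"},
	}
	counts, alerts := aggregate(logs, 1)
	fmt.Println(counts["app-a"], counts["app-b"], alerts) // 3 1 [app-a]
}
```

In the real worker the counts would be flushed to InfluxDB rather than printed.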
This program is written in Go 1.10.3.
The worker requests messages from Kafka and sorts them based on their topics; it creates another goroutine when it detects a message with a new topic.
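The per-topic fan-out described above can be sketched like this. The `message` type and the channel-per-topic scheme are assumptions for illustration, not the repo's actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// message stands in for a Kafka consumer message; only the fields
// used here are shown.
type message struct {
	Topic string
	Value string
}

// dispatcher routes each message to a per-topic goroutine, creating
// a new channel and goroutine the first time a topic is seen. It is
// meant to be fed from a single consumer loop (not concurrency-safe
// across multiple callers).
type dispatcher struct {
	workers map[string]chan message
	wg      sync.WaitGroup
}

func newDispatcher() *dispatcher {
	return &dispatcher{workers: make(map[string]chan message)}
}

func (d *dispatcher) dispatch(m message) {
	ch, ok := d.workers[m.Topic]
	if !ok {
		// first message for this topic: spawn a new worker goroutine
		ch = make(chan message, 16)
		d.workers[m.Topic] = ch
		d.wg.Add(1)
		go func(topic string, in <-chan message) {
			defer d.wg.Done()
			count := 0
			for range in {
				count++ // aggregation would happen here
			}
			fmt.Printf("%s: %d messages\n", topic, count)
		}(m.Topic, ch)
	}
	ch <- m
}

func (d *dispatcher) close() {
	for _, ch := range d.workers {
		close(ch)
	}
	d.wg.Wait()
}

func main() {
	d := newDispatcher()
	for _, m := range []message{{"a", "1"}, {"b", "2"}, {"a", "3"}} {
		d.dispatch(m)
	}
	d.close()
}
```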
`cluster_analyser` is an interface for `*sarama.consumer` to allow us to mock it for testing.
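The idea behind such an interface can be sketched without pulling in sarama. The method names and the `consumerMessage` shape below are illustrative assumptions; the point is that code written against the interface accepts a test double just as well as the real consumer.

```go
package main

import "fmt"

// consumerMessage mirrors the fields of sarama's ConsumerMessage
// that a worker would care about (assumed shape).
type consumerMessage struct {
	Topic string
	Value []byte
}

// clusterAnalyser is the kind of narrow interface the repo wraps
// around the sarama consumer so tests can substitute a mock.
// Method names here are illustrative, not the repo's actual API.
type clusterAnalyser interface {
	Messages() <-chan consumerMessage
	Close() error
}

// mockAnalyser is a test double that replays canned messages.
type mockAnalyser struct {
	msgs chan consumerMessage
}

func newMockAnalyser(msgs ...consumerMessage) *mockAnalyser {
	ch := make(chan consumerMessage, len(msgs))
	for _, m := range msgs {
		ch <- m
	}
	close(ch)
	return &mockAnalyser{msgs: ch}
}

func (m *mockAnalyser) Messages() <-chan consumerMessage { return m.msgs }
func (m *mockAnalyser) Close() error                     { return nil }

// countMessages is a toy consumer loop written against the interface.
func countMessages(c clusterAnalyser) int {
	n := 0
	for range c.Messages() {
		n++
	}
	return n
}

func main() {
	var c clusterAnalyser = newMockAnalyser(
		consumerMessage{Topic: "logs", Value: []byte("a")},
		consumerMessage{Topic: "logs", Value: []byte("b")},
	)
	fmt.Println(countMessages(c)) // 2
}
```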
AnalyticServices creates a Service that spawns a consumer group for every topic. When the AnalyticService starts, it spawns two types of workers. Firstly, it spawns a worker (which we named the topic refresher) that watches for incoming new-topic events sent by the barito-flow producer. When the topic refresher detects a new topic, it spawns a new analytic worker for that particular topic.
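The topic refresher's behaviour can be sketched as a loop over new-topic events that spawns a worker only for topics it has not seen before. The event channel and `spawn` callback are illustrative assumptions, not the repo's actual signatures.

```go
package main

import "fmt"

// topicRefresher watches a stream of new-topic events (in the real
// worker these come from the barito-flow producer) and invokes spawn
// once per previously unseen topic.
func topicRefresher(events <-chan string, spawn func(topic string)) {
	seen := make(map[string]bool)
	for topic := range events {
		if !seen[topic] {
			seen[topic] = true
			spawn(topic)
		}
	}
}

func main() {
	events := make(chan string, 4)
	for _, t := range []string{"app-a", "app-b", "app-a"} {
		events <- t
	}
	close(events)

	var spawned []string
	topicRefresher(events, func(topic string) {
		// a real spawn would start an analytic worker goroutine here
		spawned = append(spawned, topic)
	})
	fmt.Println(spawned) // [app-a app-b]
}
```

Note that the duplicate `app-a` event does not spawn a second worker.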
- Install Go (if you haven't); follow all the instructions from here, including setting the `$GOPATH`
- Clone this repository
- `cd` into the project directory: `cd floodgate-worker`
- Install glide (if you haven't) from here
- Run `glide install`
- With brew installed (highly recommended):
  - Install kafka: `brew install kafka`
  - Start zookeeper: `brew services start zookeeper`
  - Start the kafka server: `brew services start kafka`
- Without brew:
  - Download kafka from the link provided here
  - Unzip it and `cd` into the kafka directory
  - Run zookeeper with: `bin/zookeeper-server-start.sh config/zookeeper.properties`
  - Run the server (in another terminal window) with: `bin/kafka-server-start.sh config/server.properties`
  - If an error occurs, e.g. one related to the JVM: open `kafka-run-class.sh` (located inside the `bin` directory of the kafka directory) and change line 251 or 252 to the following:
    `JAVA_MAJOR_VERSION=$($JAVA -version 2>&1 | sed -E -n 's/.* version "([^.-]*).*/\1/p')`
- Run the worker with `go run main.go aw` in the root of the project directory (do this in another terminal window)
- Clone barito-flow from here
- Follow the barito-flow producer instructions
- Send messages to the producer by posting to it
Run the tests with a coverage profile:
```
$ go test -coverprofile cover.out
```
To see the test coverage:
```
$ go tool cover -html=cover.out -o cover.html
```