metricbeat/module/mongodb: Improve logic to calculate oplog info and window #42224
base: main
Conversation
This pull request does not have a backport label.
To fixup this pull request, you need to add the backport labels for the needed branches.
To test the CPU and memory consumption of the changes made to calculate the oplog info/window, let's compare 3 cases: Case 1, the original implementation; Case 2, the previous improvement that introduced an aggregation pipeline; and Case 3, the changes in this PR.
We will use the following script to track docker stats, i.e., CPU and memory usage over time, for the 3-node MongoDB replica set setup:
#!/bin/bash
COMPOSE_PROJECT_DIR="$1"
OUTPUT_FILE="$2"
INTERVAL=1
# Set default values
if [ -z "$COMPOSE_PROJECT_DIR" ]; then
COMPOSE_PROJECT_DIR="."
fi
if [ -z "$OUTPUT_FILE" ]; then
OUTPUT_FILE="cpu_memory_usage.log"
fi
cd "$COMPOSE_PROJECT_DIR"
# Clear previous logs
> "$OUTPUT_FILE"
while true; do
echo "----------------------------------------"
echo "Docker Container Usage - $(date)"
echo "----------------------------------------"
docker compose ps -q | while read -r container_id; do
container_name=$(docker inspect --format '{{.Name}}' "$container_id" | sed 's/\///')
stats=$(docker stats --no-stream --format "{{.CPUPerc}}\t{{.MemPerc}}" "$container_id")
cpu_usage=$(echo "$stats" | cut -f1)
mem_usage=$(echo "$stats" | cut -f2)
timestamp=$(date +%s)
echo "$timestamp,$container_name,$cpu_usage,$mem_usage" >> "$OUTPUT_FILE"
done
sleep $INTERVAL
done
We will use a custom benchmarking suite:
func runBenchmarkModeXXX(client *mongo.Client) {
start := time.Now()
iterations := 10000
workers := 10
var wg sync.WaitGroup
errChan := make(chan error, iterations)
batchSize := iterations / workers
for w := 0; w < workers; w++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < batchSize; i++ {
if _, err := getReplicationInfoXXX(client); err != nil {
errChan <- fmt.Errorf("iteration failed: %w", err)
}
}
}()
}
go func() {
wg.Wait()
close(errChan)
}()
// Process errors
for err := range errChan {
log.Printf("Error: %v", err)
}
duration := time.Since(start)
fmt.Printf("Benchmark Results:\n")
fmt.Printf("Total iterations: %d\n", iterations)
fmt.Printf("Total time: %v\n", duration)
fmt.Printf("Average time per operation: %v\n", duration/time.Duration(iterations))
fmt.Printf("Operations per second: %.2f\n", float64(iterations)/duration.Seconds())
}
To do the benchmarking and observe the resource usage, I have extracted the logic for all 3 cases into standalone programs so that we benchmark each change in isolation.
Case 1: CPU peaks at 800%+ and memory peaks at 80%+, which is bad; this is why we received so many issues claiming that this calculation causes memory spikes.
Case 2: This greatly improved things, but only on the memory side. The introduction of the aggregation pipeline did a good job: memory now peaks at 1.6%, but CPU still peaks at around 800%. Hence we stopped getting issues reported for memory, but started getting issues reported for CPU spikes.
Case 3: Here, CPU peaks at 50%, and only for a short while, and memory peaks at 0.36%, a considerable improvement on both fronts. Please also note that for Case 1 and Case 2 the benchmark did not even complete after a minute in my setup, whereas Case 3 took <10s to finish and report the final stats.
From this, we can conclude that Case 3 (the current PR changes) is a massive improvement in calculating the oplog window.
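For reference, here is a minimal sketch of what the Case 3 logic looks like as a standalone program, assuming the v1 Go driver API (go.mongodb.org/mongo-driver). The function name getReplicationInfoNatural and the connection URI are illustrative, not the exact identifiers used in the PR; FindOne already restricts the result to a single document, which is the effect the Limit described below is after.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/bson/primitive"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// oplogEntry only needs the "ts" field; the projection below keeps the
// returned documents as small as possible.
type oplogEntry struct {
	Timestamp primitive.Timestamp `bson:"ts"`
}

// getReplicationInfoNatural reads the first and last oplog entries in natural
// order (one document each, projected down to "ts") and derives the oplog
// window from their timestamps. Only the $natural: -1 read is a reverse scan.
func getReplicationInfoNatural(client *mongo.Client) (uint32, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	oplog := client.Database("local").Collection("oplog.rs")
	projection := bson.D{{Key: "ts", Value: 1}}

	var first, last oplogEntry

	// Oldest entry: natural (insertion) order, single document.
	firstOpts := options.FindOne().
		SetSort(bson.D{{Key: "$natural", Value: 1}}).
		SetProjection(projection)
	if err := oplog.FindOne(ctx, bson.D{}, firstOpts).Decode(&first); err != nil {
		return 0, fmt.Errorf("reading first oplog entry: %w", err)
	}

	// Newest entry: reverse natural order, single document.
	lastOpts := options.FindOne().
		SetSort(bson.D{{Key: "$natural", Value: -1}}).
		SetProjection(projection)
	if err := oplog.FindOne(ctx, bson.D{}, lastOpts).Decode(&last); err != nil {
		return 0, fmt.Errorf("reading last oplog entry: %w", err)
	}

	// Oplog window in seconds: lastTs - firstTs (seconds part of the BSON timestamp).
	return last.Timestamp.T - first.Timestamp.T, nil
}

func main() {
	ctx := context.Background()
	// The connection string for the local docker compose replica set is an assumption.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017/?replicaSet=rs0"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	window, err := getReplicationInfoNatural(client)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("oplog window: %d seconds\n", window)
}
```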
Proposed commit message
This change further improves the implementation that was done some time back. The previous changes helped a lot, but several users have still been reporting high CPU usage, so this PR aims to improve it further. We now follow one of the ways recommended by MongoDB themselves to calculate the oplog window (i.e., lastTs - firstTs of the oplog). The change again leverages the natural order of the oplog, together with a Limit (to restrict the result to just one document) and a Projection. The only expensive operation left is sorting the oplog in reverse, i.e.,
$natural: -1
. We have also upgraded the client library to further reduce any client-side issues related to querying, etc. Please see #42224 (comment) for a detailed comparison. Also, please read the inline comments in the code itself to understand the implemented logic; it is documented properly for future reference.
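To make the window formula concrete, here is a tiny standalone example of the lastTs - firstTs calculation on BSON timestamps; the timestamp values below are made up for illustration:

```go
package main

import (
	"fmt"
	"time"

	"go.mongodb.org/mongo-driver/bson/primitive"
)

func main() {
	// Hypothetical first/last oplog timestamps: T is Unix seconds, I is the
	// ordinal of the operation within that second.
	first := primitive.Timestamp{T: 1735689600, I: 1}
	last := primitive.Timestamp{T: 1735776000, I: 4}

	// Oplog window = lastTs - firstTs, reported in seconds.
	windowSeconds := last.T - first.T
	fmt.Printf("oplog window: %d seconds (%v)\n",
		windowSeconds, time.Duration(windowSeconds)*time.Second)
}
```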
Checklist
CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.
Disruptive User Impact
Author's Checklist
How to test this PR locally
Related issues
getOpTimestamp in replstatus to fix sort and temp files generation issue #37688