add custom collector for mampf
TODO: caching
henrixapp committed Jun 6, 2021
1 parent ae939d0 commit bae2d4a
Showing 3 changed files with 55 additions and 2 deletions.
8 changes: 6 additions & 2 deletions MONITORING.md
@@ -3,7 +3,7 @@
## Getting started (development mode): Setting up prometheus & grafana

0. Start prometheus_exporter in the mampf container
`sudo docker-compose exec mampf prometheus_exporter -b 0.0.0.0`
`sudo docker-compose exec mampf prometheus_exporter -b 0.0.0.0 -a lib/collectors/mampf_collector.rb`
1. Set up prometheus in development

```sh
@@ -27,4 +27,8 @@ grafana/grafana
3. Now visit localhost:2345 and configure the datasource (`prometheus:9090`)
4. Set up the dashboard; interesting metrics (a quick check that the new gauges are exposed is sketched after this list):
- `rate(ruby_collector_sessions_total[5m])`
- `rate(ruby_http_requests_total[5m])`
- `ruby_user_count`: Number of users in the DB
- `ruby_uploaded_medium_count`: Number of Media
- `ruby_tag_count`: Number of Tags
- `ruby_submissions_count`: Number of Submissions
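
Before building dashboards it can help to confirm that the new gauges are actually exposed. The snippet below is only a sketch (it is not part of this commit); it assumes port 9394, prometheus_exporter's default and the port scraped in `docker/development/prometheus.yml`, is reachable from wherever it runs, for example inside the mampf container.

```ruby
# Sketch: fetch the exporter's metrics page and print the gauges added by
# MampfCollector. Adjust the host/port if 9394 is not reachable directly.
require 'net/http'

body = Net::HTTP.get(URI('http://localhost:9394/metrics'))
body.each_line do |line|
  puts line if line.match?(/ruby_(user|uploaded_medium|tag|submissions)_count/)
end
```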
24 changes: 24 additions & 0 deletions docker/development/prometheus.yml
@@ -0,0 +1,24 @@
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.

# A scrape configuration containing exactly one endpoint to scrape:
# here it's the prometheus_exporter instance serving MaMpf's metrics.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'mampf'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    scrape_timeout: 5s

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['mampf:9394']
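
As an aside on what the `mampf:9394` target serves: besides the gauges from the new collector below, the `ruby_http_requests_total` series listed in MONITORING.md is typically produced by prometheus_exporter's Rails middleware on the app side. MaMpf's own initializer is not part of this diff, so the following is only a sketch; the file path and the environment guard are assumptions.

```ruby
# config/initializers/prometheus_exporter.rb (hypothetical path, shown only
# as a sketch of the usual client-side setup).
unless Rails.env.test?
  require 'prometheus_exporter/middleware'

  # Instrument every Rails request and report it to the prometheus_exporter
  # server started in step 0 (the client defaults to localhost:9394).
  Rails.application.middleware.unshift PrometheusExporter::Middleware
end
```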
25 changes: 25 additions & 0 deletions lib/collectors/mampf_collector.rb
@@ -0,0 +1,25 @@
# Load the Rails environment when this file is run standalone via
# `prometheus_exporter -a lib/collectors/mampf_collector.rb`, so that the
# ActiveRecord models (User, Medium, Tag, Submission) are available.
unless defined? Rails
  require File.expand_path("../../../config/environment", __FILE__)
end

# Custom type collector that exposes MaMpf record counts as gauges
# (prometheus_exporter prefixes them with `ruby_`, e.g. `ruby_user_count`).
class MampfCollector < PrometheusExporter::Server::TypeCollector
  # Nothing is pushed to this collector, so there is no state to initialize
  # and nothing to do in #collect.
  def initialize
  end

  def collect(obj)
  end

  def type
    "mampf"
  end

  # Called on every scrape; the counts are queried fresh each time
  # (caching is still a TODO, see the commit message).
  def metrics
    user_count_gauge = PrometheusExporter::Metric::Gauge.new('user_count', 'number of users in the app')
    user_count_gauge.observe User.count
    medium_count_gauge = PrometheusExporter::Metric::Gauge.new('uploaded_medium_count', 'number of media')
    medium_count_gauge.observe Medium.count
    tag_count_gauge = PrometheusExporter::Metric::Gauge.new('tag_count', 'number of tags')
    tag_count_gauge.observe Tag.count
    submissions_count_gauge = PrometheusExporter::Metric::Gauge.new('submissions_count', 'number of submissions')
    submissions_count_gauge.observe Submission.count
    [user_count_gauge, medium_count_gauge, tag_count_gauge, submissions_count_gauge]
  end
end
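
One possible shape for the caching mentioned in the commit message's TODO: keep the gauges around and only re-run the `COUNT` queries after a fixed interval. The sketch below is not code from the repository; the class name, the 30-second window, and the monotonic clock are assumptions.

```ruby
# Sketch for the "TODO: caching" note (not part of this commit): re-use the
# gauges for CACHE_SECONDS before querying the database again.
class CachedMampfCollector < PrometheusExporter::Server::TypeCollector
  CACHE_SECONDS = 30 # assumed refresh interval

  def type
    "mampf"
  end

  def collect(obj)
  end

  def metrics
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    if @gauges.nil? || now - @refreshed_at > CACHE_SECONDS
      @gauges = build_gauges
      @refreshed_at = now
    end
    @gauges
  end

  private

  # Same four gauges as MampfCollector#metrics, built once per cache window.
  def build_gauges
    {
      'user_count' => User.count,
      'uploaded_medium_count' => Medium.count,
      'tag_count' => Tag.count,
      'submissions_count' => Submission.count
    }.map do |name, value|
      gauge = PrometheusExporter::Metric::Gauge.new(name, "number of #{name.sub('_count', '')} records")
      gauge.observe value
      gauge
    end
  end
end
```

Trading a little staleness for fewer full-table counts becomes relevant once Prometheus scrapes every 5 seconds, as configured above.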
