Current state
We are currently using Prometheus to store our 4 metrics, with the following metadata associated with each of them.
Some of the metrics (marked with [T]) are stored with a timestamp and some are not. All of the metrics store a Unix epoch timestamp as their value (marked with [V]). A minimal exporter sketch follows the list below.
deploy_timestamp [V] [T]
  - namespace
  - app
  - image_sha
commit_timestamp [V]
  - namespace
  - app
  - commit
  - image_sha
failure_creation_timestamp and failure_resolution_timestamp [V]
  - app
  - issue_number
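For reference, this is roughly how one of these metrics could be exposed with the Python prometheus_client library. It is a minimal sketch with placeholder label values, not the actual Pelorus exporter code:

```python
import time

from prometheus_client import start_http_server
from prometheus_client.core import REGISTRY, GaugeMetricFamily


class DeployTimeCollector:
    """Sketch of an exporter exposing deploy_timestamp.

    The value is a Unix epoch ([V]); the optional `timestamp` argument is the
    sample timestamp ([T]), which Prometheus only accepts if it falls inside
    the current head block (~3h), as described in the Problem section.
    """

    def collect(self):
        metric = GaugeMetricFamily(
            "deploy_timestamp",
            "Time a deployment happened, as a Unix epoch value",
            labels=["namespace", "app", "image_sha"],
        )
        deploy_time = time.time() - 600  # placeholder: a deployment 10 minutes ago
        metric.add_metric(
            ["basic-nginx", "todolist", "abc123"],  # placeholder label values
            deploy_time,                            # [V] the value is the epoch
            timestamp=deploy_time,                  # [T] the sample timestamp
        )
        yield metric


if __name__ == "__main__":
    REGISTRY.register(DeployTimeCollector())
    start_http_server(8080)
    while True:
        time.sleep(30)
```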
Problem
Prometheus does not allow storing metrics whose timestamp() is older than the current head block at the time of scraping (roughly 3 hours in the past).
Using timestamp() in a metric is considered an anti-pattern for how Prometheus exporters should be used.
Even though it is possible to back-fill samples from the past into Prometheus, back-filling is considered a migration tool rather than a standard operation. It has some consequences and cannot be used to import data that fall within the current Prometheus head block (the last ~3 hours).
Using Prometheus push mode is not an option either, as pushed metrics should not include timestamps.
Not using timestamp() may cause false data to be processed, especially within time ranges where the calculations are made using the sample timestamps and not the values.
If a metric does not carry a timestamp, the sample is stamped with the current time when the scrape happens. This leads to a situation where calculations based on the values treat historical data as current (illustrated in the sketch after these points).
Using timestamp() is not an option for some of the metrics, such as commit time, because the commit used for a deployment may have happened days, weeks, or months prior to the deployment, and we are interested in the time between commit and deployment.
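A rough illustration of the two failure modes above, with made-up numbers: a sample stamped with the real event time is rejected once it falls outside the head block, while a sample without a timestamp makes an old deployment look like it just happened. Only the value-based arithmetic stays correct either way.

```python
import time

# Numbers are invented; they only show the arithmetic, not real Pelorus data.
now = time.time()
commit_ts = now - 14 * 24 * 3600   # commit happened two weeks ago
deploy_ts = now - 6 * 3600         # deployment happened six hours ago

# Case 1: the exporter attaches the real event time as the sample timestamp.
# Prometheus rejects the sample because it is older than the head block (~3h),
# so the data point is lost.
head_block_window = 3 * 3600
sample_rejected = (now - deploy_ts) > head_block_window
print("sample with real timestamp rejected:", sample_rejected)   # True

# Case 2: the exporter omits the timestamp. Prometheus stamps the sample with
# the scrape time, so a query such as "deployments in the last hour" counts
# this six-hour-old deployment as if it had just happened.
scrape_time = now
looks_current = (now - scrape_time) < 3600
print("historical deployment looks current:", looks_current)     # True

# The value itself is still correct, so value-based calculations such as
# lead time (deploy - commit) remain valid:
lead_time_hours = (deploy_ts - commit_ts) / 3600
print(f"lead time: {lead_time_hours:.1f} hours")
```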
Possible work-around
Store some of the metrics with timestamps and some without.
Add a time-limit window for accepting metrics with timestamps, including in the webhook exporter, so they will always be almost up to date (sketched below).
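A hypothetical helper for the time-limit-window idea; the names and window size are assumptions, not existing Pelorus code. The point is that an exporter (including a webhook-based one) would only attach a sample timestamp to events that Prometheus will still accept, and skip or re-stamp older ones:

```python
import time
from typing import Optional

ACCEPTANCE_WINDOW_SECONDS = 2 * 60 * 60  # stay safely inside the ~3h head block


def within_acceptance_window(event_timestamp: float, now: Optional[float] = None) -> bool:
    """Return True if a sample stamped with event_timestamp is recent enough
    for Prometheus to accept it at scrape time."""
    now = time.time() if now is None else now
    return (now - event_timestamp) <= ACCEPTANCE_WINDOW_SECONDS


if __name__ == "__main__":
    events = [time.time() - 60, time.time() - 5 * 60 * 60]
    for ts in events:
        state = "accepted" if within_acceptance_window(ts) else "outside window"
        print(int(ts), state)
```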
Clean solution
Switch to a DB engine that can store historical values with timestamps in the past. Ideally the DB engine would support both Prometheus exporters and Grafana, so that minimal changes to the Pelorus architecture would be needed.
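Purely as an illustration of "a DB engine that accepts points with arbitrary past timestamps" (no specific engine is being proposed here), this sketch writes a weeks-old commit_timestamp point to InfluxDB with the influxdb-client package; the URL, token, org, and bucket are placeholders:

```python
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details; not a real Pelorus deployment.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

commit_time = datetime(2022, 1, 10, 12, 0, tzinfo=timezone.utc)  # weeks in the past

point = (
    Point("commit_timestamp")
    .tag("namespace", "basic-nginx")          # placeholder label values
    .tag("app", "todolist")
    .tag("commit", "abc123")
    .tag("image_sha", "sha256:deadbeef")
    .field("value", commit_time.timestamp())  # [V] epoch value, as today
    .time(commit_time)                        # historical timestamp stored as-is
)

write_api.write(bucket="pelorus", record=point)
```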
Currently we don't have this problem with the commit time metric, but if we add a timestamp to it, we will definitely hit the issue no matter how we collect that metric. The only way around it would be to collect the metric from the Git provider using a webhook that sends the data when the commit happens. However, with the current way the metrics are collected, we cannot do that, because the commit metric expects:
  - namespace
  - app
  - commit
  - image_sha
And those become available later in the application lifecycle, often well after the Prometheus acceptance limit.
That is what I see as wrong in the committime exporter: it should work like the failure exporter, scraping data over time and needing only the git URL and branch, not getting the time of a specific commit hash.
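To make that suggestion concrete, here is a hypothetical sketch (using GitPython and a placeholder repository URL) of a committime exporter that scrapes recent commits from a git URL and branch instead of resolving a single commit hash. This is not the current Pelorus committime exporter:

```python
import time

from git import Repo  # GitPython, assumed here purely for illustration

GIT_URL = "https://example.com/org/repo.git"  # placeholder repository
BRANCH = "main"                               # placeholder branch
LOOKBACK_SECONDS = 2 * 60 * 60                # stay inside the Prometheus acceptance window


def recent_commit_timestamps(git_url: str, branch: str, lookback: int):
    """Yield (sha, committed epoch) for commits made within the lookback window."""
    repo = Repo.clone_from(git_url, "/tmp/committime-checkout", branch=branch, depth=50)
    cutoff = time.time() - lookback
    for commit in repo.iter_commits(branch):
        if commit.committed_date < cutoff:
            break  # commits are iterated newest-first, so we can stop here
        yield commit.hexsha, commit.committed_date


if __name__ == "__main__":
    for sha, ts in recent_commit_timestamps(GIT_URL, BRANCH, LOOKBACK_SECONDS):
        # An exporter would expose these as commit_timestamp samples instead of printing.
        print(sha, ts)
```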