diff --git a/docs/sources/concepts/configuration-syntax/syntax.md b/docs/sources/concepts/configuration-syntax/syntax.md index 12e94bd4f8..2fe17a80e0 100644 --- a/docs/sources/concepts/configuration-syntax/syntax.md +++ b/docs/sources/concepts/configuration-syntax/syntax.md @@ -96,7 +96,7 @@ local.file "token" { All block and attribute definitions are followed by a newline, which {{< param "PRODUCT_NAME" >}} calls a _terminator_, as it terminates the current statement. A newline is treated as a terminator when it follows any expression, `]`, `)`, or `}`. -{{< param "PRODUCT_NAME" >}} ignores other newlines and you can can enter as many newlines as you want. +{{< param "PRODUCT_NAME" >}} ignores other newlines and you can enter as many newlines as you want. [identifier]: #identifiers [identifier]: #identifiers diff --git a/docs/sources/reference/components/discovery.consul.md b/docs/sources/reference/components/discovery.consul.md index 8244542a09..ab3c8b8cbb 100644 --- a/docs/sources/reference/components/discovery.consul.md +++ b/docs/sources/reference/components/discovery.consul.md @@ -22,30 +22,30 @@ discovery.consul "LABEL" { The following arguments are supported: -Name | Type | Description | Default | Required --------------------------|---------------------|-------------------------------------------------------------------------------------------------------------------|------------------|--------- -`server` | `string` | Host and port of the Consul API. | `localhost:8500` | no -`token` | `secret` | Secret token used to access the Consul API. | | no -`datacenter` | `string` | Datacenter to query. If not provided, the default is used. | | no -`namespace` | `string` | Namespace to use (only supported in Consul Enterprise). | | no -`partition` | `string` | Admin partition to use (only supported in Consul Enterprise). | | no -`tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no -`scheme` | `string` | The scheme to use when talking to Consul. | `http` | no -`username` | `string` | The username to use (deprecated in favor of the basic_auth configuration). | | no -`password` | `secret` | The password to use (deprecated in favor of the basic_auth configuration). | | no -`allow_stale` | `bool` | Allow stale Consul results (see [official documentation][consistency documentation]). Will reduce load on Consul. | `true` | no -`services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no -`tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no -`node_meta` | `map(string)` | Node metadata key/value pairs to filter nodes for a given service. | | no -`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no -`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no -`bearer_token` | `secret` | Bearer token to authenticate with. | | no -`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no -`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no -`proxy_url` | `string` | HTTP proxy to send requests through. | | no -`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no -`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. 
| `false` | no -`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no +Name | Type | Description | Default | Required +-------------------------|---------------------|-----------------------------------------------------------------------------------------------------------------|------------------|--------- +`server` | `string` | Host and port of the Consul API. | `localhost:8500` | no +`token` | `secret` | Secret token used to access the Consul API. | | no +`datacenter` | `string` | Datacenter to query. If not provided, the default is used. | | no +`namespace` | `string` | Namespace to use. Only supported in Consul Enterprise. | | no +`partition` | `string` | Admin partition to use. Only supported in Consul Enterprise. | | no +`tag_separator` | `string` | The string by which Consul tags are joined into the tag label. | `,` | no +`scheme` | `string` | The scheme to use when talking to Consul. | `http` | no +`username` | `string` | The username to use. Deprecated in favor of the `basic_auth` configuration. | | no +`password` | `secret` | The password to use. Deprecated in favor of the `basic_auth` configuration. | | no +`allow_stale` | `bool` | Allow stale Consul results. Reduces load on Consul. Refer to the [Consul documentation][] for more information. | `true` | no +`services` | `list(string)` | A list of services for which targets are retrieved. If omitted, all services are scraped. | | no +`tags` | `list(string)` | An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list. | | no +`node_meta` | `map(string)` | Node metadata key/value pairs to filter nodes for a given service. | | no +`refresh_interval` | `duration` | Frequency to refresh list of containers. | `"30s"` | no +`bearer_token_file` | `string` | File containing a bearer token to authenticate with. | | no +`bearer_token` | `secret` | Bearer token to authenticate with. | | no +`enable_http2` | `bool` | Whether HTTP2 is supported for requests. | `true` | no +`follow_redirects` | `bool` | Whether redirects returned by the server should be followed. | `true` | no +`proxy_url` | `string` | HTTP proxy to send requests through. | | no +`no_proxy` | `string` | Comma-separated list of IP addresses, CIDR notations, and domain names to exclude from proxying. | | no +`proxy_from_environment` | `bool` | Use the proxy URL indicated by environment variables. | `false` | no +`proxy_connect_header` | `map(list(secret))` | Specifies headers to send to proxies during CONNECT requests. | | no At most, one of the following can be provided: - [`bearer_token` argument](#arguments). @@ -56,7 +56,7 @@ Name | Type | Description {{< docs/shared lookup="reference/components/http-client-proxy-config-description.md" source="alloy" version="" >}} -[consistency documentation]: https://www.consul.io/api/features/consistency.html +[Consul documentation]: https://www.consul.io/api/features/consistency.html [arguments]: #arguments ## Blocks @@ -64,13 +64,13 @@ Name | Type | Description The following blocks are supported inside the definition of `discovery.consul`: -Hierarchy | Block | Description | Required ---------------------|-------------------|----------------------------------------------------------|--------- -basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -authorization | [authorization][] | Configure generic authorization to the endpoint. 
| no -oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +Hierarchy | Block | Description | Required +--------------------|-------------------|------------------------------------------------------------|--------- +basic_auth | [basic_auth][] | Configure `basic_auth` for authenticating to the endpoint. | no +authorization | [authorization][] | Configure generic authorization to the endpoint. | no +oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no The `>` symbol indicates deeper levels of nesting. For example, `oauth2 > tls_config` refers to a `tls_config` block defined inside an `oauth2` block. @@ -106,19 +106,19 @@ Name | Type | Description Each target includes the following labels: -* `__meta_consul_address`: the address of the target. -* `__meta_consul_dc`: the datacenter name for the target. -* `__meta_consul_health`: the health status of the service. -* `__meta_consul_partition`: the admin partition name where the service is registered. -* `__meta_consul_metadata_`: each node metadata key value of the target. -* `__meta_consul_node`: the node name defined for the target. -* `__meta_consul_service_address`: the service address of the target. -* `__meta_consul_service_id`: the service ID of the target. -* `__meta_consul_service_metadata_`: each service metadata key value of the target. -* `__meta_consul_service_port`: the service port of the target. -* `__meta_consul_service`: the name of the service the target belongs to. -* `__meta_consul_tagged_address_`: each node tagged address key value of the target. -* `__meta_consul_tags`: the list of tags of the target joined by the tag separator. +* `__meta_consul_address`: The address of the target. +* `__meta_consul_dc`: The datacenter name for the target. +* `__meta_consul_health`: The health status of the service. +* `__meta_consul_partition`: The admin partition name where the service is registered. +* `__meta_consul_metadata_`: Each node metadata key value of the target. +* `__meta_consul_node`: The node name defined for the target. +* `__meta_consul_service_address`: The service address of the target. +* `__meta_consul_service_id`: The service ID of the target. +* `__meta_consul_service_metadata_`: Each service metadata key value of the target. +* `__meta_consul_service_port`: The service port of the target. +* `__meta_consul_service`: The name of the service the target belongs to. +* `__meta_consul_tagged_address_`: Each node tagged address key value of the target. +* `__meta_consul_tags`: The list of tags of the target joined by the tag separator. ## Component health @@ -127,11 +127,11 @@ In those cases, exported fields retain their last healthy values. ## Debug information -`discovery.consul` does not expose any component-specific debug information. +`discovery.consul` doesn't expose any component-specific debug information. ## Debug metrics -`discovery.consul` does not expose any component-specific debug metrics. +`discovery.consul` doesn't expose any component-specific debug metrics. ## Example @@ -164,8 +164,8 @@ prometheus.remote_write "demo" { ``` Replace the following: - _``_: The URL of the Prometheus remote_write-compatible server to send metrics to. 
- - _``_: The username to use for authentication to the remote_write API. - - _``_: The password to use for authentication to the remote_write API. + - _``_: The username to use for authentication to the `remote_write` API. + - _``_: The password to use for authentication to the `remote_write` API. diff --git a/docs/sources/reference/components/loki.process.md b/docs/sources/reference/components/loki.process.md index 5b8846137b..8172bd6403 100644 --- a/docs/sources/reference/components/loki.process.md +++ b/docs/sources/reference/components/loki.process.md @@ -1423,7 +1423,7 @@ the stage should attempt to parse as a timestamp. The `format` field defines _how_ that source should be parsed. -First off, the `format` can be set to one of the following shorthand values for commonly-used forms: +The `format` can be set to one of the following shorthand values for commonly used forms: ``` ANSIC: Mon Jan _2 15:04:05 2006 diff --git a/docs/sources/reference/components/loki.source.api.md b/docs/sources/reference/components/loki.source.api.md index c794147519..e53c80f908 100644 --- a/docs/sources/reference/components/loki.source.api.md +++ b/docs/sources/reference/components/loki.source.api.md @@ -26,13 +26,17 @@ loki.source.api "LABEL" { } ``` -The component will start HTTP server on the configured port and address with the following endpoints: +The component will start an HTTP server on the configured port and address with the following endpoints: - `/loki/api/v1/push` - accepting `POST` requests compatible with [Loki push API][loki-push-api], for example, from another {{< param "PRODUCT_NAME" >}}'s [`loki.write`][loki.write] component. -- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in body. This can be used to send NDJSON or plaintext logs. This is compatible with promtail's push API endpoint - see [promtail's documentation][promtail-push-api] for more information. NOTE: when this endpoint is used, the incoming timestamps cannot be used and the `use_incoming_timestamp = true` setting will be ignored. -- `/ready` - accepting `GET` requests - can be used to confirm the server is reachable and healthy. -- `/api/v1/push` - internally reroutes to `/loki/api/v1/push` -- `/api/v1/raw` - internally reroutes to `/loki/api/v1/raw` +- `/loki/api/v1/raw` - accepting `POST` requests with newline-delimited log lines in body. + This can be used to send NDJSON or plain text logs. + This is compatible with the Promtail push API endpoint. + Refer to the [Promtail documentation][promtail-push-api] for more information. + When this endpoint is used, the incoming timestamps can't be used and the `use_incoming_timestamp = true` setting will be ignored. +- `/ready` - accepting `GET` requests. Can be used to confirm the server is reachable and healthy. +- `/api/v1/push` - internally reroutes to `/loki/api/v1/push`. +- `/api/v1/raw` - internally reroutes to `/loki/api/v1/raw`. [promtail-push-api]: https://grafana.com/docs/loki/latest/clients/promtail/configuration/#loki_push_api @@ -48,8 +52,7 @@ Name | Type | Description `labels` | `map(string)` | The labels to associate with each received logs record. | `{}` | no `relabel_rules` | `RelabelRules` | Relabeling rules to apply on log entries. | `{}` | no -The `relabel_rules` field can make use of the `rules` export value from a -[`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. 
+The `relabel_rules` field can make use of the `rules` export value from a [`loki.relabel`][loki.relabel] component to apply one or more relabeling rules to log entries before they're forwarded to the list of receivers in `forward_to`. [loki.relabel]: ../loki.relabel/ @@ -69,7 +72,7 @@ Hierarchy | Name | Description | Requ ## Exported fields -`loki.source.api` does not export any fields. +`loki.source.api` doesn't export any fields. ## Component health @@ -86,7 +89,9 @@ The following are some of the metrics that are exposed when this component is us ## Example -This example starts an HTTP server on `0.0.0.0` address and port `9999`. The server receives log entries and forwards them to a `loki.write` component while adding a `forwarded="true"` label. The `loki.write` component will send the logs to the specified loki instance using basic auth credentials provided. +This example starts an HTTP server on `0.0.0.0` address and port `9999`. +The server receives log entries and forwards them to a `loki.write` component while adding a `forwarded="true"` label. +The `loki.write` component will send the logs to the specified loki instance using basic auth credentials provided. ```alloy loki.write "local" { diff --git a/docs/sources/reference/components/loki.source.awsfirehose.md b/docs/sources/reference/components/loki.source.awsfirehose.md index c686ddd00f..ed1f5e31ab 100644 --- a/docs/sources/reference/components/loki.source.awsfirehose.md +++ b/docs/sources/reference/components/loki.source.awsfirehose.md @@ -25,7 +25,7 @@ the raw records to Loki. The decoding process goes as follows: - AWS Firehose sends batched requests - Each record is treated individually - For each `record` received in each request: - - If the `record` comes from a [CloudWatch logs subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#DestinationKinesisExample), it is decoded and each logging event is written to Loki + - If the `record` comes from a [CloudWatch logs subscription filter](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#DestinationKinesisExample), it's decoded and each logging event is written to Loki - All other records are written raw to Loki The component exposes some internal labels, available for relabeling. The following tables describes internal labels available @@ -47,7 +47,7 @@ exposed as follows: | `__aws_cw_matched_filters` | The list of subscription filter names that match the originating log data. The list is encoded as a comma-separated list. | `Destination,Destination2` | | `__aws_cw_msg_type` | Data messages will use the `DATA_MESSAGE` type. Sometimes CloudWatch Logs may emit Kinesis Data Streams records with a `CONTROL_MESSAGE` type, mainly for checking if the destination is reachable. | `DATA_MESSAGE` | -See [Examples](#example) for a full example configuration showing how to enrich each log entry with these labels. +Refer to [Examples](#example) for a full example configuration showing how to enrich each log entry with these labels. ## Usage @@ -55,7 +55,7 @@ See [Examples](#example) for a full example configuration showing how to enrich loki.source.awsfirehose "LABEL" { http { listen_address = "LISTEN_ADDRESS" - listen_port = PORT + listen_port = PORT } forward_to = RECEIVER_LIST } @@ -119,7 +119,7 @@ The following blocks are supported inside the definition of `loki.source.awsfire ## Exported fields -`loki.source.awsfirehose` does not export any fields. +`loki.source.awsfirehose` doesn't export any fields. 
## Component health diff --git a/docs/sources/reference/components/loki.source.cloudflare.md b/docs/sources/reference/components/loki.source.cloudflare.md index 6b2037aaec..67856952ce 100644 --- a/docs/sources/reference/components/loki.source.cloudflare.md +++ b/docs/sources/reference/components/loki.source.cloudflare.md @@ -7,15 +7,11 @@ title: loki.source.cloudflare # loki.source.cloudflare -`loki.source.cloudflare` pulls logs from the Cloudflare Logpull API and -forwards them to other `loki.*` components. +`loki.source.cloudflare` pulls logs from the Cloudflare Logpull API and forwards them to other `loki.*` components. -These logs contain data related to the connecting client, the request path -through the Cloudflare network, and the response from the origin web server and -can be useful for enriching existing logs on an origin server. +These logs contain data related to the connecting client, the request path through the Cloudflare network, and the response from the origin web server and can be useful for enriching existing logs on an origin server. -Multiple `loki.source.cloudflare` components can be specified by giving them -different labels. +You can specify multiple `loki.source.cloudflare` components by giving them different labels. ## Usage @@ -71,27 +67,23 @@ plus any extra fields provided via `additional_fields` argument. "BotScore", "BotScoreSrc", "BotTags", "ClientRequestBytes", "ClientSrcPort", "ClientXRequestedWith", "CacheTieredFill", "EdgeResponseCompressionRatio", "EdgeServerIP", "FirewallMatchesSources", "FirewallMatchesActions", "FirewallMatchesRuleIDs", "OriginResponseBytes", "OriginResponseTime", "ClientDeviceType", "WAFFlags", "WAFMatchedVar", "EdgeColoID", "RequestHeaders", "ResponseHeaders", "ClientRequestSource"` ``` -plus any extra fields provided via `additional_fields` argument (this is still relevant in this case if new fields are made available via Cloudflare API but are not yet included in `all`). +plus any extra fields provided via `additional_fields` argument. This is still relevant in this case if new fields are made available via Cloudflare API but are not yet included in `all`. * `custom` includes only the fields defined in `additional_fields`. -The component saves the last successfully-fetched timestamp in its positions -file. If a position is found in the file for a given zone ID, the component -restarts pulling logs from that timestamp. When no position is found, the -component starts pulling logs from the current time. +The component saves the last successfully fetched timestamp in its positions file. +If a position is found in the file for a given zone ID, the component restarts pulling logs from that timestamp. +When no position is found, the component starts pulling logs from the current time. -Logs are fetched using multiple `workers` which request the last available -`pull_range` repeatedly. It is possible to fall behind due to having too many -log lines to process for each pull; adding more workers, decreasing the pull -range, or decreasing the quantity of fields fetched can mitigate this -performance issue. +Logs are fetched using multiple `workers` which request the last available `pull_range` repeatedly. +It's possible to fall behind due to having too many log lines to process for each pull. +Adding more workers, decreasing the pull range, or decreasing the quantity of fields fetched can mitigate this performance issue. 
-The last timestamp fetched by the component is recorded in the -`loki_source_cloudflare_target_last_requested_end_timestamp` debug metric. +The last timestamp fetched by the component is recorded in the `loki_source_cloudflare_target_last_requested_end_timestamp` debug metric. -All incoming Cloudflare log entries are in JSON format. You can make use of the -`loki.process` component and a JSON processing stage to extract more labels or -change the log line format. A sample log looks like this: +All incoming Cloudflare log entries are in JSON format. +You can use the `loki.process` component and a JSON processing stage to extract more labels or change the log line format. +A sample log looks like this: ```json { @@ -165,12 +157,11 @@ change the log line format. A sample log looks like this: ## Exported fields -`loki.source.cloudflare` does not export any fields. +`loki.source.cloudflare` doesn't export any fields. ## Component health -`loki.source.cloudflare` is only reported as unhealthy if given an invalid -configuration. +`loki.source.cloudflare` is only reported as unhealthy if given an invalid configuration. ## Debug information @@ -178,7 +169,7 @@ configuration. * Whether the target is ready and reading logs from the API. * The Cloudflare zone ID. * The last error reported, if any. -* The stored positions file entry, as the combination of zone_id, labels and last fetched timestamp. +* The stored positions file entry, as the combination of `zone_id`, labels and last fetched timestamp. * The last timestamp fetched. * The set of fields being fetched. diff --git a/docs/sources/reference/components/loki.write.md b/docs/sources/reference/components/loki.write.md index cb864993ee..097587df85 100644 --- a/docs/sources/reference/components/loki.write.md +++ b/docs/sources/reference/components/loki.write.md @@ -6,7 +6,7 @@ title: loki.write # loki.write -`loki.write` receives log entries from other loki components and sends them over the network using Loki's `logproto` format. +`loki.write` receives log entries from other loki components and sends them over the network using the Loki `logproto` format. Multiple `loki.write` components can be specified by giving them different labels. @@ -34,20 +34,19 @@ Name | Type | Description The following blocks are supported inside the definition of `loki.write`: -Hierarchy | Block | Description | Required --------------------------------|-------------------|----------------------------------------------------------|--------- -endpoint | [endpoint][] | Location to send logs to. | no -wal | [wal][] | Write-ahead log configuration. | no -endpoint > basic_auth | [basic_auth][] | Configure basic_auth for authenticating to the endpoint. | no -endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no -endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no -endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no -| endpoint > queue_config | [queue_config][] | When WAL is enabled, configures the queue client. | no | - -The `>` symbol indicates deeper levels of nesting. For example, `endpoint > -basic_auth` refers to a `basic_auth` block defined inside an -`endpoint` block. 
+Hierarchy | Block | Description | Required +-------------------------------|-------------------|------------------------------------------------------------|--------- +endpoint | [endpoint][] | Location to send logs to. | no +wal | [wal][] | Write-ahead log configuration. | no +endpoint > basic_auth | [basic_auth][] | Configure `basic_auth` for authenticating to the endpoint. | no +endpoint > authorization | [authorization][] | Configure generic authorization to the endpoint. | no +endpoint > oauth2 | [oauth2][] | Configure OAuth2 for authenticating to the endpoint. | no +endpoint > oauth2 > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +endpoint > tls_config | [tls_config][] | Configure TLS settings for connecting to the endpoint. | no +endpoint > queue_config | [queue_config][] | When WAL is enabled, configures the queue client. | no + +The `>` symbol indicates deeper levels of nesting. +For example, `endpoint > basic_auth` refers to a `basic_auth` block defined inside an `endpoint` block. [endpoint]: #endpoint-block [wal]: #wal-block @@ -59,8 +58,8 @@ basic_auth` refers to a `basic_auth` block defined inside an ### endpoint block -The `endpoint` block describes a single location to send logs to. Multiple -`endpoint` blocks can be provided to send logs to multiple locations. +The `endpoint` block describes a single location to send logs to. +You can use multiple `endpoint` blocks to send logs to multiple locations. The following arguments are supported: @@ -131,8 +130,8 @@ enabled, the retry mechanism will be governed by the backoff configuration speci ### queue_config block (experimental) -The optional `queue_config` block configures, when WAL is enabled (see [Write-Ahead block](#wal-block-experimental)), how the -underlying client queues batches of logs to be sent to Loki. +The optional `queue_config` block configures, when WAL is enabled, how the underlying client queues batches of logs sent to Loki. +Refer to [Write-Ahead block](#wal-block-experimental) for more information. The following arguments are supported: diff --git a/docs/sources/reference/components/otelcol.auth.oauth2.md b/docs/sources/reference/components/otelcol.auth.oauth2.md index 1582c342fa..d7ba252c32 100644 --- a/docs/sources/reference/components/otelcol.auth.oauth2.md +++ b/docs/sources/reference/components/otelcol.auth.oauth2.md @@ -6,19 +6,19 @@ title: otelcol.auth.oauth2 # otelcol.auth.oauth2 -`otelcol.auth.oauth2` exposes a `handler` that can be used by other `otelcol` -components to authenticate requests using OAuth 2.0. +`otelcol.auth.oauth2` exposes a `handler` that can be used by other `otelcol` components to authenticate requests using OAuth 2.0. The authorization tokens can be used by HTTP and gRPC based OpenTelemetry exporters. -This component can fetch and refresh expired tokens automatically. For further details about -OAuth 2.0 Client Credentials flow (2-legged workflow) see [this document](https://datatracker.ietf.org/doc/html/rfc6749#section-4.4). +This component can fetch and refresh expired tokens automatically. +Refer to the [OAuth 2.0 Authorization Framework](https://datatracker.ietf.org/doc/html/rfc6749#section-4.4) for more information about the Auth 2.0 Client Credentials flow. -> **NOTE**: `otelcol.auth.oauth2` is a wrapper over the upstream OpenTelemetry -> Collector `oauth2client` extension. Bug reports or feature requests will be -> redirected to the upstream repository, if necessary. 
-Multiple `otelcol.auth.oauth2` components can be specified by giving them -different labels. +{{< admonition type="note" >}} +`otelcol.auth.oauth2` is a wrapper over the upstream OpenTelemetry Collector `oauth2client` extension. +Bug reports or feature requests will be redirected to the upstream repository, if necessary. +{{< /admonition >}} + +You can specify multiple `otelcol.auth.oauth2` components by giving them different labels. ## Usage @@ -48,14 +48,12 @@ The `timeout` argument is used both for requesting initial tokens and for refres At least one of the `client_id` and `client_id_file` pair of arguments must be set. In case both are set, `client_id_file` takes precedence. -Similarly, at least one of the `client_secret` and `client_secret_file` pair of -arguments must be set. In case both are set, `client_secret_file` also takes -precedence. +Similarly, at least one of the `client_secret` and `client_secret_file` pair of arguments must be set. +If both are set, `client_secret_file` also takes precedence. ## Blocks -The following blocks are supported inside the definition of -`otelcol.auth.oauth2`: +The following blocks are supported inside the definition of `otelcol.auth.oauth2`: Hierarchy | Block | Description | Required ----------|---------|------------------------------------|--------- @@ -85,12 +83,11 @@ Name | Type | Description ## Component health -`otelcol.auth.oauth2` is only reported as unhealthy if given an invalid -configuration. +`otelcol.auth.oauth2` is only reported as unhealthy if given an invalid configuration. ## Debug information -`otelcol.auth.oauth2` does not expose any component-specific debug information. +`otelcol.auth.oauth2` doesn't expose any component-specific debug information. ## Example diff --git a/docs/sources/reference/components/otelcol.exporter.otlphttp.md b/docs/sources/reference/components/otelcol.exporter.otlphttp.md index ddce9c350a..bfe39e3490 100644 --- a/docs/sources/reference/components/otelcol.exporter.otlphttp.md +++ b/docs/sources/reference/components/otelcol.exporter.otlphttp.md @@ -6,15 +6,14 @@ title: otelcol.exporter.otlphttp # otelcol.exporter.otlphttp -`otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` -components and writes them over the network using the OTLP HTTP protocol. +`otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. -> **NOTE**: `otelcol.exporter.otlphttp` is a wrapper over the upstream -> OpenTelemetry Collector `otlphttp` exporter. Bug reports or feature requests -> will be redirected to the upstream repository, if necessary. +{{< admonition type="note" >}} +`otelcol.exporter.otlphttp` is a wrapper over the upstream OpenTelemetry Collector `otlphttp` exporter. +Bug reports or feature requests will be redirected to the upstream repository, if necessary. +{{< /admonition >}} -Multiple `otelcol.exporter.otlphttp` components can be specified by giving them -different labels. +You can specify multiple `otelcol.exporter.otlphttp` components by giving them different labels. 
## Usage @@ -30,31 +29,29 @@ otelcol.exporter.otlphttp "LABEL" { `otelcol.exporter.otlphttp` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- +Name | Type | Description | Default | Required +-------------------|----------|----------------------------------|-----------------------------------|--------- `metrics_endpoint` | `string` | The endpoint to send metrics to. | `client.endpoint + "/v1/metrics"` | no `logs_endpoint` | `string` | The endpoint to send logs to. | `client.endpoint + "/v1/logs"` | no `traces_endpoint` | `string` | The endpoint to send traces to. | `client.endpoint + "/v1/traces"` | no -The default value depends on the `endpoint` field set in the required `client` -block. If set, these arguments override the `client.endpoint` field for the -corresponding signal. +The default value depends on the `endpoint` field set in the required `client` block. +If set, these arguments override the `client.endpoint` field for the corresponding signal. ## Blocks -The following blocks are supported inside the definition of -`otelcol.exporter.otlphttp`: +The following blocks are supported inside the definition of `otelcol.exporter.otlphttp`: -Hierarchy | Block | Description | Required ---------- | ----- | ----------- | -------- -client | [client][] | Configures the HTTP server to send telemetry data to. | yes -client > tls | [tls][] | Configures TLS for the HTTP client. | no -sending_queue | [sending_queue][] | Configures batching of data before sending. | no -retry_on_failure | [retry_on_failure][] | Configures retry mechanism for failed requests. | no -debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no +Hierarchy | Block | Description | Required +-----------------|----------------------|----------------------------------------------------------------------------|--------- +client | [client][] | Configures the HTTP server to send telemetry data to. | yes +client > tls | [tls][] | Configures TLS for the HTTP client. | no +sending_queue | [sending_queue][] | Configures batching of data before sending. | no +retry_on_failure | [retry_on_failure][] | Configures retry mechanism for failed requests. | no +debug_metrics | [debug_metrics][] | Configures the metrics that this component generates to monitor its state. | no -The `>` symbol indicates deeper levels of nesting. For example, `client > tls` -refers to a `tls` block defined inside a `client` block. +The `>` symbol indicates deeper levels of nesting. +For example, `client > tls` refers to a `tls` block defined inside a `client` block. [client]: #client-block [tls]: #tls-block @@ -68,29 +65,29 @@ The `client` block configures the HTTP client used by the component. The following arguments are supported: -Name | Type | Description | Default | Required -------------------------- | -------------------------- | ----------- | ------- | -------- -`endpoint` | `string` | The target URL to send telemetry data to. | | yes -`encoding` | `string` | The encoding to use for messages. Should be either `"proto"` or `"json"`. | `"proto"` | no -`read_buffer_size` | `string` | Size of the read buffer the HTTP client uses for reading server responses. | `0` | no -`write_buffer_size` | `string` | Size of the write buffer the HTTP client uses for writing requests. | `"512KiB"` | no -`timeout` | `duration` | Time to wait before marking a request as failed. 
| `"30s"` | no -`headers` | `map(string)` | Additional headers to send with the request. | `{}` | no -`compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no -`max_idle_conns` | `int` | Limits the number of idle HTTP connections the client can keep open. | `100` | no -`max_idle_conns_per_host` | `int` | Limits the number of idle HTTP connections the host can keep open. | `0` | no -`max_conns_per_host` | `int` | Limits the total (dialing,active, and idle) number of connections per host. | `0` | no -`idle_conn_timeout` | `duration` | Time to wait before an idle connection closes itself. | `"90s"` | no -`disable_keep_alives` | `bool` | Disable HTTP keep-alive. | `false` | no -`http2_read_idle_timeout` | `duration` | Timeout after which a health check using ping frame will be carried out if no frame is received on the connection. | `0s` | no -`http2_ping_timeout` | `duration` | Timeout after which the connection will be closed if a response to Ping is not received. | `15s` | no -`auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no +Name | Type | Description | Default | Required +--------------------------|----------------------------|--------------------------------------------------------------------------------------------------------------------|------------|--------- +`endpoint` | `string` | The target URL to send telemetry data to. | | yes +`encoding` | `string` | The encoding to use for messages. Should be either `"proto"` or `"json"`. | `"proto"` | no +`read_buffer_size` | `string` | Size of the read buffer the HTTP client uses for reading server responses. | `0` | no +`write_buffer_size` | `string` | Size of the write buffer the HTTP client uses for writing requests. | `"512KiB"` | no +`timeout` | `duration` | Time to wait before marking a request as failed. | `"30s"` | no +`headers` | `map(string)` | Additional headers to send with the request. | `{}` | no +`compression` | `string` | Compression mechanism to use for requests. | `"gzip"` | no +`max_idle_conns` | `int` | Limits the number of idle HTTP connections the client can keep open. | `100` | no +`max_idle_conns_per_host` | `int` | Limits the number of idle HTTP connections the host can keep open. | `0` | no +`max_conns_per_host` | `int` | Limits the total (dialing,active, and idle) number of connections per host. | `0` | no +`idle_conn_timeout` | `duration` | Time to wait before an idle connection closes itself. | `"90s"` | no +`disable_keep_alives` | `bool` | Disable HTTP keep-alive. | `false` | no +`http2_read_idle_timeout` | `duration` | Timeout after which a health check using ping frame will be carried out if no frame is received on the connection. | `0s` | no +`http2_ping_timeout` | `duration` | Timeout after which the connection will be closed if a response to Ping isn't received. | `15s` | no +`auth` | `capsule(otelcol.Handler)` | Handler from an `otelcol.auth` component to use for authenticating requests. | | no When setting `headers`, note that: - Certain headers such as `Content-Length` and `Connection` are automatically written when needed and values in `headers` may be ignored. - The `Host` header is automatically derived from the `endpoint` value. However, this automatic assignment can be overridden by explicitly setting a `Host` header in `headers`. -Setting `disable_keep_alives` to `true` will result in significant overhead establishing a new HTTP(s) connection for every request. 
+Setting `disable_keep_alives` to `true` will result in significant overhead establishing a new HTTP or HTTPS connection for every request. Before enabling this option, consider whether changes to idle connection settings can achieve your goal. If `http2_ping_timeout` is unset or set to `0s`, it will default to `15s`. @@ -101,22 +98,19 @@ If `http2_read_idle_timeout` is unset or set to `0s`, then no health check will ### tls block -The `tls` block configures TLS settings used for the connection to the HTTP -server. +The `tls` block configures TLS settings used for the connection to the HTTP server. {{< docs/shared lookup="reference/components/otelcol-tls-client-block.md" source="alloy" version="" >}} ### sending_queue block -The `sending_queue` block configures an in-memory buffer of batches before data is sent -to the HTTP server. +The `sending_queue` block configures an in-memory buffer of batches before data is sent to the HTTP server. {{< docs/shared lookup="reference/components/otelcol-queue-block.md" source="alloy" version="" >}} ### retry_on_failure block -The `retry_on_failure` block configures how failed requests to the HTTP server are -retried. +The `retry_on_failure` block configures how failed requests to the HTTP server are retried. {{< docs/shared lookup="reference/components/otelcol-retry-block.md" source="alloy" version="" >}} @@ -128,27 +122,23 @@ retried. The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +--------|--------------------|----------------------------------------------------------------- `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. -`input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, -logs, or traces). +`input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, logs, or traces). ## Component health -`otelcol.exporter.otlphttp` is only reported as unhealthy if given an invalid -configuration. +`otelcol.exporter.otlphttp` is only reported as unhealthy if given an invalid configuration. ## Debug information -`otelcol.exporter.otlphttp` does not expose any component-specific debug -information. +`otelcol.exporter.otlphttp` doesn't expose any component-specific debug information. ## Example -This example creates an exporter to send data to a locally running Grafana -Tempo without TLS: +This example creates an exporter to send data to a locally running Grafana Tempo without TLS: ```alloy otelcol.exporter.otlphttp "tempo" { diff --git a/docs/sources/reference/components/otelcol.processor.discovery.md b/docs/sources/reference/components/otelcol.processor.discovery.md index 93d1f58e49..821bcb1084 100644 --- a/docs/sources/reference/components/otelcol.processor.discovery.md +++ b/docs/sources/reference/components/otelcol.processor.discovery.md @@ -6,11 +6,9 @@ title: otelcol.processor.discovery # otelcol.processor.discovery -`otelcol.processor.discovery` accepts traces telemetry data from other `otelcol` -components. It can be paired with `discovery.*` components, which supply a list -of labels for each discovered target. -`otelcol.processor.discovery` adds resource attributes to spans which have a hostname -matching the one in the `__address__` label provided by the `discovery.*` component. +`otelcol.processor.discovery` accepts traces telemetry data from other `otelcol` components. 
+It can be paired with `discovery.*` components, which supply a list of labels for each discovered target. +`otelcol.processor.discovery` adds resource attributes to spans which have a hostname matching the one in the `__address__` label provided by the `discovery.*` component. {{< admonition type="note" >}} `otelcol.processor.discovery` is a custom component unrelated to any processors from the OpenTelemetry Collector. @@ -30,16 +28,15 @@ adding resource attributes via `otelcol.processor.discovery`: only compatible with Prometheus naming conventions makes it hard to follow OpenTelemetry semantic conventions in `otelcol.processor.discovery`. -If your use case is to add resource attributes which contain Kubernetes metadata, -consider using `otelcol.processor.k8sattributes` instead. +If your use case is to add resource attributes which contain Kubernetes metadata, consider using `otelcol.processor.k8sattributes` instead. ------ The main use case for `otelcol.processor.discovery` is for users who migrate to {{< param "PRODUCT_NAME" >}} -from Grafana Agent Static mode's `prom_sd_operation_type`/`prom_sd_pod_associations` [configuration options][Traces]. +from Grafana Agent Static mode `prom_sd_operation_type`/`prom_sd_pod_associations` [configuration options][Traces]. [Prometheus data model]: https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels [OTEL sem conv]: https://github.com/open-telemetry/semantic-conventions/blob/main/docs/README.md -[Traces]: http://grafana.com/docs/agent/latest/static/configuration/traces-config/ +[Traces]: https://grafana.com/docs/agent/latest/static/configuration/traces-config/ {{< /admonition >}} ## Usage @@ -57,15 +54,15 @@ otelcol.processor.discovery "LABEL" { `otelcol.processor.discovery` supports the following arguments: -Name | Type | Description | Default | Required ------------------|---------------------|-----------------------------------------------------------------------|----------|--------- -`targets` | `list(map(string))` | List of target labels to apply to the spans. | | yes -`operation_type` | `string` | Configures whether to update a span's attribute if it already exists. | `upsert` | no -`pod_associations` | `list(string)` | Configures how to decide the hostname of the span. | `["ip", "net.host.ip", "k8s.pod.ip", "hostname", "connection"]` | no +Name | Type | Description | Default | Required +-------------------|---------------------|-----------------------------------------------------------------------|-----------------------------------------------------------------|--------- +`targets` | `list(map(string))` | List of target labels to apply to the spans. | | yes +`operation_type` | `string` | Configures whether to update a span's attribute if it already exists. | `upsert` | no +`pod_associations` | `list(string)` | Configures how to decide the hostname of the span. | `["ip", "net.host.ip", "k8s.pod.ip", "hostname", "connection"]` | no `targets` could come from `discovery.*` components: 1. The `__address__` label will be matched against the IP address of incoming spans. - * If `__address__` contains a port, it is ignored. + * If `__address__` contains a port, it is ignored. 2. If a match is found, then relabeling rules are applied. * Note that labels starting with `__` will not be added to the spans. @@ -82,9 +79,9 @@ The supported values for `pod_associations` are: * `hostname`: The hostname will be sourced from a `host.name` resource attribute. 
* `connection`: The hostname will be sourced from the context from the incoming requests (gRPC and HTTP). -If multiple `pod_associations` methods are enabled, the order of evaluation is honored. -For example, when `pod_associations` is `["ip", "net.host.ip"]`, `"net.host.ip"` may be matched -only if `"ip"` has not already matched. +If multiple `pod_associations` methods are enabled, the order of evaluation is honored. +For example, when `pod_associations` is `["ip", "net.host.ip"]`, `"net.host.ip"` may be matched +only if `"ip"` hasn't already matched. ## Blocks @@ -119,7 +116,7 @@ configuration. ## Debug information -`otelcol.processor.discovery` does not expose any component-specific debug +`otelcol.processor.discovery` doesn't expose any component-specific debug information. ## Examples @@ -166,10 +163,9 @@ otelcol.processor.discovery "default" { ### Using a preconfigured list of attributes -It's not necessary to use a discovery component. In the example below, both a `test_label` and -a `test.label.with.dots` resource attributes will be added to a span if its IP address is -"1.2.2.2". The `__internal_label__` will be not be added to the span, because it begins with -a double underscore (`__`). +It's not necessary to use a discovery component. +In the example below, both a `test_label` and a `test.label.with.dots` resource attribute will be added to a span if its IP address is "1.2.2.2". +The `__internal_label__` won't be added to the span, because it begins with a double underscore (`__`). ```alloy otelcol.processor.discovery "default" { diff --git a/docs/sources/reference/components/otelcol.processor.transform.md b/docs/sources/reference/components/otelcol.processor.transform.md index c3a50b111c..4ecd139f5e 100644 --- a/docs/sources/reference/components/otelcol.processor.transform.md +++ b/docs/sources/reference/components/otelcol.processor.transform.md @@ -26,7 +26,7 @@ there is also a set of metrics-only functions: * [Booleans][OTTL booleans]: * `not true` * `not IsMatch(name, "http_.*")` -* [Boolean Expressions][OTTL boolean expressions] consisting of a `where` followed by one or more booleans: +* [Boolean Expressions][OTTL boolean expressions] consisting of a `where` followed by one or more boolean values: * `set(attributes["whose_fault"], "ours") where attributes["http.status"] == 500` * `set(attributes["whose_fault"], "theirs") where attributes["http.status"] == 400 or attributes["http.status"] == 404` * [Math expressions][OTTL math expressions]: @@ -52,28 +52,27 @@ Raw strings are generally more convenient for writing OTTL statements. {{< /admonition >}} {{< admonition type="note" >}} -`otelcol.processor.transform` is a wrapper over the upstream -OpenTelemetry Collector `transform` processor. If necessary, bug reports or feature requests -will be redirected to the upstream repository. +`otelcol.processor.transform` is a wrapper over the upstream OpenTelemetry Collector `transform` processor. +If necessary, bug reports or feature requests will be redirected to the upstream repository. {{< /admonition >}} You can specify multiple `otelcol.processor.transform` components by giving them different labels. {{< admonition type="warning" >}} -`otelcol.processor.transform` allows you to modify all aspects of your telemetry. Some specific risks are given below, -but this is not an exhaustive list. It is important to understand your data before using this processor. +`otelcol.processor.transform` allows you to modify all aspects of your telemetry. 
+Some specific risks are given below, but this is not an exhaustive list. +It is important to understand your data before using this processor. - [Unsound Transformations][]: Transformations between metric data types are not defined in the [metrics data model][]. -To use these functions, you must understand the incoming data and know that it can be meaningfully converted -to a new metric data type or can be used to create new metrics. +To use these functions, you must understand the incoming data and know that it can be meaningfully converted to a new metric data type or can be used to create new metrics. - Although OTTL allows you to use the `set` function with `metric.data_type`, its implementation in the transform processor is a [no-op][]. To modify a data type, you must use a specific function such as `convert_gauge_to_sum`. - [Identity Conflict][]: Transformation of metrics can potentially affect a metric's identity, - leading to an Identity Crisis. Be especially cautious when transforming a metric name and when reducing or changing - existing attributes. Adding new attributes is safe. -- [Orphaned Telemetry][]: The processor allows you to modify `span_id`, `trace_id`, and `parent_span_id` for traces - and `span_id`, and `trace_id` logs. Modifying these fields could lead to orphaned spans or logs. + leading to an Identity Crisis. Be especially cautious when transforming a metric name and when reducing or changing existing attributes. + Adding new attributes is safe. +- [Orphaned Telemetry][]: The processor allows you to modify `span_id`, `trace_id`, and `parent_span_id` for traces and `span_id`, and `trace_id` logs. + Modifying these fields could lead to orphaned spans or logs. [Unsound Transformations]: https://github.com/open-telemetry/opentelemetry-collector/blob/{{< param "OTEL_VERSION" >}}/docs/standard-warnings.md#unsound-transformations [Identity Conflict]: https://github.com/open-telemetry/opentelemetry-collector/blob/{{< param "OTEL_VERSION" >}}/docs/standard-warnings.md#identity-conflict @@ -98,8 +97,8 @@ otelcol.processor.transform "LABEL" { `otelcol.processor.transform` supports the following arguments: -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- +Name | Type | Description | Default | Required +-------------|----------|--------------------------------------------------------------------|---------------|--------- `error_mode` | `string` | How to react to errors if they occur while processing a statement. | `"propagate"` | no The supported values for `error_mode` are: @@ -133,10 +132,10 @@ debug_metrics | [debug_metrics][] | Configures the metrics that this component g The `trace_statements` block specifies statements which transform trace telemetry signals. Multiple `trace_statements` blocks can be specified. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes -`statements` | `list(string)` | A list of OTTL statements. | | yes +Name | Type | Description | Default | Required +-------------|----------------|------------------------------------------------------------------|---------|--------- +`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes +`statements` | `list(string)` | A list of OTTL statements. 
| | yes The supported values for `context` are: * `resource`: Use when interacting only with OTLP resources (for example, resource attributes). @@ -144,17 +143,17 @@ The supported values for `context` are: * `span`: Use when interacting only with OTLP spans. * `spanevent`: Use when interacting only with OTLP span events. -See [OTTL Context][] for more information about how ot use contexts. +Refer to [OTTL Context][] for more information about how to use contexts. ### metric_statements block The `metric_statements` block specifies statements which transform metric telemetry signals. Multiple `metric_statements` blocks can be specified. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes -`statements` | `list(string)` | A list of OTTL statements. | | yes +Name | Type | Description | Default | Required +-------------|----------------|------------------------------------------------------------------|---------|--------- +`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes +`statements` | `list(string)` | A list of OTTL statements. | | yes The supported values for `context` are: * `resource`: Use when interacting only with OTLP resources (for example, resource attributes). @@ -169,23 +168,22 @@ Refer to [OTTL Context][] for more information about how to use contexts. The `log_statements` block specifies statements which transform log telemetry signals. Multiple `log_statements` blocks can be specified. -Name | Type | Description | Default | Required ----- | ---- | ----------- | ------- | -------- -`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes -`statements` | `list(string)` | A list of OTTL statements. | | yes +Name | Type | Description | Default | Required +-------------|----------------|------------------------------------------------------------------|---------|--------- +`context` | `string` | OTTL Context to use when interpreting the associated statements. | | yes +`statements` | `list(string)` | A list of OTTL statements. | | yes The supported values for `context` are: * `resource`: Use when interacting only with OTLP resources (for example, resource attributes). * `scope`: Use when interacting only with OTLP instrumentation scope (for example, the name of the instrumentation scope). * `log`: Use when interacting only with OTLP logs. -See [OTTL Context][] for more information about how ot use contexts. +Refer to [OTTL Context][] for more information about how to use contexts. ### OTTL Context Each context allows the transformation of its type of telemetry. -For example, statements associated with a `resource` context will be able to transform the resource's -`attributes` and `dropped_attributes_count`. +For example, statements associated with a `resource` context will be able to transform the resource's `attributes` and `dropped_attributes_count`. Each type of `context` defines its own paths and enums specific to that context. Refer to the OpenTelemetry documentation for a list of paths and enums for each context: @@ -205,8 +203,7 @@ Contexts __NEVER__ supply access to individual items "lower" in the protobuf def - Similarly, statements associated to a `span` __WILL NOT__ be able to access individual SpanEvents, but can access the entire SpanEvents slice. 
For practical purposes, this means that a context cannot make decisions on its telemetry based on telemetry "lower" in the structure. -For example, __the following context statement is not possible__ because it attempts to use individual datapoint -attributes in the condition of a statement associated to a `metric`: +For example, __the following context statement is not possible__ because it attempts to use individual datapoint attributes in the condition of a statement associated to a `metric`: ```alloy metric_statements { @@ -256,8 +253,8 @@ span using the `span` context, it is more efficient to use the `resource` contex The following fields are exported and can be referenced by other components: -Name | Type | Description ----- | ---- | ----------- +Name | Type | Description +--------|--------------------|----------------------------------------------------------------- `input` | `otelcol.Consumer` | A value that other components can use to send telemetry data to. `input` accepts `otelcol.Consumer` data for any telemetry signal (metrics, diff --git a/docs/sources/reference/components/prometheus.exporter.azure.md b/docs/sources/reference/components/prometheus.exporter.azure.md index 9bd4367db5..75fdfd4af2 100644 --- a/docs/sources/reference/components/prometheus.exporter.azure.md +++ b/docs/sources/reference/components/prometheus.exporter.azure.md @@ -29,8 +29,10 @@ The exporter offers the following two options for gathering metrics. The account used by {{< param "PRODUCT_NAME" >}} needs: -- When using an Azure Resource Graph query, [read access to the resources that will be queried by Resource Graph](https://learn.microsoft.com/en-us/azure/governance/resource-graph/overview#permissions-in-azure-resource-graph) -- Permissions to call the [Microsoft.Insights Metrics API](https://learn.microsoft.com/en-us/rest/api/monitor/metrics/list) which should be the `Microsoft.Insights/Metrics/Read` permission +- When using an Azure Resource Graph query, [read access to the resources that will be queried by Resource Graph](https://learn.microsoft.com/en-us/azure/governance/resource-graph/overview#permissions-in-azure-resource-graph). + +- Permissions to call the [Microsoft.Insights Metrics API](https://learn.microsoft.com/en-us/rest/api/monitor/metrics/list) which should be the `Microsoft.Insights/Metrics/Read` permission. + ## Usage @@ -57,22 +59,22 @@ prometheus.exporter.azure LABEL { You can use the following arguments to configure the exporter's behavior. Omitted fields take their default values. -| Name | Type | Description | Default | Required | -|-------------------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|----------| -| `subscriptions` | `list(string)` | List of subscriptions to scrape metrics from. | | yes | -| `resource_type` | `string` | The Azure Resource Type to scrape metrics for. | | yes | -| `metrics` | `list(string)` | The metrics to scrape from resources. | | yes | -| `resource_graph_query_filter` | `string` | The [Kusto query][] filter to apply when searching for resources. Can't be used if `regions` is set. | | no | -| `regions` | `list(string)` | The list of regions for gathering metrics and enables gathering metrics for all resources in the subscription. Can't be used if `resource_graph_query_filter` is set. 
| | no | -| `metric_aggregations` | `list(string)` | Aggregations to apply for the metrics produced. | | no | -| `timespan` | `string` | [ISO8601 Duration][] over which the metrics are being queried. | `"PT1M"` (1 minute) | no | -| `included_dimensions` | `list(string)` | List of dimensions to include on the final metrics. | | no | -| `included_resource_tags` | `list(string)` | List of resource tags to include on the final metrics. | `["owner"]` | no | -| `metric_namespace` | `string` | Namespace for `resource_type` which have multiple levels of metrics. | | no | -| `azure_cloud_environment` | `string` | Name of the cloud environment to connect to. | `"azurecloud"` | no | -| `metric_name_template` | `string` | Metric template used to expose the metrics. | `"azure_{type}_{metric}_{aggregation}_{unit}"` | no | -| `metric_help_template` | `string` | Description of the metric. | `"Azure metric {metric} for {type} with aggregation {aggregation} as {unit}"` | no | -| `validate_dimensions` | `bool` | Enable dimension validation in the azure sdk | `false` | no | +| Name | Type | Description | Default | Required | +|-------------------------------|----------------|------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|----------| +| `subscriptions` | `list(string)` | List of subscriptions to scrape metrics from. | | yes | +| `resource_type` | `string` | The Azure Resource Type to scrape metrics for. | | yes | +| `metrics` | `list(string)` | The metrics to scrape from resources. | | yes | +| `resource_graph_query_filter` | `string` | The [Kusto query][] filter to apply when searching for resources. Can't be used if `regions` is set. | | no | +| `regions` | `list(string)` | The list of regions for gathering metrics and enables gathering metrics for all resources in the subscription. Can't be used if `resource_graph_query_filter` is set. | | no | +| `metric_aggregations` | `list(string)` | Aggregations to apply for the metrics produced. | | no | +| `timespan` | `string` | [ISO8601 Duration][] over which the metrics are being queried. | `"PT1M"` (1 minute) | no | +| `included_dimensions` | `list(string)` | List of dimensions to include on the final metrics. | | no | +| `included_resource_tags` | `list(string)` | List of resource tags to include on the final metrics. | `["owner"]` | no | +| `metric_namespace` | `string` | Namespace for `resource_type` which have multiple levels of metrics. | | no | +| `azure_cloud_environment` | `string` | Name of the cloud environment to connect to. | `"azurecloud"` | no | +| `metric_name_template` | `string` | Metric template used to expose the metrics. | `"azure_{type}_{metric}_{aggregation}_{unit}"` | no | +| `metric_help_template` | `string` | Description of the metric. | `"Azure metric {metric} for {type} with aggregation {aggregation} as {unit}"` | no | +| `validate_dimensions` | `bool` | Enable dimension validation in the azure sdk | `false` | no | The list of available `resource_type` values and their corresponding `metrics` can be found in [Azure Monitor essentials][]. @@ -80,15 +82,24 @@ The list of available `regions` to your subscription can be found by running the The `resource_graph_query_filter` can be embedded into a template query of the form `Resources | where type =~ "" | project id, tags`. -Valid values for `metric_aggregations` are `minimum`, `maximum`, `average`, `total`, and `count`. 
If no aggregation is specified, the value is retrieved from the metric. For example, the aggregation value of the metric `Availability` in [Microsoft.ClassicStorage/storageAccounts](https://learn.microsoft.com/en-us/azure/azure-monitor/reference/supported-metrics/microsoft-classicstorage-storageaccounts-metrics) is `average`.
-
-Every metric has its own set of dimensions. For example, the dimensions for the metric `Availability` in [Microsoft.ClassicStorage/storageAccounts](https://learn.microsoft.com/en-us/azure/azure-monitor/reference/supported-metrics/microsoft-classicstorage-storageaccounts-metrics) are `GeoType`, `ApiName`, and `Authentication`. If a single dimension is requested, it will have the name `dimension`. If multiple dimensions are requested, they will have the name `dimension`.
+Valid values for `metric_aggregations` are `minimum`, `maximum`, `average`, `total`, and `count`.
+If no aggregation is specified, the value is retrieved from the metric.
+
+For example, the aggregation value of the metric `Availability` in [Microsoft.ClassicStorage/storageAccounts](https://learn.microsoft.com/en-us/azure/azure-monitor/reference/supported-metrics/microsoft-classicstorage-storageaccounts-metrics) is `average`.
+
+Every metric has its own set of dimensions.
+
+For example, the dimensions for the metric `Availability` in [Microsoft.ClassicStorage/storageAccounts](https://learn.microsoft.com/en-us/azure/azure-monitor/reference/supported-metrics/microsoft-classicstorage-storageaccounts-metrics) are `GeoType`, `ApiName`, and `Authentication`.
+
+If a single dimension is requested, it will have the name `dimension`.
+If multiple dimensions are requested, they will have the name `dimension_<dimension_name>`.

 Tags in `included_resource_tags` will be added as labels with the name `tag_<tag_name>`.

 Valid values for `azure_cloud_environment` are `azurecloud`, `azurechinacloud`, `azuregovernmentcloud` and `azurepprivatecloud`.

-`validate_dimensions` is disabled by default to reduce the number of Azure exporter instances requires when a `resource_type` has metrics with varying dimensions. When `validate_dimensions` is enabled you will need one exporter instance per metric + dimension combination which is more tedious to maintain.
+`validate_dimensions` is disabled by default to reduce the number of Azure exporter instances required when a `resource_type` has metrics with varying dimensions.
+When `validate_dimensions` is enabled, you will need one exporter instance per metric and dimension combination, which is more tedious to maintain.

 [Kusto query]: https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/
 [Azure Monitor essentials]: https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported
diff --git a/docs/sources/reference/components/prometheus.exporter.blackbox.md b/docs/sources/reference/components/prometheus.exporter.blackbox.md
index b37394bd61..3e7e47bfb5 100644
--- a/docs/sources/reference/components/prometheus.exporter.blackbox.md
+++ b/docs/sources/reference/components/prometheus.exporter.blackbox.md
@@ -25,22 +25,22 @@ prometheus.exporter.blackbox "LABEL" {
 The following arguments can be used to configure the exporter's behavior. Omitted fields take their default values.

-| Name                   | Type                 | Description                                                       | Default  | Required |
-| ---------------------- | -------------------- | ----------------------------------------------------------------- | -------- | -------- |
-| `config_file`          | `string`             | blackbox_exporter configuration file path.
| | no | -| `config` | `string` or `secret` | blackbox_exporter configuration as inline string. | | no | -| `probe_timeout_offset` | `duration` | Offset in seconds to subtract from timeout when probing targets. | `"0.5s"` | no | +| Name | Type | Description | Default | Required | +| ---------------------- | -------------------- | ------------------------------------------------------------------ | -------- | -------- | +| `config_file` | `string` | `blackbox_exporter` configuration file path. | | no | +| `config` | `string` or `secret` | `blackbox_exporter` configuration as inline string. | | no | +| `probe_timeout_offset` | `duration` | Offset in seconds to subtract from timeout when probing targets. | `"0.5s"` | no | Either `config_file` or `config` must be specified. -The `config_file` argument points to a YAML file defining which blackbox_exporter modules to use. -The `config` argument must be a YAML document as string defining which blackbox_exporter modules to use. +The `config_file` argument points to a YAML file defining which `blackbox_exporter` modules to use. +The `config` argument must be a YAML document as string defining which `blackbox_exporter` modules to use. `config` is typically loaded by using the exports of another component. For example, - `local.file.LABEL.content` - `remote.http.LABEL.content` - `remote.s3.LABEL.content` -See [blackbox_exporter](https://github.com/prometheus/blackbox_exporter/blob/master/example.yml) for details on how to generate a config file. +Refer to [`blackbox_exporter`](https://github.com/prometheus/blackbox_exporter/blob/master/example.yml) for more information about generating a configuration file. ## Blocks @@ -65,7 +65,7 @@ The `target` block may be specified multiple times to define multiple targets. ` | `module` | `string` | Blackbox module to use to probe. | `""` | no | | `labels` | `map(string)` | Labels to add to the target. | | no | -Labels specified in the `labels` argument will not override labels set by `blackbox_exporter`. +Labels specified in the `labels` argument won't override labels set by `blackbox_exporter`. ## Exported fields @@ -79,20 +79,20 @@ healthy values. ## Debug information -`prometheus.exporter.blackbox` does not expose any component-specific +`prometheus.exporter.blackbox` doesn't expose any component-specific debug information. ## Debug metrics -`prometheus.exporter.blackbox` does not expose any component-specific +`prometheus.exporter.blackbox` doesn't expose any component-specific debug metrics. ## Examples -### Collect metrics using a blackbox exporter config file +### Collect metrics using a blackbox exporter configuration file -This example uses a [`prometheus.scrape` component][scrape] to collect metrics -from `prometheus.exporter.blackbox`. It adds an extra label, `env="dev"`, to the metrics emitted by the `grafana` target. The `example` target does not have any added labels. +This example uses a [`prometheus.scrape` component][scrape] to collect metrics from `prometheus.exporter.blackbox`. +It adds an extra label, `env="dev"`, to the metrics emitted by the `grafana` target. The `example` target doesn't have any added labels. The `config_file` argument is used to define which `blackbox_exporter` modules to use. You can use the [blackbox example config file](https://github.com/prometheus/blackbox_exporter/blob/master/example.yml). 
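As an illustration of the loading pattern described above, the following sketch shows the `config` argument fed from another component's export. The file path, component labels, and target are placeholders for illustration only, not part of the original examples:

```alloy
// Read the blackbox_exporter module definitions from disk.
// The path below is an assumption for illustration only.
local.file "blackbox_modules" {
  filename = "/etc/alloy/blackbox_modules.yml"
}

// Hand the YAML document to the exporter as an inline configuration.
prometheus.exporter.blackbox "from_file" {
  config = local.file.blackbox_modules.content

  target {
    name    = "example"
    address = "https://example.com"
    module  = "http_2xx"
  }
}
```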
@@ -102,13 +102,13 @@ prometheus.exporter.blackbox "example" { target { name = "example" - address = "http://example.com" + address = "https://example.com" module = "http_2xx" } target { - name = "grafana" - address = "http://grafana.com" + name = "grafana" + address = "https://grafana.com" module = "http_2xx" labels = { "env" = "dev", @@ -137,12 +137,12 @@ prometheus.remote_write "demo" { Replace the following: - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- `USERNAME`: The username to use for authentication to the `remote_write` API. +- `PASSWORD`: The password to use for authentication to the `remote_write` API. ### Collect metrics using an embedded configuration -This example is the same above with using an embedded configuration: +This example uses an embedded configuration: ```alloy prometheus.exporter.blackbox "example" { @@ -150,13 +150,13 @@ prometheus.exporter.blackbox "example" { target { name = "example" - address = "http://example.com" + address = "https://example.com" module = "http_2xx" } target { name = "grafana" - address = "http://grafana.com" + address = "https://grafana.com" module = "http_2xx" labels = { "env" = "dev", @@ -185,8 +185,8 @@ prometheus.remote_write "demo" { Replace the following: - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- `USERNAME`: The username to use for authentication to the `remote_write` API. +- `PASSWORD`: The password to use for authentication to the `remote_write` API. [scrape]: ../prometheus.scrape/ diff --git a/docs/sources/reference/components/prometheus.exporter.cloudwatch.md b/docs/sources/reference/components/prometheus.exporter.cloudwatch.md index 317892ca09..b983fbcb76 100644 --- a/docs/sources/reference/components/prometheus.exporter.cloudwatch.md +++ b/docs/sources/reference/components/prometheus.exporter.cloudwatch.md @@ -119,12 +119,12 @@ prometheus.exporter.cloudwatch "queues" { You can use the following arguments to configure the exporter's behavior. Omitted fields take their default values. -| Name | Type | Description | Default | Required | -| ------------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | -| `sts_region` | `string` | AWS region to use when calling [STS][] for retrieving account information. | | yes | -| `fips_disabled` | `bool` | Disable use of FIPS endpoints. Set 'true' when running outside of USA regions. | `true` | no | -| `debug` | `bool` | Enable debug logging on CloudWatch exporter internals. | `false` | no | -| `discovery_exported_tags` | `map(list(string))` | List of tags (value) per service (key) to export in all metrics. For example, defining the `["name", "type"]` under `"AWS/EC2"` will export the name and type tags and its values as labels in all metrics. Affects all discovery jobs. 
| `{}` | no | +| Name | Type | Description | Default | Required | +|---------------------------|---------------------|--------------------------------------------------------------------------------|---------|----------| +| `sts_region` | `string` | AWS region to use when calling [STS][] for retrieving account information. | | yes | +| `fips_disabled` | `bool` | Disable use of FIPS endpoints. Set 'true' when running outside of USA regions. | `true` | no | +| `debug` | `bool` | Enable debug logging on CloudWatch exporter internals. | `false` | no | +| `discovery_exported_tags` | `map(list(string))` | List of tags (value) per service (key) to export in all metrics. For example, defining the `["name", "type"]` under `"AWS/EC2"` will export the name and type tags and its values as labels in all metrics. Affects all discovery jobs. | `{}` | no | [STS]: https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html @@ -188,7 +188,7 @@ different `search_tags`. | Name | Type | Description | Default | Required | | ----------------------------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | | `regions` | `list(string)` | List of AWS regions. | | yes | -| `type` | `string` | Cloudwatch service alias (`"alb"`, `"ec2"`, etc) or namespace name (`"AWS/EC2"`, `"AWS/S3"`, etc). See [supported-services][] for a complete list. | | yes | +| `type` | `string` | Cloudwatch service alias (`"alb"`, `"ec2"`, etc) or namespace name (`"AWS/EC2"`, `"AWS/S3"`, etc). Refer to [supported-services][] for a complete list. | | yes | | `custom_tags` | `map(string)` | Custom tags to be added as a list of key / value pairs. When exported to Prometheus format, the label name follows the following format: `custom_tag_{key}`. | `{}` | no | | `search_tags` | `map(string)` | List of key / value pairs to use for tag filtering (all must match). Value can be a regex. | `{}` | no | | `dimension_name_requirements` | `list(string)` | List of metric dimensions to query. Before querying metric values, the total list of metrics will be filtered to only those that contain exactly this list of dimensions. An empty or undefined list results in all dimension combinations being included. | `{}` | no | @@ -261,13 +261,13 @@ metrics. Follow [this guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/viewing_metrics_with_cloudwatch.html) on how to explore metrics, to easily pick the ones you need. -| Name | Type | Description | Default | Required | -| ------------- | -------------- | ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- | -------- | -| `name` | `string` | Metric name. | | yes | -| `statistics` | `list(string)` | List of statistics to scrape. For example, `"Minimum"`, `"Maximum"`, etc. | | yes | -| `period` | `duration` | See [period][] section below. | | yes | -| `length` | `duration` | See [period][] section below. | Calculated based on `period`. See [period][] for details. | no | -| `nil_to_zero` | `bool` | When `true`, `NaN` metric values are converted to 0. | The value of `nil_to_zero` in the parent [static][] or [discovery][] block (`true` if not set in the parent block). 
| no |
+| Name | Type | Description | Default | Required |
+| ------------- | -------------- | ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | -------- |
+| `name` | `string` | Metric name. | | yes |
+| `statistics` | `list(string)` | List of statistics to scrape. For example, `"Minimum"`, `"Maximum"`, etc. | | yes |
+| `period` | `duration` | Refer to the [period][] section below. | | yes |
+| `length` | `duration` | Refer to the [period][] section below. | Calculated based on `period`. Refer to [period][] for details. | no |
+| `nil_to_zero` | `bool` | When `true`, `NaN` metric values are converted to 0. | The value of `nil_to_zero` in the parent [static][] or [discovery][] block. `true` if not set in the parent block. | no |

 [period]: #period-and-length

@@ -310,13 +310,13 @@ Multiple roles can be useful when scraping metrics from different AWS accounts w
 this case, a different role is configured for {{< param "PRODUCT_NAME" >}} to assume before calling AWS APIs. Therefore, the credentials configured in the system need permission to assume the target role.

-See [Granting a user permissions to switch roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_permissions-to-switch.html)
+Refer to [Granting a user permissions to switch roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_permissions-to-switch.html)
 in the AWS IAM documentation for more information about how to configure this.

-| Name | Type | Description | Default | Required |
-| ------------- | -------- | --------------------------------------------------------------------- | ------- | -------- |
-| `role_arn` | `string` | AWS IAM Role ARN the exporter should assume to perform AWS API calls. | | yes |
-| `external_id` | `string` | External ID used when calling STS AssumeRole API. See [details][]. | `""` | no |
+| Name | Type | Description | Default | Required |
+| ------------- | -------- | ----------------------------------------------------------------------- | ------- | -------- |
+| `role_arn` | `string` | AWS IAM Role ARN the exporter should assume to perform AWS API calls. | | yes |
+| `external_id` | `string` | External ID used when calling STS AssumeRole API. Refer to the [IAM User Guide][details] for more information. | `""` | no |

 [details]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
diff --git a/docs/sources/reference/components/prometheus.exporter.gcp.md b/docs/sources/reference/components/prometheus.exporter.gcp.md
index 71782507d4..1d1df529cd 100644
--- a/docs/sources/reference/components/prometheus.exporter.gcp.md
+++ b/docs/sources/reference/components/prometheus.exporter.gcp.md
@@ -8,7 +8,7 @@ title: prometheus.exporter.gcp
 The `prometheus.exporter.gcp` component embeds [`stackdriver_exporter`](https://github.com/prometheus-community/stackdriver_exporter).
 It lets you collect [GCP Cloud Monitoring (formerly Stackdriver)](https://cloud.google.com/monitoring/docs) metrics, translate them to a Prometheus-compatible format, and remote write them.
-The component supports all metrics available via [GCP's monitoring API](https://cloud.google.com/monitoring/api/metrics_gcp).
+The component supports all metrics available via the [GCP monitoring API](https://cloud.google.com/monitoring/api/metrics_gcp).

 Metric names follow the template `stackdriver_<monitored_resource>_<metric_type_prefix>_<metric_type>`.
@@ -21,12 +21,11 @@ monitored_resource = `https_lb_rule`\ metric_type_prefix = `loadbalancing.googleapis.com/`\ metric_type = `https/backend_latencies` -These attributes result in a final metric name of: -`stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_backend_latencies` +These attributes result in a final metric name of `stackdriver_https_lb_rule_loadbalancing_googleapis_com_https_backend_latencies` ## Authentication -{{< param "PRODUCT_NAME" >}} must be running in an environment with access to the GCP project it is scraping. The exporter +{{< param "PRODUCT_NAME" >}} must be running in an environment with access to the GCP project it's scraping. The exporter uses the Google Golang Client Library, which offers a variety of ways to [provide credentials](https://developers.google.com/identity/protocols/application-default-credentials). Choose the option that works best for you. After deciding how {{< param "PRODUCT_NAME" >}} will obtain credentials, ensure the account is set up with the IAM role `roles/monitoring.viewer`. @@ -55,25 +54,35 @@ You can use the following arguments to configure the exporter's behavior. Omitted fields take their default values. {{< admonition type="note" >}} -Please note that if you are supplying a list of strings for the `extra_filters` argument, any string values within a particular filter string must be enclosed in escaped double quotes. For example, `loadbalancing.googleapis.com:resource.labels.backend_target_name="sample-value"` must be encoded as `"loadbalancing.googleapis.com:resource.labels.backend_target_name=\"sample-value\""` in the {{< param "PRODUCT_NAME" >}} configuration. +If you are supplying a list of strings for the `extra_filters` argument, any string values within a particular filter string must be enclosed in escaped double quotes. +For example, `loadbalancing.googleapis.com:resource.labels.backend_target_name="sample-value"` must be encoded as `"loadbalancing.googleapis.com:resource.labels.backend_target_name=\"sample-value\""` in the {{< param "PRODUCT_NAME" >}} configuration. {{< /admonition >}} -| Name | Type | Description | Default | Required | -| ------------------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------- | -| `project_ids` | `list(string)` | Configure the GCP Project(s) to scrape for metrics. | | yes | -| `metrics_prefixes` | `list(string)` | One or more values from the supported [GCP Metrics](https://cloud.google.com/monitoring/api/metrics_gcp). These can be as targeted or loose as needed. | | yes | -| `extra_filters` | `list(string)` | Used to further refine the resources you would like to collect metrics from. Please note that any string value within a particular filter string must be enclosed in escaped double-quotes. The structure for these filters is `:`. | `[]` | no | -| `request_interval` | `duration` | The time range used when querying for metrics. | `5m` | no | -| `ingest_delay` | `boolean` | When enabled, this automatically adjusts the time range used when querying for metrics backwards based on the metadata GCP has published for how long the data can take to be ingested. | `false` | no | -| `request_offset` | `duration` | When enabled this offsets the time range used when querying for metrics by a set amount. 
| `0s` | no | -| `drop_delegated_projects` | `boolean` | When enabled drops metrics from attached projects and only fetches metrics from the explicitly configured `project_ids`. | `false` | no | -| `gcp_client_timeout` | `duration` | Sets a timeout on the client used to make API calls to GCP. A single scrape can initiate numerous calls to GCP, so be mindful if you choose to override this value. | `15s` | no | - -For `extra_filters`, the `targeted_metric_prefix` is used to ensure the filter is only applied to the metric_prefix(es) where it makes sense. It does not explicitly have to match a value from `metric_prefixes`, but the `targeted_metric_prefix` must be at least a prefix to one or more `metric_prefixes`. The `filter_query` is applied to a final metrics API query when querying for metric data. The final query sent to the metrics API already includes filters for project and metric type. Each applicable `filter_query` is appended to the query with an AND. You can read more about the metric API filter options in [GCPs documentation](https://cloud.google.com/monitoring/api/v3/filters). - -For `request_interval`, most of the time the default works perfectly fine. Most documented metrics include a comments of the form `Sampled every X seconds. After sampling, data is not visible for up to Y seconds.` As long as your `request_interval` is >= `Y` you should have no issues. Consider using `ingest_delay` if you would like this to be done programmatically or are gathering slower moving metrics. - -For `ingest_delay`, you can see the values for this in documented metrics as `After sampling, data is not visible for up to Y seconds.` Since GCPs ingestion delay is an "at worst", this is off by default to ensure data is gathered as soon as it's available. +| Name | Type | Description | Default | Required | +|---------------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|----------| +| `project_ids` | `list(string)` | Configure the GCP Projects to scrape for metrics. | | yes | +| `metrics_prefixes` | `list(string)` | One or more values from the supported [GCP Metrics](https://cloud.google.com/monitoring/api/metrics_gcp). These can be as targeted or loose as needed. | | yes | +| `extra_filters` | `list(string)` | Used to further refine the resources you would like to collect metrics from. Any string value within a particular filter string must be enclosed in escaped double-quotes. The structure for these filters is `:`. | `[]` | no | +| `request_interval` | `duration` | The time range used when querying for metrics. | `5m` | no | +| `ingest_delay` | `boolean` | When enabled, this automatically adjusts the time range used when querying for metrics backwards based on the metadata GCP has published for how long the data can take to be ingested. | `false` | no | +| `request_offset` | `duration` | When enabled this offsets the time range used when querying for metrics by a set amount. | `0s` | no | +| `drop_delegated_projects` | `boolean` | When enabled drops metrics from attached projects and only fetches metrics from the explicitly configured `project_ids`. | `false` | no | +| `gcp_client_timeout` | `duration` | Sets a timeout on the client used to make API calls to GCP. 
A single scrape can initiate numerous calls to GCP, so be mindful if you choose to override this value. | `15s` | no | + +For `extra_filters`, the `targeted_metric_prefix` is used to ensure the filter is only applied to the metric_prefix(es) where it makes sense. +It doesn't explicitly have to match a value from `metric_prefixes`, but the `targeted_metric_prefix` must be at least a prefix to one or more `metric_prefixes`. +The `filter_query` is applied to a final metrics API query when querying for metric data. +The final query sent to the metrics API already includes filters for project and metric type. +Each applicable `filter_query` is appended to the query with an AND. +You can read more about the metric API filter options in the [GCP documentation](https://cloud.google.com/monitoring/api/v3/filters). + +For `request_interval`, most of the time the default works perfectly fine. +Most documented metrics include a comments of the form `Sampled every X seconds. After sampling, data is not visible for up to Y seconds.` +As long as your `request_interval` is >= `Y` you should have no issues. +Consider using `ingest_delay` if you would like this to be done programmatically or are gathering slower moving metrics. + +For `ingest_delay`, you can find the values for this in documented metrics as `After sampling, data is not visible for up to Y seconds.` +Since the GCP ingestion delay is an "at worst", this is off by default to ensure data is gathered as soon as it's available. ## Exported fields @@ -81,18 +90,16 @@ For `ingest_delay`, you can see the values for this in documented metrics as `Af ## Component health -`prometheus.exporter.gcp` is only reported as unhealthy if given -an invalid configuration. In those cases, exported fields retain their last healthy values. +`prometheus.exporter.gcp` is only reported as unhealthy if given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`prometheus.exporter.gcp` does not expose any component-specific -debug information. +`prometheus.exporter.gcp` doesn't expose any component-specific debug information. ## Debug metrics -`prometheus.exporter.gcp` does not expose any component-specific -debug metrics. +`prometheus.exporter.gcp` doesn't expose any component-specific debug metrics. ## Examples diff --git a/docs/sources/reference/components/prometheus.exporter.mssql.md b/docs/sources/reference/components/prometheus.exporter.mssql.md index f8d0fd7d6a..24977100ff 100644 --- a/docs/sources/reference/components/prometheus.exporter.mssql.md +++ b/docs/sources/reference/components/prometheus.exporter.mssql.md @@ -6,9 +6,7 @@ title: prometheus.exporter.mssql # prometheus.exporter.mssql -The `prometheus.exporter.mssql` component embeds -[sql_exporter](https://github.com/burningalchemist/sql_exporter) for collecting stats from a Microsoft SQL Server and exposing them as -Prometheus metrics. +The `prometheus.exporter.mssql` component embeds [`sql_exporter`](https://github.com/burningalchemist/sql_exporter) for collecting stats from a Microsoft SQL Server and exposing them as Prometheus metrics. ## Usage @@ -31,22 +29,24 @@ Omitted fields take their default values. | `timeout` | `duration` | The query timeout in seconds. | `"10s"` | no | | `query_config` | `string` | MSSQL query to Prometheus metric configuration as an inline string. 
| | no |
-[The sql_exporter examples](https://github.com/burningalchemist/sql_exporter/blob/master/examples/azure-sql-mi/sql_exporter.yml#L21) show the format of the `connection_string` argument:
+The [`sql_exporter` examples](https://github.com/burningalchemist/sql_exporter/blob/master/examples/azure-sql-mi/sql_exporter.yml#L21) show the format of the `connection_string` argument:

 ```conn
 sqlserver://USERNAME_HERE:PASSWORD_HERE@SQLMI_HERE_ENDPOINT.database.windows.net:1433?encrypt=true&hostNameInCertificate=%2A.SQL_MI_DOMAIN_HERE.database.windows.net&trustservercertificate=true
 ```

 If specified, the `query_config` argument must be a YAML document as string defining which MSSQL queries map to custom Prometheus metrics.
-`query_config` is typically loaded by using the exports of another component. For example,
+`query_config` is typically loaded by using the exports of another component.
+For example,
 - `local.file.LABEL.content`
 - `remote.http.LABEL.content`
 - `remote.s3.LABEL.content`

-See [sql_exporter](https://github.com/burningalchemist/sql_exporter#collectors) for details on how to create a configuration.
+Refer to [`sql_exporter`](https://github.com/burningalchemist/sql_exporter#collectors) for more information about how to create a configuration.

 ### Authentication
+
 By default, the `USERNAME` and `PASSWORD` used within the `connection_string` argument correspond to a SQL Server username and password.

 If {{< param "PRODUCT_NAME" >}} is running in the same Windows domain as the SQL Server, then you can use the parameter `authenticator=winsspi` within the `connection_string` to authenticate without any additional credentials.

 ```
 sqlserver://@<HOST>:<PORT>?authenticator=winsspi
 ```

-If you want to use Windows credentials to authenticate, instead of SQL Server credentials, you can use the parameter `authenticator=ntlm` within the `connection_string`.
+If you want to use Windows credentials to authenticate, instead of SQL Server credentials, you can use the parameter `authenticator=ntlm` within the `connection_string`.
 The `USERNAME` and `PASSWORD` then correspond to a Windows username and password. The Windows domain may need to be prefixed to the username with a trailing `\`.

 ```
 sqlserver://<USERNAME>:<PASSWORD>@<HOST>:<PORT>?authenticator=ntlm
 ```

 ## Blocks

-The `prometheus.exporter.mssql` component does not support any blocks, and is configured
-fully through arguments.
+The `prometheus.exporter.mssql` component doesn't support any blocks and is configured fully through arguments.

 ## Exported fields

@@ -74,24 +73,20 @@ fully through arguments.

 ## Component health

-`prometheus.exporter.mssql` is only reported as unhealthy if given
-an invalid configuration. In those cases, exported fields retain their last
-healthy values.
+`prometheus.exporter.mssql` is only reported as unhealthy if given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

 ## Debug information

-`prometheus.exporter.mssql` does not expose any component-specific
-debug information.
+`prometheus.exporter.mssql` doesn't expose any component-specific debug information.

 ## Debug metrics

-`prometheus.exporter.mssql` does not expose any component-specific
-debug metrics.
+`prometheus.exporter.mssql` doesn't expose any component-specific debug metrics.
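To complement the argument descriptions above, here is a hedged sketch of the `query_config` pattern described earlier, loading a custom query definition through `local.file`. The file path, label, and connection string are illustrative only:

```alloy
// Load a sql_exporter-style query definition from disk.
// The path below is an assumption for illustration only.
local.file "mssql_queries" {
  filename = "/etc/alloy/mssql_queries.yml"
}

prometheus.exporter.mssql "custom" {
  connection_string = "sqlserver://user:password@localhost:1433"
  query_config      = local.file.mssql_queries.content
}
```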
## Example -This example uses a [`prometheus.scrape` component][scrape] to collect metrics -from `prometheus.exporter.mssql`: +This example uses a [`prometheus.scrape` component][scrape] to collect metrics from `prometheus.exporter.mssql`: ```alloy prometheus.exporter.mssql "example" { @@ -119,18 +114,20 @@ prometheus.remote_write "demo" { Replace the following: - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- `USERNAME`: The username to use for authentication to the `remote_write` API. +- `PASSWORD`: The password to use for authentication to the `remote_write` API. [scrape]: ../prometheus.scrape/ ## Custom metrics + You can use the optional `query_config` parameter to retrieve custom Prometheus metrics for a MSSQL instance. If this is defined, the new configuration will be used to query your MSSQL instance and create whatever Prometheus metrics are defined. If you want additional metrics on top of the default metrics, the default configuration must be used as a base. The default configuration used by this integration is as follows: + ``` collector_name: mssql_standard @@ -211,8 +208,8 @@ metrics: query: | SELECT (a.cntr_value * 1.0 / b.cntr_value) * 100.0 as BufferCacheHitRatio FROM sys.dm_os_performance_counters a - JOIN (SELECT cntr_value, OBJECT_NAME - FROM sys.dm_os_performance_counters + JOIN (SELECT cntr_value, OBJECT_NAME + FROM sys.dm_os_performance_counters WHERE counter_name = 'Buffer cache hit ratio base' AND OBJECT_NAME = 'SQLServer:Buffer Manager') b ON a.OBJECT_NAME = b.OBJECT_NAME WHERE a.counter_name = 'Buffer cache hit ratio' diff --git a/docs/sources/reference/components/prometheus.exporter.postgres.md b/docs/sources/reference/components/prometheus.exporter.postgres.md index b67370a4cf..f82e8bfab6 100644 --- a/docs/sources/reference/components/prometheus.exporter.postgres.md +++ b/docs/sources/reference/components/prometheus.exporter.postgres.md @@ -6,11 +6,9 @@ title: prometheus.exporter.postgres # prometheus.exporter.postgres -The `prometheus.exporter.postgres` component embeds -[postgres_exporter](https://github.com/prometheus-community/postgres_exporter) for collecting metrics from a postgres database. +The `prometheus.exporter.postgres` component embeds the [`postgres_exporter`](https://github.com/prometheus-community/postgres_exporter) for collecting metrics from a PostgreSQL database. -Multiple `prometheus.exporter.postgres` components can be specified by giving them different -labels. +Multiple `prometheus.exporter.postgres` components can be specified by giving them different labels. ## Usage @@ -26,29 +24,58 @@ The following arguments are supported: | Name | Type | Description | Default | Required | |------------------------------|----------------|-------------------------------------------------------------------------------|---------|----------| -| `data_source_names` | `list(secret)` | Specifies the Postgres server(s) to connect to. | | yes | -| `disable_settings_metrics` | `bool` | Disables collection of metrics from pg_settings. | `false` | no | +| `data_source_names` | `list(secret)` | Specifies the PostgreSQL servers to connect to. | | yes | +| `disable_settings_metrics` | `bool` | Disables collection of metrics from `pg_settings`. 
| `false` | no | | `disable_default_metrics` | `bool` | When `true`, only exposes metrics supplied from `custom_queries_config_path`. | `false` | no | | `custom_queries_config_path` | `string` | Path to YAML file containing custom queries to expose as metrics. | "" | no | -| `enabled_collectors` | `list(string)` | List of collectors to enable. See below for more detail. | [] | no | +| `enabled_collectors` | `list(string)` | List of collectors to enable. Refer to the information below for more detail. | [] | no | -The format for connection strings in `data_source_names` can be found in the [official postgresql documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING). +Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) for more information about the format of the connection strings in `data_source_names`. -See examples for the `custom_queries_config_path` file in the [postgres_exporter repository](https://github.com/prometheus-community/postgres_exporter/blob/master/queries.yaml). +Refer to the examples for the `custom_queries_config_path` file in the [`postgres_exporter` repository](https://github.com/prometheus-community/postgres_exporter/blob/master/queries.yaml). -**NOTE**: There are a number of environment variables that are not recommended for use, as they will affect _all_ `prometheus.exporter.postgres` components. A full list can be found in the [postgres_exporter repository](https://github.com/prometheus-community/postgres_exporter#environment-variables). +{{< admonition type="note" >}} +There are a number of environment variables that aren't recommended for use, as they will affect _all_ `prometheus.exporter.postgres` components. +Refer to the [`postgres_exporter` repository](https://github.com/prometheus-community/postgres_exporter#environment-variables) for a full list of environment variables. +{{< /admonition >}} -By default, the same set of metrics is enabled as in the upstream [postgres_exporter](https://github.com/prometheus-community/postgres_exporter/). If `custom_queries_config_path` is set, additional metrics defined in the given config file will be exposed. +By default, the same set of metrics is enabled as in the upstream [`postgres_exporter`](https://github.com/prometheus-community/postgres_exporter/). +If `custom_queries_config_path` is set, additional metrics defined in the given configuration file will be exposed. If `disable_default_metrics` is set to `true`, only the metrics defined in the `custom_queries_config_path` file will be exposed. -A subset of metrics collectors can be controlled by setting the `enabled_collectors` argument. The following collectors are available for selection: -`database`, `database_wraparound`, `locks`, `long_running_transactions`, `postmaster`, `process_idle`, -`replication`, `replication_slot`, `stat_activity_autovacuum`, `stat_bgwriter`, `stat_database`, -`stat_statements`, `stat_user_tables`, `stat_wal_receiver`, `statio_user_indexes`, `statio_user_tables`, -`wal`, `xlog_location`. - -By default, the following collectors are enabled: `database`, `locks`, `replication`, `replication_slot`, `stat_bgwriter`, `stat_database`, -`stat_user_tables`, `statio_user_tables`, `wal`. +A subset of metrics collectors can be controlled by setting the `enabled_collectors` argument. 
+The following collectors are available for selection: + +* `database` +* `database_wraparound` +* `locks` +* `long_running_transactions` +* `postmaster` +* `process_idle` +* `replication` +* `replication_slot` +* `stat_activity_autovacuum` +* `stat_bgwriter` +* `stat_database` +* `stat_statements` +* `stat_user_tables` +* `stat_wal_receiver` +* `statio_user_indexes` +* `statio_user_tables` +* `wal` +* `xlog_location` + +By default, the following collectors are enabled: + +* `database` +* `locks` +* `replication` +* `replication_slot` +* `stat_bgwriter` +* `stat_database` +* `stat_user_tables` +* `statio_user_tables` +* `wal` {{< admonition type="note" >}} Due to a limitation of the upstream exporter, when multiple `data_source_names` are used, the collectors that are controlled via the `enabled_collectors` argument will only be applied to the first data source in the list. @@ -71,8 +98,8 @@ The `autodiscovery` block configures discovery of databases, outside of any spec The following arguments are supported: | Name | Type | Description | Default | Required | -| -------------------- | -------------- | ------------------------------------------------------------------------------ | ------- | -------- | -| `enabled` | `bool` | Whether to autodiscover other databases | `false` | no | +|----------------------|----------------|--------------------------------------------------------------------------------|---------|----------| +| `enabled` | `bool` | Whether to automatically discover other databases. | `false` | no | | `database_allowlist` | `list(string)` | List of databases to filter for, meaning only these databases will be scraped. | | no | | `database_denylist` | `list(string)` | List of databases to filter out, meaning all other databases will be scraped. | | no | @@ -86,25 +113,21 @@ If `autodiscovery` is disabled, neither `database_allowlist` nor `database_denyl ## Component health -`prometheus.exporter.postgres` is only reported as unhealthy if given -an invalid configuration. +`prometheus.exporter.postgres` is only reported as unhealthy if given an invalid configuration. ## Debug information -`prometheus.exporter.postgres` does not expose any component-specific -debug information. +`prometheus.exporter.postgres` doesn't expose any component-specific debug information. ## Debug metrics -`prometheus.exporter.postgres` does not expose any component-specific -debug metrics. +`prometheus.exporter.postgres` doesn't expose any component-specific debug metrics. ## Examples ### Collect metrics from a PostgreSQL server -This example uses a `prometheus.exporter.postgres` component to collect metrics from a Postgres -server running locally with all default settings: +This example uses a `prometheus.exporter.postgres` component to collect metrics from a PostgreSQL server running locally with all default settings: ```alloy // Because no autodiscovery is defined, this will only scrape the 'database_name' database, as defined @@ -133,8 +156,8 @@ prometheus.remote_write "demo" { Replace the following: - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- `USERNAME`: The username to use for authentication to the `remote_write` API. +- `PASSWORD`: The password to use for authentication to the `remote_write` API. 
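As a supplement to the collector lists above, the following sketch shows how the `enabled_collectors` argument might be used to narrow the default collector set. The connection string and the chosen collectors are illustrative only:

```alloy
prometheus.exporter.postgres "collectors_example" {
  // Scrape only the listed collectors instead of the default set.
  data_source_names  = ["postgresql://username:password@localhost:5432/database_name?sslmode=disable"]
  enabled_collectors = ["database", "locks", "stat_statements"]
}
```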
### Collect custom metrics from an allowlisted set of databases @@ -177,13 +200,12 @@ prometheus.remote_write "demo" { Replace the following: - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- `USERNAME`: The username to use for authentication to the `remote_write` API. +- `PASSWORD`: The password to use for authentication to the `remote_write` API. ### Collect metrics from all databases except for a denylisted database -This example uses a `prometheus.exporter.postgres` component to collect custom metrics from all databases except -for the `secrets` database. +This example uses a `prometheus.exporter.postgres` component to collect custom metrics from all databases except for the `secrets` database. ```alloy prometheus.exporter.postgres "example" { @@ -219,8 +241,8 @@ prometheus.remote_write "demo" { Replace the following: - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- `USERNAME`: The username to use for authentication to the `remote_write` API. +- `PASSWORD`: The password to use for authentication to the `remote_write` API. [scrape]: ../prometheus.scrape/ diff --git a/docs/sources/reference/components/prometheus.exporter.redis.md b/docs/sources/reference/components/prometheus.exporter.redis.md index f3bb2d51f4..f7c5a68d81 100644 --- a/docs/sources/reference/components/prometheus.exporter.redis.md +++ b/docs/sources/reference/components/prometheus.exporter.redis.md @@ -6,8 +6,7 @@ title: prometheus.exporter.redis # prometheus.exporter.redis -The `prometheus.exporter.redis` component embeds -[redis_exporter](https://github.com/oliver006/redis_exporter) for collecting metrics from a Redis database. +The `prometheus.exporter.redis` component embeds the [`redis_exporter`](https://github.com/oliver006/redis_exporter) for collecting metrics from a Redis database. ## Usage @@ -25,7 +24,7 @@ Omitted fields take their default values. | Name | Type | Description | Default | Required | | ----------------------------- | -------------- | ----------------------------------------------------------------------------------------------------------------------- | ---------- | -------- | | `redis_addr` | `string` | Address (host and port) of the Redis instance to connect to. | | yes | -| `redis_user` | `string` | User name to use for authentication (Redis ACL for Redis 6.0 and newer). | | no | +| `redis_user` | `string` | User name to use for authentication. Redis ACL for Redis 6.0 and newer. | | no | | `redis_password` | `secret` | Password of the Redis instance. | | no | | `redis_password_file` | `string` | Path of a file containing a password. | | no | | `redis_password_map_file` | `string` | Path of a JSON file containing a map of Redis URIs to passwords. | | no | @@ -53,23 +52,24 @@ Omitted fields take their default values. | `export_client_port` | `bool` | Whether to include the client's port when exporting the client list. | | no | | `redis_metrics_only` | `bool` | Whether to just export metrics or to also export go runtime metrics. | | no | | `ping_on_connect` | `bool` | Whether to ping the Redis instance after connecting. 
| | no | -| `incl_system_metrics` | `bool` | Whether to include system metrics (e.g. `redis_total_system_memory_bytes`). | | no | -| `skip_tls_verification` | `bool` | Whether to to skip TLS verification. | | no | +| `incl_system_metrics` | `bool` | Whether to include system metrics. For example `redis_total_system_memory_bytes`. | | no | +| `skip_tls_verification` | `bool` | Whether to skip TLS verification. | | no | If `redis_password_file` is defined, it will take precedence over `redis_password`. -When `check_key_groups` is not set, no key groups are made. +When `check_key_groups` isn't set, no key groups are made. The `check_key_groups_batch_size` argument name reflects key groups for backwards compatibility, but applies to both key and key groups. -The `script_path` argument may also be specified as a comma-separated string of paths, though it is encouraged to use `script_paths` when using -multiple Lua scripts. +The `script_path` argument may also be specified as a comma-separated string of paths, though it's encouraged to use `script_paths` when using multiple Lua scripts. Any leftover key groups beyond `max_distinct_key_groups` are aggregated in the 'overflow' bucket. The `is_cluster` argument must be set to `true` when connecting to a Redis cluster and using either of the `check_keys` and `check_single_keys` arguments. -Note that setting `export_client_port` increases the cardinality of all Redis metrics. +{{< admonition type="note" >}} +Setting `export_client_port` increases the cardinality of all Redis metrics. +{{< /admonition >}} ## Exported fields @@ -77,24 +77,20 @@ Note that setting `export_client_port` increases the cardinality of all Redis me ## Component health -`prometheus.exporter.redis` is only reported as unhealthy if given -an invalid configuration. In those cases, exported fields retain their last -healthy values. +`prometheus.exporter.redis` is only reported as unhealthy if given an invalid configuration. +In those cases, exported fields retain their last healthy values. ## Debug information -`prometheus.exporter.redis` does not expose any component-specific -debug information. +`prometheus.exporter.redis` doesn't expose any component-specific debug information. ## Debug metrics -`prometheus.exporter.redis` does not expose any component-specific -debug metrics. +`prometheus.exporter.redis` doesn't expose any component-specific debug metrics. ## Example -This example uses a [`prometheus.scrape` component][scrape] to collect metrics -from `prometheus.exporter.redis`: +This example uses a [`prometheus.scrape` component][scrape] to collect metrics from `prometheus.exporter.redis`: ```alloy prometheus.exporter.redis "example" { @@ -121,9 +117,9 @@ prometheus.remote_write "demo" { Replace the following: -- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to. -- `USERNAME`: The username to use for authentication to the remote_write API. -- `PASSWORD`: The password to use for authentication to the remote_write API. +- `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus `remote_write` compatible server to send metrics to. +- `USERNAME`: The username to use for authentication to the `remote_write` API. +- `PASSWORD`: The password to use for authentication to the `remote_write` API. 
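For instance, a minimal sketch of the `is_cluster` requirement described above, where `check_keys` is used against a Redis cluster. The address and key pattern are placeholders:

```alloy
prometheus.exporter.redis "cluster_example" {
  redis_addr = "redis-cluster.example.svc:6379"
  // is_cluster must be true when check_keys or check_single_keys is used against a cluster.
  is_cluster = true
  check_keys = ["sessions:*"]
}
```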
[scrape]: ../prometheus.scrape/
diff --git a/docs/sources/reference/components/prometheus.exporter.statsd.md b/docs/sources/reference/components/prometheus.exporter.statsd.md
index a2460cf210..2a56ed1af5 100644
--- a/docs/sources/reference/components/prometheus.exporter.statsd.md
+++ b/docs/sources/reference/components/prometheus.exporter.statsd.md
@@ -42,15 +42,12 @@ All arguments are optional. Omitted fields take their default values.
 | `relay_packet_length` | `int` | Maximum relay output packet length to avoid fragmentation. | `1400` | no |

 At least one of `listen_udp`, `listen_tcp`, or `listen_unixgram` should be enabled.
-For details on how to use the mapping config file, please check the official
-[statsd_exporter docs](https://github.com/prometheus/statsd_exporter#metric-mapping-and-configuration).
-Please make sure the kernel parameter `net.core.rmem_max` is set to a value greater
-than the value specified in `read_buffer`.
+Refer to the [`statsd_exporter` documentation](https://github.com/prometheus/statsd_exporter#metric-mapping-and-configuration) for more information about the mapping configuration file.
+Make sure the kernel parameter `net.core.rmem_max` is set to a value greater than the value specified in `read_buffer`.

 ### Blocks

-The `prometheus.exporter.statsd` component does not support any blocks, and is configured
-fully through arguments.
+The `prometheus.exporter.statsd` component doesn't support any blocks, and is configured fully through arguments.

 ## Exported fields

@@ -58,24 +55,20 @@ fully through arguments.

 ## Component health

-`prometheus.exporter.statsd` is only reported as unhealthy if given
-an invalid configuration. In those cases, exported fields retain their last
-healthy values.
+`prometheus.exporter.statsd` is only reported as unhealthy if given an invalid configuration.
+In those cases, exported fields retain their last healthy values.

 ## Debug information

-`prometheus.exporter.statsd` does not expose any component-specific
-debug information.
+`prometheus.exporter.statsd` doesn't expose any component-specific debug information.

 ## Debug metrics

-`prometheus.exporter.statsd` does not expose any component-specific
-debug metrics.
+`prometheus.exporter.statsd` doesn't expose any component-specific debug metrics.

 ## Example

-This example uses a [`prometheus.scrape` component][scrape] to collect metrics
-from `prometheus.exporter.statsd`:
+This example uses a [`prometheus.scrape` component][scrape] to collect metrics from `prometheus.exporter.statsd`:

 ```alloy
 prometheus.exporter.statsd "example" {
@@ -117,8 +110,8 @@ prometheus.remote_write "demo" {
 Replace the following:

 - `PROMETHEUS_REMOTE_WRITE_URL`: The URL of the Prometheus remote_write-compatible server to send metrics to.
-- `USERNAME`: The username to use for authentication to the remote_write API.
-- `PASSWORD`: The password to use for authentication to the remote_write API.
+- `USERNAME`: The username to use for authentication to the `remote_write` API.
+- `PASSWORD`: The password to use for authentication to the `remote_write` API.
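As a hedged sketch of the mapping setup mentioned above, assuming the exporter exposes a `mapping_config_path` argument for the `statsd_exporter` mapping file (the argument name and path are assumptions, since they aren't shown in this excerpt):

```alloy
prometheus.exporter.statsd "mapped" {
  // Listen for StatsD datagrams and translate metric names using the mapping file.
  listen_udp          = ":9125"
  mapping_config_path = "/etc/alloy/statsd_mapping.yml"
}
```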
[scrape]: ../prometheus.scrape/ diff --git a/docs/sources/reference/components/pyroscope.java.md b/docs/sources/reference/components/pyroscope.java.md index 7870493a7d..1f0fcc7ab0 100644 --- a/docs/sources/reference/components/pyroscope.java.md +++ b/docs/sources/reference/components/pyroscope.java.md @@ -53,12 +53,12 @@ If you change the `tmp_dir` configuration to something other than `/tmp`, then y #### `targets` argument -The special `__process_pid__` label _must always_ be present and corresponds to the process PID that is used for profiling. +The special `__process_pid__` label _must always_ be present and corresponds to the process PID that's used for profiling. Labels starting with a double underscore (`__`) are treated as _internal_, and are removed prior to scraping. The special label `service_name` is required and must always be present. -If it is not specified, `pyroscope.scrape` will attempt to infer it from either of the following sources, in this order: +If it's not specified, `pyroscope.scrape` will attempt to infer it from either of the following sources, in this order: 1. `__meta_kubernetes_pod_annotation_pyroscope_io_service_name` which is a `pyroscope.io/service_name` pod annotation. 2. `__meta_kubernetes_namespace` and `__meta_kubernetes_pod_container_name` 3. `__meta_docker_container_name` @@ -90,7 +90,7 @@ Name | Type | Description `alloc` | `string` | Allocation profiling sampling configuration It is passed as `--alloc` arg to async-profiler. | "512k" | no `lock` | `string` | Lock profiling sampling configuration. It is passed as `--lock` arg to async-profiler. | "10ms" | no -For more information on async-profiler configuration, see [profiler-options](https://github.com/async-profiler/async-profiler?tab=readme-ov-file#profiler-options) +Refer to [profiler-options](https://github.com/async-profiler/async-profiler?tab=readme-ov-file#profiler-options) for more information about async-profiler configuration. ## Exported fields diff --git a/docs/sources/tasks/configure/configure-windows.md b/docs/sources/tasks/configure/configure-windows.md index 7b6b101815..01869e20c4 100644 --- a/docs/sources/tasks/configure/configure-windows.md +++ b/docs/sources/tasks/configure/configure-windows.md @@ -26,8 +26,7 @@ To configure {{< param "PRODUCT_NAME" >}} on Windows, perform the following step ## Change command-line arguments -By default, the {{< param "PRODUCT_NAME" >}} service will launch and pass the -following arguments to the {{< param "PRODUCT_NAME" >}} binary: +By default, the {{< param "PRODUCT_NAME" >}} service will launch and pass the following arguments to the {{< param "PRODUCT_NAME" >}} binary: * `run` * `%PROGRAMFILES%\GrafanaLabs\Alloy\config.alloy` @@ -46,7 +45,7 @@ To change the set of command-line arguments passed to the {{< param "PRODUCT_NAM 1. Double-click on the value called **Arguments***. 1. In the dialog box, enter the new set of arguments to pass to the {{< param "PRODUCT_NAME" >}} binary. - Make sure that each argument is is on its own line. + Make sure that each argument is on its own line. 1. Restart the {{< param "PRODUCT_NAME" >}} service: diff --git a/docs/sources/tasks/migrate/from-flow.md b/docs/sources/tasks/migrate/from-flow.md index 60dfa9d03e..ce740a0b22 100644 --- a/docs/sources/tasks/migrate/from-flow.md +++ b/docs/sources/tasks/migrate/from-flow.md @@ -11,7 +11,7 @@ weight: 350 This topic describes how to perform a live migration from Grafana Agent Flow to {{< param "FULL_PRODUCT_NAME" >}} with minimal downtime. 
{{< admonition type="note" >}} -This procedure is only required for live migrations from an existing Grafana Agent Flow install to {{< param "PRODUCT_NAME" >}}. +This procedure is only required for live migrations from a Grafana Agent Flow install to {{< param "PRODUCT_NAME" >}}. If you want a fresh start with {{< param "PRODUCT_NAME" >}}, you can [uninstall Grafana Agent Flow][uninstall] and [install {{< param "PRODUCT_NAME" >}}][install]. @@ -21,31 +21,31 @@ If you want a fresh start with {{< param "PRODUCT_NAME" >}}, you can [uninstall ## Before you begin -* You must have an existing Grafana Agent Flow configuration to migrate. +* You must have a Grafana Agent Flow configuration to migrate. * You must be running Grafana Agent Flow version v0.40 or later. -* If auto-scaling is used: - * Disable auto-scaling for your Grafana Agent Flow deployment to prevent it from scaling during the migration. +* If you use auto-scaling, make sure you disable it for your Grafana Agent Flow deployment to prevent it from scaling during the migration. ## Differences between Grafana Agent Flow and {{% param "PRODUCT_NAME" %}} -* Only functionality marked _Generally available_ may be used by default. Functionality in _Experimental_ and _Public preview_ can be enabled by setting the `--stability.level` flag in [run]. +* Only functionality marked _Generally available_ may be used by default. +You can enable functionality in _Experimental_ and _Public preview_ by setting the `--stability.level` flag in [run]. * The default value of `--storage.path` has changed from `data-agent/` to `data-alloy/`. * The default value of `--server.http.memory-addr` has changed from `agent.internal:12345` to `alloy.internal:12345`. * Debug metrics reported by {{% param "PRODUCT_NAME" %}} are prefixed with `alloy_` instead of `agent_`. -* "Classic modules" (`module.file`, `module.git`, `module.http`, and `module.string`) has been removed in favor of import configuration blocks. +* The "classic modules", `module.file`, `module.git`, `module.http`, and `module.string`, have been removed in favor of import configuration blocks. * The `prometheus.exporter.vsphere` component has been replaced by the `otelcol.receiver.vcenter` component. [run]: ../../../reference/cli/run ## Steps -### Prepare the Grafana Agent Flow configuration +### Prepare your Grafana Agent Flow configuration {{< param "PRODUCT_NAME" >}} uses the same configuration format as Grafana Agent Flow, but some functionality has been removed. Before migrating, modify your Grafana Agent Flow configuration to remove or replace any unsupported components: -* Flow's "classic modules" have been removed in favor of the new modules introduced in v0.40: +* The "classic modules" in Grafana Agent Flow have been removed in favor of the modules introduced in v0.40: * `module.file` is replaced by the [import.file] configuration block. * `module.git` is replaced by the [import.git] configuration block. * `module.http` is replaced by the [import.http] configuration block. @@ -64,7 +64,8 @@ Follow the [installation instructions][install] for {{< param "PRODUCT_NAME" >}} When deploying {{< param "PRODUCT_NAME" >}}, be aware of the following settings: -- {{< param "PRODUCT_NAME" >}} should be deployed with identical topology as Grafana Agent Flow. The CPU, and storage limits should match. +- {{< param "PRODUCT_NAME" >}} should be deployed with the same topology as Grafana Agent Flow. + The CPU and storage limits should match.
- Custom command-line flags configured in Grafana Agent Flow should be reflected in your {{< param "PRODUCT_NAME" >}} installation. - {{< param "PRODUCT_NAME" >}} may need to be deployed with the `--stability.level` flag in [run] to enable non-stable components: - Set `--stability.level` to `experimental` if you are using the following component: @@ -118,8 +119,8 @@ This alternative approach results in some duplicate data being sent to backends ### Uninstall Grafana Agent Flow -After you have migrated the configuration to {{< param "PRODUCT_NAME" >}}, you can uninstall Grafana Agent Flow. +After you have completed the migration, you can uninstall Grafana Agent Flow. ### Cleanup temporary changes -Auto-scaling may be re-enabled in your {{< param "PRODUCT_NAME" >}} deployment if it was disabled during the migration process. +You can enable auto-scaling in your {{< param "PRODUCT_NAME" >}} deployment if you disabled it during the migration process. diff --git a/docs/sources/tasks/migrate/from-operator.md b/docs/sources/tasks/migrate/from-operator.md index 779249f643..f5b6d880ba 100644 --- a/docs/sources/tasks/migrate/from-operator.md +++ b/docs/sources/tasks/migrate/from-operator.md @@ -11,7 +11,7 @@ weight: 320 You can migrate from Grafana Agent Operator to {{< param "PRODUCT_NAME" >}}. - The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, and `PodLogs`) are all supported natively by {{< param "PRODUCT_NAME" >}}. -- The parts of Grafana Agent Operator that deploy the Grafana Agent itself (`GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs) are deprecated. +- The parts of Grafana Agent Operator that deploy Grafana Agent, `GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs, are deprecated. ## Deploy {{% param "PRODUCT_NAME" %}} with Helm @@ -119,13 +119,12 @@ Refer to the documentation for the relevant components for additional informatio - [prometheus.operator.probes][] - [prometheus.scrape][] -## Collecting Logs +## Collecting logs -Our current recommendation is to create an additional DaemonSet deployment of {{< param "PRODUCT_NAME" >}}s to scrape logs. +The current recommendation is to create an additional DaemonSet deployment of {{< param "PRODUCT_NAME" >}} to scrape logs. -> We have components that can scrape pod logs directly from the Kubernetes API without needing a DaemonSet deployment. These are -> still considered experimental, but if you would like to try them, see the documentation for [loki.source.kubernetes][] and -> [loki.source.podlogs][]. +> {{< param "PRODUCT_NAME" >}} has components that can scrape Pod logs directly from the Kubernetes API without needing a DaemonSet deployment. +> These are still considered experimental, but if you would like to try them, see the documentation for [loki.source.kubernetes][] and [loki.source.podlogs][]. These values are close to what Grafana Agent Operator deploys for logs: diff --git a/docs/sources/tasks/migrate/from-static.md b/docs/sources/tasks/migrate/from-static.md index c561b764a0..93015e95bd 100644 --- a/docs/sources/tasks/migrate/from-static.md +++ b/docs/sources/tasks/migrate/from-static.md @@ -26,16 +26,15 @@ This topic describes how to: ## Before you begin -* You must have an existing Grafana Agent Static configuration. -* You must be running Grafana Agent Static version v0.40 or later. +* You must have a Grafana Agent Static configuration. +* You must be familiar with the [Components][] concept in {{< param "PRODUCT_NAME" >}}. 
## Convert a Grafana Agent Static configuration To fully migrate Grafana Agent Static to {{< param "PRODUCT_NAME" >}}, you must convert your Grafana Agent Static configuration into an {{< param "PRODUCT_NAME" >}} configuration. -This conversion will enable you to take full advantage of the many additional features available in {{< param "PRODUCT_NAME" >}}. +This conversion allows you to take full advantage of the many additional features available in {{< param "PRODUCT_NAME" >}}. -> In this task, you will use the [convert][] CLI command to output an {{< param "PRODUCT_NAME" >}} -> configuration from a Static configuration. +> In this task, you use the [convert][] CLI command to output an {{< param "PRODUCT_NAME" >}} configuration from a Grafana Agent Static configuration. 1. Open a terminal window and run the following command. @@ -45,7 +44,7 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - * _``_: The full path to the Grafana Agent Static configuration. + * _``_: The full path to the configuration file for Grafana Agent Static. * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. [Run][run alloy] {{< param "PRODUCT_NAME" >}} using the new {{< param "PRODUCT_NAME" >}} configuration from _``_: @@ -66,7 +65,7 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - * _``_: The full path to the Grafana Agent Static configuration. + * _``_: The full path to the configuration file for Grafana Agent Static. * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. 1. You can use the `--report` flag to output a diagnostic report. @@ -77,7 +76,7 @@ This conversion will enable you to take full advantage of the many additional fe Replace the following: - * _``_: The full path to the Grafana Agent Static configuration. + * _``_: The full path to the configuration file for Grafana Agent Static. * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. * _``_: The output path for the report. @@ -89,18 +88,18 @@ This conversion will enable you to take full advantage of the many additional fe ## Run a Grafana Agent Static mode configuration -If you’re not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your existing Grafana Agent Static configuration. -The `--config.format=static` flag tells {{< param "PRODUCT_NAME" >}} to convert your Grafana Agent Static configuration to {{< param "PRODUCT_NAME" >}} and load it directly without saving the new configuration. -This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your existing Grafana Agent Static configuration infrastructure. +If you’re not ready to completely switch to an {{< param "PRODUCT_NAME" >}} configuration, you can run {{< param "PRODUCT_NAME" >}} using your Grafana Agent Static configuration. +The `--config.format=static` flag tells {{< param "PRODUCT_NAME" >}} to convert your Grafana Agent Static configuration to {{< param "PRODUCT_NAME" >}} and load it directly without saving the configuration. +This allows you to try {{< param "PRODUCT_NAME" >}} without modifying your Grafana Agent Static configuration infrastructure. -> In this task, you will use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} using a Static configuration. +> In this task, you use the [run][] CLI command to run {{< param "PRODUCT_NAME" >}} using a Grafana Agent Static configuration. 
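The instruction below tells you to include the `--config.format=static` flag when running {{< param "PRODUCT_NAME" >}}. As a rough sketch, the invocation could look like the following. The configuration file path is a hypothetical placeholder, not a path from this guide.

```shell
# Run Alloy directly against an existing Grafana Agent Static configuration.
# Replace the path below with the path to your own Static configuration file.
alloy run --config.format=static /etc/grafana-agent/config.yaml
```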
[Run][] {{< param "PRODUCT_NAME" >}} and include the command line flag `--config.format=static`. Your configuration file must be a valid Grafana Agent Static configuration file. ### Debugging -1. You can follow the convert CLI command [debugging][] instructions to generate a diagnostic report. +1. Follow the convert CLI command [debugging][] instructions to generate a diagnostic report. 1. Refer to the {{< param "PRODUCT_NAME" >}} [debugging UI][UI] for more information about running {{< param "PRODUCT_NAME" >}}. @@ -109,7 +108,7 @@ Your configuration file must be a valid Grafana Agent Static configuration file. {{< admonition type="caution" >}} If you bypass the errors, the behavior of the converted configuration may not match the original Grafana Agent Static configuration. - Do not use this flag in a production environment. + Don't use this flag in a production environment. {{< /admonition >}} ## Example @@ -180,10 +179,10 @@ alloy convert --source-format=static --output= `_: The full path to the Grafana Agent Static configuration. +* _``_: The full path to the configuration file for Grafana Agent Static. * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. -The new {{< param "PRODUCT_NAME" >}} configuration file looks like this: +The {{< param "PRODUCT_NAME" >}} configuration file looks like this: ```alloy prometheus.scrape "metrics_test_local_agent" { @@ -270,10 +269,10 @@ alloy convert --source-format=static --extra-args="-enable-features=integrations ``` Replace the following: - * _``_: The full path to the Grafana Agent Static configuration. + * _``_: The full path to the configuration file for Grafana Agent Static. * _``_: The full path to output the {{< param "PRODUCT_NAME" >}} configuration. -## Environment Vars +## Environment variables You can use the `-config.expand-env` command line flag to interpret environment variables in your Grafana Agent Static configuration. You can pass these flags to [convert][] with `--extra-args="-config.expand-env"` or to [run][] with `--config.extra-args="-config.expand-env"`. @@ -283,7 +282,7 @@ You can pass these flags to [convert][] with `--extra-args="-config.expand-env"` ## Limitations -Configuration conversion is done on a best-effort basis. {{< param "PRODUCT_NAME" >}} will issue warnings or errors where the conversion can't be performed. +Configuration conversion is done on a best-effort basis. {{< param "PRODUCT_NAME" >}} issues warnings or errors if the conversion can't be done. After the configuration is converted, review the {{< param "PRODUCT_NAME" >}} configuration file and verify that it's correct before starting to use it in a production environment. @@ -291,21 +290,18 @@ The following list is specific to the convert command and not {{< param "PRODUCT * The [Agent Management][] configuration options can't be automatically converted to {{< param "PRODUCT_NAME" >}}. Any additional unsupported features are returned as errors during conversion. -* There is no gRPC server to configure for {{< param "PRODUCT_NAME" >}}, as any non-default configuration will show as unsupported during the conversion. -* Check if you are using any extra command line arguments with Static that aren't present in your configuration file. For example, `-server.http.address`. +* There is no gRPC server to configure for {{< param "PRODUCT_NAME" >}}. Any non-default configuration shows as unsupported during the conversion. 
+* Check if you are using any extra command line arguments with Grafana Agent Static that aren't present in your configuration file. For example, `-server.http.address`. * Check if you are using any environment variables in your Grafana Agent Static configuration. - These will be evaluated during conversion and you may want to replace them with the {{< param "PRODUCT_NAME" >}} Standard library [env][] function after conversion. + These are evaluated during conversion, and you may want to replace them with the {{< param "PRODUCT_NAME" >}} Standard library [env][] function after conversion. * Review additional [Prometheus Limitations][] for limitations specific to your [Metrics][] configuration. * Review additional [Promtail Limitations][] for limitations specific to your [Logs][] configuration. -* The logs produced by {{< param "PRODUCT_NAME" >}} mode will differ from those produced by Static. +* The logs produced by {{< param "PRODUCT_NAME" >}} differ from those produced by Grafana Agent Static. * {{< param "PRODUCT_NAME" >}} exposes the {{< param "PRODUCT_NAME" >}} [UI][]. [debugging]: #debugging [example]: #example - - [Static]: https://grafana.com/docs/agent/latest/static - [prometheus.scrape]: ../../../reference/components/prometheus.scrape/ [prometheus.remote_write]: ../../../reference/components/prometheus.remote_write/ [local.file_match]: ../../../reference/components/local.file_match/ @@ -318,16 +314,11 @@ The following list is specific to the convert command and not {{< param "PRODUCT [run alloy]: ../../../get-started/run/ [DebuggingUI]: ../../debug/ [configuration]: ../../../concepts/configuration-syntax/ - - [Integrations next]: https://grafana.com/docs/agent/latest/static/configuration/integrations/integrations-next/ [Agent Management]: https://grafana.com/docs/agent/latest/static/configuration/agent-management/ - [env]: ../../../reference/stdlib/env/ [Prometheus Limitations]: ../from-prometheus/#limitations [Promtail Limitations]: ../from-promtail/#limitations - - [Metrics]: https://grafana.com/docs/agent/latest/static/configuration/metrics-config/ [Logs]: https://grafana.com/docs/agent/latest/static/configuration/logs-config/ [UI]: ../../debug/#alloy-ui