From 05d5b1ee760fdb63610748161123157508b32762 Mon Sep 17 00:00:00 2001 From: Clayton Cornell <131809008+clayton-cornell@users.noreply.github.com> Date: Wed, 17 Jul 2024 10:37:46 -0700 Subject: [PATCH] Update Alloy tutorials (#1257) Co-authored-by: Mischa Thompson Co-authored-by: Paulin Todev --- docs/sources/tutorials/_index.md | 5 +- ...ndex.md => first-components-and-stdlib.md} | 44 +++--- ...index.md => logs-and-relabeling-basics.md} | 36 ++--- .../index.md => processing-logs.md} | 30 ++-- .../{get-started.md => send-logs-to-loki.md} | 142 +++++++++--------- .../tutorials/send-metrics-to-prometheus.md | 130 ++++++++-------- 6 files changed, 201 insertions(+), 186 deletions(-) rename docs/sources/tutorials/{first-components-and-stdlib/index.md => first-components-and-stdlib.md} (90%) rename docs/sources/tutorials/{logs-and-relabeling-basics/index.md => logs-and-relabeling-basics.md} (91%) rename docs/sources/tutorials/{processing-logs/index.md => processing-logs.md} (92%) rename docs/sources/tutorials/{get-started.md => send-logs-to-loki.md} (53%) diff --git a/docs/sources/tutorials/_index.md b/docs/sources/tutorials/_index.md index bcaf380a86..c14c59eeaf 100644 --- a/docs/sources/tutorials/_index.md +++ b/docs/sources/tutorials/_index.md @@ -1,11 +1,12 @@ --- canonical: https://grafana.com/docs/alloy/latest/tutorials/ description: Learn how to use Grafana Alloy -title: Tutorials +menuTitle: Tutorials +title: Grafana Alloy tutorials weight: 200 --- -# Tutorials + # {{% param "FULL_PRODUCT_NAME" %}} tutorials This section provides a set of step-by-step tutorials that show how to use {{< param "PRODUCT_NAME" >}}. 
diff --git a/docs/sources/tutorials/first-components-and-stdlib/index.md b/docs/sources/tutorials/first-components-and-stdlib.md similarity index 90% rename from docs/sources/tutorials/first-components-and-stdlib/index.md rename to docs/sources/tutorials/first-components-and-stdlib.md index 7762df8831..06d37abc43 100644 --- a/docs/sources/tutorials/first-components-and-stdlib/index.md +++ b/docs/sources/tutorials/first-components-and-stdlib.md @@ -1,26 +1,29 @@ --- canonical: https://grafana.com/docs/alloy/latest/tutorials/first-components-and-stdlib/ -description: Learn about the basics of the Alloy configuration syntax -title: First components and the standard library -weight: 20 +description: Learn the basics of the Grafana Alloy configuration syntax +menuTitle: First components and the standard library +title: First components and the standard library in Grafana Alloy +weight: 200 --- -# First components and the standard library +# First components and the standard library in {{% param "FULL_PRODUCT_NAME" %}} This tutorial covers the basics of the {{< param "PRODUCT_NAME" >}} configuration syntax and the standard library. It introduces a basic pipeline that collects metrics from the host and sends them to Prometheus. -## Prerequisites +## Before you begin -Set up a local Grafana instance as described in [Get started with {{< param "FULL_PRODUCT_NAME" >}}][get started] +To complete this tutorial: + +* You must set up a [local Grafana instance][previous tutorial]. ### Recommended reading -- [{{< param "PRODUCT_NAME" >}} configuration syntax][Configuration syntax] +- [{{< param "PRODUCT_NAME" >}} configuration syntax][configuration syntax] -## {{< param "PRODUCT_NAME" >}} configuration syntax basics +## {{% param "PRODUCT_NAME" %}} configuration syntax basics -An {{< param "PRODUCT_NAME" >}} configuration file is comprised of three things: +An {{< param "PRODUCT_NAME" >}} configuration file contains three elements: 1. 
**Attributes** @@ -50,7 +53,7 @@ An {{< param "PRODUCT_NAME" >}} configuration file is comprised of three things: ``` {{< admonition type="note" >}} -The default log level is `info` and the default log format is `logfmt`. + The default log level is `info` and the default log format is `logfmt`. {{< /admonition >}} Try pasting this into `config.alloy` and running ` run config.alloy` to see what happens. Replace _``_ with the path to the {{< param "PRODUCT_NAME" >}} binary. @@ -59,7 +62,7 @@ The default log level is `info` and the default log format is `logfmt`. This configuration won't do anything, so let's add some components to it. {{< admonition type="note" >}} -Comments in {{< param "PRODUCT_NAME" >}} syntax are prefixed with `//` and are single-line only. For example: `// This is a comment`. + Comments in {{< param "PRODUCT_NAME" >}} syntax are prefixed with `//` and are single-line only. For example: `// This is a comment`. {{< /admonition >}} ## Components @@ -291,16 +294,15 @@ Generally, you can use a persistent directory for this, as some components may u In the next tutorial, you learn how to configure {{< param "PRODUCT_NAME" >}} to collect logs from a file and send them to Loki. You also learn how to use different components to process metrics and logs. 
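A minimal `config.alloy` tying together the syntax described above might look like the following sketch. The `debug` level is an arbitrary choice for illustration; as noted earlier, the defaults are `info` and `logfmt`.

```alloy
// Comments are single-line and prefixed with "//".
logging {
  // Override the default log level of "info".
  level  = "debug"
  // Keep the default "logfmt" log format.
  format = "logfmt"
}
```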
-[get started]: ../get-started/#set-up-a-local-grafana-instance -[Configuration syntax]: ../../concepts/configuration-syntax/ +[previous tutorial]: ../send-logs-to-loki/#set-up-a-local-grafana-instance +[configuration syntax]: ../../get-started/configuration-syntax/ [Standard library documentation]: ../../reference/stdlib/ [node_exporter]: https://github.com/prometheus/node_exporter -[prometheus.exporter.redis]: ../../reference/components/prometheus.exporter.redis/ -[http://localhost:3000/explore]: http://localhost:3000/explore -[prometheus.exporter.unix]: ../../reference/components/prometheus.exporter.unix/ -[prometheus.scrape]: ../../reference/components/prometheus.scrape/ -[prometheus.remote_write]: ../../reference/components/prometheus.remote_write/ -[Components]: ../../concepts/components/ -[Component controller]: ../../concepts/component_controller/ -[Components configuration language]: ../../concepts/configuration-syntax/components/ +[prometheus.exporter.redis]: ../../reference/components/prometheus/prometheus.exporter.redis/ +[prometheus.exporter.unix]: ../../reference/components/prometheus/prometheus.exporter.unix/ +[prometheus.scrape]: ../../reference/components/prometheus/prometheus.scrape/ +[prometheus.remote_write]: ../../reference/components/prometheus/prometheus.remote_write/ +[Components]: ../../get-started/components/ +[Component controller]: ../../get-started/component_controller/ +[Components configuration language]: ../../get-started/configuration-syntax/components/ [env]: ../../reference/stdlib/env/ diff --git a/docs/sources/tutorials/logs-and-relabeling-basics/index.md b/docs/sources/tutorials/logs-and-relabeling-basics.md similarity index 91% rename from docs/sources/tutorials/logs-and-relabeling-basics/index.md rename to docs/sources/tutorials/logs-and-relabeling-basics.md index b61d63f971..203f0bf956 100644 --- a/docs/sources/tutorials/logs-and-relabeling-basics/index.md +++ b/docs/sources/tutorials/logs-and-relabeling-basics.md @@ -1,17 
+1,20 @@ --- canonical: https://grafana.com/docs/alloy/latest/tutorials/logs-and-relabeling-basics/ description: Learn how to relabel metrics and collect logs -title: Logs and relabeling basics -weight: 30 +menuTitle: Logs and relabeling basics +title: Logs and relabeling basics in Grafana Alloy +weight: 250 --- -# Logs and relabeling basics +# Logs and relabeling basics in {{% param "FULL_PRODUCT_NAME" %}} This tutorial covers some basic metric relabeling, and shows you how to send logs to Loki. -## Prerequisites +## Before you begin -Complete the [First components and the standard library][first] tutorial. +To complete this tutorial: + +* You must complete the [First components and the standard library][first] tutorial. ## Relabel metrics @@ -83,7 +86,7 @@ There is an issue commonly faced when relabeling and using labels that start wit These labels are considered internal and are dropped before relabeling rules from a `prometheus.relabel` component are applied. If you would like to keep or act on these kinds of labels, use a [discovery.relabel][] component. -[discovery.relabel]: ../../reference/components/discovery.relabel/ +[discovery.relabel]: ../../reference/components/discovery/discovery.relabel/ {{< /admonition >}} ## Send logs to Loki @@ -181,8 +184,8 @@ loki.write "local_loki" { {{< admonition type="tip" >}} You can use the [loki.relabel][] component to relabel and add labels, just like you can with the [prometheus.relabel][] component. 
-[loki.relabel]: ../../reference/components/loki.relabel -[prometheus.relabel]: ../../reference/components/prometheus.relabel +[loki.relabel]: ../../reference/components/loki/loki.relabel +[prometheus.relabel]: ../../reference/components/prometheus/prometheus.relabel {{< /admonition >}} Run {{< param "PRODUCT_NAME" >}} and execute the following: @@ -259,7 +262,7 @@ echo 'level=warn msg="WARN: This is a warn level log!"' >> /tmp/alloy-logs/log.l echo 'level=debug msg="DEBUG: This is a debug level log!"' >> /tmp/alloy-logs/log.log ``` -Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`. +Navigate to [http://localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`. Try querying for `{level!=""}` to see the new labels in action. {{< figure src="/media/docs/alloy/screenshot-log-line-levels.png" alt="Grafana Explore view of example log lines, now with the extracted 'level' label" >}} @@ -325,12 +328,11 @@ You have also seen how to use some standard library components to collect metric In the next tutorial, you learn more about how to use the `loki.process` component to extract values from logs and use them. 
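The `loki.relabel` usage mentioned in the tip above could be sketched as follows. This is an illustrative example rather than part of the tutorial pipeline: the component name, the `env` label, and its value are assumptions, and it forwards to the `local_loki` writer used in this tutorial.

```alloy
loki.relabel "add_env_label" {
  // Forward relabeled log entries to the loki.write component.
  forward_to = [loki.write.local_loki.receiver]

  // Attach a static "env" label to every log entry.
  rule {
    target_label = "env"
    replacement  = "local"
  }
}
```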
[first]: ../first-components-and-stdlib/ -[prometheus.relabel]: ../../reference/components/prometheus.relabel/ +[prometheus.relabel]: ../../reference/components/prometheus/prometheus.relabel/ [constants]: ../../reference/stdlib/constants/ -[localhost:3000/explore]: http://localhost:3000/explore -[prometheus.relabel rule-block]: ../../reference/components/prometheus.relabel/#rule-block -[local.file_match]: ../../reference/components/local.file_match/ -[loki.source.file]: ../../reference/components/loki.source.file/ -[loki.write]: ../../reference/components/loki.write/ -[loki.relabel]: ../../reference/components/loki.relabel/ -[loki.process]: ../../reference/components/loki.process/ +[prometheus.relabel rule-block]: ../../reference/components/prometheus/prometheus.relabel/#rule-block +[local.file_match]: ../../reference/components/local/local.file_match/ +[loki.source.file]: ../../reference/components/loki/loki.source.file/ +[loki.write]: ../../reference/components/loki/loki.write/ +[loki.relabel]: ../../reference/components/loki/loki.relabel/ +[loki.process]: ../../reference/components/loki/loki.process/ diff --git a/docs/sources/tutorials/processing-logs/index.md b/docs/sources/tutorials/processing-logs.md similarity index 92% rename from docs/sources/tutorials/processing-logs/index.md rename to docs/sources/tutorials/processing-logs.md index 520036cbcd..2b8b4a452c 100644 --- a/docs/sources/tutorials/processing-logs/index.md +++ b/docs/sources/tutorials/processing-logs.md @@ -1,18 +1,21 @@ --- canonical: https://grafana.com/docs/alloy/latest/tutorials/processing-logs/ description: Learn how to process logs -title: Processing Logs -weight: 40 +menuTitle: Processing Logs +title: Processing logs with Grafana Alloy +weight: 300 --- -# Processing Logs +# Processing logs with {{% param "FULL_PRODUCT_NAME" %}} This tutorial assumes you are familiar with setting up and connecting components. 
It covers using `loki.source.api` to receive logs over HTTP, processing and filtering them, and sending them to Loki. -## Prerequisites +## Before you begin -Complete the [Logs and relabeling basics][logs] tutorial. +To complete this tutorial: + +* You must complete the [Logs and relabeling basics][logs] tutorial. ## Receive logs over HTTP and Process @@ -349,7 +352,7 @@ curl localhost:9999/loki/api/v1/raw -XPOST -H "Content-Type: application/json" - ``` Now that you have sent some logs, its time to see how they look in Grafana. -Navigate to [localhost:3000/explore][] and switch the Datasource to `Loki`. +Navigate to [http://localhost:3000/explore](http://localhost:3000/explore) and switch the Datasource to `Loki`. Try querying for `{source="demo-api"}` and see if you can find the logs you sent. Try playing around with the values of `"level"`, `"message"`, `"timestamp"`, and `"is_secret"` and see how the logs change. @@ -412,11 +415,10 @@ loki.write "local_loki" { {{< /collapse >}} [logs]: ../logs-and-relabeling-basics/ -[loki.source.api]: ../../reference/components/loki.source.api/ -[loki.process#stage.drop]: ../../reference/components/loki.process/#stagedrop-block -[loki.process#stage.json]: ../../reference/components/loki.process/#stagejson-block -[loki.process#stage.labels]: ../../reference/components/loki.process/#stagelabels-block -[localhost:3000/explore]: http://localhost:3000/explore -[discovery.docker]: ../../reference/components/discovery.docker/ -[loki.source.docker]: ../../reference/components/loki.source.docker/ -[discovery.relabel]: ../../reference/components/discovery.relabel/ +[loki.source.api]: ../../reference/components/loki/loki.source.api/ +[loki.process#stage.drop]: ../../reference/components/loki/loki.process/#stagedrop-block +[loki.process#stage.json]: ../../reference/components/loki/loki.process/#stagejson-block +[loki.process#stage.labels]: ../../reference/components/loki/loki.process/#stagelabels-block +[discovery.docker]: 
../../reference/components/discovery/discovery.docker/ +[loki.source.docker]: ../../reference/components/loki/loki.source.docker/ +[discovery.relabel]: ../../reference/components/discovery/discovery.relabel/ diff --git a/docs/sources/tutorials/get-started.md b/docs/sources/tutorials/send-logs-to-loki.md similarity index 53% rename from docs/sources/tutorials/get-started.md rename to docs/sources/tutorials/send-logs-to-loki.md index 93f844da2b..fb1404b6b4 100644 --- a/docs/sources/tutorials/get-started.md +++ b/docs/sources/tutorials/send-logs-to-loki.md @@ -1,26 +1,28 @@ --- -canonical: https://grafana.com/docs/alloy/latest/tutorials/get-started/ -description: Getting started with Alloy -title: Get started with Alloy -weight: 10 +canonical: https://grafana.com/docs/alloy/latest/tutorials/send-logs-to-loki/ +aliases: + - ./get-started/ #/docs/alloy/latest/tutorials/get-started/ +description: Learn how to use Grafana Alloy to send logs to Loki +menuTitle: Send logs to Loki +title: Use Grafana Alloy to send logs to Loki +weight: 100 --- -## Get started with {{% param "PRODUCT_NAME" %}} +## Use {{% param "FULL_PRODUCT_NAME" %}} to send logs to Loki -This tutorial shows you how to configure {{< param "PRODUCT_NAME" >}} to collect logs from your local machine, filter non-essential log lines, and send them to Loki, running in a local Grafana stack. +This tutorial shows you how to configure {{< param "PRODUCT_NAME" >}} to collect logs from your local machine, filter non-essential log lines, send them to Loki, and use Grafana to explore the results. -This process allows you to query and visualize the logs sent to Loki using the Grafana dashboard. +## Before you begin -To follow this tutorial, you must have a basic understanding of Alloy and telemetry collection in general. -You should also be familiar with Prometheus and PromQL, Loki and LogQL, and basic Grafana navigation. 
-You don't need to know about the {{< param "PRODUCT_NAME" >}} [configuration syntax][configuration] concepts. +To complete this tutorial: -## Prerequisites - -This tutorial requires a Linux or macOS environment with Docker installed. +* You must have a basic understanding of {{< param "PRODUCT_NAME" >}} and telemetry collection in general. +* You should be familiar with Prometheus, PromQL, Loki, LogQL, and basic Grafana navigation. ## Install {{% param "PRODUCT_NAME" %}} and start the service +This tutorial requires a Linux or macOS environment with Docker installed. + ### Linux Install and run {{< param "PRODUCT_NAME" >}} on Linux. @@ -39,7 +41,9 @@ Install and run {{< param "PRODUCT_NAME" >}} on macOS. ## Set up a local Grafana instance -To allow {{< param "PRODUCT_NAME" >}} to write data to Loki running in the local Grafana stack, you can use the following Docker Compose file to set up a local Grafana instance alongside Loki and Prometheus, which are pre-configured as data sources. +In this tutorial, you configure {{< param "PRODUCT_NAME" >}} to collect logs from your local machine and send them to Loki. +You can use the following Docker Compose file to set up a local Grafana instance. +This Docker Compose file includes Loki and Prometheus configured as data sources. ```yaml version: '3' @@ -95,25 +99,21 @@ services: Run `docker compose up` to start your Docker container and open [http://localhost:3000](http://localhost:3000) in your browser to view the Grafana UI. - {{< admonition type="note" >}} -If you the following error when you start your Docker container, `docker: 'compose' is not a docker command`, use the command `docker-compose up` to start your Docker container. - {{< /admonition >}} +{{< admonition type="note" >}} +If you encounter the following error when you start your Docker container, `docker: 'compose' is not a docker command`, use the command `docker-compose up` to start your Docker container. 
+{{< /admonition >}} ## Configure {{% param "PRODUCT_NAME" %}} -Once the local Grafana instance is set up, the next step is to configure {{< param "PRODUCT_NAME" >}}. +After the local Grafana instance is set up, the next step is to configure {{< param "PRODUCT_NAME" >}}. You use components in the `config.alloy` file to tell {{< param "PRODUCT_NAME" >}} which logs you want to scrape, how you want to process that data, and where you want the data sent. The examples run on a single host so that you can run them on your laptop or in a Virtual Machine. -You can try the examples using a `config.alloy` file and experiment with the examples yourself. - -For the following steps, create a file called `config.alloy` in your current working directory. -If you have enabled the {{< param "PRODUCT_NAME" >}} UI, you can "hot reload" a configuration from a file. -In a later step, you copy this file to where {{< param "PRODUCT_NAME" >}} picks it up, and reloads without restarting the system service. +You can try the examples using a `config.alloy` file and experiment with them. ### First component: Log files -Paste this component into the top of the `config.alloy` file: +Create a file called `config.alloy` in your current working directory and paste the following component configuration at the top of the file: ```alloy local.file_match "local_files" { @@ -122,11 +122,14 @@ } ``` -This component creates a [local.file_match][] component named `local_files` with an attribute that tells {{< param "PRODUCT_NAME" >}} which files to source, and to check for new files every 5 seconds. +This configuration creates a [local.file_match][] component named `local_files` which does the following: + +* It tells {{< param "PRODUCT_NAME" >}} which files to source. +* It checks for new files every 5 seconds. 
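If you want to source logs from more than one location, `path_targets` accepts multiple targets. The following is a hypothetical sketch; both glob paths are placeholder assumptions, and the 5 second sync period matches the behavior described above.

```alloy
local.file_match "local_files" {
    path_targets = [
        // Placeholder target; replace with the path used in your configuration.
        {"__path__" = "/var/log/*.log"},
        // A hypothetical second location to watch.
        {"__path__" = "/tmp/app-logs/*.log"},
    ]
    // Check for new files every 5 seconds.
    sync_period = "5s"
}
```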
### Second component: Scraping -Paste this component next in the `config.alloy` file: +Paste the following component configuration below the previous component in your `config.alloy` file: ```alloy loki.source.file "log_scrape" { @@ -136,20 +139,20 @@ loki.source.file "log_scrape" { } ``` -This configuration creates a [loki.source.file][] component named `log_scrape`, and shows the pipeline concept of {{< param "PRODUCT_NAME" >}} in action. The `log_scrape` component does the following: +This configuration creates a [loki.source.file][] component named `log_scrape` which does the following: -1. It connects to the `local_files` component as its "source" or target. -1. It forwards the logs it scrapes to the receiver of another component called `filter_logs`. -1. It provides extra attributes and options to tail the log files from the end so you don't ingest the entire log file history. +* It connects to the `local_files` component as its source or target. +* It forwards the logs it scrapes to the receiver of another component called `filter_logs`. +* It provides extra attributes and options to tail the log files from the end so you don't ingest the entire log file history. ### Third component: Filter non-essential logs Filtering non-essential logs before sending them to a data source can help you manage log volumes to reduce costs. -The filtering strategy of each organization differs because they have different monitoring needs and setups. -The following example demonstrates filtering out or dropping logs before sending them to Loki. +The following example demonstrates how you can filter out or drop logs before sending them to Loki. 
+ +Paste the following component configuration below the previous component in your `config.alloy` file: -Paste this component next in the `config.alloy` file: ```alloy loki.process "filter_logs" { stage.drop { @@ -161,24 +164,22 @@ loki.process "filter_logs" { } ``` -`loki.process` is a component that allows you to transform, filter, parse, and enrich log data. +The `loki.process` component allows you to transform, filter, parse, and enrich log data. Within this component, you can define one or more processing stages to specify how you would like to process log entries before they're stored or forwarded. -* The `filter_logs` component receives scraped log entries from the `log_scrape` component and uses the `stage.drop` block to drop log entries based on specified criteria. -* The `source` parameter is an empty string. - This tells {{< param "PRODUCT_NAME" >}} to scrape logs from the default `log_scrape` component. -* The `expression` parameter contains the expression to drop from the logs. - In this example, it's the log message _".*Connection closed by authenticating user root"_. -* You can include an optional string label `drop_counter_reason` to show the rationale for dropping log entries. - You can use this label to categorize and count the drops to track and analyze the reasons for dropping logs. -* The `forward_to` parameter specifies where to send the processed logs. - In this example, you send the processed logs to a component you create next called `grafana_loki`. +This configuration creates a [loki.process][] component named `filter_logs` which does the following: + +* It receives scraped log entries from the default `log_scrape` component. +* It uses the `stage.drop` block to define what to drop from the scraped logs. +* It uses the `expression` parameter to identify the specific log entries to drop. +* It uses an optional string label `drop_counter_reason` to show the reason for dropping the log entries. 
+* It forwards the processed logs to the receiver of another component called `grafana_loki`. -Check out the following [tutorial][] and the [`loki.process` documentation][loki.process] for more comprehensive information on processing logs. +The [`loki.process` documentation][loki.process] provides more comprehensive information on processing logs. ### Fourth component: Write logs to Loki -Paste this component last in your configuration file: +Paste this component configuration below the previous component in your `config.alloy` file: ```alloy loki.write "grafana_loki" { @@ -193,7 +194,8 @@ loki.write "grafana_loki" { } ``` -This last component creates a [loki.write][] component named `grafana_loki` that points to `http://localhost:3100/loki/api/v1/push`. +This final component creates a [`loki.write`][loki.write] component named `grafana_loki` that points to `http://localhost:3100/loki/api/v1/push`. + This completes the simple configuration pipeline. {{< admonition type="tip" >}} The `basic_auth` block is commented out because the local `docker compose` stack It's included in this example to show how you can configure authorization for other environments. For further authorization options, refer to the [loki.write][] component reference. -[loki.write]: ../../reference/components/loki.write/ +[loki.write]: ../../reference/components/loki/loki.write/ {{< /admonition >}} With this configuration, {{< param "PRODUCT_NAME" >}} connects directly to the Loki instance running in the Docker container. ## Reload the configuration -1. Copy your local `config.alloy` file into the default configuration file location. +1. Copy your local `config.alloy` file into the default {{< param "PRODUCT_NAME" >}} configuration file location. 
{{< code >}} @@ -233,7 +235,7 @@ With this configuration, {{< param "PRODUCT_NAME" >}} connects directly to the L If you chose to run {{< param "PRODUCT_NAME" >}} in a Docker container, make sure you use the `--server.http.listen-addr=0.0.0.0:12345` argument. If you don’t use this argument, the [debugging UI][debug] won’t be available outside of the Docker container. - [debug]: ../../tasks/debug/#alloy-ui + [debug]: ../../troubleshoot/debug/#alloy-ui {{< /admonition >}} 1. Optional: You can do a system service restart {{< param "PRODUCT_NAME" >}} and load the configuration file: @@ -252,41 +254,43 @@ With this configuration, {{< param "PRODUCT_NAME" >}} connects directly to the L ## Inspect your configuration in the {{% param "PRODUCT_NAME" %}} UI -Open [http://localhost:12345] and click the Graph tab at the top. +Open [http://localhost:12345](http://localhost:12345) and click the **Graph** tab at the top. The graph should look similar to the following: {{< figure src="/media/docs/alloy/tutorial/Inspect-your-config-in-the-Alloy-UI-image.png" alt="Your configuration in the Alloy UI" >}} -The UI allows you to see a visual representation of the pipeline you built with your {{< param "PRODUCT_NAME" >}} component configuration. -We can see that the components are healthy, and you are ready to go. +The {{< param "PRODUCT_NAME" >}} UI shows you a visual representation of the pipeline you built with your {{< param "PRODUCT_NAME" >}} component configuration. + +You can see that the components are healthy, and you are ready to explore the logs in Grafana. ## Log in to Grafana and explore Loki logs -Open [http://localhost:3000/explore] to access **Explore** feature in Grafana. +Open [http://localhost:3000/explore](http://localhost:3000/explore) to access the **Explore** feature in Grafana. + Select Loki as the data source and click the **Label Browser** button to select a file that {{< param "PRODUCT_NAME" >}} has sent to Loki. 
Here you can see that logs are flowing through to Loki as expected, and the end-to-end configuration was successful. {{< figure src="/media/docs/alloy/tutorial/loki-logs.png" alt="Logs reported by Alloy in Grafana" >}} -## Conclusion +## Summary + +You have installed and configured {{< param "PRODUCT_NAME" >}}, and sent logs from your local host to your local Grafana stack. -Congratulations, you have installed and configured {{< param "PRODUCT_NAME" >}}, and sent logs from your local host to a Grafana stack. -In the following tutorials, you learn more about configuration concepts and metrics. +In the [next tutorial][], you learn more about configuration concepts and metrics. -[http://localhost:3000/explore]: http://localhost:3000/explore -[http://localhost:12345]: http://localhost:12345 -[MacOS Install]: ../../get-started/install/macos/ -[Linux Install]: ../../get-started/install/linux/ -[Run on Linux]: ../../get-started/run/linux/ -[Run on MacOS]: ../../get-started/run/macos/ -[local.file_match]: ../../reference/components/local.file_match/ -[loki.write]: ../../reference/components/loki.write/ -[loki.source.file]: ../../reference/components/loki.source.file/ +[MacOS Install]: ../../set-up/install/macos/ +[Linux Install]: ../../set-up/install/linux/ +[Run on Linux]: ../../set-up/run/linux/ +[Run on MacOS]: ../../set-up/run/macos/ +[local.file_match]: ../../reference/components/local/local.file_match/ +[loki.write]: ../../reference/components/loki/loki.write/ +[loki.source.file]: ../../reference/components/loki/loki.source.file/ +[loki.process]: ../../reference/components/loki/loki.process/ [alloy]: https://grafana.com/docs/alloy/latest/ [configuration]: ../../concepts/configuration-syntax/ [install]: ../../get-started/install/binary/#install-alloy-as-a-standalone-binary -[debugging your configuration]: ../../tasks/debug/ -[parse]: ../../reference/components/loki.process/ -[tutorial]: ../processing-logs/ -[loki.process]: ../../reference/components/loki.process/ 
+[debugging your configuration]: ../../troubleshoot/debug/ +[parse]: ../../reference/components/loki/loki.process/ +[next tutorial]: ../send-metrics-to-prometheus/ +[loki.process]: ../../reference/components/loki/loki.process/ diff --git a/docs/sources/tutorials/send-metrics-to-prometheus.md b/docs/sources/tutorials/send-metrics-to-prometheus.md index 80d4eea271..8d81f9c468 100644 --- a/docs/sources/tutorials/send-metrics-to-prometheus.md +++ b/docs/sources/tutorials/send-metrics-to-prometheus.md @@ -3,33 +3,36 @@ canonical: https://grafana.com/docs/alloy/latest/tutorials/send-metrics-to-prome description: Learn how to send metrics to Prometheus title: Use Grafana Alloy to send metrics to Prometheus menuTitle: Send metrics to Prometheus -weight: 15 +weight: 150 --- -# Use Grafana Alloy to send metrics to Prometheus -In the [Get started with {{< param "PRODUCT_NAME" >}} tutorial][get started], you learned how to configure {{< param "PRODUCT_NAME" >}} to collect and process logs from your local machine and send them to Loki, running in the local Grafana stack. +# Use {{% param "FULL_PRODUCT_NAME" %}} to send metrics to Prometheus -As a next step, you will collect and process metrics from the same machine using {{< param "PRODUCT_NAME" >}} and send them to Prometheus, running in the same Grafana stack. +In the [previous tutorial][], you learned how to configure {{< param "PRODUCT_NAME" >}} to collect and process logs from your local machine and send them to Loki. -This process will enable you to query and visualize the metrics sent to Prometheus using the Grafana dashboard. +This tutorial shows you how to configure {{< param "PRODUCT_NAME" >}} to collect and process metrics from your local machine, send them to Prometheus, and use Grafana to explore the results. -## Prerequisites +## Before you begin -Complete the [previous tutorial][get started] to: -1. Install {{< param "PRODUCT_NAME" >}} and start the service in your environment. -1. 
Set up a local Grafana instance. -1. Create a `config.alloy` file. +To complete this tutorial: + +* You must have a basic understanding of {{< param "PRODUCT_NAME" >}} and telemetry collection in general. +* You should be familiar with Prometheus, PromQL, Loki, LogQL, and basic Grafana navigation. +* You must complete the [previous tutorial][] to prepare the following prerequisites: * Install {{< param "PRODUCT_NAME" >}} and start the service in your environment. * Set up a local Grafana instance. * Create a `config.alloy` file. ## Configure {{% param "PRODUCT_NAME" %}} -Once the prerequisite steps have been completed, the next step is to configure {{< param "PRODUCT_NAME" >}} for metric collection. +In this tutorial, you configure {{< param "PRODUCT_NAME" >}} to collect metrics and send them to Prometheus. -Same as you did for logs, you will use the components in the `config.alloy` file to tell {{< param "PRODUCT_NAME" >}} which metrics you want to scrape, how you want to process that data, and where you want the data sent. +You add components to your `config.alloy` file to tell {{< param "PRODUCT_NAME" >}} which metrics you want to scrape, how you want to process that data, and where you want the data sent. -Add the following to the `config.alloy` file you created in the prerequisite steps. ### First component: Scraping -Paste this component into the top of the `config.alloy` file: +Paste the following component configuration at the top of your `config.alloy` file: ```alloy prometheus.exporter.unix "local_system" { } @@ -41,21 +44,19 @@ prometheus.scrape "scrape_metrics" { } ``` -This configuration defines a Prometheus exporter for a local system from which the metrics will be collected. 
-
-It also creates a [`prometheus.scrape`][prometheus.scrape] component named `scrape_metrics` which does the following:
+This configuration defines a `prometheus.exporter.unix` component named `local_system`, which collects metrics from the local host.
+It also creates a [`prometheus.scrape`][prometheus.scrape] component named `scrape_metrics` that does the following:

-1. It connects to the `local_system` component (its "source" or target).
-1. It forwards the metrics it scrapes to the "receiver" of another component called `filter_metrics` which you will define next.
-1. It tells {{< param "PRODUCT_NAME" >}} to scrape metrics every 10 seconds.
+* It connects to the `local_system` component as its source or target.
+* It forwards the metrics it scrapes to the receiver of another component called `filter_metrics`, which you define next.
+* It tells {{< param "PRODUCT_NAME" >}} to scrape metrics every 10 seconds.

### Second component: Filter metrics

-Filtering non-essential metrics before sending them to a data source can help you reduce costs and enable you to focus on the data that matters most. The filtering strategy of each organization will differ as they have different monitoring needs and setups.
+Filtering non-essential metrics before sending them to a data source can help you reduce costs and allow you to focus on the data that matters most.

-The following example demonstrates filtering out or dropping metrics before sending them to Prometheus.
+The following example demonstrates how you can filter out or drop metrics before sending them to Prometheus.

-Paste this component next in your configuration file:
+Paste the following component configuration below the previous component in your `config.alloy` file:

```alloy
prometheus.relabel "filter_metrics" {
@@ -64,24 +65,24 @@ prometheus.relabel "filter_metrics" {
    source_labels = ["env"]
    regex = "dev"
  }
-
+
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}
```

-1. `prometheus.relabel` is a component most commonly used to filter Prometheus metrics or standardize the label set passed to one or more downstream receivers.
-1. 
In this example, you create a `prometheus.relabel` component named “filter_metrics”. - This component receives scraped metrics from the `scrape_metrics` component you created in the previous step. -1. There are many ways to [process metrics][prometheus.relabel]. - Within this component, you can define rule block(s) to specify how you would like to process metrics before they are stored or forwarded. -1. This example assumes that you are monitoring a production environment and the metrics collected from the dev environment will not be needed for this particular use case. -1. To instruct {{< param "PRODUCT_NAME" >}} to drop metrics whose environment label, `"env"`, is equal to `"dev"`, you set the `action` parameter to `"drop"`, set the `source_labels` parameter equal to `["env"]`, and the `regex` parameter to `"dev"`. -1. You use the `forward_to` parameter to specify where to send the processed metrics. - In this case, you will send the processed metrics to a component you will create next called `metrics_service`. +The [`prometheus.relabel`][prometheus.relabel] component is commonly used to filter Prometheus metrics or standardize the label set passed to one or more downstream receivers. +You can use this component to rewrite the label set of each metric sent to the receiver. +Within this component, you can define rule blocks to specify how you would like to process metrics before they're stored or forwarded. + +This configuration creates a [`prometheus.relabel`][prometheus.relabel] component named `filter_metrics` which does the following: + +* It receives scraped metrics from the `scrape_metrics` component. +* It tells {{< param "PRODUCT_NAME" >}} to drop metrics that have an `"env"` label equal to `"dev"`. +* It forwards the processed metrics to the receiver of another component called `metrics_service`. 
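+The `drop` action shown above is only one of the available relabel rule actions. As a hypothetical variation (not part of this tutorial's pipeline, and using illustrative metric names), a rule with `action = "keep"` inverts the logic: it discards every metric except those whose name matches the regular expression:
+
+```alloy
+// Sketch of an alternative filter: keep only CPU and memory metrics
+// from the local host and drop everything else.
+// The component name and regex are illustrative, not part of this tutorial.
+prometheus.relabel "keep_cpu_memory" {
+  rule {
+    action        = "keep"
+    source_labels = ["__name__"]
+    regex         = "node_cpu_seconds_total|node_memory_.*"
+  }
+
+  forward_to = [prometheus.remote_write.metrics_service.receiver]
+}
+```
+
+Whether you allowlist with `keep` or denylist with `drop` depends on how predictable your metric set is: `keep` gives you a fixed, known cost, while `drop` preserves anything you didn't explicitly exclude.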
### Third component: Write metrics to Prometheus -Paste this component next in your configuration file: +Paste the following component configuration below the previous component in your `config.alloy` file: ```alloy prometheus.remote_write "metrics_service" { @@ -96,24 +97,24 @@ prometheus.remote_write "metrics_service" { } ``` -This last component creates a [prometheus.remote_write][prometheus.remote_write] component named `metrics_service` that points to `http://localhost:9090/api/v1/write`. +This final component creates a [`prometheus.remote_write`][prometheus.remote_write] component named `metrics_service` that points to `http://localhost:9090/api/v1/write`. This completes the simple configuration pipeline. {{< admonition type="tip" >}} -The `basic_auth` is commented out because the local `docker compose` stack doesn't require it. -It is included in this example to show how you can configure authorization for other environments. +The `basic_auth` is commented out because the local `docker compose` stack doesn't require it. +It's included in this example to show how you can configure authorization for other environments. For further authorization options, refer to the [`prometheus.remote_write`][prometheus.remote_write] component documentation. -[prometheus.remote_write]: ../../reference/components/prometheus.remote_write/ +[prometheus.remote_write]: ../../reference/components/prometheus/prometheus.remote_write/ {{< /admonition >}} This connects directly to the Prometheus instance running in the Docker container. -## Reload the Configuration +## Reload the configuration -Copy your local `config.alloy` file into the default configuration file location, which depends on your OS. +Copy your local `config.alloy` file into the default {{< param "PRODUCT_NAME" >}} configuration file location. 
{{< code >}}

@@ -126,48 +127,50 @@ sudo cp config.alloy /etc/alloy/config.alloy
```
{{< /code >}}

-Finally, call the reload endpoint to notify {{< param "PRODUCT_NAME" >}} to the configuration change without the need for restarting the system service.
+Call the `/-/reload` endpoint to tell {{< param "PRODUCT_NAME" >}} to reload the configuration file without a system service restart.

```bash
curl -X POST http://localhost:12345/-/reload
```

{{< admonition type="tip" >}}
-This step uses the {{< param "PRODUCT_NAME" >}} UI, which is exposed on `localhost` port `12345`.
+This step uses the {{< param "PRODUCT_NAME" >}} UI, available on `localhost` port `12345`.
If you choose to run Alloy in a Docker container, make sure you use the `--server.http.listen-addr=0.0.0.0:12345` argument.
If you don’t use this argument, the [debugging UI][debug] won’t be available outside of the Docker container.

-[debug]: ../../tasks/debug/#alloy-ui
+[debug]: ../../troubleshoot/debug/#alloy-ui
{{< /admonition >}}

-The alternative to using this endpoint is to reload the {{< param "PRODUCT_NAME" >}} configuration, which can be done as follows:
+1. Optional: You can restart the {{< param "PRODUCT_NAME" >}} system service to load the updated configuration file:

-{{< code >}}
+   {{< code >}}

-```macos
-brew services restart alloy
-```
+   ```macos
+   brew services restart alloy
+   ```

-```linux
-sudo systemctl reload alloy
-```
+   ```linux
+   sudo systemctl reload alloy
+   ```
+
+   {{< /code >}}

-{{< /code >}}

## Inspect your configuration in the {{% param "PRODUCT_NAME" %}} UI

-Open and click the Graph tab at the top.
-The graph should look similar to the following:
+Open [http://localhost:12345](http://localhost:12345) and click the **Graph** tab at the top. 
+The graph should look similar to the following: + {{< figure src="/media/docs/alloy/tutorial/Metrics-inspect-your-config.png" alt="Your configuration in the Alloy UI" >}} The {{< param "PRODUCT_NAME" >}} UI shows you a visual representation of the pipeline you built with your {{< param "PRODUCT_NAME" >}} component configuration. -You can see that the components are healthy, and you are ready to go. +You can see that the components are healthy, and you are ready to explore the metrics in Grafana. -## Log into Grafana and explore metrics in Prometheus +## Log into Grafana and explore metrics in Prometheus -Open to access the Explore feature in Grafana. +Open [http://localhost:3000/explore](http://localhost:3000/explore) to access the **Explore** feature in Grafana. Select Prometheus as the data source and click the **Metrics Browser** button to select the metric, labels, and values for your labels. @@ -175,11 +178,12 @@ Here you can see that metrics are flowing through to Prometheus as expected, and {{< figure src="/media/docs/alloy/tutorial/Metrics_visualization.png" alt="Your data flowing through Prometheus." >}} -## Conclusion -Well done. You have configured {{< param "PRODUCT_NAME" >}} to collect and process metrics from your local host and send them to a Grafana stack. +## Summary + +You have configured {{< param "PRODUCT_NAME" >}} to collect and process metrics from your local host and send them to your local Grafana stack. -[get started]: ../get-started/ -[prometheus.scrape]: ../../reference/components/prometheus.scrape/ -[prometheus.relabel]: ../../reference/components/prometheus.relabel/ -[prometheus.remote_write]: ../../reference/components/prometheus.remote_write/ +[previous tutorial]: ../send-logs-to-loki/ +[prometheus.scrape]: ../../reference/components/prometheus/prometheus.scrape/ +[prometheus.relabel]: ../../reference/components/prometheus/prometheus.relabel/ +[prometheus.remote_write]: ../../reference/components/prometheus/prometheus.remote_write/