diff --git a/docs/sources/_index.md b/docs/sources/_index.md
index b469bd90e5..ce268302d7 100644
--- a/docs/sources/_index.md
+++ b/docs/sources/_index.md
@@ -15,7 +15,7 @@ cascade:
{{< param "FULL_PRODUCT_NAME" >}} is a vendor-neutral distribution of the [OpenTelemetry][] (OTel) Collector.
{{< param "PRODUCT_NAME" >}} uniquely combines the very best OSS observability signals in the community.
It offers native pipelines for OTel, [Prometheus][], [Pyroscope][], [Loki][], and many other metrics, logs, traces, and profile tools.
-In additon, you can also use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
+In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and Promtail.
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combined into a hybrid system of multiple collectors and agents.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and you can pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
@@ -27,7 +27,7 @@ You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructu
Some of these features include:
* **Custom components:** You can use {{< param "PRODUCT_NAME" >}} to create and share custom components.
- Custom components combine a pipeline of existing components into a single, easy-to-understand component that is just a few lines long.
+ Custom components combine a pipeline of existing components into a single, easy-to-understand component that's just a few lines long.
 You can use pre-built custom components from the community, ones packaged by Grafana, or create your own.
* **GitOps compatibility:** {{< param "PRODUCT_NAME" >}} uses frameworks to pull configurations from Git, S3, HTTP endpoints, and just about any other source.
* **Clustering support:** {{< param "PRODUCT_NAME" >}} has native clustering support.
@@ -37,10 +37,10 @@ Some of these features include:
* **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems.
[OpenTelemetry]: https://opentelemetry.io/ecosystem/distributions/
-[Prometheus]: https://prometheus.io
-[Loki]: https://github.com/grafana/loki
-[Grafana]: https://github.com/grafana/grafana
-[Tempo]: https://github.com/grafana/tempo
-[Mimir]: https://github.com/grafana/mimir
-[Pyroscope]: https://github.com/grafana/pyroscope
+[Prometheus]: https://prometheus.io/
+[Loki]: https://grafana.com/docs/loki/
+[Grafana]: https://grafana.com/docs/grafana/
+[Tempo]: https://grafana.com/docs/tempo/
+[Mimir]: https://grafana.com/docs/mimir/
+[Pyroscope]: https://grafana.com/docs/pyroscope/
[UI]: ./tasks/debug/#alloy-ui
diff --git a/docs/sources/_index.md.t b/docs/sources/_index.md.t
index 235196373d..0a702ff4a9 100644
--- a/docs/sources/_index.md.t
+++ b/docs/sources/_index.md.t
@@ -15,7 +15,7 @@ cascade:
{{< param "FULL_PRODUCT_NAME" >}} is a vendor-neutral distribution of the [OpenTelemetry][] (OTel) Collector.
{{< param "PRODUCT_NAME" >}} uniquely combines the very best OSS observability signals in the community.
It offers native pipelines for OTel, [Prometheus][], [Pyroscope][], [Loki][], and many other metrics, logs, traces, and profile tools.
-In additon, you can also use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
+In addition, you can use {{< param "PRODUCT_NAME" >}} pipelines to do other tasks such as configure alert rules in Loki and Mimir.
{{< param "PRODUCT_NAME" >}} is fully compatible with the OTel Collector, Prometheus Agent, and Promtail.
You can use {{< param "PRODUCT_NAME" >}} as an alternative to either of these solutions or combined into a hybrid system of multiple collectors and agents.
You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructure and you can pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
@@ -27,7 +27,7 @@ You can deploy {{< param "PRODUCT_NAME" >}} anywhere within your IT infrastructu
Some of these features include:
* **Custom components:** You can use {{< param "PRODUCT_NAME" >}} to create and share custom components.
- Custom components combine a pipeline of existing components into a single, easy-to-understand component that is just a few lines long.
+ Custom components combine a pipeline of existing components into a single, easy-to-understand component that's just a few lines long.
 You can use pre-built custom components from the community, ones packaged by Grafana, or create your own.
* **GitOps compatibility:** {{< param "PRODUCT_NAME" >}} uses frameworks to pull configurations from Git, S3, HTTP endpoints, and just about any other source.
* **Clustering support:** {{< param "PRODUCT_NAME" >}} has native clustering support.
@@ -37,10 +37,10 @@ Some of these features include:
* **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems.
[OpenTelemetry]: https://opentelemetry.io/ecosystem/distributions/
-[Prometheus]: https://prometheus.io
-[Loki]: https://github.com/grafana/loki
-[Grafana]: https://github.com/grafana/grafana
-[Tempo]: https://github.com/grafana/tempo
-[Mimir]: https://github.com/grafana/mimir
-[Pyroscope]: https://github.com/grafana/pyroscope
+[Prometheus]: https://prometheus.io/
+[Loki]: https://grafana.com/docs/loki/
+[Grafana]: https://grafana.com/docs/grafana/
+[Tempo]: https://grafana.com/docs/tempo/
+[Mimir]: https://grafana.com/docs/mimir/
+[Pyroscope]: https://grafana.com/docs/pyroscope/
[UI]: ./tasks/debug/#alloy-ui
diff --git a/docs/sources/concepts/clustering.md b/docs/sources/concepts/clustering.md
index a35e2eeefb..506e638035 100644
--- a/docs/sources/concepts/clustering.md
+++ b/docs/sources/concepts/clustering.md
@@ -8,11 +8,11 @@ weight: 500
# Clustering
-Clustering enables a fleet of {{< param "PRODUCT_NAME" >}}s to work together for workload distribution and high availability.
+Clustering enables a fleet of {{< param "PRODUCT_NAME" >}} deployments to work together for workload distribution and high availability.
It helps create horizontally scalable deployments with minimal resource and operational overhead.
To achieve this, {{< param "PRODUCT_NAME" >}} makes use of an eventually consistent model that assumes all participating
-{{< param "PRODUCT_NAME" >}}s are interchangeable and converge on using the same configuration file.
+{{< param "PRODUCT_NAME" >}} deployments are interchangeable and converge on using the same configuration file.
The behavior of a standalone, non-clustered {{< param "PRODUCT_NAME" >}} is the same as if it were a single-node cluster.
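As the next hunk notes, you configure clustering with `cluster` command-line flags on the `run` command. A minimal sketch of forming a two-node cluster on one machine follows; the listen addresses, ports, and storage paths are assumptions for illustration:

```shell
# Start the first instance. Cluster peers communicate over the HTTP server address.
alloy run --cluster.enabled=true --server.http.listen-addr=127.0.0.1:12345 \
  --storage.path=./data-1 config.alloy

# Start a second instance and point it at the first to join the cluster.
alloy run --cluster.enabled=true --server.http.listen-addr=127.0.0.1:12346 \
  --cluster.join-addresses=127.0.0.1:12345 --storage.path=./data-2 config.alloy
```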
@@ -24,7 +24,7 @@ You configure clustering by passing `cluster` command-line flags to the [run][] Target auto-distribution is the most basic use case of clustering. It allows scraping components running on all peers to distribute the scrape load between themselves. -Target auto-distribution requires that all {{< param "PRODUCT_NAME" >}} in the same cluster can reach the same service discovery APIs and scrape the same targets. +Target auto-distribution requires that all {{< param "PRODUCT_NAME" >}} deployments in the same cluster can reach the same service discovery APIs and scrape the same targets. You must explicitly enable target auto-distribution on components by defining a `clustering` block. @@ -41,7 +41,7 @@ prometheus.scrape "default" { A cluster state change is detected when a new node joins or an existing node leaves. All participating components locally recalculate target ownership and re-balance the number of targets they’re scraping without explicitly communicating ownership over the network. -Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_NAME" >}}s to distribute workload during peaks. +Target auto-distribution allows you to dynamically scale the number of {{< param "PRODUCT_NAME" >}} deployments to distribute workload during peaks. It also provides resiliency because targets are automatically picked up by one of the node peers if a node leaves. {{< param "PRODUCT_NAME" >}} uses a local consistent hashing algorithm to distribute targets, meaning that, on average, only ~1/N of the targets are redistributed. diff --git a/docs/sources/concepts/components.md b/docs/sources/concepts/components.md index 5874a59051..f6dafbd785 100644 --- a/docs/sources/concepts/components.md +++ b/docs/sources/concepts/components.md @@ -12,8 +12,8 @@ Each component handles a single task, such as retrieving secrets or collecting P Components are composed of the following: -* Arguments: Settings that configure a component. -* Exports: Named values that a component exposes to other components. +* **Arguments:** Settings that configure a component. +* **Exports:** Named values that a component exposes to other components. Each component has a name that describes what that component is responsible for. For example, the `local.file` component is responsible for retrieving the contents of files on disk. diff --git a/docs/sources/concepts/configuration-syntax/files.md b/docs/sources/concepts/configuration-syntax/files.md index 22ca0e40a3..b1bb302c63 100644 --- a/docs/sources/concepts/configuration-syntax/files.md +++ b/docs/sources/concepts/configuration-syntax/files.md @@ -7,7 +7,7 @@ weight: 100 # Files -{{< param "PRODUCT_NAME" >}} configuration files are plain text files with the `.alloy` file extension. +{{< param "PRODUCT_NAME" >}} configuration files are plain text files with a `.alloy` file extension. You can refer to each {{< param "PRODUCT_NAME" >}} file as a "configuration file" or an "{{< param "PRODUCT_NAME" >}} configuration." {{< param "PRODUCT_NAME" >}} configuration files must be UTF-8 encoded and can contain Unicode characters. 
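Tying back to the clustering hunk above, a sketch of a scrape component that opts in to target auto-distribution with the `clustering` block; the discovery source and remote_write sink names are assumptions:

```alloy
prometheus.scrape "default" {
  // Opt this component in to distributing its targets across cluster peers.
  clustering {
    enabled = true
  }

  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
```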
diff --git a/docs/sources/get-started/deploy.md b/docs/sources/get-started/deploy.md
index 681b0edd41..f7244d4ef5 100644
--- a/docs/sources/get-started/deploy.md
+++ b/docs/sources/get-started/deploy.md
@@ -39,7 +39,7 @@ To decide whether scaling is necessary, check metrics such as:
#### Stateful and stateless components
In the context of tracing, a "stateful component" is a component that needs to aggregate certain spans to work correctly.
-A "stateless {{< param "PRODUCT_NAME" >}}" is an {{< param "PRODUCT_NAME" >}} which doesn't contain stateful components.
+A "stateless {{< param "PRODUCT_NAME" >}}" is an {{< param "PRODUCT_NAME" >}} instance that doesn't contain stateful components.
Scaling stateful {{< param "PRODUCT_NAME" >}} instances is more difficult, because spans must be forwarded to a specific {{< param "PRODUCT_NAME" >}} instance according to a span property such as trace ID or a `service.name` attribute.
You can forward spans with `otelcol.exporter.loadbalancing`.
diff --git a/docs/sources/get-started/install/_index.md b/docs/sources/get-started/install/_index.md
index 673e8de2ee..0f605d4b95 100644
--- a/docs/sources/get-started/install/_index.md
+++ b/docs/sources/get-started/install/_index.md
@@ -28,4 +28,4 @@ Installing {{< param "PRODUCT_NAME" >}} on other operating systems is possible,
By default, {{< param "PRODUCT_NAME" >}} sends anonymous usage information to Grafana Labs.
Refer to [data collection][] for more information about what data is collected and how you can opt-out.
-[data collection]: "../../../data-collection/
+[data collection]: ../../../../data-collection/
diff --git a/docs/sources/get-started/install/kubernetes.md b/docs/sources/get-started/install/kubernetes.md
index 62dd655d09..b208b14612 100644
--- a/docs/sources/get-started/install/kubernetes.md
+++ b/docs/sources/get-started/install/kubernetes.md
@@ -69,7 +69,7 @@ You have successfully deployed {{< param "PRODUCT_NAME" >}} on Kubernetes, using
- [Configure {{< param "PRODUCT_NAME" >}}][Configure]
-- Refer to the [{{< param "PRODUCT_NAME" >}} Helm chart documentation on Artifact Hub][Artifact Hub] for more information about the Helm chart.
+
[Helm]: https://helm.sh
[Artifact Hub]: https://artifacthub.io/packages/helm/grafana/alloy
diff --git a/docs/sources/introduction/_index.md b/docs/sources/introduction/_index.md
index bb8ab09c86..5c21b5434e 100644
--- a/docs/sources/introduction/_index.md
+++ b/docs/sources/introduction/_index.md
@@ -18,7 +18,7 @@ It's fully compatible with the most popular open source observability standards
Some of the key features of {{< param "PRODUCT_NAME" >}} include:
* **Custom components:** You can use {{< param "PRODUCT_NAME" >}} to create and share custom components.
- Custom components combine a pipeline of existing components into a single, easy-to-understand component that is just a few lines long.
+ Custom components combine a pipeline of existing components into a single, easy-to-understand component that's just a few lines long.
 You can use pre-built custom components from the community, ones packaged by Grafana, or create your own.
* **Reusable components:** You can use the output of a component as the input for multiple other components.
* **Chained components:** You can chain components together to form a pipeline.
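Stepping back to the span-forwarding note in `deploy.md` above: a hedged sketch of `otelcol.exporter.loadbalancing` routing spans by trace ID to a fixed set of stateful instances. The hostnames are placeholders:

```alloy
otelcol.exporter.loadbalancing "default" {
  // Spans that share a trace ID are always routed to the same backend instance.
  routing_key = "traceID"

  resolver {
    static {
      hostnames = ["alloy-stateful-0:4317", "alloy-stateful-1:4317"]
    }
  }

  protocol {
    otlp {
      client {}
    }
  }
}
```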
@@ -30,6 +30,7 @@ Some of the key features of {{< param "PRODUCT_NAME" >}} include: * **Security:** {{< param "PRODUCT_NAME" >}} helps you manage authentication credentials and connect to HashiCorp Vaults or Kubernetes clusters to retrieve secrets. * **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems. + [hashmod sharding]: https://grafana.com/docs/agent/latest/static/operation-guide/ diff --git a/docs/sources/tasks/configure/_index.md b/docs/sources/tasks/configure/_index.md index 844aa85187..c4dc741b33 100644 --- a/docs/sources/tasks/configure/_index.md +++ b/docs/sources/tasks/configure/_index.md @@ -8,7 +8,7 @@ weight: 90 # Configure {{% param "FULL_PRODUCT_NAME" %}} -You can configure {{< param "PRODUCT_NAME" >}} after it is [installed][Install]. +You can configure {{< param "PRODUCT_NAME" >}} after it is [installed][]. The default configuration file for {{< param "PRODUCT_NAME" >}} is located at: * Linux: `/etc/alloy/config.alloy` @@ -19,4 +19,4 @@ This section includes information that helps you configure {{< param "PRODUCT_NA {{< section >}} -[Install]: ../../get-started/install/ +[installed]: ../../get-started/install/ diff --git a/docs/sources/tasks/configure/configure-windows.md b/docs/sources/tasks/configure/configure-windows.md index 39fc29bf6e..db6f883ab0 100644 --- a/docs/sources/tasks/configure/configure-windows.md +++ b/docs/sources/tasks/configure/configure-windows.md @@ -10,7 +10,7 @@ weight: 500 To configure {{< param "PRODUCT_NAME" >}} on Windows, perform the following steps: -1. Edit the default configuration file at `C:\Program Files\Grafana Alloy\config.alloy`. +1. Edit the default configuration file at `%PROGRAMFILES%\GrafanaLabs\Alloy\config.alloy`. 1. Restart the {{< param "PRODUCT_NAME" >}} service: @@ -30,8 +30,8 @@ By default, the {{< param "PRODUCT_NAME" >}} service will launch and pass the following arguments to the {{< param "PRODUCT_NAME" >}} binary: * `run` -* `C:\Program Files\Grafana Alloy\config.alloy` -* `--storage.path=C:\ProgramData\Grafana Alloy\data` +* `%PROGRAMFILES%\GrafanaLabs\Alloy\config.alloy` +* `--storage.path=%PROGRAMDATA%\GrafanaLabs\Alloy\data` To change the set of command-line arguments passed to the {{< param "PRODUCT_NAME" >}} binary, perform the following steps: @@ -41,7 +41,7 @@ To change the set of command-line arguments passed to the {{< param "PRODUCT_NAM 1. Type `regedit` and click **OK**. -1. Navigate to the key at the path `HKEY_LOCAL_MACHINE\SOFTWARE\Grafana\Grafana Alloy`. +1. Navigate to the key at the path `HKEY_LOCAL_MACHINE\SOFTWARE\GrafanaLabs\Alloy`. 1. Double-click on the value called **Arguments***. diff --git a/docs/sources/tasks/migrate/from-operator.md b/docs/sources/tasks/migrate/from-operator.md index f7d877a652..7abf65f2e3 100644 --- a/docs/sources/tasks/migrate/from-operator.md +++ b/docs/sources/tasks/migrate/from-operator.md @@ -8,24 +8,18 @@ weight: 320 # Migrate from Grafana Agent Operator to {{% param "FULL_PRODUCT_NAME" %}} -With the release of {{< param "PRODUCT_NAME" >}}, Grafana Agent Operator is no longer the recommended way to deploy {{< param "PRODUCT_NAME" >}} in Kubernetes. -Some of the Operator functionality has moved into {{< param "PRODUCT_NAME" >}} itself, and the Helm Chart has replaced the remaining functionality. +You can migrate from Grafana Agent Operator to {{< param "PRODUCT_NAME" >}}. 
- The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, and `PodLogs`) are all supported natively by {{< param "PRODUCT_NAME" >}}. - You are no longer required to use the Operator to consume those CRDs for dynamic monitoring in your cluster. - The parts of the Operator that deploy the {{< param "PRODUCT_NAME" >}} itself (`GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs) are deprecated. - Operator users should use the {{< param "PRODUCT_NAME" >}} [Helm Chart][] to deploy {{< param "PRODUCT_NAME" >}} directly to your clusters. - -This guide provides some steps to get started with {{< param "PRODUCT_NAME" >}} for users coming from Grafana Agent Operator. ## Deploy {{% param "PRODUCT_NAME" %}} with Helm -1. Create a `values.yaml` file, which contains options for deploying your {{< param "PRODUCT_NAME" >}}. +1. Create a `values.yaml` file, which contains options for deploying {{< param "PRODUCT_NAME" >}}. You can start with the [default values][] and customize as you see fit, or start with this snippet, which should be a good starting point for what the Operator does. ```yaml - agent: - mode: 'flow' + alloy: configMap: create: true clustering: @@ -42,7 +36,7 @@ This guide provides some steps to get started with {{< param "PRODUCT_NAME" >}} This is one of many deployment possible modes. For example, you may want to use a `DaemonSet` to collect host-level logs or metrics. See the {{< param "PRODUCT_NAME" >}} [deployment guide][] for more details about different topologies. -1. Create an {{< param "PRODUCT_NAME" >}} configuration file, `alloy.alloy`. +1. Create an {{< param "PRODUCT_NAME" >}} configuration file, `config.alloy`. In the next step, you add to this configuration as you convert `MetricsInstances`. You can add any additional configuration to this file as you need. 
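With the `values.yaml` and `config.alloy` files in place, installing the chart typically looks like the following sketch, assuming the `grafana/alloy` chart and a release named `alloy`:

```shell
# Add the Grafana Helm repository and install the chart with the custom values.
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install alloy grafana/alloy -f values.yaml
```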
@@ -136,8 +130,7 @@ Our current recommendation is to create an additional DaemonSet deployment of {{
These values are close to what the Operator currently deploys for logs:
```yaml
-agent:
-  mode: 'flow'
+alloy:
  configMap:
    create: true
  clustering:
@@ -281,7 +274,7 @@ The [reference documentation][component documentation] should help convert those
[default values]: https://github.com/grafana/alloy/blob/main/operations/helm/charts/alloy/values.yaml
[clustering]: ../../../concepts/clustering/
-[deployment guide]: ../../../get-started/deploy-alloy
+[deployment guide]: ../../../get-started/deploy/
[operator guide]: https://grafana.com/docs/agent/latest/operator/deploy-agent-operator-resources/#deploy-a-metricsinstance-resource
diff --git a/docs/sources/tasks/migrate/from-prometheus.md b/docs/sources/tasks/migrate/from-prometheus.md
index d3e6181c8b..54fe063b4b 100644
--- a/docs/sources/tasks/migrate/from-prometheus.md
+++ b/docs/sources/tasks/migrate/from-prometheus.md
@@ -207,6 +207,6 @@ The following list is specific to the convert command and not {{< param "PRODUCT
[convert]: ../../../reference/cli/convert/
[run]: ../../../reference/cli/run/
[run alloy]: ../../../get-started/run/
-[DebuggingUI]: ../../tasks/debug/
+[DebuggingUI]: ../../debug/
[{{< param "PRODUCT_NAME" >}} configuration]: ../../../concepts/config-language/
[UI]: ../../debug/#alloy-ui
diff --git a/docs/sources/tasks/migrate/from-promtail.md b/docs/sources/tasks/migrate/from-promtail.md
index 4046ccbd6c..ba13c1babe 100644
--- a/docs/sources/tasks/migrate/from-promtail.md
+++ b/docs/sources/tasks/migrate/from-promtail.md
@@ -126,7 +126,7 @@ scrape_configs:
      __path__: /var/log/*.log
```
-The convert command takes the YAML file as input and outputs a [{{< param "PRODUCT_NAME" >}} configuration][] file.
+The convert command takes the YAML file as input and outputs an [{{< param "PRODUCT_NAME" >}} configuration][configuration] file.
```shell
alloy convert --source-format=promtail --output=<OUTPUT_CONFIG_PATH> <INPUT_CONFIG_PATH>
```
@@ -189,5 +189,5 @@ The following list is specific to the convert command and not {{< param "PRODUCT
[run]: ../../../reference/cli/run/
[run alloy]: ../../../get-started/run/
[DebuggingUI]: ../../../tasks/debug/
-[{{< param "PRODUCT_NAME" >}} configuration]: ../../../concepts/config-language/
-[UI]: ../../tasks/debug/#alloy-ui
+[configuration]: ../../../concepts/configuration-syntax/
+[UI]: ../../debug/#alloy-ui
diff --git a/docs/sources/tasks/migrate/from-static.md b/docs/sources/tasks/migrate/from-static.md
index 50d6cfa7ee..92029786c7 100644
--- a/docs/sources/tasks/migrate/from-static.md
+++ b/docs/sources/tasks/migrate/from-static.md
@@ -102,7 +102,7 @@ Your configuration file must be a valid Grafana Agent Static configuration file.
1. You can follow the convert CLI command [debugging][] instructions to generate a diagnostic report.
-1. Refer to the {{< param "PRODUCT_NAME" >}} [debugging UI][DebuggingUI] for more information about running {{< param "PRODUCT_NAME" >}}.
+1. Refer to the {{< param "PRODUCT_NAME" >}} [debugging UI][UI] for more information about running {{< param "PRODUCT_NAME" >}}.
1. If your Grafana Agent Static configuration can't be converted and loaded directly into {{< param "PRODUCT_NAME" >}}, diagnostic information is sent to `stderr`.
You can use the `--config.bypass-conversion-errors` flag with `--config.format=static` to bypass any non-critical issues and start {{< param "PRODUCT_NAME" >}}.
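For illustration, a sketch of that invocation, with the input configuration path left as a placeholder:

```shell
# Run Alloy directly against a Static configuration, skipping non-critical
# conversion errors instead of exiting.
alloy run --config.format=static --config.bypass-conversion-errors <INPUT_CONFIG_PATH>
```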
@@ -172,7 +172,7 @@ logs:
    - url: https://USER_ID:API_KEY@logs-prod3.grafana.net/loki/api/v1/push
```
-The convert command takes the YAML file as input and outputs a [{{< param "PRODUCT_NAME" >}} configuration][] file.
+The convert command takes the YAML file as input and outputs an [{{< param "PRODUCT_NAME" >}} configuration][configuration] file.
```shell
alloy convert --source-format=static --output=<OUTPUT_CONFIG_PATH> <INPUT_CONFIG_PATH>
```
@@ -316,7 +316,7 @@ The following list is specific to the convert command and not {{< param "PRODUCT
[run]: ../../../reference/cli/run/
[run alloy]: ../../../get-started/run/
[DebuggingUI]: ../../debug/
-[{{< param "PRODUCT_NAME" >}} configuration]: ../../../concepts/config-language/
+[configuration]: ../../../concepts/configuration-syntax/
[Integrations next]: https://grafana.com/docs/agent/latest/static/configuration/integrations/integrations-next/
@@ -330,5 +330,4 @@ The following list is specific to the convert command and not {{< param "PRODUCT
[Metrics]: https://grafana.com/docs/agent/latest/static/configuration/metrics-config/
[Logs]: https://grafana.com/docs/agent/latest/static/configuration/logs-config/
-
-[UI]: ../../debug/#grafana-agent-flow-ui
+[UI]: ../../debug/#alloy-ui
diff --git a/docs/sources/tutorials/_index.md b/docs/sources/tutorials/_index.md
index 91fcbcffe0..13d057749e 100644
--- a/docs/sources/tutorials/_index.md
+++ b/docs/sources/tutorials/_index.md
@@ -1,5 +1,5 @@
---
-canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/
+canonical: https://grafana.com/docs/alloy/latest/tutorials/
description: Learn how to use Grafana Alloy
title: Tutorials
weight: 100
diff --git a/docs/sources/tutorials/first-components-and-stdlib/index.md b/docs/sources/tutorials/first-components-and-stdlib/index.md
index 0575de5f63..add9a90c90 100644
--- a/docs/sources/tutorials/first-components-and-stdlib/index.md
+++ b/docs/sources/tutorials/first-components-and-stdlib/index.md
@@ -1,5 +1,5 @@
---
-canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/first-components-and-stdlib/
+canonical: https://grafana.com/docs/alloy/latest/tutorials/first-components-and-stdlib/
description: Learn about the basics of the {{< param "PRODUCT_NAME" >}} configuration syntax
title: First components and introducing the standard library
weight: 20
@@ -14,10 +14,9 @@ It introduces a basic pipeline that collects metrics from the host and sends the
**Recommended reading**
-- [Configuration language][]
-- [Configuration language concepts][]
+- [{{< param "PRODUCT_NAME" >}} configuration syntax][Configuration syntax]
-The [{{< param "PRODUCT_NAME" >}} configuration syntax][] is an HCL-inspired configuration language used to configure {{< param "PRODUCT_NAME" >}}.
+The [{{< param "PRODUCT_NAME" >}} configuration syntax][Configuration syntax] is an HCL-inspired configuration language used to configure {{< param "PRODUCT_NAME" >}}.
An {{< param "PRODUCT_NAME" >}} configuration file is comprised of three things:
1. **Attributes**
@@ -93,7 +92,7 @@ prometheus.remote_write "local_prom" {
A list of all available components can be found in the [Component reference][].
Each component has a link to its documentation, which contains a description of what the component does, its arguments, its exports, and examples.
-[Component reference]: ../../../reference/components/
+[Component reference]: ../../reference/components/
{{< /admonition >}}
This pipeline has two components: `local.file` and `prometheus.remote_write`.
@@ -106,7 +105,7 @@ The `basic_auth` block contains the `username` and `password` attributes, which The `content` export is referenced by using the syntax `local.file.example.content`, where `local.file.example` is the fully qualified name of the component (the component's type + its label) and `content` is the name of the export.
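Reconstructed as a sketch, the pipeline described above looks roughly like this; the file path, URL, and username are placeholders:

```alloy
// local.file exposes the file's contents through its `content` export.
local.file "example" {
  filename  = "/etc/secrets/password.txt"
  is_secret = true
}

prometheus.remote_write "local_prom" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"

    // The export is referenced as <component type>.<label>.<export name>.
    basic_auth {
      username = "admin"
      password = local.file.example.content
    }
  }
}
```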

-Flow of example pipeline with local.file and prometheus.remote_write components
+Example pipeline with local.file and prometheus.remote_write components

{{< admonition type="note" >}} @@ -159,7 +158,7 @@ After ~15-20 seconds, you should be able to see the metrics from the `prometheus Try querying for `node_memory_Active_bytes` to see the active memory of your host.

-Screenshot of node_memory_Active_bytes query in Grafana
+Screenshot of node_memory_Active_bytes query in Grafana

## Visualizing the relationship between components @@ -167,7 +166,7 @@ Try querying for `node_memory_Active_bytes` to see the active memory of your hos The following diagram is an example pipeline:

-Flow of example pipeline with a prometheus.scrape, prometheus.exporter.unix, and prometheus.remote_write components
+Example pipeline with a prometheus.scrape, prometheus.exporter.unix, and prometheus.remote_write components

The preceding configuration defines three components: @@ -201,13 +200,13 @@ You can refer to the [prometheus.exporter.redis][] component documentation for m To give a visual hint, you want to create a pipeline that looks like this:

-Flow of exercise pipeline, with a scrape, unix_exporter, redis_exporter, and remote_write component
+Exercise pipeline, with a scrape, unix_exporter, redis_exporter, and remote_write component

{{< admonition type="note" >}} You may find the [concat][] standard library function useful. -[concat]: ../../../reference/stdlib/concat/ +[concat]: ../../reference/stdlib/concat/ {{< /admonition >}} You can run {{< param "PRODUCT_NAME" >}} with the new configuration file by running: @@ -279,17 +278,15 @@ Generally, you can use a persistent directory for this, as some components may u In the next tutorial, you will look at how to configure {{< param "PRODUCT_NAME" >}} to collect logs from a file and send them to Loki. You will also look at using different components to process metrics and logs before sending them. -[Configuration language]: ../../../concepts/config-language/ -[Configuration language concepts]: ../../../concepts/configuration_language/ -[Standard library documentation]: ../../../reference/stdlib/ +[Configuration syntax]: ../../concepts/configuration-syntax/ +[Standard library documentation]: ../../reference/stdlib/ [node_exporter]: https://github.com/prometheus/node_exporter -[{{< param "PRODUCT_NAME" >}} configuration syntax]: https://github.com/grafana/river -[prometheus.exporter.redis]: ../../../reference/components/prometheus.exporter.redis/ +[prometheus.exporter.redis]: ../../reference/components/prometheus.exporter.redis/ [http://localhost:3000/explore]: http://localhost:3000/explore -[prometheus.exporter.unix]: ../../../reference/components/prometheus.exporter.unix/ -[prometheus.scrape]: ../../../reference/components/prometheus.scrape/ -[prometheus.remote_write]: ../../../reference/components/prometheus.remote_write/ -[Components]: ../../../concepts/components/ -[Component controller]: ../../../concepts/component_controller/ -[Components configuration language]: ../../../concepts/config-language/components/ -[env]: ../../../reference/stdlib/env/ +[prometheus.exporter.unix]: ../../reference/components/prometheus.exporter.unix/ +[prometheus.scrape]: ../../reference/components/prometheus.scrape/ +[prometheus.remote_write]: ../../reference/components/prometheus.remote_write/ +[Components]: ../../concepts/components/ +[Component controller]: ../../concepts/component_controller/ +[Components configuration language]: ../../concepts/configuration-syntax/components/ +[env]: ../../reference/stdlib/env/ diff --git a/docs/sources/tutorials/get-started.md b/docs/sources/tutorials/get-started.md index 896d0c1e9d..8184105c32 100644 --- a/docs/sources/tutorials/get-started.md +++ b/docs/sources/tutorials/get-started.md @@ -1,6 +1,6 @@ --- -canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/get-started/ -description: Getting started with Flow-by-Example Tutorials +canonical: https://grafana.com/docs/alloy/latest/tutorials/get-started/ +description: Getting started with the tutorials title: Get started weight: 10 --- @@ -11,7 +11,7 @@ This set of tutorials contains a collection of examples that build on each other ## What is {{% param "PRODUCT_NAME" %}}? -{{< param "PRODUCT_NAME" >}} uses a declarative configuration language that allows you to define a pipeline of telemetry collection, processing, and output. It is built on top of the [{{< param "PRODUCT_NAME" >}}][river] configuration language, which is designed to be fast, simple, and debuggable. +{{< param "PRODUCT_NAME" >}} uses a declarative configuration language that allows you to define a pipeline of telemetry collection, processing, and output. It is built on top of the [{{< param "PRODUCT_NAME" >}} configuration syntax][configuration], which is designed to be fast, simple, and easy to debug. 
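As a taste of that syntax, and a sketch of the exercise pipeline from the hunk above, combining the targets of two exporters with `concat`; the Redis address and remote_write URL are assumptions:

```alloy
prometheus.exporter.unix "localhost" { }

prometheus.exporter.redis "local_redis" {
  redis_addr = "localhost:6379"
}

prometheus.scrape "default" {
  // concat joins both exporters' target lists into a single list to scrape.
  targets = concat(prometheus.exporter.unix.localhost.targets, prometheus.exporter.redis.local_redis.targets)
  forward_to = [prometheus.remote_write.local_prom.receiver]
}

prometheus.remote_write "local_prom" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"
  }
}
```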
## What do I need to get started? @@ -82,5 +82,5 @@ The tutorials are designed to be followed in order and generally build on each o The Recommended Reading sections in each tutorial provide a list of documentation topics. To help you understand the concepts used in the example, read the recommended topics in the order given. [alloy]: https://grafana.com/docs/alloy/latest/ -[river]: https://github.com/grafana/river -[install]: ../../../get-started/install/binary/#install-alloy-as-a-standalone-binary +[configuration]: ../../concepts/configuration-syntax/ +[install]: ../../get-started/install/binary/#install-alloy-as-a-standalone-binary diff --git a/docs/sources/tutorials/logs-and-relabeling-basics/index.md b/docs/sources/tutorials/logs-and-relabeling-basics/index.md index 95e77a6137..181f87fced 100644 --- a/docs/sources/tutorials/logs-and-relabeling-basics/index.md +++ b/docs/sources/tutorials/logs-and-relabeling-basics/index.md @@ -1,5 +1,5 @@ --- -canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/logs-and-relabeling-basics/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/logs-and-relabeling-basics/ description: Learn how to relabel metrics and collect logs title: Logs and relabeling basics weight: 30 @@ -77,7 +77,7 @@ There is an issue commonly faced when relabeling and using labels that start wit These labels are considered internal and are dropped before relabeling rules from a `prometheus.relabel` component are applied. If you would like to keep or act on these kinds of labels, use a [discovery.relabel][] component. -[discovery.relabel]: ../../../reference/components/discovery.relabel/ +[discovery.relabel]: ../../reference/components/discovery.relabel/ {{< /admonition >}} ## Send logs to Loki @@ -91,14 +91,14 @@ If you would like to keep or act on these kinds of labels, use a [discovery.rela Now that you're comfortable creating components and chaining them together, let's collect some logs and send them to Loki. We will use the `local.file_match` component to perform file discovery, the `loki.source.file` to collect the logs, and the `loki.write` component to send the logs to Loki. -Before doing this, we need to ensure we have a log file to scrape. We will use the `echo` command to create a file with some log content. +Before doing this, make sure you have a log file to scrape. You can use the `echo` command to create a file with some log content. ```bash mkdir -p /tmp/flow-logs echo "This is a log line" > /tmp/flow-logs/log.log ``` -Now that we have a log file, let's create a pipeline to scrape it. +Now that you have a log file, you can create a pipeline to scrape it. ```alloy local.file_match "tmplogs" { @@ -169,8 +169,8 @@ loki.write "local_loki" { {{< admonition type="note" >}} You can use the [loki.relabel][] component to relabel and add labels, just like you can with the [prometheus.relabel][] component. -[loki.relabel]: ../../../reference/components/loki.relabel -[prometheus.relabel]: ../../../reference/components/prometheus.relabel +[loki.relabel]: ../../reference/components/loki.relabel +[prometheus.relabel]: ../../reference/components/prometheus.relabel {{< /admonition >}} Once you have your completed configuration, run {{< param "PRODUCT_NAME" >}} and execute the following: @@ -311,12 +311,12 @@ You have learned the concepts of components, attributes, and expressions. You ha In the next tutorial, you will learn more about how to use the `loki.process` component to extract values from logs and use them. 
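For reference, a sketch of the full logs pipeline assembled in this tutorial — discovery, collection, and write; the `loki.source.file` label and the Loki URL are assumptions:

```alloy
// Discover log files matching the glob pattern.
local.file_match "tmplogs" {
  path_targets = [{"__path__" = "/tmp/flow-logs/*.log"}]
}

// Tail the discovered files and forward log lines downstream.
loki.source.file "local_files" {
  targets    = local.file_match.tmplogs.targets
  forward_to = [loki.write.local_loki.receiver]
}

// Send the logs to a local Loki instance.
loki.write "local_loki" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push"
  }
}
```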
[First components and introducing the standard library]: ../first-components-and-stdlib/ -[prometheus.relabel]: ../../../reference/components/prometheus.relabel/ -[constants]: ../../../reference/stdlib/constants/ +[prometheus.relabel]: ../../reference/components/prometheus.relabel/ +[constants]: ../../reference/stdlib/constants/ [localhost:3000/explore]: http://localhost:3000/explore -[prometheus.relabel rule-block]: ../../../reference/components/prometheus.relabel/#rule-block -[local.file_match]: ../../../reference/components/local.file_match/ -[loki.source.file]: ../../../reference/components/loki.source.file/ -[loki.write]: ../../../reference/components/loki.write/ -[loki.relabel]: ../../../reference/components/loki.relabel/ -[loki.process]: ../../../reference/components/loki.process/ +[prometheus.relabel rule-block]: ../../reference/components/prometheus.relabel/#rule-block +[local.file_match]: ../../reference/components/local.file_match/ +[loki.source.file]: ../../reference/components/loki.source.file/ +[loki.write]: ../../reference/components/loki.write/ +[loki.relabel]: ../../reference/components/loki.relabel/ +[loki.process]: ../../reference/components/loki.process/ diff --git a/docs/sources/tutorials/processing-logs/index.md b/docs/sources/tutorials/processing-logs/index.md index 162b3e7381..1999df3712 100644 --- a/docs/sources/tutorials/processing-logs/index.md +++ b/docs/sources/tutorials/processing-logs/index.md @@ -1,5 +1,5 @@ --- -canonical: https://grafana.com/docs/alloy/latest/tutorials/flow-by-example/processing-logs/ +canonical: https://grafana.com/docs/alloy/latest/tutorials/processing-logs/ description: Learn how to process logs title: Processing Logs weight: 40 @@ -127,7 +127,7 @@ loki.write "local_loki" { ``` You can skip to the next section if you successfully completed the previous section's exercises. -If not, or if you were unsure how things worked, let's break down what is happening in the `loki.process` component. +If not, or if you were unsure how things worked, let's break down what's happening in the `loki.process` component. Many of the `stage.*` blocks in `loki.process` act on reading or writing a shared map of values extracted from the logs. You can think of this extracted map as a hashmap or table that each stage has access to, and it is referred to as the "extracted map" from here on. @@ -210,7 +210,7 @@ stage.timestamp { This stage acts on the `ts` value in the map you extracted in the previous stage. The value of `ts` is parsed in the format of `RFC3339` and added as the timestamp to be ingested by Loki. This is useful if you want to use the timestamp present in the log itself, rather than the time the log is ingested. -This stage does not modify the extracted map. +This stage doesn't modify the extracted map. ### Stage 3 @@ -288,7 +288,7 @@ stage.drop { ``` This stage acts on the `is_secret` value in the extracted map, which is a value that you extracted in the previous stage. -This stage drops the log line if the value of `is_secret` is `"true"` and does not modify the extracted map. +This stage drops the log line if the value of `is_secret` is `"true"` and doesn't modify the extracted map. There are many other ways to filter logs, but this is a simple example. Refer to the [loki.process#stage.drop][] documentation for more information. @@ -322,7 +322,7 @@ This stage doesn't modify the extracted map. ## Putting it all together Now that you have all of the pieces, let's run {{< param "PRODUCT_NAME" >}} and send some logs to it. 
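The stages discussed above combine into a single `loki.process` component. A sketch follows, with field names taken from this tutorial's examples and an assumed write target:

```alloy
loki.process "process_logs" {
  // Parse the log line as JSON and copy fields into the shared extracted map.
  stage.json {
    expressions = {ts = "timestamp", level = "level", is_secret = "is_secret"}
  }

  // Use the log's own timestamp rather than the ingestion time.
  stage.timestamp {
    source = "ts"
    format = "RFC3339"
  }

  // Drop any line that was flagged as secret.
  stage.drop {
    source = "is_secret"
    value  = "true"
  }

  // Promote the extracted level to a Loki label.
  stage.labels {
    values = {level = "level"}
  }

  forward_to = [loki.write.local_loki.receiver]
}
```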
-Modify `config.alloy` with the config from the previous example and start {{< param "PRODUCT_NAME" >}} with: +Modify `config.alloy` with the configuration from the previous example and start {{< param "PRODUCT_NAME" >}} with: ```bash /path/to/alloy run config.alloy @@ -347,7 +347,7 @@ Try querying for `{source="demo-api"}` and see if you can find the logs you sent Try playing around with the values of `"level"`, `"message"`, `"timestamp"`, and `"is_secret"` and see how the logs change. You can also try adding more stages to the `loki.process` component to extract more values from the logs, or add more labels. -![Example Loki Logs](/media/docs/agent/screenshot-flow-by-example-processed-log-lines.png) +![Example Loki Logs](/media/docs/alloy/screenshot-processed-log-lines.png) ## Exercise @@ -403,11 +403,11 @@ loki.write "local_loki" { {{< /collapse >}} -[loki.source.api]: ../../../reference/components/loki.source.api/ -[loki.process#stage.drop]: ../../../reference/components/loki.process/#stagedrop-block -[loki.process#stage.json]: ../../../reference/components/loki.process/#stagejson-block -[loki.process#stage.labels]: ../../../reference/components/loki.process/#stagelabels-block +[loki.source.api]: ../../reference/components/loki.source.api/ +[loki.process#stage.drop]: ../../reference/components/loki.process/#stagedrop-block +[loki.process#stage.json]: ../../reference/components/loki.process/#stagejson-block +[loki.process#stage.labels]: ../../reference/components/loki.process/#stagelabels-block [localhost:3000/explore]: http://localhost:3000/explore -[discovery.docker]: ../../../reference/components/discovery.docker/ -[loki.source.docker]: ../../../reference/components/loki.source.docker/ -[discovery.relabel]: ../../../reference/components/discovery.relabel/ +[discovery.docker]: ../../reference/components/discovery.docker/ +[loki.source.docker]: ../../reference/components/loki.source.docker/ +[discovery.relabel]: ../../reference/components/discovery.relabel/
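To experiment with those fields, a hedged example of pushing a test line to `loki.source.api`, assuming it listens on `127.0.0.1:9999` and exposes the Loki push API's raw endpoint:

```shell
# Send one JSON log line; the loki.process stages then parse, timestamp,
# filter, and label it before it reaches Loki.
curl -X POST -H "Content-Type: application/json" \
  -d '{"timestamp": "2024-01-01T12:00:00Z", "level": "info", "message": "hello world", "is_secret": "false"}' \
  http://127.0.0.1:9999/loki/api/v1/raw
```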