Some minor tweaks and updates to remove flow and agent refs
clayton-cornell committed Mar 1, 2024
1 parent cffb19d commit 49e1022
Showing 8 changed files with 22 additions and 44 deletions.
2 changes: 1 addition & 1 deletion docs/sources/about.md
@@ -72,7 +72,7 @@ prometheus.remote_write "default" {

## {{% param "PRODUCT_NAME" %}} configuration generator

The {{< param "PRODUCT_NAME" >}} [configuration generator][] helps you get a head start on creating flow code.
The {{< param "PRODUCT_NAME" >}} [configuration generator][] helps you get a head start on creating {{< param "PRODUCT_NAME" >}} configurations.

{{< admonition type="note" >}}
This feature is experimental, and it doesn't support all River components.
2 changes: 1 addition & 1 deletion docs/sources/reference/cli/convert.md
@@ -70,7 +70,7 @@ Using the `--source-format=promtail` will convert the source configuration from

Nearly all [Promtail features][] are supported and can be converted to {{< param "PRODUCT_NAME" >}} configuration.

If you have unsupported features in a source configuration, you will receive [errors][] when you convert to a flow configuration.
If you have unsupported features in a source configuration, you will receive [errors][] when you convert to a {{< param "PRODUCT_NAME" >}} configuration.
The converter will also raise warnings for configuration options that may require your attention.

Refer to [Migrate from Promtail to {{< param "PRODUCT_NAME" >}}][migrate promtail] for a detailed migration guide.
5 changes: 0 additions & 5 deletions docs/sources/reference/components/discovery.lightsail.md
@@ -1,9 +1,4 @@
---
aliases:
- /docs/grafana-cloud/agent/flow/reference/components/discovery.lightsail/
- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/discovery.lightsail/
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/discovery.lightsail/
- /docs/grafana-cloud/send-data/agent/flow/reference/components/discovery.lightsail/
canonical: https://grafana.com/docs/alloy/latest/reference/components/discovery.lightsail/
description: Learn about discovery.lightsail
title: discovery.lightsail
5 changes: 0 additions & 5 deletions docs/sources/reference/components/loki.process.md
@@ -1,9 +1,4 @@
---
aliases:
- /docs/grafana-cloud/agent/flow/reference/components/loki.process/
- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/loki.process/
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/loki.process/
- /docs/grafana-cloud/send-data/agent/flow/reference/components/loki.process/
canonical: https://grafana.com/docs/alloy/latest/reference/components/loki.process/
description: Learn about loki.process
title: loki.process
5 changes: 0 additions & 5 deletions docs/sources/reference/components/otelcol.processor.tail_sampling.md
@@ -1,9 +1,4 @@
---
aliases:
- /docs/grafana-cloud/agent/flow/reference/components/otelcol.processor.tail_sampling/
- /docs/grafana-cloud/monitor-infrastructure/agent/flow/reference/components/otelcol.processor.tail_sampling/
- /docs/grafana-cloud/monitor-infrastructure/integrations/agent/flow/reference/components/otelcol.processor.tail_sampling/
- /docs/grafana-cloud/send-data/agent/flow/reference/components/otelcol.processor.tail_sampling/
canonical: https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.tail_sampling/
description: Learn about otelcol.processor.tail_sampling
labels:
23 changes: 8 additions & 15 deletions docs/sources/reference/components/prometheus.remote_write.md
@@ -433,8 +433,7 @@ retention directly to the data age itself, as the truncation logic works on
_segments_, not the samples themselves. This makes data retention less
predictable when the component receives an inconsistent rate of data.

The [WAL block][] in Flow mode, or the [metrics config][] in Static mode
contain some configurable parameters that can be used to control the tradeoff
The [WAL block][] contains some configurable parameters that can be used to control the tradeoff
between memory usage, disk usage, and data retention.
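
As a rough illustration of that tradeoff, here is a minimal sketch of how these WAL parameters might be tuned inside a `prometheus.remote_write` block. The endpoint URL is a placeholder and the values are examples, not recommendations:

```river
prometheus.remote_write "default" {
  endpoint {
    // Placeholder endpoint; replace with your remote write URL.
    url = "https://prometheus.example.com/api/v1/write"
  }

  wal {
    // How often to check the WAL for segments that can be truncated.
    truncate_frequency = "2h"
    // Samples newer than this are always kept when truncating.
    min_keepalive_time = "5m"
    // Samples older than this are always removed when truncating.
    max_keepalive_time = "8h"
  }
}
```

Shorter truncation intervals and keepalive windows reduce disk and memory usage, at the cost of keeping less data available to resend after an outage.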

The `truncate_frequency` or `wal_truncate_frequency` parameter configures the
@@ -496,18 +495,14 @@ To delete the corrupted WAL:
1. Find and delete the contents of the `wal` directory.

By default the `wal` directory is a subdirectory
of the `data-agent` directory located in the Grafana Agent working directory. The WAL data directory
may be different than the default depending on the [wal_directory][] setting in your Static configuration
file or the path specified by the Flow [command line flag][run] `--storage-path`.
of the `data-agent` directory located in the {{< param "PRODUCT_NAME" >}} working directory. The WAL data directory
may be different than the default depending on the path specified by the [command line flag][run] `--storage-path`.

{{< admonition type="note" >}}
There is one `wal` directory per:

* Metrics instance running in Static mode
* `prometheus.remote_write` component running in Flow mode
There is one `wal` directory per `prometheus.remote_write` component.
{{< /admonition >}}

1. [Start][Stop] Grafana Agent and verify that the WAL is working correctly.
1. [Start][Stop] {{< param "PRODUCT_NAME" >}} and verify that the WAL is working correctly.

<!-- START GENERATED COMPATIBLE COMPONENTS -->

@@ -525,8 +520,6 @@ Refer to the linked documentation for more details.
<!-- END GENERATED COMPATIBLE COMPONENTS -->

[snappy]: https://en.wikipedia.org/wiki/Snappy_(compression)
[WAL block]: /docs/agent/<ALLOY_VERSION>/flow/reference/components/prometheus.remote_write#wal-block
[metrics config]: /docs/agent/<ALLOY_VERSION>/static/configuration/metrics-config
[Stop]: /docs/agent/<ALLOY_VERSION>/flow/get-started/start-agent
[wal_directory]: /docs/agent/<ALLOY_VERSION>/static/configuration/metrics-config
[run]: /docs/agent/<ALLOY_VERSION>/flow/reference/cli/run
[WAL block]: #wal-block
[Stop]: ../../../get-started/start-agent/
[run]: ../../../reference/cli/run/
22 changes: 11 additions & 11 deletions docs/sources/shared/deploy-alloy.md
@@ -15,12 +15,12 @@ This page lists common topologies used for deployments of {{% param "PRODUCT_NAM
## As a centralized collection service

Deploying {{< param "PRODUCT_NAME" >}} as a centralized service is recommended for collecting application telemetry.
This topology allows you to use a smaller number of agents to coordinate service discovery, collection, and remote writing.
This topology allows you to use a smaller number of collectors to coordinate service discovery, collection, and remote writing.

![centralized-collection](/media/docs/agent/agent-topologies/centralized-collection.png)
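
As a sketch of what such a centralized collector might run, the following pipeline wires service discovery, scraping, and remote writing together. The component labels, Kubernetes role, and endpoint URL are placeholder assumptions, not values taken from this commit:

```river
// Discover pods in the cluster to use as scrape targets.
discovery.kubernetes "pods" {
  role = "pod"
}

// Scrape the discovered targets and forward the samples.
prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

// Ship everything to a central metrics backend.
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}
```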

Using this topology requires deploying the Agent on separate infrastructure, and making sure that agents can discover and reach these applications over the network.
The main predictor for the size of the agent is the number of active metrics series it is scraping; a rule of thumb is approximately 10 KB of memory for each series.
Using this topology requires deploying {{< param "PRODUCT_NAME" >}} on separate infrastructure, and making sure that they can discover and reach these applications over the network.
The main predictor for the size of {{< param "PRODUCT_NAME" >}} is the number of active metrics series it's scraping. A rule of thumb is approximately 10 KB of memory for each series.
We recommend you start looking towards horizontal scaling around the 1 million active series mark.
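For example, by that rule of thumb a single collector handling 1 million active series needs on the order of 10 GB of memory for the series alone (1,000,000 × 10 KB), which is roughly where scaling out becomes preferable to scaling up.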

### Using Kubernetes StatefulSets
@@ -57,7 +57,7 @@ Deploying one {{< param "PRODUCT_NAME" >}} per machine is required for collectin
Each {{< param "PRODUCT_NAME" >}} requires you to open an outgoing connection for each remote endpoint it’s shipping data to.
This can lead to NAT port exhaustion on the egress infrastructure.
Each egress IP can support up to (65535 - 1024 = 64511) outgoing connections on different ports.
So, if all {{< param "PRODUCT_NAME" >}}s are shipping metrics and log data, an egress IP can support up to 32,255 agents.
So, if all {{< param "PRODUCT_NAME" >}}s are shipping metrics and log data, an egress IP can support up to 32,255 collectors.

### Using Kubernetes DaemonSets

@@ -66,13 +66,13 @@ The simplest use case of the host daemon topology is a Kubernetes DaemonSet, and
### Pros

* Doesn’t require running on separate infrastructure
* Typically leads to smaller-sized agents
* Typically leads to smaller-sized collectors
* Lower network latency to instrumented applications

### Cons

* Requires planning a process for provisioning Grafana Agent on new machines, as well as keeping configuration up to date to avoid configuration drift
* Not possible to scale agents independently when using Kubernetes DaemonSets
* Requires planning a process for provisioning {{< param "PRODUCT_NAME" >}} on new machines, as well as keeping configuration up to date to avoid configuration drift
* Not possible to scale independently when using Kubernetes DaemonSets
* Scaling the topology can strain external APIs (like service discovery) and network infrastructure (like firewalls, proxy servers, and egress points)

### Use for
@@ -81,19 +81,19 @@ The simplest use case of the host daemon topology is a Kubernetes DaemonSet, and

### Don’t use for

* Scenarios where Grafana Agent grows so large it can become a noisy neighbor
* Scenarios where {{< param "PRODUCT_NAME" >}} grows so large it can become a noisy neighbor
* Collecting an unpredictable amount of telemetry

## As a container sidecar

Deploying {{< param "PRODUCT_NAME" >}} as a container sidecar is only recommended for short-lived applications or specialized agent deployments.
Deploying {{< param "PRODUCT_NAME" >}} as a container sidecar is only recommended for short-lived applications or specialized {{< param "PRODUCT_NAME" >}} deployments.

![daemonset](/media/docs/agent/agent-topologies/sidecar.png)

### Using Kubernetes Pod sidecars

In a Kubernetes environment, the sidecar model consists of deploying {{< param "PRODUCT_NAME" >}} as an extra container on the Pod.
The Pod’s controller, network configuration, enabled capabilities, and available resources are shared between the actual application and the sidecar agent.
The Pod’s controller, network configuration, enabled capabilities, and available resources are shared between the actual application and the sidecar {{< param "PRODUCT_NAME" >}}.

### Pros

@@ -115,7 +115,7 @@ The Pod’s controller, network configuration, enabled capabilities, and availab
### Don’t use for

* Long-lived applications
* Scenarios where the agent size grows so large it can become a noisy neighbor
* Scenarios where the {{< param "PRODUCT_NAME" >}} size grows so large it can become a noisy neighbor

<!-- ToDo: Check URL path -->
[hashmod sharding]: https://grafana.com/docs/agent/latest/static/operation-guide/
2 changes: 1 addition & 1 deletion docs/sources/tasks/migrate/from-operator.md
@@ -11,7 +11,7 @@ weight: 320
With the release of {{< param "PRODUCT_NAME" >}}, Grafana Agent Operator is no longer the recommended way to deploy {{< param "PRODUCT_ROOT_NAME" >}} in Kubernetes.
Some of the Operator functionality has moved into {{< param "PRODUCT_NAME" >}} itself, and the Helm Chart has replaced the remaining functionality.

- The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, and `LogsInstance`) are all supported natively by {{< param "PRODUCT_NAME" >}}.
- The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, and `PodLogs`) are all supported natively by {{< param "PRODUCT_NAME" >}}.
You are no longer required to use the Operator to consume those CRDs for dynamic monitoring in your cluster.
- The parts of the Operator that deploy the {{< param "PRODUCT_ROOT_NAME" >}} itself (`GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs) are deprecated.
Operator users should use the {{< param "PRODUCT_ROOT_NAME" >}} [Helm Chart][] to deploy {{< param "PRODUCT_ROOT_NAME" >}} directly to your clusters.
