diff --git a/_aggregations/bucket/terms.md b/_aggregations/bucket/terms.md index b36214e3f6..b4479d97af 100644 --- a/_aggregations/bucket/terms.md +++ b/_aggregations/bucket/terms.md @@ -112,7 +112,7 @@ While the `doc_count` field provides a representation of the number of individua * The field does not support nested arrays; only positive integers can be used. * If a document does not contain the `_doc_count` field, aggregation uses the document to increase the count by 1. -OpenSearch features that rely on an accurate document count illustrate the importance of using the `_doc_count` field. To see how this field can be used to support other search tools, refer to [Index rollups](https://opensearch.org/docs/latest/im-plugin/index-rollups/index/), an OpenSearch feature for the Index Management (IM) plugin that stores documents with pre-aggregated data in rollup indexes. +OpenSearch features that rely on an accurate document count illustrate the importance of using the `_doc_count` field. To see how this field can be used to support other search tools, refer to [Index rollups]({{site.url}}{{site.baseurl}}/im-plugin/index-rollups/index/), an OpenSearch feature for the Index Management (IM) plugin that stores documents with pre-aggregated data in rollup indexes. {: .tip} #### Example request diff --git a/_api-reference/index-apis/component-template.md b/_api-reference/index-apis/component-template.md index 4dc736f4c3..7577025587 100644 --- a/_api-reference/index-apis/component-template.md +++ b/_api-reference/index-apis/component-template.md @@ -75,11 +75,11 @@ Parameter | Data type | Description #### `mappings` -The field mappings that exist in the index. For more information, see [Mappings and field types](https://opensearch.org/docs/latest/field-types/). Optional. +The field mappings that exist in the index. For more information, see [Mappings and field types]({{site.url}}{{site.baseurl}}/field-types/). Optional. #### `settings` -Any configuration options for the index. For more information, see [Index settings](https://opensearch.org/docs/latest/install-and-configure/configuring-opensearch/index-settings/). +Any configuration options for the index. For more information, see [Index settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index-settings/). ## Example requests diff --git a/_api-reference/index-apis/create-index-template.md b/_api-reference/index-apis/create-index-template.md index 7f14bb9927..203d65f989 100644 --- a/_api-reference/index-apis/create-index-template.md +++ b/_api-reference/index-apis/create-index-template.md @@ -68,11 +68,11 @@ Parameter | Data type | Description #### `mappings` -The field mappings that exist in the index. For more information, see [Mappings and field types](https://opensearch.org/docs/latest/field-types/). Optional. +The field mappings that exist in the index. For more information, see [Mappings and field types]({{site.url}}{{site.baseurl}}/field-types/). Optional. #### `settings` -Any configuration options for the index. For more information, see [Index settings](https://opensearch.org/docs/latest/install-and-configure/configuring-opensearch/index-settings/). +Any configuration options for the index. For more information, see [Index settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index-settings/). 
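To illustrate how the `mappings` and `settings` objects fit together, the following is a minimal sketch of an index template request; the template name, index pattern, and field names are hypothetical placeholders:

```json
PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.number_of_shards": 2
    },
    "mappings": {
      "properties": {
        "timestamp": { "type": "date" }
      }
    }
  }
}
```
{% include copy-curl.html %}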
## Example requests diff --git a/_api-reference/index-apis/recover.md b/_api-reference/index-apis/recover.md index 3a04ca58f5..1b6866cf72 100644 --- a/_api-reference/index-apis/recover.md +++ b/_api-reference/index-apis/recover.md @@ -55,7 +55,7 @@ The following examples demonstrate how to recover information using the Recovery ### Recover information from several or all indexes -The following example request returns recovery information about several indexes in a [human-readable format](https://opensearch.org/docs/latest/api-reference/common-parameters/#human-readable-output): +The following example request returns recovery information about several indexes in a [human-readable format]({{site.url}}{{site.baseurl}}/api-reference/common-parameters/#human-readable-output): ```json GET index1,index2/_recovery?human diff --git a/_api-reference/index-apis/rollover.md b/_api-reference/index-apis/rollover.md index 54eb7d99ef..a9a232330a 100644 --- a/_api-reference/index-apis/rollover.md +++ b/_api-reference/index-apis/rollover.md @@ -40,7 +40,7 @@ During the index alias rollover process, if you don't specify a custom name and ## Using date math with index rollovers -When using an index alias for time-series data, you can use [date math](https://opensearch.org/docs/latest/field-types/supported-field-types/date/) in the index name to track the rollover date. For example, you can create an alias pointing to `my-index-{now/d}-000001`. If you create an alias on June 11, 2029, then the index name would be `my-index-2029.06.11-000001`. For a rollover on June 12, 2029, the new index would be named `my-index-2029.06.12-000002`. See [Roll over an index alias with a write index](#rolling-over-an-index-alias-with-a-write-index) for a practical example. +When using an index alias for time-series data, you can use [date math]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/date/) in the index name to track the rollover date. For example, you can create an alias pointing to `my-index-{now/d}-000001`. If you create an alias on June 11, 2029, then the index name would be `my-index-2029.06.11-000001`. For a rollover on June 12, 2029, the new index would be named `my-index-2029.06.12-000002`. See [Roll over an index alias with a write index](#rolling-over-an-index-alias-with-a-write-index) for a practical example. ## Path parameters @@ -81,7 +81,7 @@ Parameter | Type | Description ### `mappings` -The `mappings` parameter specifies the index field mappings. It is optional. See [Mappings and field types](https://opensearch.org/docs/latest/field-types/) for more information. +The `mappings` parameter specifies the index field mappings. It is optional. See [Mappings and field types]({{site.url}}{{site.baseurl}}/field-types/) for more information. ### `conditions` @@ -97,7 +97,7 @@ Parameter | Data type | Description ### `settings` -The `settings` parameter specifies the index configuration options. See [Index settings](https://opensearch.org/docs/latest/install-and-configure/configuring-opensearch/index-settings/) for more information. +The `settings` parameter specifies the index configuration options. See [Index settings]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/index-settings/) for more information. 
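To show how `conditions`, `settings`, and `mappings` can be combined in a single request, the following is a minimal sketch; the alias name and threshold values are hypothetical placeholders. The rollover proceeds if at least one of the listed conditions is met:

```json
POST my-alias/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000000
  },
  "settings": {
    "index.number_of_shards": 2
  }
}
```
{% include copy-curl.html %}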
## Example requests diff --git a/_automating-configurations/workflow-tutorial.md b/_automating-configurations/workflow-tutorial.md index 0074ad4691..805ecc0e87 100644 --- a/_automating-configurations/workflow-tutorial.md +++ b/_automating-configurations/workflow-tutorial.md @@ -97,7 +97,7 @@ The [Deploy Model API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/model- register_model_2: model_id ``` -When using the Deploy Model API directly, a task ID is returned, requiring use of the [Tasks API](https://opensearch.org/docs/latest/ml-commons-plugin/api/tasks-apis/get-task/) to determine when the deployment is complete. The automated workflow eliminates the manual status check and returns the final `model_id` directly. +When using the Deploy Model API directly, a task ID is returned, requiring use of the [Tasks API]({{site.url}}{{site.baseurl}}/ml-commons-plugin/api/tasks-apis/get-task/) to determine when the deployment is complete. The automated workflow eliminates the manual status check and returns the final `model_id` directly. ### Ordering steps diff --git a/_benchmark/reference/metrics/index.md b/_benchmark/reference/metrics/index.md index 63e5a799e8..614cc66dbe 100644 --- a/_benchmark/reference/metrics/index.md +++ b/_benchmark/reference/metrics/index.md @@ -13,7 +13,7 @@ After a workload completes, OpenSearch Benchmark stores all metric records withi ## Storing metrics -You can specify whether metrics are stored in memory or in a metrics store while running the benchmark by setting the [`datastore.type`](https://opensearch.org/docs/latest/benchmark/configuring-benchmark/#results_publishing) parameter in your `benchmark.ini` file. +You can specify whether metrics are stored in memory or in a metrics store while running the benchmark by setting the [`datastore.type`]({{site.url}}{{site.baseurl}}/benchmark/configuring-benchmark/#results_publishing) parameter in your `benchmark.ini` file. ### In memory diff --git a/_benchmark/user-guide/understanding-results/summary-reports.md b/_benchmark/user-guide/understanding-results/summary-reports.md index 28578c8c89..eed6d82e1d 100644 --- a/_benchmark/user-guide/understanding-results/summary-reports.md +++ b/_benchmark/user-guide/understanding-results/summary-reports.md @@ -120,7 +120,7 @@ OpenSearch Benchmark results are stored in-memory or in external storage. When stored in-memory, results can be found in the `/.benchmark/benchmarks/test_executions/` directory. Results are named in accordance with the `test_execution_id` of the most recent workload test. -While [running a test](https://opensearch.org/docs/latest/benchmark/reference/commands/execute-test/#general-settings), you can customize where the results are stored using any combination of the following command flags: +While [running a test]({{site.url}}{{site.baseurl}}/benchmark/reference/commands/execute-test/#general-settings), you can customize where the results are stored using any combination of the following command flags: * `--results-file`: When provided a file path, writes the summary report to the file indicated in the path. * `--results-format`: Defines the output format for the summary report results, either `markdown` or `csv`. Default is `markdown`. 
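For example, the following sketch runs a test and writes the summary report as CSV to a custom path; the workload name and file path are placeholders:

```bash
# Run a benchmark and save the summary report as CSV to a custom location
opensearch-benchmark execute-test \
  --workload=geonames \
  --results-format=csv \
  --results-file=/tmp/benchmark-summary.csv
```
{% include copy.html %}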
diff --git a/_dashboards/management/accelerate-external-data.md b/_dashboards/management/accelerate-external-data.md index a935586dce..61c08c01f8 100644 --- a/_dashboards/management/accelerate-external-data.md +++ b/_dashboards/management/accelerate-external-data.md @@ -12,9 +12,9 @@ Introduced 2.11 Query performance can be slow when using external data sources for reasons such as network latency, data transformation, and data volume. You can optimize your query performance by using OpenSearch indexes, such as a skipping index or a covering index. -- A _skipping index_ uses skip acceleration methods, such as partition, minimum and maximum values, and value sets, to ingest and create compact aggregate data structures. This makes them an economical option for direct querying scenarios. For more information, see [Skipping indexes](https://opensearch.org/docs/latest/dashboards/management/accelerate-external-data/#skipping-indexes). -- A _covering index_ ingests all or some of the data from the source into OpenSearch and makes it possible to use all OpenSearch Dashboards and plugin functionality. For more information, see [Covering indexes](https://opensearch.org/docs/latest/dashboards/management/accelerate-external-data/#covering-indexes). -- A _materialized view_ enhances query performance by storing precomputed and aggregated data from the source data. For more information, see [Materialized views](https://opensearch.org/docs/latest/dashboards/management/accelerate-external-data/#materialized-views). +- A _skipping index_ uses skip acceleration methods, such as partition, minimum and maximum values, and value sets, to ingest and create compact aggregate data structures. This makes them an economical option for direct querying scenarios. For more information, see [Skipping indexes]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#skipping-indexes). +- A _covering index_ ingests all or some of the data from the source into OpenSearch and makes it possible to use all OpenSearch Dashboards and plugin functionality. For more information, see [Covering indexes]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#covering-indexes). +- A _materialized view_ enhances query performance by storing precomputed and aggregated data from the source data. For more information, see [Materialized views]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#materialized-views). For comprehensive guidance on each indexing process, see the [Flint Index Reference Manual](https://github.com/opensearch-project/opensearch-spark/blob/main/docs/index.md). @@ -29,9 +29,9 @@ To get started with accelerating query performance, perform the following steps: 1. Select **Accelerate data**. A pop-up window appears. 2. Enter your database and table details under **Select data fields**. 5. For **Acceleration type**, select the type of acceleration according to your use case. Then, enter the information for your acceleration type. 
For more information, see the following sections: - - [Skipping indexes](https://opensearch.org/docs/latest/dashboards/management/accelerate-external-data/#skipping-indexes) - - [Covering indexes](https://opensearch.org/docs/latest/dashboards/management/accelerate-external-data/#covering-indexes) - - [Materialized views](https://opensearch.org/docs/latest/dashboards/management/accelerate-external-data/#materialized-views) + - [Skipping indexes]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#skipping-indexes) + - [Covering indexes]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#covering-indexes) + - [Materialized views]({{site.url}}{{site.baseurl}}/dashboards/management/accelerate-external-data/#materialized-views) ## Skipping indexes @@ -71,7 +71,7 @@ A _covering index_ ingests all or some of the data from the source into OpenSear With a covering index, you can ingest data from a specified column in a table. This is the most performant of the three indexing types. Because OpenSearch ingests all data from your desired column, you get better performance and can perform advanced analytics. -OpenSearch creates a new index from the covering index data. You can use this new index to create visualizations, or for anomaly detection and geospatial capabilities. You can manage the covering view index with Index State Management. For more information, see [Index State Management](https://opensearch.org/docs/latest/im-plugin/ism/index/). +OpenSearch creates a new index from the covering index data. You can use this new index to create visualizations, or for anomaly detection and geospatial capabilities. You can manage the covering view index with Index State Management. For more information, see [Index State Management]({{site.url}}{{site.baseurl}}/im-plugin/ism/index/). ### Define covering index settings @@ -100,7 +100,7 @@ WITH ( ## Materialized views -With _materialized views_, you can use complex queries, such as aggregations, to power Dashboards visualizations. Materialized views ingest a small amount of your data, depending on the query, into OpenSearch. OpenSearch then forms an index from the ingested data that you can use for visualizations. You can manage the materialized view index with Index State Management. For more information, see [Index State Management](https://opensearch.org/docs/latest/im-plugin/ism/index/). +With _materialized views_, you can use complex queries, such as aggregations, to power Dashboards visualizations. Materialized views ingest a small amount of your data, depending on the query, into OpenSearch. OpenSearch then forms an index from the ingested data that you can use for visualizations. You can manage the materialized view index with Index State Management. For more information, see [Index State Management]({{site.url}}{{site.baseurl}}/im-plugin/ism/index/). ### Define materialized view settings diff --git a/_dashboards/management/advanced-settings.md b/_dashboards/management/advanced-settings.md index b4c0225c5b..5d817e1c79 100644 --- a/_dashboards/management/advanced-settings.md +++ b/_dashboards/management/advanced-settings.md @@ -18,7 +18,7 @@ To access **Advanced settings**, go to **Dashboards Management** and select **Ad ## Required permissions -To modify settings, you must have permission to make changes. See [Multi-tenancy configuration](https://opensearch.org/docs/latest/security/multi-tenancy/multi-tenancy-config/#give-roles-access-to-tenants) for guidance about assigning role access to tenants. 
+To modify settings, you must have permission to make changes. See [Multi-tenancy configuration]({{site.url}}{{site.baseurl}}/security/multi-tenancy/multi-tenancy-config/#give-roles-access-to-tenants) for guidance about assigning role access to tenants. ## Advanced settings descriptions diff --git a/_dashboards/visualize/area.md b/_dashboards/visualize/area.md index 5df59579ec..0da08c68c1 100644 --- a/_dashboards/visualize/area.md +++ b/_dashboards/visualize/area.md @@ -17,7 +17,7 @@ In this tutorial you'll create a simple area chart using sample data and aggrega You have several aggregation options in Dashboards, and the choice influences your analysis. The use cases for aggregations vary from analyzing data in real time to using Dashboards to create a visualization dashboard. If you need an overview of aggregations in OpenSearch, see [Aggregations]({{site.url}}{{site.baseurl}}/opensearch/aggregations/) before starting this tutorial. -Make sure you have [installed the latest version of Dashboards](https://opensearch.org/docs/latest/install-and-configure/install-dashboards/index/) and added the sample data before continuing with this tutorial. _This tutorial uses Dashboards version 2.4.1_. +Make sure you have [installed the latest version of Dashboards]({{site.url}}{{site.baseurl}}/install-and-configure/install-dashboards/index/) and added the sample data before continuing with this tutorial. _This tutorial uses Dashboards version 2.4.1_. {: .note} ## Set up the area chart diff --git a/_data-prepper/pipelines/configuration/processors/decompress.md b/_data-prepper/pipelines/configuration/processors/decompress.md index d03c236ac5..2a4b1763ef 100644 --- a/_data-prepper/pipelines/configuration/processors/decompress.md +++ b/_data-prepper/pipelines/configuration/processors/decompress.md @@ -16,7 +16,7 @@ Option | Required | Type | Description :--- | :--- | :--- | :--- `keys` | Yes | List | The fields in the event that will be decompressed. `type` | Yes | Enum | The type of decompression to use for the `keys` in the event. Only `gzip` is supported. -`decompress_when` | No | String| A [Data Prepper conditional expression](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/) that determines when the `decompress` processor will run on certain events. +`decompress_when` | No | String| A [Data Prepper conditional expression]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/) that determines when the `decompress` processor will run on certain events. `tags_on_failure` | No | List | A list of strings with which to tag events when the processor fails to decompress the `keys` inside an event. Defaults to `_decompression_failure`. ## Usage diff --git a/_data-prepper/pipelines/configuration/processors/flatten.md b/_data-prepper/pipelines/configuration/processors/flatten.md index 0a64995286..c5b1f8e16a 100644 --- a/_data-prepper/pipelines/configuration/processors/flatten.md +++ b/_data-prepper/pipelines/configuration/processors/flatten.md @@ -21,7 +21,7 @@ Option | Required | Type | Description `exclude_keys` | No | List | The keys from the source field that should be excluded from processing. Default is an empty list (`[]`). `remove_processed_fields` | No | Boolean | When `true`, the processor removes all processed fields from the source. Default is `false`. `remove_list_indices` | No | Boolean | When `true`, the processor converts the fields from the source map into lists and puts the lists into the target field. Default is `false`. 
-`flatten_when` | No | String | A [conditional expression](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"'`, that determines whether the `flatten` processor will be run on the event. Default is `null`, which means that all events will be processed unless otherwise stated. +`flatten_when` | No | String | A [conditional expression]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"`, that determines whether the `flatten` processor will be run on the event. Default is `null`, which means that all events will be processed unless otherwise stated. `tags_on_failure` | No | List | A list of tags to add to the event metadata when the event fails to process. ## Usage diff --git a/_data-prepper/pipelines/configuration/processors/grok.md b/_data-prepper/pipelines/configuration/processors/grok.md index 3724278adf..a9bd90867e 100644 --- a/_data-prepper/pipelines/configuration/processors/grok.md +++ b/_data-prepper/pipelines/configuration/processors/grok.md @@ -54,7 +54,7 @@ processor: ``` {% include copy.html %} -The `grok_when` option can take a conditional expression. This expression is detailed in the [Expression syntax](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/) documentation. +The `grok_when` option can take a conditional expression. This expression is detailed in the [Expression syntax]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/) documentation. ## Grok performance metadata diff --git a/_data-prepper/pipelines/configuration/processors/key-value.md b/_data-prepper/pipelines/configuration/processors/key-value.md index 52ecc7719c..bd2e52600f 100644 --- a/_data-prepper/pipelines/configuration/processors/key-value.md +++ b/_data-prepper/pipelines/configuration/processors/key-value.md @@ -37,6 +37,6 @@ destination | The destination field for the parsed source. The parsed source ove `drop_keys_with_no_value` | Specifies whether keys should be dropped if they have a null value. Default is `false`. If `drop_keys_with_no_value` is set to `true`, then `{"key1=value1&key2"}` parses to `{"key1": "value1"}`. `strict_grouping` | Specifies whether strict grouping should be enabled when the `value_grouping` or `string_literal_character` options are used. Default is `false`. | When enabled, groups with unmatched end characters yield errors. The event is ignored after the errors are logged. `string_literal_character` | Can be set to either a single quotation mark (`'`) or a double quotation mark (`"`). Default is `null`. | When this option is used, any text contained within the specified quotation mark character will be ignored and excluded from key-value parsing. For example, `text1 "key1=value1" text2 key2=value2` would parse to `{"key2": "value2"}`. -`key_value_when` | Allows you to specify a [conditional expression](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"`, that will be evaluated to determine whether the processor should be applied to the event. +`key_value_when` | Allows you to specify a [conditional expression]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"`, that will be evaluated to determine whether the processor should be applied to the event.
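As a rough sketch of how `key_value_when` fits into a pipeline definition, the following abbreviated `processor` section applies the processor only when the condition matches; the destination field and condition are hypothetical, and the pipeline's `source` and `sink` sections are omitted:

```yaml
processor:
  - key_value:
      # Write parsed pairs to a separate field instead of overwriting the source
      destination: "parsed_message"
      # Only parse events for which this conditional expression evaluates to true
      key_value_when: '/some-key == "test"'
```
{% include copy.html %}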
diff --git a/_data-prepper/pipelines/configuration/processors/map-to-list.md b/_data-prepper/pipelines/configuration/processors/map-to-list.md index f3393e6c46..9079b9087b 100644 --- a/_data-prepper/pipelines/configuration/processors/map-to-list.md +++ b/_data-prepper/pipelines/configuration/processors/map-to-list.md @@ -23,7 +23,7 @@ Option | Required | Type | Description `exclude_keys` | No | List | The keys in the source map that will be excluded from processing. Default is an empty list (`[]`). `remove_processed_fields` | No | Boolean | When `true`, the processor will remove the processed fields from the source map. Default is `false`. `convert_field_to_list` | No | Boolean | If `true`, the processor will convert the fields from the source map into lists and place them in fields in the target list. Default is `false`. -`map_to_list_when` | No | String | A [conditional expression](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"'`, that will be evaluated to determine whether the processor will be run on the event. Default is `null`. All events will be processed unless otherwise stated. +`map_to_list_when` | No | String | A [conditional expression]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"`, that will be evaluated to determine whether the processor will be run on the event. Default is `null`. All events will be processed unless otherwise stated. `tags_on_failure` | No | List | A list of tags to add to the event metadata when the event fails to process. ## Usage diff --git a/_data-prepper/pipelines/configuration/processors/otel-trace-group.md b/_data-prepper/pipelines/configuration/processors/otel-trace-group.md index 06bc754a98..cf3db6a730 100644 --- a/_data-prepper/pipelines/configuration/processors/otel-trace-group.md +++ b/_data-prepper/pipelines/configuration/processors/otel-trace-group.md @@ -55,8 +55,8 @@ You can configure the `otel_trace_group` processor with the following options. | `aws_sts_role_arn`| An AWS Identity and Access Management (IAM) role that the sink plugin assumes to sign the request to Amazon OpenSearch Service. If not provided, the plugin uses the [default credentials](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/auth/credentials/DefaultCredentialsProvider.html). | `null` | | `aws_sts_header_overrides` | A map of header overrides that the IAM role assumes for the sink plugin. | `null` | | `insecure` | A Boolean flag used to turn off SSL certificate verification. If set to `true`, CA certificate verification is turned off and insecure HTTP requests are sent. | `false` | -| `username` | A string that contains the username and is used in the [internal users](https://opensearch.org/docs/latest/security/access-control/users-roles/) `YAML` configuration file of your OpenSearch cluster. | `null` | -| `password` | A string that contains the password and is used in the [internal users](https://opensearch.org/docs/latest/security/access-control/users-roles/) `YAML` configuration file of your OpenSearch cluster. | `null` | +| `username` | A string that contains the username and is used in the [internal users]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/) `YAML` configuration file of your OpenSearch cluster.
| `null` | +| `password` | A string that contains the password and is used in the [internal users]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/) `YAML` configuration file of your OpenSearch cluster. | `null` | ## Configuration option examples diff --git a/_data-prepper/pipelines/configuration/processors/select-entries.md b/_data-prepper/pipelines/configuration/processors/select-entries.md index 7566b2cb4c..4e9d1d1099 100644 --- a/_data-prepper/pipelines/configuration/processors/select-entries.md +++ b/_data-prepper/pipelines/configuration/processors/select-entries.md @@ -18,7 +18,7 @@ You can configure the `select_entries` processor using the following options. | Option | Required | Description | | :--- | :--- | :--- | | `include_keys` | Yes | A list of keys to be selected from an event. | -| `select_when` | No | A [conditional expression](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"'`, that will be evaluated to determine whether the processor will be run on the event. If the condition is not met, then the event continues through the pipeline unmodified with all the original fields present. | +| `select_when` | No | A [conditional expression]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/), such as `/some-key == "test"`, that will be evaluated to determine whether the processor will be run on the event. If the condition is not met, then the event continues through the pipeline unmodified with all the original fields present. | ## Usage diff --git a/_data-prepper/pipelines/configuration/sinks/opensearch.md b/_data-prepper/pipelines/configuration/sinks/opensearch.md index 67209ea5b9..c8afb6c24a 100644 --- a/_data-prepper/pipelines/configuration/sinks/opensearch.md +++ b/_data-prepper/pipelines/configuration/sinks/opensearch.md @@ -65,7 +65,7 @@ Option | Required | Type | Description `connect_timeout` | No | Integer| The timeout value, in milliseconds, when requesting a connection from the connection manager. A timeout value of `0` is interpreted as an infinite timeout. If this timeout value is negative or not set, the underlying Apache HttpClient will rely on operating system settings to manage connection timeouts. `insecure` | No | Boolean | Whether or not to verify SSL certificates. If set to `true`, then certificate authority (CA) certificate verification is disabled and insecure HTTP requests are sent instead. Default is `false`. `proxy` | No | String | The address of the [forward HTTP proxy server](https://en.wikipedia.org/wiki/Proxy_server). The format is `"<hostname or IP>:<port>"` (for example, `"example.com:8100"`, `"http://example.com:8100"`, `"112.112.112.112:8100"`). The port number cannot be omitted. -`index` | Conditionally | String | The name of the export index. Only required when the `index_type` is `custom`. The index can be a plain string, such as `my-index-name`, contain [Java date-time patterns](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html), such as `my-index-%{yyyy.MM.dd}` or `my-%{yyyy-MM-dd-HH}-index`, be formatted using field values, such as `my-index-${/my_field}`, or use [Data Prepper expressions](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/), such as `my-index-${getMetadata(\"my_metadata_field\"}`. All formatting options can be combined to provide flexibility when creating static, dynamic, and rolling indexes. +`index` | Conditionally | String | The name of the export index.
Only required when the `index_type` is `custom`. The index can be a plain string, such as `my-index-name`, contain [Java date-time patterns](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html), such as `my-index-%{yyyy.MM.dd}` or `my-%{yyyy-MM-dd-HH}-index`, be formatted using field values, such as `my-index-${/my_field}`, or use [Data Prepper expressions]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/), such as `my-index-${getMetadata(\"my_metadata_field\")}`. All formatting options can be combined to provide flexibility when creating static, dynamic, and rolling indexes. `index_type` | No | String | Tells the sink plugin what type of data it is handling. Valid values are `custom`, `trace-analytics-raw`, `trace-analytics-service-map`, or `management-disabled`. Default is `custom`. `template_type` | No | String | Defines what type of OpenSearch template to use. Available options are `v1` and `index-template`. The default value is `v1`, which uses the original OpenSearch templates available at the `_template` API endpoints. The `index-template` option uses composable [index templates]({{site.url}}{{site.baseurl}}/opensearch/index-templates/), which are available through the OpenSearch `_index_template` API. Composable index types offer more flexibility than the default and are necessary when an OpenSearch cluster contains existing index templates. Composable templates are available for all versions of OpenSearch and some later versions of Elasticsearch. When `distribution_version` is set to `es6`, Data Prepper enforces the `template_type` as `v1`. `template_file` | No | String | The path to a JSON [index template]({{site.url}}{{site.baseurl}}/opensearch/index-templates/) file, such as `/your/local/template-file.json`, when `index_type` is set to `custom`. For an example template file, see [otel-v1-apm-span-index-template.json](https://github.com/opensearch-project/data-prepper/blob/main/data-prepper-plugins/opensearch/src/main/resources/otel-v1-apm-span-index-template.json). If you supply a template file, then it must match the template format specified by the `template_type` parameter. diff --git a/_data-prepper/pipelines/configuration/sinks/s3.md b/_data-prepper/pipelines/configuration/sinks/s3.md index 43de4dc895..9e2ba9d777 100644 --- a/_data-prepper/pipelines/configuration/sinks/s3.md +++ b/_data-prepper/pipelines/configuration/sinks/s3.md @@ -159,7 +159,7 @@ Use the following options to define how object keys are constructed for objects Option | Required | Type | Description :--- | :--- | :--- | :--- -`path_prefix` | No | String | The S3 key prefix path to use for objects written to S3. Accepts date-time formatting and dynamic injection of values using [Data Prepper expressions](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/). For example, you can use `/${/my_partition_key}/%{yyyy}/%{MM}/%{dd}/%{HH}/` to create hourly folders in S3 based on the `my_partition_key` value. The prefix path should end with `/`. By default, Data Prepper writes objects to the S3 bucket root. +`path_prefix` | No | String | The S3 key prefix path to use for objects written to S3. Accepts date-time formatting and dynamic injection of values using [Data Prepper expressions]({{site.url}}{{site.baseurl}}/data-prepper/pipelines/expression-syntax/). For example, you can use `/${/my_partition_key}/%{yyyy}/%{MM}/%{dd}/%{HH}/` to create hourly folders in S3 based on the `my_partition_key` value. The prefix path should end with `/`.
By default, Data Prepper writes objects to the S3 bucket root. ## `codec` diff --git a/_data-prepper/pipelines/configuration/sources/opensearch.md b/_data-prepper/pipelines/configuration/sources/opensearch.md index 248e6251d0..1c8fc337ae 100644 --- a/_data-prepper/pipelines/configuration/sources/opensearch.md +++ b/_data-prepper/pipelines/configuration/sources/opensearch.md @@ -177,8 +177,8 @@ Option | Required | Type | Description ### Default search behavior By default, the `opensearch` source will look up the cluster version and distribution to determine -which `search_context_type` to use. For versions and distributions that support [Point in Time](https://opensearch.org/docs/latest/search-plugins/searching-data/paginate/#point-in-time-with-search_after), `point_in_time` will be used. -If `point_in_time` is not supported by the cluster, then [scroll](https://opensearch.org/docs/latest/search-plugins/searching-data/paginate/#scroll-search) will be used. For Amazon OpenSearch Serverless collections, [search_after](https://opensearch.org/docs/latest/search-plugins/searching-data/paginate/#the-search_after-parameter) will be used because neither `point_in_time` nor `scroll` are supported by collections. +which `search_context_type` to use. For versions and distributions that support [Point in Time]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/paginate/#point-in-time-with-search_after), `point_in_time` will be used. +If `point_in_time` is not supported by the cluster, then [scroll]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/paginate/#scroll-search) will be used. For Amazon OpenSearch Serverless collections, [search_after]({{site.url}}{{site.baseurl}}/search-plugins/searching-data/paginate/#the-search_after-parameter) will be used because neither `point_in_time` nor `scroll` are supported by collections. ### Connection diff --git a/_field-types/supported-field-types/star-tree.md b/_field-types/supported-field-types/star-tree.md index 2bfccb6632..af737e6447 100644 --- a/_field-types/supported-field-types/star-tree.md +++ b/_field-types/supported-field-types/star-tree.md @@ -118,7 +118,7 @@ When using the `ordered_dimesions` parameter, follow these best practices: - The order of dimensions matters. You can define the dimensions ordered from the highest cardinality to the lowest cardinality for efficient storage and query pruning. - Avoid using high-cardinality fields as dimensions. High-cardinality fields adversely affect storage space, indexing throughput, and query performance. -- Currently, fields supported by the `ordered_dimensions` parameter are all [numeric field types](https://opensearch.org/docs/latest/field-types/supported-field-types/numeric/), with the exception of `unsigned_long`. For more information, see [GitHub issue #15231](https://github.com/opensearch-project/OpenSearch/issues/15231). +- Currently, fields supported by the `ordered_dimensions` parameter are all [numeric field types]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/numeric/), with the exception of `unsigned_long`. For more information, see [GitHub issue #15231](https://github.com/opensearch-project/OpenSearch/issues/15231). - Support for other field types, such as `keyword` and `ip`, will be added in future versions. For more information, see [GitHub issue #16232](https://github.com/opensearch-project/OpenSearch/issues/16232). - A minimum of `2` and a maximum of `10` dimensions are supported per star-tree index. 
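To make the dimension configuration concrete, the following is a sketch of a star-tree mapping based on the parameters described in this section; the index, field, and star-tree names are hypothetical, and the exact settings may vary by OpenSearch version:

```json
PUT /logs
{
  "settings": {
    "index.composite_index": true
  },
  "mappings": {
    "composite": {
      "request_aggs": {
        "type": "star_tree",
        "config": {
          "ordered_dimensions": [
            { "name": "status" },
            { "name": "port" }
          ],
          "metrics": [
            { "name": "size", "stats": ["sum", "value_count"] }
          ]
        }
      }
    },
    "properties": {
      "status": { "type": "integer" },
      "port": { "type": "integer" },
      "size": { "type": "integer" }
    }
  }
}
```
{% include copy-curl.html %}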
@@ -135,7 +135,7 @@ Configure any metric fields on which you need to perform aggregations. `Metrics` When using `metrics`, follow these best practices: -- Currently, fields supported by `metrics` are all [numeric field types](https://opensearch.org/docs/latest/field-types/supported-field-types/numeric/), with the exception of `unsigned_long`. For more information, see [GitHub issue #15231](https://github.com/opensearch-project/OpenSearch/issues/15231). +- Currently, fields supported by `metrics` are all [numeric field types]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/numeric/), with the exception of `unsigned_long`. For more information, see [GitHub issue #15231](https://github.com/opensearch-project/OpenSearch/issues/15231). - Supported metric aggregations include `Min`, `Max`, `Sum`, `Avg`, and `Value_count`. - `Avg` is a derived metric based on `Sum` and `Value_count` and is not indexed when a query is run. The remaining base metrics are indexed. - A maximum of `100` base metrics are supported per star-tree index. diff --git a/_ingest-pipelines/processors/script.md b/_ingest-pipelines/processors/script.md index ae8e0bd9c6..4b8acd16e0 100644 --- a/_ingest-pipelines/processors/script.md +++ b/_ingest-pipelines/processors/script.md @@ -7,7 +7,7 @@ nav_order: 230 # Script processor -The `script` processor executes inline and stored scripts that can modify or transform data in an OpenSearch document during the ingestion process. The processor uses script caching for improved performance because scripts may be recompiled per document. Refer to [Script APIs](https://opensearch.org/docs/latest/api-reference/script-apis/index/) for information about working with scripts in OpenSearch. +The `script` processor executes inline and stored scripts that can modify or transform data in an OpenSearch document during the ingestion process. The processor uses script caching for improved performance because scripts may be recompiled per document. Refer to [Script APIs]({{site.url}}{{site.baseurl}}/api-reference/script-apis/index/) for information about working with scripts in OpenSearch. The following is the syntax for the `script` processor: diff --git a/_integrations/index.md b/_integrations/index.md index 644f3fccd2..7e6d8a228e 100644 --- a/_integrations/index.md +++ b/_integrations/index.md @@ -31,7 +31,7 @@ A consistent telemetry data schema is crucial for effective observability, enabl OpenSearch adopted the [OpenTelemetry (OTel)](https://opentelemetry.io/) protocol as the foundation for its observability solution. OTel is a community-driven standard that defines a consistent schema and data collection approach for metrics, logs, and traces. It is widely supported by APIs, SDKs, and telemetry collectors, enabling features like auto-instrumentation for seamless observability integration. -This shared schema allows cross-correlation and analysis across different data sources. To this end, OpenSearch derived the [Simple Schema for Observability](https://github.com/opensearch-project/opensearch-catalog/tree/main/docs/schema/observability), which encodes the OTel standard as OpenSearch mappings. OpenSearch also supports the [Piped Processing Language (PPL)](https://opensearch.org/docs/latest/search-plugins/sql/ppl/index/), which is designed for high-dimensionality querying in observability use cases. +This shared schema allows cross-correlation and analysis across different data sources. 
To this end, OpenSearch derived the [Simple Schema for Observability](https://github.com/opensearch-project/opensearch-catalog/tree/main/docs/schema/observability), which encodes the OTel standard as OpenSearch mappings. OpenSearch also supports the [Piped Processing Language (PPL)]({{site.url}}{{site.baseurl}}/search-plugins/sql/ppl/index/), which is designed for high-dimensionality querying in observability use cases. --- diff --git a/_migration-assistant/migration-phases/backfill.md b/_migration-assistant/migration-phases/backfill.md index d2ff7cd873..e4b2ed1a1f 100644 --- a/_migration-assistant/migration-phases/backfill.md +++ b/_migration-assistant/migration-phases/backfill.md @@ -155,7 +155,7 @@ You can find the backfill dashboard in the CloudWatch console based on the AWS R ## Validating the backfill -After the backfill is complete and the workers have stopped, examine the contents of your cluster using the [Refresh API](https://opensearch.org/docs/latest/api-reference/index-apis/refresh/) and the [Flush API](https://opensearch.org/docs/latest/api-reference/index-apis/flush/). The following example uses the console CLI with the Refresh API to check the backfill status: +After the backfill is complete and the workers have stopped, examine the contents of your cluster using the [Refresh API]({{site.url}}{{site.baseurl}}/api-reference/index-apis/refresh/) and the [Flush API]({{site.url}}{{site.baseurl}}/api-reference/index-apis/flush/). The following example uses the console CLI with the Refresh API to check the backfill status: ```shell console clusters cat-indices --refresh diff --git a/_migration-assistant/migration-phases/planning-your-migration/assessing-your-cluster-for-migration.md b/_migration-assistant/migration-phases/planning-your-migration/assessing-your-cluster-for-migration.md index 23bceb7114..7d05293e9a 100644 --- a/_migration-assistant/migration-phases/planning-your-migration/assessing-your-cluster-for-migration.md +++ b/_migration-assistant/migration-phases/planning-your-migration/assessing-your-cluster-for-migration.md @@ -24,7 +24,7 @@ For migrations paths between Elasticsearch 6.8 and OpenSearch 2.x users should b * [Changes from Elasticsearch to OpenSearch fork](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/rename.html). -* [OpenSearch Breaking Changes](https://opensearch.org/docs/latest/breaking-changes/). +* [OpenSearch Breaking Changes]({{site.url}}{{site.baseurl}}/breaking-changes/). The next step is to set up a proper test bed to verify that your applications will work as expected on the target version. diff --git a/_ml-commons-plugin/tutorials/rag-chatbot.md b/_ml-commons-plugin/tutorials/rag-chatbot.md index 5dddded23a..8d6a681fb6 100644 --- a/_ml-commons-plugin/tutorials/rag-chatbot.md +++ b/_ml-commons-plugin/tutorials/rag-chatbot.md @@ -9,7 +9,7 @@ nav_order: 50 One of the known limitations of large language models (LLMs) is that their knowledge base only contains information from the period of time during which they were trained. LLMs have no knowledge of recent events or of your internal data. You can augment the LLM knowledge base by using retrieval-augmented generation (RAG). -This tutorial illustrates how to build your own chatbot using [agents and tools](https://opensearch.org/docs/latest/ml-commons-plugin/agents-tools/index/) and RAG. RAG supplements the LLM knowledge base with information contained in OpenSearch indexes. 
+This tutorial illustrates how to build your own chatbot using [agents and tools]({{site.url}}{{site.baseurl}}/ml-commons-plugin/agents-tools/index/) and RAG. RAG supplements the LLM knowledge base with information contained in OpenSearch indexes. Replace the placeholders beginning with the prefix `your_` with your own values. {: .note} diff --git a/_ml-commons-plugin/tutorials/reranking-bedrock.md b/_ml-commons-plugin/tutorials/reranking-bedrock.md index b46104f241..dfa5169744 100644 --- a/_ml-commons-plugin/tutorials/reranking-bedrock.md +++ b/_ml-commons-plugin/tutorials/reranking-bedrock.md @@ -367,7 +367,7 @@ By default, the Amazon Bedrock Rerank API output is formatted as follows: ] ``` -The connector `post_process_function` transforms the model's output into a format that the [Reranker processor](https://opensearch.org/docs/latest/search-plugins/search-pipelines/rerank-processor/) can interpret and orders the results by index. +The connector `post_process_function` transforms the model's output into a format that the [Reranker processor]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rerank-processor/) can interpret and orders the results by index. The response contains four `similarity` outputs. For each `similarity` output, the `data` array contains a relevance score for each document against the query. The `similarity` outputs are provided in the order of the input documents; the first similarity result pertains to the first document: diff --git a/_observing-your-data/alerting/dashboards-alerting.md b/_observing-your-data/alerting/dashboards-alerting.md index 3c7719edfc..4a5d01dde4 100644 --- a/_observing-your-data/alerting/dashboards-alerting.md +++ b/_observing-your-data/alerting/dashboards-alerting.md @@ -88,5 +88,5 @@ Once you've created or associated alerting monitors, verify that the monitor is ## Next steps -- [Learn more about the Dashboard application](https://opensearch.org/docs/latest/dashboards/dashboard/index/). -- [Learn more about alerting](https://opensearch.org/docs/latest/observing-your-data/alerting/index/). +- [Learn more about the Dashboard application]({{site.url}}{{site.baseurl}}/dashboards/dashboard/index/). +- [Learn more about alerting]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/index/). diff --git a/_observing-your-data/alerting/per-query-bucket-monitors.md b/_observing-your-data/alerting/per-query-bucket-monitors.md index cb08c49478..d944b57525 100644 --- a/_observing-your-data/alerting/per-query-bucket-monitors.md +++ b/_observing-your-data/alerting/per-query-bucket-monitors.md @@ -13,7 +13,7 @@ Per query monitors are a type of alert monitor that can be used to identify and Per bucket monitors are a type of alert monitor that can be used to identify and alert on specific buckets of data that are created by a query against an OpenSearch index. -Both monitor types support querying remote indexes using the same `cluster-name:index-name` pattern used by [cross-cluster search](https://opensearch.org/docs/latest/security/access-control/cross-cluster-search/) or by using OpenSearch Dashboards 2.12 or later. +Both monitor types support querying remote indexes using the same `cluster-name:index-name` pattern used by [cross-cluster search]({{site.url}}{{site.baseurl}}/security/access-control/cross-cluster-search/) or by using OpenSearch Dashboards 2.12 or later. 
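As a hedged sketch, a per query monitor created through the Alerting API can reference a remote index in its search input using that pattern; the cluster alias, index pattern, and schedule below are hypothetical, and triggers are omitted for brevity:

```json
POST _plugins/_alerting/monitors
{
  "type": "monitor",
  "name": "remote-index-monitor",
  "monitor_type": "query_level_monitor",
  "enabled": true,
  "schedule": { "period": { "interval": 1, "unit": "MINUTES" } },
  "inputs": [
    {
      "search": {
        "indices": ["my-remote-cluster:logs-*"],
        "query": { "size": 0, "query": { "match_all": {} } }
      }
    }
  ],
  "triggers": []
}
```
{% include copy-curl.html %}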
The following [permissions]({{site.url}}{{site.baseurl}}/security/access-control/permissions/) are required in order to create a cross-cluster monitor through the dashboards UI: `cluster:admin/opensearch/alerting/remote/indexes/get`, `indices:admin/resolve/index`, `cluster:monitor/health`, and `indices:admin/mappings/get`. {: .note} diff --git a/_observing-your-data/alerting/triggers.md b/_observing-your-data/alerting/triggers.md index 0cbc5d6ea5..6fb195b54e 100644 --- a/_observing-your-data/alerting/triggers.md +++ b/_observing-your-data/alerting/triggers.md @@ -145,7 +145,7 @@ Variable | Data type | Description Per bucket and per document monitors support printing sample documents in notification messages. Per document monitors support printing the list of queries that triggered the creation of the finding associated with the alert. When the monitor runs, it adds each new alert to the `ctx` variables, for example, `newAlerts` for per bucket monitors and `alerts` for per document monitors. Each alert has its own list of `sample_documents`, and each per document monitor alert has its own list of `associated_queries`. The message template can be formatted to iterate through the list of alerts, the list of `associated_queries`, and the `sample_documents` for each alert. -An alerting monitor uses the permissions of the user that created it. Be mindful of the Notifications plugin channel to which alert messages are sent and the content of the message mustache template. To learn more about security in the Alerting plugin, see [Alerting security](https://opensearch.org/docs/latest/observing-your-data/alerting/security/). +An alerting monitor uses the permissions of the user that created it. Be mindful of the Notifications plugin channel to which alert messages are sent and the content of the message mustache template. To learn more about security in the Alerting plugin, see [Alerting security]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/security/). {: .note} #### Sample document variables diff --git a/_observing-your-data/trace/ta-dashboards.md b/_observing-your-data/trace/ta-dashboards.md index c7ef2117ad..64c2c60493 100644 --- a/_observing-your-data/trace/ta-dashboards.md +++ b/_observing-your-data/trace/ta-dashboards.md @@ -76,7 +76,7 @@ Certain fields, such as `serviceName`, must be present to perform correlation an ### Correlation indexes -Navigating from the service dialog to its corresponding traces or logs requires the existence of correlating fields and that the target indexes (for example, logs) follow the specified naming conventions, as described at [Simple Schema for Observability](https://opensearch.org/docs/latest/observing-your-data/ss4o/). +Navigating from the service dialog to its corresponding traces or logs requires the existence of correlating fields and that the target indexes (for example, logs) follow the specified naming conventions, as described at [Simple Schema for Observability]({{site.url}}{{site.baseurl}}/observing-your-data/ss4o/). --- diff --git a/_query-dsl/term/terms.md b/_query-dsl/term/terms.md index 2de0b71bd6..caa7320d42 100644 --- a/_query-dsl/term/terms.md +++ b/_query-dsl/term/terms.md @@ -303,7 +303,7 @@ PUT students/_doc/3 ``` {% include copy-curl.html %} -To store customer bitmap filters, you'll create a `customer_filter` [binary field](https://opensearch.org/docs/latest/field-types/supported-field-types/binary/) in the `customers` index. 
Specify `store` as `true` to store the field: +To store customer bitmap filters, you'll create a `customer_filter` [binary field]({{site.url}}{{site.baseurl}}/field-types/supported-field-types/binary/) in the `customers` index. Specify `store` as `true` to store the field: ```json PUT /customers diff --git a/_search-plugins/knn/knn-index.md b/_search-plugins/knn/knn-index.md index b53fa997d8..9a9f253431 100644 --- a/_search-plugins/knn/knn-index.md +++ b/_search-plugins/knn/knn-index.md @@ -369,9 +369,9 @@ Setting | Default | Updatable | Description `index.knn` | false | false | Whether the index should build native library indexes for the `knn_vector` fields. If set to false, the `knn_vector` fields will be stored in doc values, but approximate k-NN search functionality will be disabled. `index.knn.algo_param.ef_search` | 100 | true | The size of the dynamic list used during k-NN searches. Higher values result in more accurate but slower searches. Only available for NMSLIB. `index.knn.advanced.approximate_threshold` | 15,000 | true | The number of vectors a segment must have before creating specialized data structures for approximate search. Set to `-1` to disable building vector data structures and `0` to always build them. -`index.knn.algo_param.ef_construction` | 100 | false | Deprecated in 1.0.0. Instead, use the [mapping parameters](https://opensearch.org/docs/latest/search-plugins/knn/knn-index/#method-definitions) to set this value. -`index.knn.algo_param.m` | 16 | false | Deprecated in 1.0.0. Use the [mapping parameters](https://opensearch.org/docs/latest/search-plugins/knn/knn-index/#method-definitions) to set this value instead. -`index.knn.space_type` | l2 | false | Deprecated in 1.0.0. Use the [mapping parameters](https://opensearch.org/docs/latest/search-plugins/knn/knn-index/#method-definitions) to set this value instead. +`index.knn.algo_param.ef_construction` | 100 | false | Deprecated in 1.0.0. Instead, use the [mapping parameters]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#method-definitions) to set this value. +`index.knn.algo_param.m` | 16 | false | Deprecated in 1.0.0. Use the [mapping parameters]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#method-definitions) to set this value instead. +`index.knn.space_type` | l2 | false | Deprecated in 1.0.0. Use the [mapping parameters]({{site.url}}{{site.baseurl}}/search-plugins/knn/knn-index/#method-definitions) to set this value instead. An index created in OpenSearch version 2.11 or earlier will still use the old `ef_construction` and `ef_search` values (`512`). {: .note} diff --git a/_search-plugins/star-tree-index.md b/_search-plugins/star-tree-index.md index 23d4b11c15..d1e03208d3 100644 --- a/_search-plugins/star-tree-index.md +++ b/_search-plugins/star-tree-index.md @@ -142,19 +142,19 @@ Star-tree indexes can be used to optimize queries and aggregations. The following queries are supported as of OpenSearch 2.18: -- [Term query](https://opensearch.org/docs/latest/query-dsl/term/term/) -- [Match all docs query](https://opensearch.org/docs/latest/query-dsl/match-all/) +- [Term query]({{site.url}}{{site.baseurl}}/query-dsl/term/term/) +- [Match all docs query]({{site.url}}{{site.baseurl}}/query-dsl/match-all/) To use a query with a star-tree index, the query's fields must be present in the `ordered_dimensions` section of the star-tree configuration. Queries must also be paired with a supported aggregation. 
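For example, assuming a hypothetical index in which `status` is configured as an ordered dimension and `size` as a metric, a star-tree-eligible request pairs a term query with a supported metric aggregation, as in the following sketch:

```json
GET /logs/_search
{
  "size": 0,
  "query": {
    "term": { "status": 200 }
  },
  "aggs": {
    "total_size": {
      "sum": { "field": "size" }
    }
  }
}
```
{% include copy-curl.html %}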
### Supported aggregations The following metric aggregations are supported as of OpenSearch 2.18: -- [Sum](https://opensearch.org/docs/latest/aggregations/metric/sum/) -- [Minimum](https://opensearch.org/docs/latest/aggregations/metric/minimum/) -- [Maximum](https://opensearch.org/docs/latest/aggregations/metric/maximum/) -- [Value count](https://opensearch.org/docs/latest/aggregations/metric/value-count/) -- [Average](https://opensearch.org/docs/latest/aggregations/metric/average/) +- [Sum]({{site.url}}{{site.baseurl}}/aggregations/metric/sum/) +- [Minimum]({{site.url}}{{site.baseurl}}/aggregations/metric/minimum/) +- [Maximum]({{site.url}}{{site.baseurl}}/aggregations/metric/maximum/) +- [Value count]({{site.url}}{{site.baseurl}}/aggregations/metric/value-count/) +- [Average]({{site.url}}{{site.baseurl}}/aggregations/metric/average/) To use aggregations: diff --git a/_security-analytics/threat-intelligence/api/monitor.md b/_security-analytics/threat-intelligence/api/monitor.md index 9a3ba76836..672e402eaf 100644 --- a/_security-analytics/threat-intelligence/api/monitor.md +++ b/_security-analytics/threat-intelligence/api/monitor.md @@ -8,7 +8,7 @@ nav_order: 35 # Monitor API -You can use the threat intelligence Monitor API to create, search, and update [monitors](https://opensearch.org/docs/latest/observing-your-data/alerting/monitors/) for your threat intelligence feeds. +You can use the threat intelligence Monitor API to create, search, and update [monitors]({{site.url}}{{site.baseurl}}/observing-your-data/alerting/monitors/) for your threat intelligence feeds. --- diff --git a/_security/access-control/users-roles.md b/_security/access-control/users-roles.md index b182e1576a..8f5bbf3d29 100644 --- a/_security/access-control/users-roles.md +++ b/_security/access-control/users-roles.md @@ -247,7 +247,7 @@ Map the role to your user: OpenSearch user roles are essential for controlling access to cluster resources. Users can be categorized as regular users, admin users, or super admin users based on their access rights and responsibilities. -For more information about defining users, see [Defining users](https://opensearch.org/docs/latest/security/access-control/users-roles/#defining-users). For more information about defining roles, see [Defining roles](https://opensearch.org/docs/latest/security/access-control/users-roles/#defining-roles). +For more information about defining users, see [Defining users]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/#defining-users). For more information about defining roles, see [Defining roles]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/#defining-roles). ### Regular users @@ -259,7 +259,7 @@ Admin users have elevated permissions that allow them to perform various adminis - Configure permissions. - Adjust backend settings. -Admin users can perform these tasks by configuring settings in the `opensearch.yml` file, using OpenSearch Dashboards, or interacting with the REST API. For more information about configuring users and roles, see [predefined roles](https://opensearch.org/docs/latest/security/access-control/users-roles/#predefined-roles). +Admin users can perform these tasks by configuring settings in the `opensearch.yml` file, using OpenSearch Dashboards, or interacting with the REST API. For more information about configuring users and roles, see [predefined roles]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/#predefined-roles). 
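For example, defining a regular user through the REST API takes the following shape; the username, password, and backend role are hypothetical placeholders:

```json
PUT _plugins/_security/api/internalusers/jdoe
{
  "password": "replace-with-a-strong-password",
  "backend_roles": ["analyst"]
}
```
{% include copy-curl.html %}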
### Super admin users Super admin users have the highest level of administrative authority within the OpenSearch environment. This role is typically reserved for select users and should be managed carefully. @@ -280,4 +280,4 @@ plugins.security.authcz.admin_dn: If the super admin certificate is signed by a different CA, then the admin CA must be concatenated with the node's CA in the file defined in `plugins.security.ssl.http.pemtrustedcas_filepath` in `opensearch.yml`. -For more information, see [Configuring super admin certificates](https://opensearch.org/docs/latest/security/configuration/tls/#configuring-admin-certificates). +For more information, see [Configuring super admin certificates]({{site.url}}{{site.baseurl}}/security/configuration/tls/#configuring-admin-certificates). diff --git a/_security/configuration/disable-enable-security.md b/_security/configuration/disable-enable-security.md index 38bcc01cdd..f37e148f9e 100755 --- a/_security/configuration/disable-enable-security.md +++ b/_security/configuration/disable-enable-security.md @@ -174,7 +174,7 @@ Use the following steps to reinstall the plugin: 3. Add the necessary configuration to `opensearch.yml` for TLS encryption. See [Configuration]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/security-settings/) for information about the settings that need to be configured. -4. Create the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` variable. For more information, see [Setting up a custom admin password](https://opensearch.org/docs/latest/security/configuration/demo-configuration/#setting-up-a-custom-admin-password). +4. Create the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` variable. For more information, see [Setting up a custom admin password]({{site.url}}{{site.baseurl}}/security/configuration/demo-configuration/#setting-up-a-custom-admin-password). 5. Restart the nodes and reenable shard allocation: diff --git a/_security/configuration/index.md b/_security/configuration/index.md index f68667d92d..e377eb35af 100644 --- a/_security/configuration/index.md +++ b/_security/configuration/index.md @@ -86,7 +86,7 @@ After initial setup, if you make changes to your security configuration or disab 1. Find the `securityadmin` script. The script is typically stored in the OpenSearch plugins directory, `plugins/opensearch-security/tools/securityadmin.[sh|bat]`. - Note: If you're using OpenSearch 1.x, the `securityadmin` script is located in the `plugins/opendistro_security/tools/` directory. - - For more information, see [Basic usage](https://opensearch.org/docs/latest/security/configuration/security-admin/#basic-usage). + - For more information, see [Basic usage]({{site.url}}{{site.baseurl}}/security/configuration/security-admin/#basic-usage). 2. Run the script by using the following command: ``` ./plugins/opensearch-security/tools/securityadmin.[sh|bat] diff --git a/_security/configuration/tls.md b/_security/configuration/tls.md index a4115b8c25..d73650b7d7 100755 --- a/_security/configuration/tls.md +++ b/_security/configuration/tls.md @@ -137,7 +137,7 @@ plugins.security.authcz.admin_dn: For security reasons, you cannot use wildcards or regular expressions as values for the `admin_dn` setting. -For more information about admin and super admin user roles, see [Admin and super admin roles](https://opensearch.org/docs/latest/security/access-control/users-roles/#admin-and-super-admin-roles). 
+For more information about admin and super admin user roles, see [Admin and super admin roles]({{site.url}}{{site.baseurl}}/security/access-control/users-roles/#admin-and-super-admin-roles).

## (Advanced) OpenSSL
diff --git a/_tools/logstash/read-from-opensearch.md b/_tools/logstash/read-from-opensearch.md
index 53024c233b..f1da9dc8ed 100644
--- a/_tools/logstash/read-from-opensearch.md
+++ b/_tools/logstash/read-from-opensearch.md
@@ -13,7 +13,7 @@ redirect_from:

Just as we ship Logstash events to an OpenSearch cluster using the [OpenSearch output plugin](https://github.com/opensearch-project/logstash-output-opensearch), we can also perform read operations on an OpenSearch cluster and load data into Logstash using the [OpenSearch input plugin](https://github.com/opensearch-project/logstash-input-opensearch). The OpenSearch input plugin reads the results of search queries performed on an OpenSearch cluster and loads them into Logstash. This lets you replay test logs, reindex, and perform other operations based on the loaded data. You can schedule ingestions to run periodically by using
-[cron expressions](https://opensearch.org/docs/latest/monitoring-plugins/alerting/cron/), or manually load data into Logstash by running the query once.
+[cron expressions]({{site.url}}{{site.baseurl}}/monitoring-plugins/alerting/cron/), or manually load data into Logstash by running the query once.

@@ -51,4 +51,4 @@ Like the output plugin, after adding your configuration to the `pipeline.conf` f

Adding `stdout{}` to the `output{}` section of your `pipeline.conf` file prints the query results to the console.

-To reindex the data into an OpenSearch domain, add the destination domain configuration in the `output{}` section like shown [here](https://opensearch.org/docs/latest/tools/logstash/index/).
+To reindex the data into an OpenSearch domain, add the destination domain configuration in the `output{}` section, as shown [here]({{site.url}}{{site.baseurl}}/tools/logstash/index/).
diff --git a/_tools/logstash/ship-to-opensearch.md b/_tools/logstash/ship-to-opensearch.md
index 6ea355b34f..ae8dbbde2e 100644
--- a/_tools/logstash/ship-to-opensearch.md
+++ b/_tools/logstash/ship-to-opensearch.md
@@ -158,7 +158,7 @@ The following list provides details on the credential resolution logic:

The OpenSearch output plugin can store both time series datasets (such as logs, events, and metrics) and non-time series data in OpenSearch. Data streams are recommended for indexing time series datasets (such as logs, metrics, and events) into OpenSearch.

-To learn more about data streams, see the [data stream documentation](https://opensearch.org/docs/latest/opensearch/data-streams/).
+To learn more about data streams, see the [data stream documentation]({{site.url}}{{site.baseurl}}/opensearch/data-streams/).

To ingest data into a data stream through Logstash, create the data stream, specify its name, and set the `action` setting to `create`, as shown in the following example configuration:
diff --git a/_troubleshoot/security-admin.md b/_troubleshoot/security-admin.md
index f4770c1ddb..976e435615 100644
--- a/_troubleshoot/security-admin.md
+++ b/_troubleshoot/security-admin.md
@@ -92,7 +92,7 @@ Connected as CN=node-0.example.com,OU=SSL,O=Test,L=Test,C=DE

ERR: CN=node-0.example.com,OU=SSL,O=Test,L=Test,C=DE is not an admin user
```

-You must use an admin certificate when executing the script.
To learn more, see [Configuring super admin certificates](https://opensearch.org/docs/latest/security/configuration/tls/#configuring-admin-certificates).
+You must use an admin certificate when executing the script. To learn more, see [Configuring super admin certificates]({{site.url}}{{site.baseurl}}/security/configuration/tls/#configuring-admin-certificates).

## Use the diagnose option
diff --git a/_tuning-your-cluster/availability-and-recovery/remote-store/index.md b/_tuning-your-cluster/availability-and-recovery/remote-store/index.md
index d5dc99d5fe..63812a6327 100644
--- a/_tuning-your-cluster/availability-and-recovery/remote-store/index.md
+++ b/_tuning-your-cluster/availability-and-recovery/remote-store/index.md
@@ -105,7 +105,7 @@ You can use remote-backed storage to:

## Benchmarks

-The OpenSearch Project has run remote store using multiple workload options available within the [OpenSearch Benchmark](https://opensearch.org/docs/latest/benchmark/index/) tool. This section summarizes the benchmark results for the following workloads:
+The OpenSearch Project has benchmarked remote-backed storage using multiple workload options available in the [OpenSearch Benchmark]({{site.url}}{{site.baseurl}}/benchmark/index/) tool. This section summarizes the benchmark results for the following workloads:

- [StackOverflow](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/so)
- [HTTP logs](https://github.com/opensearch-project/opensearch-benchmark-workloads/tree/main/http_logs)
diff --git a/_tuning-your-cluster/availability-and-recovery/remote-store/remote-store-stats-api.md b/_tuning-your-cluster/availability-and-recovery/remote-store/remote-store-stats-api.md
index 6139ef041d..5bbd5c226c 100644
--- a/_tuning-your-cluster/availability-and-recovery/remote-store/remote-store-stats-api.md
+++ b/_tuning-your-cluster/availability-and-recovery/remote-store/remote-store-stats-api.md
@@ -290,9 +290,9 @@ The `segment.upload` object contains the following fields.

| `total_remote_refresh` | The total number of remote refreshes. |
| `total_uploads_in_bytes` | The total number of bytes in all uploads to the remote store. |
| `remote_refresh_size_in_bytes.last_successful` | The size of the data uploaded during the last successful refresh. |
-| `remote_refresh_size_in_bytes.moving_avg` | The average size of the data, in bytes, uploaded in the last *N* refreshes. *N* is defined in the `remote_store.moving_average_window_size` setting. For more information, see [Remote segment backpressure](https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/remote-store/remote-segment-backpressure/). |
-| `upload_latency_in_bytes_per_sec.moving_avg` | The average speed of remote segment uploads, in bytes per second, for the last *N* uploads. *N* is defined in the `remote_store.moving_average_window_size` setting. For more information, see [Remote segment backpressure](https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/remote-store/remote-segment-backpressure/). |
-| `remote_refresh_latency_in_millis.moving_avg` | The average amount of time, in milliseconds, taken by a single remote refresh during the last *N* remote refreshes. *N* is defined in the `remote_store.moving_average_window_size` setting. For more information, see [Remote segment backpressure](https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/remote-store/remote-segment-backpressure/).
| +| `remote_refresh_size_in_bytes.moving_avg` | The average size of the data, in bytes, uploaded in the last *N* refreshes. *N* is defined in the `remote_store.moving_average_window_size` setting. For more information, see [Remote segment backpressure]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/remote-store/remote-segment-backpressure/). | +| `upload_latency_in_bytes_per_sec.moving_avg` | The average speed of remote segment uploads, in bytes per second, for the last *N* uploads. *N* is defined in the `remote_store.moving_average_window_size` setting. For more information, see [Remote segment backpressure]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/remote-store/remote-segment-backpressure/). | +| `remote_refresh_latency_in_millis.moving_avg` | The average amount of time, in milliseconds, taken by a single remote refresh during the last *N* remote refreshes. *N* is defined in the `remote_store.moving_average_window_size` setting. For more information, see [Remote segment backpressure]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/remote-store/remote-segment-backpressure/). | The `segment.download` object contains the following fields. diff --git a/_tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore.md b/_tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore.md index ac717633f6..e85dc80cb7 100644 --- a/_tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore.md +++ b/_tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore.md @@ -68,7 +68,7 @@ Before you can take a snapshot, you have to "register" a snapshot repository. A ``` {% include copy-curl.html %} -You will most likely not need to specify any parameters except for `location`. For allowed request parameters, see [Register or update snapshot repository API](https://opensearch.org/docs/latest/api-reference/snapshots/create-repository/). +You will most likely not need to specify any parameters except for `location`. For allowed request parameters, see [Register or update snapshot repository API]({{site.url}}{{site.baseurl}}/api-reference/snapshots/create-repository/). ### Amazon S3 @@ -218,7 +218,7 @@ You will most likely not need to specify any parameters except for `location`. F ``` {% include copy-curl.html %} -You will most likely not need to specify any parameters except for `bucket` and `base_path`. For allowed request parameters, see [Register or update snapshot repository API](https://opensearch.org/docs/latest/api-reference/snapshots/create-repository/). +You will most likely not need to specify any parameters except for `bucket` and `base_path`. For allowed request parameters, see [Register or update snapshot repository API]({{site.url}}{{site.baseurl}}/api-reference/snapshots/create-repository/). ### Registering a Microsoft Azure storage account using Helm @@ -264,7 +264,7 @@ Use the following steps to register a snapshot repository backed by an Azure sto azure-snapshot-storage-account-key: ### Insert base64 encoded key ``` -1. [Deploy OpenSearch using Helm](https://opensearch.org/docs/latest/install-and-configure/install-opensearch/helm/) with the following additional values. Specify the value of the storage account in the `AZURE_SNAPSHOT_STORAGE_ACCOUNT` environment variable: +1. [Deploy OpenSearch using Helm]({{site.url}}{{site.baseurl}}/install-and-configure/install-opensearch/helm/) with the following additional values. 
Specify the value of the storage account in the `AZURE_SNAPSHOT_STORAGE_ACCOUNT` environment variable: ```yaml extraInitContainers: diff --git a/_tuning-your-cluster/availability-and-recovery/workload-management/query-group-lifecycle-api.md b/_tuning-your-cluster/availability-and-recovery/workload-management/query-group-lifecycle-api.md index d59fd4ecf2..2ed40d0705 100644 --- a/_tuning-your-cluster/availability-and-recovery/workload-management/query-group-lifecycle-api.md +++ b/_tuning-your-cluster/availability-and-recovery/workload-management/query-group-lifecycle-api.md @@ -69,7 +69,7 @@ PUT /_wlm/query_group | :--- | :--- | | `_id` | The ID of the query group, which can be used to associate query requests with the group and enforce the group's resource limits. | | `name` | The name of the query group. | -| `resiliency_mode` | The resiliency mode of the query group. Valid modes are `enforced`, `soft`, and `monitor`. For more information about resiliency modes, see [Operating modes](https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/workload-management/wlm-feature-overview/#operating-modes). | +| `resiliency_mode` | The resiliency mode of the query group. Valid modes are `enforced`, `soft`, and `monitor`. For more information about resiliency modes, see [Operating modes]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/workload-management/wlm-feature-overview/#operating-modes). | | `resource_limits` | The resource limits for query requests in the query group. Valid resources are `cpu` and `memory`. | When creating a query group, make sure that the sum of the resource limits for a single resource, either `cpu` or `memory`, does not exceed 1.
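To see how these fields fit together, the following is a minimal sketch of a create request that respects that constraint; the group name `analytics` and the limit values are hypothetical:

```json
PUT /_wlm/query_group
{
  "name": "analytics",
  "resiliency_mode": "enforced",
  "resource_limits": {
    "cpu": 0.4,
    "memory": 0.2
  }
}
```

In this sketch, each limit is a fraction of the available resource, so additional query groups could still claim up to `0.6` CPU and `0.8` memory before the per-resource sums reach 1.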