Merge branch 'main' into add-python-resources
vagimeli authored Jul 1, 2024
2 parents 7561ec1 + 40236a1 commit 110ecb2
Showing 6 changed files with 143 additions and 4 deletions.
114 changes: 114 additions & 0 deletions _api-reference/render-template.md
@@ -0,0 +1,114 @@
---
layout: default
title: Render Template
nav_order: 82
---

# Render Template

The Render Template API renders a [search template]({{site.url}}{{site.baseurl}}/search-plugins/search-template/) as a search query.

## Paths and HTTP methods

```
GET /_render/template
POST /_render/template
GET /_render/template/<id>
POST /_render/template/<id>
```

## Path parameters

The Render Template API supports the following optional path parameter.

| Parameter | Type | Description |
| :--- | :--- | :--- |
| `id` | String | The ID of the search template to render. |

## Request options

The following parameters are supported in the request body of the Render Template API.

| Parameter | Required | Type | Description |
| :--- | :--- | :--- | :--- |
| `id` | Conditional | String | The ID of the search template to render. Not required if the ID is provided in the path or if an inline template is specified in `source`. |
| `params` | No | Object | A list of key-value pairs that replace Mustache variables found in the search template. The key-value pairs must exist in the documents being searched. |
| `source` | Conditional | Object | An inline search template to render if a stored search template is not specified. Supports the same parameters as a [Search]({{site.url}}{{site.baseurl}}/api-reference/search/) API request and [Mustache](https://mustache.github.io/mustache.5.html) variables. |

## Example request

The following request examples use the search template with the template ID `play_search_template`:

```json
{
"source": {
"query": {
"match": {
"play_name": "{{play_name}}"
}
}
},
"params": {
"play_name": "Henry IV"
}
}
```
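
If you have not yet saved this template, one way to store it under the same ID is to create it as a stored script using the `mustache` language. The following request is a sketch; adjust the query to match your own template:

```json
PUT _scripts/play_search_template
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "match": {
          "play_name": "{{play_name}}"
        }
      }
    }
  }
}
```
{% include copy.html %}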

### Render template using template ID

The following example request renders the search template with the ID `play_search_template`:

```json
POST _render/template
{
"id": "play_search_template",
"params": {
"play_name": "Henry IV"
}
}
```
{% include copy.html %}
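
Because the template ID can also be supplied in the path, the same render request can be written as follows (an equivalent sketch):

```json
POST _render/template/play_search_template
{
  "params": {
    "play_name": "Henry IV"
  }
}
```
{% include copy.html %}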

### Render template using `source`

If you don't want to use a saved template, or if you want to test a template before saving it, you can provide an inline template in the `source` parameter using [Mustache](https://mustache.github.io/mustache.5.html) variables, as shown in the following example:

```json
POST _render/template
{
"source": {
"from": "{{from}}{{^from}}10{{/from}}",
"size": "{{size}}{{^size}}10{{/size}}",
"query": {
"match": {
"play_name": "{{play_name}}"
}
}
},
"params": {
"play_name": "Henry IV"
}
}
```
{% include copy.html %}

## Example response

The response contains the rendered search query in the `template_output` field:

```json
{
"template_output": {
"from": "0",
"size": "10",
"query": {
"match": {
"play_name": "Henry IV"
}
}
}
}
```
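
After confirming that the rendered query is what you expect, you can run the stored template against an index by using the `_search/template` endpoint. The following sketch assumes a hypothetical `shakespeare` index:

```json
GET /shakespeare/_search/template
{
  "id": "play_search_template",
  "params": {
    "play_name": "Henry IV"
  }
}
```
{% include copy.html %}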




4 changes: 4 additions & 0 deletions _ingest-pipelines/processors/index-processors.md
@@ -71,3 +71,7 @@ Processor type | Description
## Batch-enabled processors

Some processors support batch ingestion---they can process multiple documents together as a batch, which usually provides better performance than processing documents one at a time. For batch processing, use the [Bulk API]({{site.url}}{{site.baseurl}}/api-reference/document-apis/bulk/) and provide a `batch_size` parameter. All batch-enabled processors have a batch mode and a single-document mode. When you ingest documents using the `PUT` method, the processor functions in single-document mode and processes documents in series. Currently, only the `text_embedding` and `sparse_encoding` processors are batch enabled; all other processors process documents one at a time.
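
For example, the following sketch sends two documents in a single bulk request so that a batch-enabled processor can process them together. The `text-embedding-pipeline` pipeline name, the `testindex1` index, and the field values are hypothetical:

```json
POST _bulk?batch_size=2&pipeline=text-embedding-pipeline
{ "index": { "_index": "testindex1", "_id": "1" } }
{ "passage_text": "This is the first document." }
{ "index": { "_index": "testindex1", "_id": "2" } }
{ "passage_text": "This is the second document." }
```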

## Selectively enabling processors

Processors defined by the [ingest-common module](https://github.com/opensearch-project/OpenSearch/blob/2.x/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonPlugin.java) can be selectively enabled by providing the `ingest-common.processors.allowed` cluster setting. If the setting is not provided, all processors are enabled by default. Specifying an empty list disables all processors. If the setting is changed so that previously enabled processors are removed, then any pipeline using a disabled processor will fail after the node restarts and the new setting takes effect.
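
As an illustration only, the following `opensearch.yml` sketch allows a small set of ingest-common processors; the processor names are examples, not a recommendation, and any processor not listed would be unavailable:

```yaml
# opensearch.yml (illustrative): allow only these ingest-common processors
ingest-common.processors.allowed:
  - append
  - lowercase
  - set
```
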
@@ -54,6 +54,8 @@ OpenSearch supports the following dynamic cluster-level index settings:

- `indices.fielddata.cache.size` (String): The maximum size of the field data cache. May be specified as an absolute value (for example, `8GB`) or a percentage of the node heap (for example, `50%`). This value is static, so you must specify it in the `opensearch.yml` file. If you don't specify this setting, the maximum size is unlimited. This value should be smaller than `indices.breaker.fielddata.limit`. For more information, see [Field data circuit breaker]({{site.url}}{{site.baseurl}}/install-and-configure/configuring-opensearch/circuit-breaker/#field-data-circuit-breaker-settings).

- `indices.query.bool.max_clause_count` (Integer): Defines the maximum product of fields and terms that can be queried simultaneously. Before OpenSearch 2.16, this setting was static and required a cluster restart in order to be applied. The setting is now dynamic; however, existing search thread pools may initially continue to use the old value, which can cause `TooManyClauses` exceptions, while new search thread pools use the updated value. Default is `1024`.

- `cluster.remote_store.index.path.type` (String): The path strategy for the data stored in the remote store. This setting is effective only for remote-store-enabled clusters. This setting supports the following values:
- `fixed`: Stores the data in path structure `<repository_base_path>/<index_uuid>/<shard_id>/`.
    - `hashed_prefix`: Stores the data in path structure `hash(<shard-data-identifier>)/<repository_base_path>/<index_uuid>/<shard_id>/`.
4 changes: 4 additions & 0 deletions _search-plugins/search-pipelines/search-processors.md
@@ -121,3 +121,7 @@ The response contains the `search_pipelines` object that lists the available req

In addition to the processors provided by OpenSearch, additional processors may be provided by plugins.
{: .note}

## Selectively enabling processors

Processors defined by the [search-pipeline-common module](https://github.com/opensearch-project/OpenSearch/blob/2.x/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java) are selectively enabled through the following cluster settings: `search.pipeline.common.request.processors.allowed`, `search.pipeline.common.response.processors.allowed`, or `search.pipeline.common.search.phase.results.processors.allowed`. If unspecified, then all processors are enabled. An empty list disables all processors. Removing enabled processors causes pipelines using them to fail after a node restart.
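
For example, the following `opensearch.yml` sketch enables only a few request and response processors; the processor names are illustrative:

```yaml
# opensearch.yml (illustrative): allow only these search pipeline processors
search.pipeline.common.request.processors.allowed:
  - filter_query
  - oversample
search.pipeline.common.response.processors.allowed:
  - rename_field
  - truncate_hits
```
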
17 changes: 16 additions & 1 deletion _search-plugins/search-relevance/reranking-search-results.md
@@ -115,4 +115,19 @@ POST /my-index/_search
```
{% include copy-curl.html %}

Alternatively, you can provide the full path to the field containing the context. For more information, see [Rerank processor example]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/rerank-processor/#example).

## Using rerank and normalization processors together

When you use a rerank processor in conjunction with a [normalization processor]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/normalization-processor/) and a hybrid query, the rerank processor alters the final document scores. This is because the rerank processor operates after the normalization processor in the search pipeline.
{: .note}

The processing order is as follows:

- Normalization processor: This processor normalizes the document scores based on the configured normalization method. For more information, see [Normalization processor]({{site.url}}{{site.baseurl}}/search-plugins/search-pipelines/normalization-processor/).
- Rerank processor: Following normalization, the rerank processor further adjusts the document scores. This adjustment can significantly impact the final ordering of search results.

This processing order has the following implications:

- Score modification: The rerank processor modifies the scores that were initially adjusted by the normalization processor, potentially leading to different ranking results than initially expected.
- Hybrid queries: In the context of hybrid queries, where multiple types of queries and scoring mechanisms are combined, this behavior is particularly noteworthy. The combined scores from the initial query are normalized first and then reranked, resulting in a two-stage scoring modification.
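
To make the ordering concrete, the following sketch defines a pipeline in which the normalization processor runs on the hybrid query's phase results and the rerank processor then adjusts the final response. The pipeline name, model ID, and document field are hypothetical:

```json
PUT /_search/pipeline/normalize-then-rerank
{
  "phase_results_processors": [
    {
      "normalization-processor": {
        "normalization": { "technique": "min_max" },
        "combination": { "technique": "arithmetic_mean" }
      }
    }
  ],
  "response_processors": [
    {
      "rerank": {
        "ml_opensearch": {
          "model_id": "gnDIB4kBfUsSoeNT_jAw"
        },
        "context": {
          "document_fields": [ "passage_text" ]
        }
      }
    }
  ]
}
```
{% include copy-curl.html %}
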
6 changes: 3 additions & 3 deletions _search-plugins/sql/ppl/syntax.md
@@ -16,15 +16,15 @@ Currently, `PPL` supports only one `search` command, which can be omitted to sim
## Syntax

```sql
search source=<index> [where-filter]
source=<index> [where-filter]
```

Field | Description | Required
:--- | :--- |:---
`search` | Specifies search keywords. | Yes
`index` | Specifies which index to query from. | No
`where-filter` | A Boolean expression that filters searches exactly like a `where` command. | No

## Examples

