From e65268fece37744ea53f210a6f4d3aee33a827e7 Mon Sep 17 00:00:00 2001
From: "opensearch-trigger-bot[bot]"
<98922864+opensearch-trigger-bot[bot]@users.noreply.github.com>
Date: Mon, 2 Dec 2024 14:42:27 +0000
Subject: [PATCH] Add lowercase token filter docs (#8162) (#8834)
---
_analyzers/token-filters/index.md | 2 +-
_analyzers/token-filters/lowercase.md | 82 +++++++++++++++++++++++++++
2 files changed, 83 insertions(+), 1 deletion(-)
create mode 100644 _analyzers/token-filters/lowercase.md
diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md
index 6ff348dd37..040397424b 100644
--- a/_analyzers/token-filters/index.md
+++ b/_analyzers/token-filters/index.md
@@ -40,7 +40,7 @@ Token filter | Underlying Lucene token filter| Description
`kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms. Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins).
[`length`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/length/) | [LengthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html) | Removes tokens that are shorter or longer than the length range specified by `min` and `max`.
`limit` | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count.
-`lowercase` | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)).
+[`lowercase`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/lowercase/) | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) processes the English language. To process other languages, set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)).
[`min_hash`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/min-hash/) | [MinHashFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/minhash/MinHashFilter.html) | Uses the [MinHash technique](https://en.wikipedia.org/wiki/MinHash) to estimate document similarity. Performs the following operations on a token stream sequentially:
1. Hashes each token in the stream.
2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket.
3. Outputs the smallest hash from each bucket as a token stream.
[`multiplexer`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/multiplexer/) | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens.
[`ngram`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/ngram/) | [NGramTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html) | Tokenizes the given token into n-grams of lengths between `min_gram` and `max_gram`.
diff --git a/_analyzers/token-filters/lowercase.md b/_analyzers/token-filters/lowercase.md
new file mode 100644
index 0000000000..89f0f219fa
--- /dev/null
+++ b/_analyzers/token-filters/lowercase.md
@@ -0,0 +1,82 @@
+---
+layout: default
+title: Lowercase
+parent: Token filters
+nav_order: 260
+---
+
+# Lowercase token filter
+
+The `lowercase` token filter converts all characters in the token stream to lowercase, making searches case insensitive.
+
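+For a quick check of the default (English) behavior, you can pass the filter directly to the `_analyze` API without creating an index. The following request is a minimal illustration, and the sample text is arbitrary:
+
+```json
+GET /_analyze
+{
+  "tokenizer": "standard",
+  "filter": ["lowercase"],
+  "text": "OpenSearch FILTER Example"
+}
+```
+{% include copy-curl.html %}
+
+The response contains the lowercased tokens `opensearch`, `filter`, and `example`.
+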
+## Parameters
+
+The `lowercase` token filter can be configured with the following parameter.
+
+Parameter | Required/Optional | Description
+:--- | :--- | :---
+ `language` | Optional | Specifies a language-specific token filter. Valid values are: <br>- [`greek`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)<br>- [`irish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)<br>- [`turkish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html).<br> Default is the [Lucene LowerCaseFilter](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html).
+
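+As a quick illustration of the `language` parameter, the following request applies the Turkish variant inline in the `_analyze` API (the sample text is arbitrary). The Turkish filter lowercases the dotted capital `İ` to `i` and the dotless capital `I` to `ı`:
+
+```json
+GET /_analyze
+{
+  "tokenizer": "standard",
+  "filter": [
+    {
+      "type": "lowercase",
+      "language": "turkish"
+    }
+  ],
+  "text": "İstanbul LİMANI"
+}
+```
+{% include copy-curl.html %}
+
+In the response, `LİMANI` becomes `limanı` (with a dotless `ı`), which the default filter would not produce.
+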
+## Example
+
+The following example request creates a new index named `custom_lowercase_example`. It configures an analyzer with a `lowercase` filter and specifies `greek` as the `language`:
+
+```json
+PUT /custom_lowercase_example
+{
+  "settings": {
+    "analysis": {
+      "analyzer": {
+        "greek_lowercase_example": {
+          "type": "custom",
+          "tokenizer": "standard",
+          "filter": ["greek_lowercase"]
+        }
+      },
+      "filter": {
+        "greek_lowercase": {
+          "type": "lowercase",
+          "language": "greek"
+        }
+      }
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+
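+To apply the analyzer at index time, reference it in a field mapping. The following sketch assumes a hypothetical `title` field; any text field works:
+
+```json
+PUT /custom_lowercase_example/_mapping
+{
+  "properties": {
+    "title": {
+      "type": "text",
+      "analyzer": "greek_lowercase_example"
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+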
+## Generated tokens
+
+Use the following request to examine the tokens generated using the analyzer:
+
+```json
+GET /custom_lowercase_example/_analyze
+{
+  "analyzer": "greek_lowercase_example",
+  "text": "Αθήνα ΕΛΛΑΔΑ"
+}
+```
+{% include copy-curl.html %}
+
+The response contains the generated tokens:
+
+```json
+{
+  "tokens": [
+    {
+      "token": "αθηνα",
+      "start_offset": 0,
+      "end_offset": 5,
+      "type": "<ALPHANUM>",
+      "position": 0
+    },
+    {
+      "token": "ελλαδα",
+      "start_offset": 6,
+      "end_offset": 12,
+      "type": "<ALPHANUM>",
+      "position": 1
+    }
+  ]
+}
+```