adding examples in greek to lowercase token filter #8154
Signed-off-by: Anton Rubin <[email protected]>
AntonEliatra committed Sep 3, 2024
1 parent 5dddda8 commit 167d063
Showing 2 changed files with 19 additions and 29 deletions.
2 changes: 1 addition & 1 deletion _analyzers/token-filters/index.md
@@ -38,7 +38,7 @@ Token filter | Underlying Lucene token filter| Description
`kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms. Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins).
`length` | [LengthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html) | Removes tokens whose lengths are shorter or longer than the length range specified by `min` and `max`.
`limit` | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count.
-`lowercase` | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)).
+[`lowercase`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/lowercase/) | [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) | Converts tokens to lowercase. The default [LowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/LowerCaseFilter.html) is for the English language. You can set the `language` parameter to `greek` (uses [GreekLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html)), `irish` (uses [IrishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html)), or `turkish` (uses [TurkishLowerCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html)).
`min_hash` | [MinHashFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/minhash/MinHashFilter.html) | Uses the [MinHash technique](https://en.wikipedia.org/wiki/MinHash) to estimate document similarity. Performs the following operations on a token stream sequentially: <br> 1. Hashes each token in the stream. <br> 2. Assigns the hashes to buckets, keeping only the smallest hashes of each bucket. <br> 3. Outputs the smallest hash from each bucket as a token stream.
`multiplexer` | N/A | Emits multiple tokens at the same position. Runs each token through each of the specified filter lists separately and outputs the results as separate tokens.
`ngram` | [NGramTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html) | Tokenizes the given token into n-grams of lengths between `min_gram` and `max_gram`.
46 changes: 18 additions & 28 deletions _analyzers/token-filters/lowercase.md
@@ -7,35 +7,32 @@ nav_order: 260

# Lowercase token filter

-The `lowercase` token filter in OpenSearch is used to limit the number of tokens that are passed through the analysis chain.
+The `lowercase` token filter in OpenSearch is used to convert all characters in the token stream to lowercase, making searches case insensitive.

## Parameters

-The `lowercase` token filter in OpenSearch can be configured with the following parameters:

-- `max_token_count`: Maximum number of tokens that will be generated. Default is `1` (Integer, _Optional_)
-- `consume_all_tokens`: Use all token, even if result exceeds `max_token_count`. Default is `false` (Boolean, _Optional_)

+The `lowercase` token filter in OpenSearch can be configured with the optional `language` parameter. The possible options are [`greek`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/el/GreekLowerCaseFilter.html), [`irish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/ga/IrishLowerCaseFilter.html), or [`turkish`](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/tr/TurkishLowerCaseFilter.html). The default is [Lucene's LowerCaseFilter](https://lucene.apache.org/core/8_7_0/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html). (String, _Optional_)
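Language-specific lowercasing exists because the generic Unicode mapping is wrong for some alphabets. Turkish, for example, pairs dotted `İ/i` with dotless `I/ı`, which a language-neutral lowercase mapping does not respect. A rough Python sketch of the idea (illustrative only, not the Lucene `TurkishLowerCaseFilter` code):

```python
def turkish_lower(text: str) -> str:
    """Turkish-aware lowercasing sketch: map dotted İ to i and dotless I
    to ı before applying the generic Unicode lowercase mapping."""
    return text.replace("İ", "i").replace("I", "ı").lower()

# The language-neutral mapping turns I into i, which is wrong for Turkish:
print("ISPARTA".lower())         # isparta
print(turkish_lower("ISPARTA"))  # ısparta
```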

## Example

-The following example request creates a new index named `my_index` and configures an analyzer with `lowercase` filter:
+The following example request creates a new index named `custom_lowercase_example` and configures an analyzer with a `lowercase` filter with `language` set to `greek`:

```json
-PUT my_index
+PUT /custom_lowercase_example
{
  "settings": {
    "analysis": {
      "analyzer": {
-        "three_token_limit": {
+        "greek_lowercase_example": {
          "type": "custom",
          "tokenizer": "standard",
-          "filter": [ "custom_token_limit" ]
+          "filter": ["greek_lowercase"]
        }
      },
      "filter": {
-        "custom_token_limit": {
-          "type": "limit",
-          "max_token_count": 3
+        "greek_lowercase": {
+          "type": "lowercase",
+          "language": "greek"
        }
      }
    }
@@ -49,10 +46,10 @@ PUT my_index
Use the following request to examine the tokens generated using the created analyzer:

```json
-GET /my_index/_analyze
+GET /custom_lowercase_example/_analyze
{
-  "analyzer": "three_token_limit",
-  "text": "OpenSearch is a powerful and flexible search engine."
+  "analyzer": "greek_lowercase_example",
+  "text": "Αθήνα ΕΛΛΑΔΑ"
}
```
{% include copy-curl.html %}
@@ -63,25 +60,18 @@ The response contains the generated tokens:
{
  "tokens": [
    {
-      "token": "OpenSearch",
+      "token": "αθηνα",
      "start_offset": 0,
-      "end_offset": 10,
+      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
-      "token": "is",
-      "start_offset": 11,
-      "end_offset": 13,
+      "token": "ελλαδα",
+      "start_offset": 6,
+      "end_offset": 12,
      "type": "<ALPHANUM>",
      "position": 1
-    },
-    {
-      "token": "a",
-      "start_offset": 14,
-      "end_offset": 15,
-      "type": "<ALPHANUM>",
-      "position": 2
    }
  ]
}
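Note that the output token is `αθηνα`, not `αθήνα`: per the example above, the Greek filter strips the accent in addition to lowercasing. A rough Python approximation of that folding (illustrative only, not the Lucene `GreekLowerCaseFilter` code; it assumes the folding consists of lowercasing, final-sigma normalization, and diacritic stripping):

```python
import unicodedata

def greek_fold(token: str) -> str:
    """Approximate Greek lowercase folding: lowercase, normalize final
    sigma (ς -> σ), then strip combining diacritical marks."""
    lowered = token.lower().replace("ς", "σ")
    decomposed = unicodedata.normalize("NFD", lowered)
    stripped = "".join(ch for ch in decomposed
                       if unicodedata.category(ch) != "Mn")
    return unicodedata.normalize("NFC", stripped)

print(greek_fold("Αθήνα"))   # αθηνα
print(greek_fold("ΕΛΛΑΔΑ"))  # ελλαδα
```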
