add unique token filter docs #8451
Signed-off-by: Anton Rubin <[email protected]>
AntonEliatra committed Oct 3, 2024
1 parent 76486a4 commit e9d1faa
Showing 2 changed files with 103 additions and 1 deletion.
2 changes: 1 addition & 1 deletion _analyzers/token-filters/index.md
@@ -59,7 +59,7 @@ Normalization | `arabic_normalization`: [ArabicNormalizer](https://lucene.apache
`synonym_graph` | N/A | Supplies a synonym list, including multiword synonyms, for the analysis process.
`trim` | [TrimFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html) | Trims leading and trailing white space from each token in a stream.
`truncate` | [TruncateTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html) | Truncates tokens whose length exceeds the specified character limit.
-`unique` | N/A | Ensures each token is unique by removing duplicate tokens from a stream.
+[`unique`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/unique/) | N/A | Ensures each token is unique by removing duplicate tokens from a stream.
`uppercase` | [UpperCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/UpperCaseFilter.html) | Converts tokens to uppercase.
`word_delimiter` | [WordDelimiterFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules.
`word_delimiter_graph` | [WordDelimiterGraphFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/WordDelimiterGraphFilter.html) | Splits tokens at non-alphanumeric characters and performs normalization based on the specified rules. Assigns multi-position tokens a `positionLength` attribute.
102 changes: 102 additions & 0 deletions _analyzers/token-filters/unique.md
@@ -0,0 +1,102 @@
---
layout: default
title: Unique
parent: Token filters
nav_order: 450
---

# Unique token filter

The `unique` token filter ensures that only unique tokens are kept during the analysis process, removing duplicate tokens that appear within a single field or text block.

## Parameters

The `unique` token filter can be configured with the `only_on_same_position` parameter. If set to `true`, the filter acts as the `remove_duplicates` token filter and removes only tokens that occur in the same position.
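
As a minimal sketch of that setting (the index, filter, and analyzer names here are illustrative), the following request combines the filter with the `keyword_repeat` and `porter_stem` filters. When stemming leaves a token unchanged, `keyword_repeat` produces two identical tokens at the same position, and `only_on_same_position: true` removes the extra copy:

```json
PUT /unique_pos_example
{
  "settings": {
    "analysis": {
      "filter": {
        "unique_pos_filter": {
          "type": "unique",
          "only_on_same_position": true
        }
      },
      "analyzer": {
        "unique_pos_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "keyword_repeat",
            "porter_stem",
            "unique_pos_filter"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}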

## Example

The following example request creates a new index named `unique_example` and configures an analyzer with the `unique` filter:

```json
PUT /unique_example
{
  "settings": {
    "analysis": {
      "filter": {
        "unique_filter": {
          "type": "unique",
          "only_on_same_position": false
        }
      },
      "analyzer": {
        "unique_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "unique_filter"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
GET /unique_example/_analyze
{
  "analyzer": "unique_analyzer",
  "text": "OpenSearch OpenSearch is powerful powerful and scalable"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "opensearch",
      "start_offset": 0,
      "end_offset": 10,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "is",
      "start_offset": 22,
      "end_offset": 24,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "powerful",
      "start_offset": 25,
      "end_offset": 33,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "and",
      "start_offset": 43,
      "end_offset": 46,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "scalable",
      "start_offset": 47,
      "end_offset": 55,
      "type": "<ALPHANUM>",
      "position": 4
    }
  ]
}
```
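
Note that the duplicate `OpenSearch` and `powerful` tokens each appear only once in the output: the filter has removed the repeated occurrences from the stream.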
