Class KeywordTokenizer
java.lang.Object
  com.azure.search.documents.indexes.models.LexicalTokenizer
    com.azure.search.documents.indexes.models.KeywordTokenizer
public final class KeywordTokenizer extends LexicalTokenizer
Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.
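To make the behavior concrete, the sketch below mimics what "emits the entire input as a single token" means. This is an illustrative stand-in only, not the Apache Lucene implementation or the Azure SDK class; the class and method names here are hypothetical.

```java
import java.util.List;

// Hypothetical sketch: a keyword tokenizer performs no splitting at all.
// Whatever text comes in is emitted unchanged as exactly one token.
public class KeywordTokenizerSketch {
    public static List<String> tokenize(String input) {
        // One input string, one token: no whitespace or punctuation analysis.
        return List.of(input);
    }

    public static void main(String[] args) {
        // The whole phrase survives as a single token, spaces included.
        System.out.println(tokenize("Azure Cognitive Search"));
    }
}
```

This contrasts with word-oriented tokenizers, which would break the same input into three tokens; a keyword tokenizer is typically used for fields that should match only as a whole value.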
Constructor Summary
Constructor                      Description
KeywordTokenizer(String name)    Constructor of KeywordTokenizer.
Method Summary
Modifier and Type    Method                                       Description
Integer              getMaxTokenLength()                          Get the maxTokenLength property: the maximum token length.
KeywordTokenizer     setMaxTokenLength(Integer maxTokenLength)    Set the maxTokenLength property: the maximum token length.
Methods inherited from class com.azure.search.documents.indexes.models.LexicalTokenizer
getName
Constructor Detail
KeywordTokenizer
public KeywordTokenizer(String name)
Constructor of KeywordTokenizer.

Parameters:
name - The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; may start and end only with alphanumeric characters; and is limited to 128 characters.
Method Detail
getMaxTokenLength
public Integer getMaxTokenLength()
Get the maxTokenLength property: the maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.

Returns:
the maxTokenLength value.
setMaxTokenLength
public KeywordTokenizer setMaxTokenLength(Integer maxTokenLength)
Set the maxTokenLength property: the maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.

Parameters:
maxTokenLength - the maxTokenLength value to set.

Returns:
the KeywordTokenizer object itself.
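The splitting rule described above ("tokens longer than the maximum length are split") can be sketched as follows. This is a simplified illustration of the documented behavior, not the Lucene implementation; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the maxTokenLength rule: an over-long token is
// cut into consecutive chunks of at most maxTokenLength characters.
public class MaxTokenLengthSketch {
    public static List<String> split(String token, int maxTokenLength) {
        List<String> parts = new ArrayList<>();
        for (int i = 0; i < token.length(); i += maxTokenLength) {
            // Each chunk is maxTokenLength characters, except possibly the last.
            parts.add(token.substring(i, Math.min(i + maxTokenLength, token.length())));
        }
        return parts;
    }
}
```

With the default of 256, an input shorter than 256 characters passes through as one token; a 600-character input would yield three tokens of 256, 256, and 88 characters.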