Class LuceneStandardTokenizer
java.lang.Object
com.azure.search.documents.indexes.models.LexicalTokenizer
com.azure.search.documents.indexes.models.LuceneStandardTokenizer
Breaks text following the Unicode Text Segmentation rules. This tokenizer is
implemented using Apache Lucene.
Constructor Summary

Constructors
LuceneStandardTokenizer(String name)
Constructor of LuceneStandardTokenizer.
Method Summary
Modifier and Type          Method                                      Description
Integer                    getMaxTokenLength()                         Get the maxTokenLength property: The maximum token length.
LuceneStandardTokenizer    setMaxTokenLength(Integer maxTokenLength)  Set the maxTokenLength property: The maximum token length.

Methods inherited from class com.azure.search.documents.indexes.models.LexicalTokenizer
getName
Constructor Details
LuceneStandardTokenizer

public LuceneStandardTokenizer(String name)

Constructor of LuceneStandardTokenizer.

Parameters:
name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
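As a quick illustration of the naming rules above, a minimal sketch (the name "my-tokenizer-1" is a made-up example; assumes the azure-search-documents library is on the classpath):

```java
import com.azure.search.documents.indexes.models.LuceneStandardTokenizer;

public class ConstructorExample {
    public static void main(String[] args) {
        // Valid name: only letters, digits, and dashes; starts and ends
        // with an alphanumeric character; well under the 128-character limit.
        LuceneStandardTokenizer tokenizer = new LuceneStandardTokenizer("my-tokenizer-1");
        System.out.println(tokenizer.getName()); // prints "my-tokenizer-1"
    }
}
```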
Method Details
getMaxTokenLength

public Integer getMaxTokenLength()

Get the maxTokenLength property: The maximum token length. Default is 255. Tokens longer than the maximum length are split.

Returns:
the maxTokenLength value.
setMaxTokenLength

public LuceneStandardTokenizer setMaxTokenLength(Integer maxTokenLength)

Set the maxTokenLength property: The maximum token length. Default is 255. Tokens longer than the maximum length are split.

Parameters:
maxTokenLength - the maxTokenLength value to set.

Returns:
the LuceneStandardTokenizer object itself.
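Since setMaxTokenLength returns the tokenizer itself, configuration can be chained fluently. A minimal sketch (the name "content-tokenizer" and the length 100 are illustrative; assumes the azure-search-documents library is on the classpath):

```java
import com.azure.search.documents.indexes.models.LuceneStandardTokenizer;

public class MaxTokenLengthExample {
    public static void main(String[] args) {
        // setMaxTokenLength returns the LuceneStandardTokenizer object itself,
        // so construction and configuration can be written as one expression.
        LuceneStandardTokenizer tokenizer =
                new LuceneStandardTokenizer("content-tokenizer").setMaxTokenLength(100);
        // Tokens longer than 100 characters will be split.
        System.out.println(tokenizer.getMaxTokenLength()); // prints 100
    }
}
```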