Uses of Class
com.azure.search.documents.indexes.models.LexicalTokenizer
Packages that use LexicalTokenizer

Package                                        Description
com.azure.search.documents.indexes.models      Package containing classes for SearchServiceClient.
Uses of LexicalTokenizer in com.azure.search.documents.indexes.models
Subclasses of LexicalTokenizer in com.azure.search.documents.indexes.models

Modifier and Type   Class                                Description
final class         ClassicTokenizer                     Grammar-based tokenizer that is suitable for processing most European-language documents.
final class         EdgeNGramTokenizer                   Tokenizes the input from an edge into n-grams of the given size(s).
final class         KeywordTokenizer                     Emits the entire input as a single token.
final class         LuceneStandardTokenizer              Breaks text following the Unicode Text Segmentation rules.
final class         MicrosoftLanguageStemmingTokenizer   Divides text using language-specific rules and reduces words to their base forms.
final class         MicrosoftLanguageTokenizer           Divides text using language-specific rules.
final class         NGramTokenizer                       Tokenizes the input into n-grams of the given size(s).
final class         PathHierarchyTokenizer               Tokenizer for path-like hierarchies.
final class         PatternTokenizer                     Tokenizer that uses regex pattern matching to construct distinct tokens.
final class         UaxUrlEmailTokenizer                 Tokenizes URLs and emails as one token.

Methods in com.azure.search.documents.indexes.models that return or accept LexicalTokenizer

Modifier and Type        Method                                                         Description
List<LexicalTokenizer>   SearchIndex.getTokenizers()                                    Get the tokenizers property: the tokenizers for the index.
SearchIndex              SearchIndex.setTokenizers(LexicalTokenizer... tokenizers)      Set the tokenizers property: the tokenizers for the index.
SearchIndex              SearchIndex.setTokenizers(List<LexicalTokenizer> tokenizers)   Set the tokenizers property: the tokenizers for the index.
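As a minimal sketch of how these accessors fit together: any of the final subclasses above can be attached to a SearchIndex through either setTokenizers overload and read back with getTokenizers(). The index and tokenizer names ("hotels-index", "comma-tokenizer") and the comma pattern below are illustrative, not taken from this page.

```java
import com.azure.search.documents.indexes.models.LexicalTokenizer;
import com.azure.search.documents.indexes.models.PatternTokenizer;
import com.azure.search.documents.indexes.models.SearchIndex;

import java.util.Arrays;
import java.util.List;

public class LexicalTokenizerExample {
    public static void main(String[] args) {
        // PatternTokenizer is one LexicalTokenizer subclass; here it splits on commas.
        PatternTokenizer commaTokenizer = new PatternTokenizer("comma-tokenizer")
                .setPattern(",");

        // Attach the tokenizer via the varargs overload...
        SearchIndex index = new SearchIndex("hotels-index")
                .setTokenizers(commaTokenizer);

        // ...or, equivalently, via the List overload.
        index.setTokenizers(Arrays.asList(commaTokenizer));

        // getTokenizers() returns the tokenizers configured on the index.
        List<LexicalTokenizer> tokenizers = index.getTokenizers();
        System.out.println(tokenizers.get(0).getName());
    }
}
```

Both overloads replace the whole tokenizers list; the definition only takes effect on the service once the index is created or updated through a SearchIndexClient.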