Uses of Class
com.azure.search.documents.indexes.models.LexicalTokenizer
Packages that use LexicalTokenizer

com.azure.search.documents.indexes.models
    Package containing classes for SearchServiceClient.
Uses of LexicalTokenizer in com.azure.search.documents.indexes.models
Subclasses of LexicalTokenizer in com.azure.search.documents.indexes.models

final class ClassicTokenizer
    Grammar-based tokenizer that is suitable for processing most European-language documents.
final class EdgeNGramTokenizer
    Tokenizes the input from an edge into n-grams of the given size(s).
final class KeywordTokenizer
    Emits the entire input as a single token.
final class LuceneStandardTokenizer
    Breaks text following the Unicode Text Segmentation rules.
final class MicrosoftLanguageStemmingTokenizer
    Divides text using language-specific rules and reduces words to their base forms.
final class MicrosoftLanguageTokenizer
    Divides text using language-specific rules.
final class NGramTokenizer
    Tokenizes the input into n-grams of the given size(s).
final class PathHierarchyTokenizer
    Tokenizer for path-like hierarchies.
final class PatternTokenizer
    Tokenizer that uses regex pattern matching to construct distinct tokens.
final class UaxUrlEmailTokenizer
    Tokenizes urls and emails as one token.
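As a usage sketch, the snippet below shows how two of the subclasses listed above might be configured. The name-taking constructors and the fluent setters (setPattern, setMinGram, setMaxGram) follow the SDK's usual model conventions and are assumptions here, not details taken from this page.

import com.azure.search.documents.indexes.models.EdgeNGramTokenizer;
import com.azure.search.documents.indexes.models.PatternTokenizer;

public class TokenizerConfigurationSketch {
    public static void main(String[] args) {
        // Assumed constructor: each tokenizer is identified by a unique name.
        PatternTokenizer commaTokenizer = new PatternTokenizer("comma-splitter")
                .setPattern(",");           // assumed fluent setter: split tokens on commas

        EdgeNGramTokenizer edgeTokenizer = new EdgeNGramTokenizer("edge-ngrams")
                .setMinGram(2)              // assumed fluent setters for the n-gram size range
                .setMaxGram(5);

        System.out.println(commaTokenizer.getName());
        System.out.println(edgeTokenizer.getName());
    }
}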
Methods in com.azure.search.documents.indexes.models that return types with arguments of type LexicalTokenizer

List<LexicalTokenizer> SearchIndex.getTokenizers()
    Get the tokenizers property: The tokenizers for the index.

Methods in com.azure.search.documents.indexes.models with parameters of type LexicalTokenizer

SearchIndex SearchIndex.setTokenizers(LexicalTokenizer... tokenizers)
    Set the tokenizers property: The tokenizers for the index.

Method parameters in com.azure.search.documents.indexes.models with type arguments of type LexicalTokenizer

SearchIndex SearchIndex.setTokenizers(List<LexicalTokenizer> tokenizers)
    Set the tokenizers property: The tokenizers for the index.
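A minimal sketch of the getter and setter above, assuming a SearchIndex constructed from just an index name; the KeywordTokenizer named "whole-value" is illustrative.

import java.util.List;

import com.azure.search.documents.indexes.models.KeywordTokenizer;
import com.azure.search.documents.indexes.models.LexicalTokenizer;
import com.azure.search.documents.indexes.models.SearchIndex;

public class IndexTokenizersSketch {
    public static void main(String[] args) {
        // Assumed constructor: a SearchIndex created from its name.
        SearchIndex index = new SearchIndex("hotels-index");

        // Varargs overload documented above: SearchIndex.setTokenizers(LexicalTokenizer...)
        index.setTokenizers(new KeywordTokenizer("whole-value"));

        // Getter documented above: returns the tokenizers configured for the index.
        List<LexicalTokenizer> tokenizers = index.getTokenizers();
        tokenizers.forEach(tokenizer -> System.out.println(tokenizer.getName()));
    }
}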