Class MicrosoftLanguageStemmingTokenizer
Divides text using language-specific rules and reduces words to their base forms.
Namespace: Azure.Search.Documents.Indexes.Models
Assembly: Azure.Search.Documents.dll
Syntax
public class MicrosoftLanguageStemmingTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer
Constructors
MicrosoftLanguageStemmingTokenizer(String)
Initializes a new instance of MicrosoftLanguageStemmingTokenizer.
Declaration
public MicrosoftLanguageStemmingTokenizer(string name);
Parameters
name
System.String
The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
Exceptions
System.ArgumentNullException
Thrown when name is null.
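Examples
The following is a minimal sketch of creating the tokenizer; the name "my-stemming-tokenizer" is a placeholder chosen to satisfy the naming rules above.
using Azure.Search.Documents.Indexes.Models;

// Create a stemming tokenizer with default settings (English, MaxTokenLength 255).
var tokenizer = new MicrosoftLanguageStemmingTokenizer("my-stemming-tokenizer");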
Properties
IsSearchTokenizer
A value indicating how the tokenizer is used. Set to true if used as the search tokenizer; set to false if used as the indexing tokenizer. The default is false.
Declaration
public Nullable<bool> IsSearchTokenizer { get; set; }
Property Value
System.Nullable<System.Boolean>
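Examples
A sketch, using a hypothetical tokenizer name, of configuring an instance for use at query (search) time rather than indexing time.
// Use this tokenizer when analyzing search queries; indexing tokenizers keep the default (false).
var queryTokenizer = new MicrosoftLanguageStemmingTokenizer("query-tokenizer")
{
    IsSearchTokenizer = true
};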
Language
The language to use. The default is English.
Declaration
public Nullable<Azure.Search.Documents.Indexes.Models.MicrosoftStemmingTokenizerLanguage> Language { get; set; }
Property Value
System.Nullable<MicrosoftStemmingTokenizerLanguage>
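Examples
A sketch, with an illustrative name, of selecting a non-default language.
// Stem tokens using German rules instead of the English default.
var germanTokenizer = new MicrosoftLanguageStemmingTokenizer("de-stemming-tokenizer")
{
    Language = MicrosoftStemmingTokenizerLanguage.German
};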
MaxTokenLength
The maximum token length. Tokens longer than the maximum length are split. The largest token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300, and then each of those tokens is split based on the maximum token length set. The default is 255.
Declaration
public Nullable<int> MaxTokenLength { get; set; }
Property Value
System.Nullable<System.Int32>
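Examples
A sketch, with an illustrative name, of raising the token length limit to its 300-character maximum.
// Tokens longer than 300 characters are still split at 300 first.
var longTokenTokenizer = new MicrosoftLanguageStemmingTokenizer("long-token-tokenizer")
{
    MaxTokenLength = 300
};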