Class ClassicTokenizer
Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.
Namespace: Azure.Search.Documents.Indexes.Models
Assembly: Azure.Search.Documents.dll
Syntax
public class ClassicTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer
Constructors
ClassicTokenizer(String)
Initializes a new instance of ClassicTokenizer.
Declaration
public ClassicTokenizer(string name);
Parameters
System.String
name
The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores, must start and end with an alphanumeric character, and is limited to 128 characters.
Exceptions
System.ArgumentNullException
name is null.
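A minimal construction sketch (the tokenizer name below is a hypothetical example). Per the parameter rules above, the name must be alphanumeric-bounded and at most 128 characters, and passing null raises ArgumentNullException:

```csharp
using System;
using Azure.Search.Documents.Indexes.Models;

// Name follows the stated rules: letters, digits, spaces, dashes,
// or underscores; starts and ends with an alphanumeric character.
var tokenizer = new ClassicTokenizer("my-classic-tokenizer");

// A null name is rejected, as documented in the Exceptions section.
try
{
    var invalid = new ClassicTokenizer(null);
}
catch (ArgumentNullException)
{
    Console.WriteLine("name must not be null");
}
```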
Properties
MaxTokenLength
The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
Declaration
public Nullable<int> MaxTokenLength { get; set; }
Property Value
System.Nullable<System.Int32>
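A sketch of configuring MaxTokenLength and registering the tokenizer on an index (the index and tokenizer names below are hypothetical). Since the property is nullable, leaving it unset falls back to the service default of 255:

```csharp
using Azure.Search.Documents.Indexes.Models;

var tokenizer = new ClassicTokenizer("body-tokenizer")
{
    // Tokens longer than this are split; the maximum allowed value is 300.
    MaxTokenLength = 300
};

// Attach the custom tokenizer to a search index definition.
var index = new SearchIndex("hotels-index");
index.Tokenizers.Add(tokenizer);
```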