Class KeywordTokenizer
Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.
Namespace: Azure.Search.Documents.Indexes.Models
Assembly: Azure.Search.Documents.dll
Syntax
public class KeywordTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer
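Examples
A minimal sketch of wiring this tokenizer into an index definition through a custom analyzer. The endpoint, API key, index name, field names, and analyzer/tokenizer names below are hypothetical placeholders, not values from this reference.

using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Hypothetical service endpoint and admin key.
var indexClient = new SearchIndexClient(
    new Uri("https://<service-name>.search.windows.net"),
    new AzureKeyCredential("<admin-api-key>"));

var index = new SearchIndex("products")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        // Analyze the whole SKU as one token so values like "AB-123/XL" match exactly.
        new SearchableField("sku") { AnalyzerName = "sku_analyzer" }
    },
    Tokenizers =
    {
        // Emits the entire field value as a single token.
        new KeywordTokenizer("sku_tokenizer")
    },
    Analyzers =
    {
        // A custom analyzer pairing the keyword tokenizer with a lowercase
        // filter so exact-value matching is still case-insensitive.
        new CustomAnalyzer("sku_analyzer", "sku_tokenizer")
        {
            TokenFilters = { TokenFilterName.Lowercase }
        }
    }
};

indexClient.CreateOrUpdateIndex(index);

Pairing the keyword tokenizer with a lowercase token filter, as above, is a common way to keep whole-value matching while ignoring case.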
Constructors
KeywordTokenizer(String)
Initializes a new instance of KeywordTokenizer.
Declaration
public KeywordTokenizer(string name);
Parameters
name System.String
The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it must start and end with an alphanumeric character; and it is limited to 128 characters.
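Examples
A hedged illustration of the naming rules above; the names themselves are hypothetical. Note that the constructor does not validate the name; an invalid name is rejected by the service when the index is created or updated.

using Azure.Search.Documents.Indexes.Models;

// Valid: starts and ends with alphanumerics; dashes and underscores allowed inside.
var tokenizer = new KeywordTokenizer("my-keyword_tokenizer1");

// These names would be rejected by the service:
// new KeywordTokenizer("-starts-with-dash");   // starts with a non-alphanumeric
// new KeywordTokenizer("has.periods");         // '.' is not a permitted character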
Properties
BufferSize
The read buffer size in bytes. Default is 256. Setting this property on new instances of KeywordTokenizer may result in an error when sending new requests to the Azure Cognitive Search service.
Declaration
public int? BufferSize { get; set; }
Property Value
System.Nullable<System.Int32>
MaxTokenLength
The maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
Declaration
public int? MaxTokenLength { get; set; }
Property Value
System.Nullable<System.Int32>
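Examples
A minimal sketch of raising the token length cap for fields that hold long single-token values such as URLs; the tokenizer name is hypothetical.

using Azure.Search.Documents.Indexes.Models;

var urlTokenizer = new KeywordTokenizer("url_tokenizer")
{
    // Raise the cap from the 256-character default to the documented
    // 300-character maximum; input longer than this is still split.
    MaxTokenLength = 300
};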