Class EdgeNGramTokenizer
Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.
Namespace: Azure.Search.Documents.Indexes.Models
Assembly: Azure.Search.Documents.dll
Syntax
public class EdgeNGramTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer
Constructors
EdgeNGramTokenizer(String)
Initializes a new instance of EdgeNGramTokenizer.
Declaration
public EdgeNGramTokenizer (string name);
Parameters
name System.String
The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it must start and end with an alphanumeric character; and it is limited to 128 characters.
Exceptions
System.ArgumentNullException
Thrown when name is null.
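Examples
A minimal construction sketch; the tokenizer name below is illustrative, not part of this API reference:

using Azure.Search.Documents.Indexes.Models;

// Create an edge n-gram tokenizer with a valid custom name.
var tokenizer = new EdgeNGramTokenizer("edge-ngram-tokenizer");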
Properties
MaxGram
The maximum n-gram length. Default is 2. Maximum is 300.
Declaration
public Nullable<int> MaxGram { get; set; }
Property Value
System.Nullable<System.Int32>
MinGram
The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of MaxGram.
Declaration
public Nullable<int> MinGram { get; set; }
Property Value
System.Nullable<System.Int32>
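Examples
A sketch showing MinGram and MaxGram configured together; the name and values are illustrative:

using Azure.Search.Documents.Indexes.Models;

// Emit edge n-grams between 2 and 10 characters long.
var tokenizer = new EdgeNGramTokenizer("edge-ngram-tokenizer")
{
    MinGram = 2,
    MaxGram = 10
};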
TokenChars
Character classes to keep in the tokens.
Declaration
public System.Collections.Generic.IList<Azure.Search.Documents.Indexes.Models.TokenCharacterKind> TokenChars { get; }
Property Value
System.Collections.Generic.IList<TokenCharacterKind>
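Examples
A sketch restricting generated tokens to letters and digits. TokenCharacterKind also defines Whitespace, Punctuation, and Symbol; the tokenizer name is illustrative:

using Azure.Search.Documents.Indexes.Models;

// Keep only letter and digit characters in the emitted n-grams.
var tokenizer = new EdgeNGramTokenizer("edge-ngram-tokenizer");
tokenizer.TokenChars.Add(TokenCharacterKind.Letter);
tokenizer.TokenChars.Add(TokenCharacterKind.Digit);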