Struct LexicalTokenizerName
Defines the names of all tokenizers supported by Azure Cognitive Search.
Namespace: Azure.Search.Documents.Indexes.Models
Assembly: Azure.Search.Documents.dll
Syntax
public struct LexicalTokenizerName : IEquatable<Azure.Search.Documents.Indexes.Models.LexicalTokenizerName>
Constructors
LexicalTokenizerName(String)
Initializes a new instance of LexicalTokenizerName.
Declaration
public LexicalTokenizerName (string value);
Parameters
System.String
value
The string value of the tokenizer name.
Exceptions
System.ArgumentNullException
Thrown when value is null.
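Because LexicalTokenizerName is an extensible enum-style struct, the constructor can also wrap a tokenizer name the SDK does not expose as a static property. A minimal sketch (requires the Azure.Search.Documents package; "whitespace" is the service's value for the whitespace tokenizer):

```csharp
using System;
using Azure.Search.Documents.Indexes.Models;

class Example
{
    static void Main()
    {
        // Wrap a known service value; passing null would throw ArgumentNullException.
        var tokenizer = new LexicalTokenizerName("whitespace");

        // Equality is based on the underlying string value.
        Console.WriteLine(tokenizer == LexicalTokenizerName.Whitespace); // True
    }
}
```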
Properties
Classic
Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName Classic { get; }
Property Value
LexicalTokenizerName
EdgeNGram
Tokenizes the input from an edge into n-grams of the given size(s). See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName EdgeNGram { get; }
Property Value
LexicalTokenizerName
Keyword
Emits the entire input as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName Keyword { get; }
Property Value
LexicalTokenizerName
Letter
Divides text at non-letters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName Letter { get; }
Property Value
LexicalTokenizerName
Lowercase
Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName Lowercase { get; }
Property Value
LexicalTokenizerName
MicrosoftLanguageStemmingTokenizer
Divides text using language-specific rules and reduces words to their base forms.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName MicrosoftLanguageStemmingTokenizer { get; }
Property Value
LexicalTokenizerName
MicrosoftLanguageTokenizer
Divides text using language-specific rules.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName MicrosoftLanguageTokenizer { get; }
Property Value
LexicalTokenizerName
NGram
Tokenizes the input into n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName NGram { get; }
Property Value
LexicalTokenizerName
PathHierarchy
Tokenizer for path-like hierarchies. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName PathHierarchy { get; }
Property Value
LexicalTokenizerName
Pattern
Tokenizer that uses regex pattern matching to construct distinct tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName Pattern { get; }
Property Value
LexicalTokenizerName
Standard
Standard Lucene analyzer; composed of the standard tokenizer, lowercase filter and stop filter. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName Standard { get; }
Property Value
LexicalTokenizerName
UaxUrlEmail
Tokenizes URLs and emails as one token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName UaxUrlEmail { get; }
Property Value
LexicalTokenizerName
Whitespace
Divides text at whitespace. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html.
Declaration
public static Azure.Search.Documents.Indexes.Models.LexicalTokenizerName Whitespace { get; }
Property Value
LexicalTokenizerName
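The static properties above are typically consumed when defining a custom analyzer for an index. A hedged sketch, assuming the CustomAnalyzer and TokenFilterName models from the same namespace (the analyzer name "my_analyzer" is illustrative):

```csharp
using Azure.Search.Documents.Indexes.Models;

class Example
{
    static void Main()
    {
        // Split on whitespace, then lowercase each token.
        var analyzer = new CustomAnalyzer("my_analyzer", LexicalTokenizerName.Whitespace)
        {
            TokenFilters = { TokenFilterName.Lowercase }
        };
    }
}
```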
Methods
Equals(LexicalTokenizerName)
Indicates whether the current object is equal to another object of the same type.
Declaration
public bool Equals (Azure.Search.Documents.Indexes.Models.LexicalTokenizerName other);
Parameters
LexicalTokenizerName
other
An object to compare with this object.
Returns
System.Boolean
true if the current object is equal to the other parameter; otherwise, false.
Equals(Object)
Indicates whether this instance and a specified object are equal.
Declaration
[System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Never)]
public override bool Equals (object obj);
Parameters
System.Object
obj
The object to compare with the current instance.
Returns
System.Boolean
true if obj and this instance are the same type and represent the same value; otherwise, false.
GetHashCode()
Returns the hash code for this instance.
Declaration
[System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Never)]
public override int GetHashCode ();
Returns
System.Int32
A 32-bit signed integer that is the hash code for this instance.
ToString()
Returns the string value of this LexicalTokenizerName.
Declaration
public override string ToString ();
Returns
System.String
The string value of the tokenizer name.
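Since equality and ToString both operate on the underlying string value, an instance round-trips cleanly through its string form. A short sketch ("classic" is the service's value for the classic tokenizer):

```csharp
using System;
using Azure.Search.Documents.Indexes.Models;

class Example
{
    static void Main()
    {
        var name = LexicalTokenizerName.Classic;

        Console.WriteLine(name.ToString());                                  // classic
        Console.WriteLine(name.Equals(new LexicalTokenizerName("classic"))); // True
    }
}
```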
Operators
Equality(LexicalTokenizerName, LexicalTokenizerName)
Determines if two LexicalTokenizerName values are the same.
Declaration
public static bool operator == (Azure.Search.Documents.Indexes.Models.LexicalTokenizerName left, Azure.Search.Documents.Indexes.Models.LexicalTokenizerName right);
Parameters
LexicalTokenizerName
left
LexicalTokenizerName
right
Returns
System.Boolean
Implicit(String to LexicalTokenizerName)
Converts a string to a LexicalTokenizerName.
Declaration
public static implicit operator Azure.Search.Documents.Indexes.Models.LexicalTokenizerName (string value);
Parameters
System.String
value
Returns
LexicalTokenizerName
Inequality(LexicalTokenizerName, LexicalTokenizerName)
Determines if two LexicalTokenizerName values are not the same.
Declaration
public static bool operator != (Azure.Search.Documents.Indexes.Models.LexicalTokenizerName left, Azure.Search.Documents.Indexes.Models.LexicalTokenizerName right);
Parameters
LexicalTokenizerName
left
LexicalTokenizerName
right
Returns
System.Boolean
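The implicit conversion composes with the equality operators, so a plain string can be compared directly against the named values. A brief sketch ("letter" is the service's value for the letter tokenizer):

```csharp
using System;
using Azure.Search.Documents.Indexes.Models;

class Example
{
    static void Main()
    {
        // The implicit operator converts the string before the comparison runs.
        LexicalTokenizerName fromString = "letter";

        Console.WriteLine(fromString == LexicalTokenizerName.Letter);  // True
        Console.WriteLine(fromString != LexicalTokenizerName.Classic); // True
    }
}
```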