Class KeywordTokenizer

java.lang.Object
com.azure.search.documents.indexes.models.LexicalTokenizer
com.azure.search.documents.indexes.models.KeywordTokenizer

public final class KeywordTokenizer extends LexicalTokenizer
Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.
  • Constructor Details

    • KeywordTokenizer

      public KeywordTokenizer(String name)
      Constructor of KeywordTokenizer.
      Parameters:
      name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes, or underscores; can only start and end with alphanumeric characters; and is limited to 128 characters.
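The naming rules above can be expressed as a simple check. The following is a hedged sketch: the `TokenizerNameCheck` class and its `isValidName` helper are hypothetical illustrations of the documented constraints, not part of the Azure SDK.

```java
import java.util.regex.Pattern;

// Hypothetical helper illustrating the documented tokenizer-name rules:
// only letters, digits, spaces, dashes, or underscores; must start and end
// with an alphanumeric character; at most 128 characters total.
public class TokenizerNameCheck {
    private static final Pattern NAME =
            Pattern.compile("^[A-Za-z0-9](?:[A-Za-z0-9 _-]{0,126}[A-Za-z0-9])?$");

    public static boolean isValidName(String name) {
        return name != null && NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidName("my-keyword_tokenizer 1")); // valid name
        System.out.println(isValidName("-starts-with-dash"));      // invalid: non-alphanumeric start
    }
}
```

The pattern caps the name at 128 characters by allowing one leading alphanumeric character, up to 126 interior characters, and one trailing alphanumeric character.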
  • Method Details

    • getMaxTokenLength

      public Integer getMaxTokenLength()
      Get the maxTokenLength property: The maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
      Returns:
      the maxTokenLength value.
    • setMaxTokenLength

      public KeywordTokenizer setMaxTokenLength(Integer maxTokenLength)
      Set the maxTokenLength property: The maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
      Parameters:
      maxTokenLength - the maxTokenLength value to set.
      Returns:
      the KeywordTokenizer object itself.
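The splitting behavior described for maxTokenLength can be sketched as follows. This is a hedged, self-contained illustration of the rule stated above (input longer than the limit is split into maxTokenLength-sized tokens), not the Lucene or Azure SDK implementation; the `KeywordSplitDemo` class and its `tokenize` method are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the documented splitting rule: the entire
// input would normally become a single token, but tokens longer than
// maxTokenLength are split into pieces of at most maxTokenLength characters.
public class KeywordSplitDemo {
    static List<String> tokenize(String input, int maxTokenLength) {
        List<String> tokens = new ArrayList<>();
        for (int i = 0; i < input.length(); i += maxTokenLength) {
            tokens.add(input.substring(i, Math.min(input.length(), i + maxTokenLength)));
        }
        return tokens;
    }

    public static void main(String[] args) {
        // A 600-character input with the default max of 256 splits into 3 tokens
        // (256 + 256 + 88 characters).
        String input = "x".repeat(600);
        System.out.println(tokenize(input, 256).size()); // prints 3
    }
}
```

In the SDK itself, the setter returns the tokenizer instance, so a definition can be written fluently, e.g. `new KeywordTokenizer("myTokenizer").setMaxTokenLength(300)`.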