Class MicrosoftLanguageStemmingTokenizer

java.lang.Object
com.azure.search.documents.indexes.models.LexicalTokenizer
com.azure.search.documents.indexes.models.MicrosoftLanguageStemmingTokenizer

public final class MicrosoftLanguageStemmingTokenizer extends LexicalTokenizer
Divides text using language-specific rules and reduces words to their base forms.
  • Constructor Details

    • MicrosoftLanguageStemmingTokenizer

      public MicrosoftLanguageStemmingTokenizer(String name)
      Parameters:
      name - The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
  • Method Details

    • getMaxTokenLength

      public Integer getMaxTokenLength()
      Get the maxTokenLength property: The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
      Returns:
      the maxTokenLength value.
    • setMaxTokenLength

      public MicrosoftLanguageStemmingTokenizer setMaxTokenLength(Integer maxTokenLength)
      Set the maxTokenLength property: The maximum token length. Tokens longer than the maximum length are split. Maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the max token length set. Default is 255.
      Parameters:
      maxTokenLength - the maxTokenLength value to set.
      Returns:
      the MicrosoftLanguageStemmingTokenizer object itself.
    • isSearchTokenizer

      public Boolean isSearchTokenizer()
      Get the isSearchTokenizer property: A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
      Returns:
      the isSearchTokenizer value.
    • setIsSearchTokenizerUsed

      public MicrosoftLanguageStemmingTokenizer setIsSearchTokenizerUsed(Boolean isSearchTokenizerUsed)
      Set the isSearchTokenizer property: A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
      Parameters:
      isSearchTokenizerUsed - the isSearchTokenizer value to set.
      Returns:
      the MicrosoftLanguageStemmingTokenizer object itself.
    • getLanguage

      public MicrosoftStemmingTokenizerLanguage getLanguage()
      Get the language property: The language to use. The default is English. Possible values include: 'Arabic', 'Bangla', 'Bulgarian', 'Catalan', 'Croatian', 'Czech', 'Danish', 'Dutch', 'English', 'Estonian', 'Finnish', 'French', 'German', 'Greek', 'Gujarati', 'Hebrew', 'Hindi', 'Hungarian', 'Icelandic', 'Indonesian', 'Italian', 'Kannada', 'Latvian', 'Lithuanian', 'Malay', 'Malayalam', 'Marathi', 'NorwegianBokmaal', 'Polish', 'Portuguese', 'PortugueseBrazilian', 'Punjabi', 'Romanian', 'Russian', 'SerbianCyrillic', 'SerbianLatin', 'Slovak', 'Slovenian', 'Spanish', 'Swedish', 'Tamil', 'Telugu', 'Turkish', 'Ukrainian', 'Urdu'.
      Returns:
      the language value.
    • setLanguage

      public MicrosoftLanguageStemmingTokenizer setLanguage(MicrosoftStemmingTokenizerLanguage language)
      Set the language property: The language to use. The default is English. Possible values include: 'Arabic', 'Bangla', 'Bulgarian', 'Catalan', 'Croatian', 'Czech', 'Danish', 'Dutch', 'English', 'Estonian', 'Finnish', 'French', 'German', 'Greek', 'Gujarati', 'Hebrew', 'Hindi', 'Hungarian', 'Icelandic', 'Indonesian', 'Italian', 'Kannada', 'Latvian', 'Lithuanian', 'Malay', 'Malayalam', 'Marathi', 'NorwegianBokmaal', 'Polish', 'Portuguese', 'PortugueseBrazilian', 'Punjabi', 'Romanian', 'Russian', 'SerbianCyrillic', 'SerbianLatin', 'Slovak', 'Slovenian', 'Spanish', 'Swedish', 'Tamil', 'Telugu', 'Turkish', 'Ukrainian', 'Urdu'.
      Parameters:
      language - the language value to set.
      Returns:
      the MicrosoftLanguageStemmingTokenizer object itself.
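
Because each setter returns the MicrosoftLanguageStemmingTokenizer object itself, configuration can be chained fluently. The sketch below shows a minimal configuration; the tokenizer name is a hypothetical example, and attaching the tokenizer to a search index definition is omitted.

```java
import com.azure.search.documents.indexes.models.MicrosoftLanguageStemmingTokenizer;
import com.azure.search.documents.indexes.models.MicrosoftStemmingTokenizerLanguage;

public class StemmingTokenizerExample {
    public static void main(String[] args) {
        // "my-stemming-tokenizer" is a hypothetical name; it must contain only
        // letters, digits, spaces, dashes, or underscores, start and end with
        // an alphanumeric character, and be at most 128 characters long.
        MicrosoftLanguageStemmingTokenizer tokenizer =
            new MicrosoftLanguageStemmingTokenizer("my-stemming-tokenizer")
                .setLanguage(MicrosoftStemmingTokenizerLanguage.ENGLISH)
                .setMaxTokenLength(200)            // default is 255; maximum usable value is 300
                .setIsSearchTokenizerUsed(false);  // false = indexing tokenizer (the default)

        System.out.println(tokenizer.getMaxTokenLength());
        System.out.println(tokenizer.getLanguage());
    }
}
```

Unset properties keep their service-side defaults, so a tokenizer constructed with only a name behaves as an English indexing tokenizer with a 255-character token limit.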