
Interface DictionaryDecompounderTokenFilter


Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.
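To illustrate what decompounding does, here is a naive sketch (not Lucene's actual implementation) that emits every dictionary word found inside a compound token. The `decompound` helper and its sample words are hypothetical, for illustration only.

```typescript
// Naive dictionary decompounding: for each entry in the word list,
// emit it as a subword if it occurs inside the input token.
// Real Lucene decompounding also applies min/max size constraints.
function decompound(token: string, wordList: string[]): string[] {
  const lower = token.toLowerCase();
  const subwords: string[] = [];
  for (const word of wordList) {
    if (lower.includes(word.toLowerCase())) {
      subwords.push(word);
    }
  }
  return subwords;
}

// German "Dampfschiff" (steamship) splits into its dictionary parts.
decompound("Dampfschiff", ["dampf", "schiff", "boot"]); // ["dampf", "schiff"]
```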





Optional maxSubwordSize

maxSubwordSize: undefined | number

The maximum subword size. Only subwords shorter than this are output. Default is 15. Maximum is 300.

Optional minSubwordSize

minSubwordSize: undefined | number

The minimum subword size. Only subwords longer than this are output. Default is 2. Maximum is 300.

Optional minWordSize

minWordSize: undefined | number

The minimum word size. Only words longer than this are processed. Default is 5. Maximum is 300.


name

name: string

The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
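The naming rule above can be checked with a regular expression. The helper below is hypothetical (not part of the API); it encodes the documented constraints: letters, digits, spaces, dashes or underscores only, alphanumeric first and last characters, and at most 128 characters.

```typescript
// First character alphanumeric, then up to 126 characters from the
// allowed set, then a final alphanumeric character (max 128 total).
// A single alphanumeric character is also valid.
const NAME_PATTERN = /^[A-Za-z0-9](?:[A-Za-z0-9 _-]{0,126}[A-Za-z0-9])?$/;

// Hypothetical validator, not part of the API surface.
function isValidTokenFilterName(name: string): boolean {
  return NAME_PATTERN.test(name);
}

isValidTokenFilterName("german-decompounder"); // true
isValidTokenFilterName("-starts-with-dash");   // false: must start alphanumeric
```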


odatatype

odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter"

The polymorphic discriminator, which identifies this object as a DictionaryDecompounderTokenFilter.

Optional onlyLongestMatch

onlyLongestMatch: undefined | boolean

A value indicating whether to add only the longest matching subword to the output. Default is false.


wordList

wordList: string[]

The list of words to match against.
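Putting the properties together, here is a minimal sketch of a filter definition. The interface is restated inline so the example is self-contained; in practice it would be imported from the SDK package. The filter name and word list entries are illustrative, not part of the API.

```typescript
// The interface as documented above, restated for a self-contained example.
interface DictionaryDecompounderTokenFilter {
  odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter";
  name: string;
  wordList: string[];
  minWordSize?: number;
  minSubwordSize?: number;
  maxSubwordSize?: number;
  onlyLongestMatch?: boolean;
}

// Example filter splitting German compounds against a small dictionary.
const decompounder: DictionaryDecompounderTokenFilter = {
  odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter",
  name: "german-decompounder",       // illustrative name
  wordList: ["dampf", "schiff", "fahrt"],
  minWordSize: 5,       // documented default: process words longer than 5
  minSubwordSize: 2,    // documented default
  maxSubwordSize: 15,   // documented default
  onlyLongestMatch: false, // documented default
};
```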

Generated using TypeDoc