@azure/search-documents

Type aliases

AnalyzeTextOptions

AnalyzeTextOptions: OperationOptions & AnalyzeRequest

Options for analyze text operation.
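Because the alias intersects OperationOptions with AnalyzeRequest, the request fields go directly in the options bag. A minimal sketch, assuming this object is passed to SearchIndexClient.analyzeText along with an index name (the text and analyzer name here are placeholders):

```typescript
// Sketch of an AnalyzeTextOptions value. Either name a complete analyzer,
// or supply tokenizerName plus tokenFilters/charFilters to exercise a
// custom chain.
const analyzeOptions = {
  text: "The quick brown fox",
  analyzerName: "standard.lucene",
};
```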

Answers

Answers: string

Defines values for Answers.
KnownAnswers can be used interchangeably with Answers; this enum contains the known values that the service supports.

Known values supported by the service

none: Do not return answers for the query.
extractive: Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language.

ApiVersion20210430Preview

ApiVersion20210430Preview: string

Defines values for ApiVersion20210430Preview.
KnownApiVersion20210430Preview can be used interchangeably with ApiVersion20210430Preview; this enum contains the known values that the service supports.

Known values supported by the service

2021-04-30-Preview: Api Version '2021-04-30-Preview'

AsciiFoldingTokenFilter

AsciiFoldingTokenFilter: TokenFilter & { odatatype: "#Microsoft.Azure.Search.AsciiFoldingTokenFilter"; preserveOriginal?: undefined | false | true }

Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. This token filter is implemented using Apache Lucene.
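As a sketch, such a filter could appear in an index definition's tokenFilters list like this (the filter name is a placeholder):

```typescript
// A hypothetical ASCII-folding token filter entry. With preserveOriginal
// set, both "résumé" and its folded form "resume" would be emitted.
const asciiFolding = {
  odatatype: "#Microsoft.Azure.Search.AsciiFoldingTokenFilter" as const,
  name: "my_ascii_folding",
  preserveOriginal: true,
};
```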

AutocompleteMode

AutocompleteMode: "oneTerm" | "twoTerms" | "oneTermWithContext"

Defines values for AutocompleteMode.

BM25Similarity

BM25Similarity: Similarity & { b?: undefined | number; k1?: undefined | number; odatatype: "#Microsoft.Azure.Search.BM25Similarity" }

Ranking function based on the Okapi BM25 similarity algorithm. BM25 is a TF-IDF-like algorithm that includes length normalization (controlled by the 'b' parameter) as well as term frequency saturation (controlled by the 'k1' parameter).
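A sketch of a similarity setting for an index definition; the b and k1 values shown are illustrative tuning knobs, not recommendations:

```typescript
// b controls length normalization (0..1); k1 controls term-frequency
// saturation. Omitting either falls back to the service default.
const similarity = {
  odatatype: "#Microsoft.Azure.Search.BM25Similarity" as const,
  b: 0.75,
  k1: 1.2,
};
```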

BlobIndexerDataToExtract

BlobIndexerDataToExtract: string

Defines values for BlobIndexerDataToExtract.
KnownBlobIndexerDataToExtract can be used interchangeably with BlobIndexerDataToExtract; this enum contains the known values that the service supports.

Known values supported by the service

storageMetadata: Indexes just the standard blob properties and user-specified metadata.
allMetadata: Extracts metadata provided by the Azure blob storage subsystem and the content-type specific metadata (for example, metadata unique to just .png files are indexed).
contentAndMetadata: Extracts all metadata and textual content from each blob.

BlobIndexerImageAction

BlobIndexerImageAction: string

Defines values for BlobIndexerImageAction.
KnownBlobIndexerImageAction can be used interchangeably with BlobIndexerImageAction; this enum contains the known values that the service supports.

Known values supported by the service

none: Ignores embedded images or image files in the data set. This is the default.
generateNormalizedImages: Extracts text from images (for example, the word "STOP" from a traffic stop sign), and embeds it into the content field. This action requires that "dataToExtract" is set to "contentAndMetadata". A normalized image refers to additional processing resulting in uniform image output, sized and rotated to promote consistent rendering when you include images in visual search results. This information is generated for each image when you use this option.
generateNormalizedImagePerPage: Extracts text from images (for example, the word "STOP" from a traffic stop sign), and embeds it into the content field, but treats PDF files differently in that each page will be rendered as an image and normalized accordingly, instead of extracting embedded images. Non-PDF file types will be treated the same as if "generateNormalizedImages" was set.
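A sketch of indexer parameters enabling image text extraction. As noted above, generateNormalizedImages requires dataToExtract to be "contentAndMetadata":

```typescript
// Hypothetical indexer configuration fragment; this object would sit on a
// SearchIndexer's parameters property.
const indexerParameters = {
  configuration: {
    dataToExtract: "contentAndMetadata",
    imageAction: "generateNormalizedImages",
  },
};
```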

BlobIndexerPDFTextRotationAlgorithm

BlobIndexerPDFTextRotationAlgorithm: string

Defines values for BlobIndexerPDFTextRotationAlgorithm.
KnownBlobIndexerPDFTextRotationAlgorithm can be used interchangeably with BlobIndexerPDFTextRotationAlgorithm; this enum contains the known values that the service supports.

Known values supported by the service

none: Leverages normal text extraction. This is the default.
detectAngles: May produce better and more readable text extraction from PDF files that have rotated text within them. Note that there may be a small performance impact when this parameter is used. This parameter only applies to PDF files, and only to PDFs with embedded text. If the rotated text appears within an embedded image in the PDF, this parameter does not apply.

BlobIndexerParsingMode

BlobIndexerParsingMode: string

Defines values for BlobIndexerParsingMode.
KnownBlobIndexerParsingMode can be used interchangeably with BlobIndexerParsingMode; this enum contains the known values that the service supports.

Known values supported by the service

default: Set to default for normal file processing.
text: Set to text to improve indexing performance on plain text files in blob storage.
delimitedText: Set to delimitedText when blobs are plain CSV files.
json: Set to json to extract structured content from JSON files.
jsonArray: Set to jsonArray to extract individual elements of a JSON array as separate documents in Azure Cognitive Search.
jsonLines: Set to jsonLines to extract individual JSON entities, separated by a new line, as separate documents in Azure Cognitive Search.
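For example, to index a blob container of JSON arrays so that each array element becomes its own search document, a hypothetical indexer parameters fragment could look like:

```typescript
// Parsing-mode sketch; any of the known values above can be substituted
// for "jsonArray".
const jsonArrayIndexerParameters = {
  configuration: {
    parsingMode: "jsonArray",
  },
};
```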

Captions

Captions: string

Defines values for Captions.
KnownCaptions can be used interchangeably with Captions; this enum contains the known values that the service supports.

Known values supported by the service

none: Do not return captions for the query.
extractive: Extracts captions from the matching documents that contain passages relevant to the search query.

CharFilterName

CharFilterName: string

Defines values for CharFilterName.
KnownCharFilterName can be used interchangeably with CharFilterName; this enum contains the known values that the service supports.

Known values supported by the service

html_strip: A character filter that attempts to strip out HTML constructs. See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/charfilter/HTMLStripCharFilter.html

CharFilterUnion

CjkBigramTokenFilter

CjkBigramTokenFilter: TokenFilter & { ignoreScripts?: CjkBigramTokenFilterScripts[]; odatatype: "#Microsoft.Azure.Search.CjkBigramTokenFilter"; outputUnigrams?: undefined | false | true }

Forms bigrams of CJK terms that are generated from the standard tokenizer. This token filter is implemented using Apache Lucene.

CjkBigramTokenFilterScripts

CjkBigramTokenFilterScripts: "han" | "hiragana" | "katakana" | "hangul"

Defines values for CjkBigramTokenFilterScripts.

ClassicSimilarity

ClassicSimilarity: Similarity & { odatatype: "#Microsoft.Azure.Search.ClassicSimilarity" }

Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF. This variation of TF-IDF introduces static document length normalization as well as coordinating factors that penalize documents that only partially match the searched queries.

ClassicTokenizer

ClassicTokenizer: LexicalTokenizer & { maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.ClassicTokenizer" }

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.

CognitiveServicesAccountKey

CognitiveServicesAccountKey: CognitiveServicesAccount & { key: string; odatatype: "#Microsoft.Azure.Search.CognitiveServicesByKey" }

A cognitive service resource provisioned with a key that is attached to a skillset.

CognitiveServicesAccountUnion

CommonGramTokenFilter

CommonGramTokenFilter: TokenFilter & { commonWords: string[]; ignoreCase?: undefined | false | true; odatatype: "#Microsoft.Azure.Search.CommonGramTokenFilter"; useQueryMode?: undefined | false | true }

Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. This token filter is implemented using Apache Lucene.

ComplexDataType

ComplexDataType: "Edm.ComplexType" | "Collection(Edm.ComplexType)"

Defines values for ComplexDataType. Possible values include: 'Edm.ComplexType', 'Collection(Edm.ComplexType)'


ConditionalSkill

ConditionalSkill: SearchIndexerSkill & { odatatype: "#Microsoft.Skills.Util.ConditionalSkill" }

A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.

CountDocumentsOptions

CountDocumentsOptions: OperationOptions

Options for performing the count operation on the index.

CreateDataSourceConnectionOptions

CreateDataSourceConnectionOptions: OperationOptions

Options for create datasource operation.

CreateIndexOptions

CreateIndexOptions: OperationOptions

Options for create index operation.

CreateIndexerOptions

CreateIndexerOptions: OperationOptions

Options for create indexer operation.

CreateSkillsetOptions

CreateSkillsetOptions: OperationOptions

Options for create skillset operation.

CreateSynonymMapOptions

CreateSynonymMapOptions: OperationOptions

Options for create synonymmap operation.

CustomAnalyzer

CustomAnalyzer: LexicalAnalyzer & { charFilters?: string[]; odatatype: "#Microsoft.Azure.Search.CustomAnalyzer"; tokenFilters?: string[]; tokenizerName: string }

Allows you to take control over the process of converting text into indexable/searchable tokens. It's a user-defined configuration consisting of a single predefined tokenizer and one or more filters. The tokenizer is responsible for breaking text into tokens, and the filters for modifying tokens emitted by the tokenizer.

Optional charFilters

charFilters: string[]

A list of character filters used to prepare input text before it is processed by the tokenizer. For instance, they can replace certain characters or symbols. The filters are run in the order in which they are listed.

name

name: string

The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype

odatatype: "#Microsoft.Azure.Search.CustomAnalyzer"

Polymorphic Discriminator

Optional tokenFilters

tokenFilters: string[]

A list of token filters used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. The filters are run in the order in which they are listed.

tokenizerName

tokenizerName: string

The name of the tokenizer to use to divide continuous text into a sequence of tokens, such as breaking a sentence into words. KnownTokenizerNames is an enum containing known values.
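Putting the pieces above together, a custom analyzer entry for an index definition might look like this sketch (all names are placeholders; the char and token filter names are drawn from the service's known values):

```typescript
const customAnalyzer = {
  odatatype: "#Microsoft.Azure.Search.CustomAnalyzer" as const,
  name: "my_analyzer",
  tokenizerName: "standard_v2",                 // a single predefined tokenizer
  charFilters: ["html_strip"],                  // run before tokenization, in listed order
  tokenFilters: ["lowercase", "asciifolding"],  // run after tokenization, in listed order
};
```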

CustomEntityLookupSkill

CustomEntityLookupSkill: SearchIndexerSkill & { defaultLanguageCode?: CustomEntityLookupSkillLanguage; entitiesDefinitionUri?: undefined | string; globalDefaultAccentSensitive?: undefined | false | true; globalDefaultCaseSensitive?: undefined | false | true; globalDefaultFuzzyEditDistance?: undefined | number; inlineEntitiesDefinition?: CustomEntity[]; odatatype: "#Microsoft.Skills.Text.CustomEntityLookupSkill" }

A skill that looks for text from a custom, user-defined list of words and phrases.

CustomEntityLookupSkillLanguage

CustomEntityLookupSkillLanguage: string

Defines values for CustomEntityLookupSkillLanguage.
KnownCustomEntityLookupSkillLanguage can be used interchangeably with CustomEntityLookupSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
de: German
en: English
es: Spanish
fi: Finnish
fr: French
it: Italian
ko: Korean
pt: Portuguese

CustomNormalizer

CustomNormalizer: LexicalNormalizer & { charFilters?: CharFilterName[]; odatatype: "#Microsoft.Azure.Search.CustomNormalizer"; tokenFilters?: TokenFilterName[] }

Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. This is a user-defined configuration consisting of one or more filters, which modify the token that is stored.

DataChangeDetectionPolicyUnion

DataDeletionDetectionPolicyUnion

DataSourcesCreateOrUpdateResponse

DataSourcesCreateOrUpdateResponse: SearchIndexerDataSource

Contains response data for the createOrUpdate operation.

DataSourcesCreateResponse

DataSourcesCreateResponse: SearchIndexerDataSource

Contains response data for the create operation.

DataSourcesGetResponse

DataSourcesGetResponse: SearchIndexerDataSource

Contains response data for the get operation.

DataSourcesListResponse

DataSourcesListResponse: ListDataSourcesResult

Contains response data for the list operation.

DefaultCognitiveServicesAccount

DefaultCognitiveServicesAccount: CognitiveServicesAccount & { odatatype: "#Microsoft.Azure.Search.DefaultCognitiveServices" }

An empty object that represents the default cognitive service resource for a skillset.

DeleteDocumentsOptions

DeleteDocumentsOptions: IndexDocumentsOptions

Options for the delete documents operation.

DictionaryDecompounderTokenFilter

DictionaryDecompounderTokenFilter: TokenFilter & { maxSubwordSize?: undefined | number; minSubwordSize?: undefined | number; minWordSize?: undefined | number; odatatype: "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter"; onlyLongestMatch?: undefined | false | true; wordList: string[] }

Decomposes compound words found in many Germanic languages. This token filter is implemented using Apache Lucene.

DistanceScoringFunction

DistanceScoringFunction: ScoringFunction & { parameters: DistanceScoringParameters; type: "distance" }

Defines a function that boosts scores based on distance from a geographic location.
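A sketch of a distance function inside a scoring profile; the field name, boost, and distance follow the DistanceScoringParameters shape but the specific values are illustrative:

```typescript
const distanceFunction = {
  type: "distance" as const,
  fieldName: "location",
  boost: 2,
  parameters: {
    referencePointParameter: "currentLocation", // supplied at query time
    boostingDistance: 10,                        // kilometers from the reference point
  },
};
```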

DocumentExtractionSkill

DocumentExtractionSkill: SearchIndexerSkill & { configuration?: undefined | {}; dataToExtract?: undefined | string; odatatype: "#Microsoft.Skills.Util.DocumentExtractionSkill"; parsingMode?: undefined | string }

A skill that extracts content from a file within the enrichment pipeline.

DocumentsAutocompleteGetResponse

DocumentsAutocompleteGetResponse: AutocompleteResult

Contains response data for the autocompleteGet operation.

DocumentsAutocompletePostResponse

DocumentsAutocompletePostResponse: AutocompleteResult

Contains response data for the autocompletePost operation.

DocumentsCountResponse

DocumentsCountResponse: { body: number }

Contains response data for the count operation.

Type declaration

  • body: number

    The parsed response body.

DocumentsGetResponse

DocumentsGetResponse: Record<string, unknown>

Contains response data for the get operation.

DocumentsIndexResponse

DocumentsIndexResponse: IndexDocumentsResult

Contains response data for the index operation.

DocumentsSearchGetResponse

DocumentsSearchGetResponse: SearchDocumentsResult

Contains response data for the searchGet operation.

DocumentsSearchPostResponse

DocumentsSearchPostResponse: SearchDocumentsResult

Contains response data for the searchPost operation.

DocumentsSuggestGetResponse

DocumentsSuggestGetResponse: SuggestDocumentsResult

Contains response data for the suggestGet operation.

DocumentsSuggestPostResponse

DocumentsSuggestPostResponse: SuggestDocumentsResult

Contains response data for the suggestPost operation.

EdgeNGramTokenFilter

EdgeNGramTokenFilter: TokenFilter & { maxGram?: undefined | number; minGram?: undefined | number; odatatype: "#Microsoft.Azure.Search.EdgeNGramTokenFilter"; side?: EdgeNGramTokenFilterSide }

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

Optional maxGram

maxGram: undefined | number

The maximum n-gram length. Default is 2. Maximum is 300. Default value: 2.

Optional minGram

minGram: undefined | number

The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram. Default value: 1.

name

name: string

The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype

odatatype: "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2" | "#Microsoft.Azure.Search.EdgeNGramTokenFilter"

Polymorphic Discriminator

Optional side

side: EdgeNGramTokenFilterSide

Specifies which side of the input the n-gram should be generated from. Default is "front". Possible values include: 'front', 'back'
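For prefix matching, a hypothetical edge n-gram filter entry might look like this; with these settings "search" would yield "s", "se", and "sea":

```typescript
// minGram must stay below maxGram; the defaults are 1 and 2, and the
// maximum for both is 300.
const edgeNGram = {
  odatatype: "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2" as const,
  name: "my_edge_ngram",
  minGram: 1,
  maxGram: 3,
  side: "front" as const,
};
```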

EdgeNGramTokenFilterSide

EdgeNGramTokenFilterSide: "front" | "back"

Defines values for EdgeNGramTokenFilterSide.

EdgeNGramTokenFilterV2

EdgeNGramTokenFilterV2: TokenFilter & { maxGram?: undefined | number; minGram?: undefined | number; odatatype: "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2"; side?: EdgeNGramTokenFilterSide }

Generates n-grams of the given size(s) starting from the front or the back of an input token. This token filter is implemented using Apache Lucene.

EdgeNGramTokenizer

EdgeNGramTokenizer: LexicalTokenizer & { maxGram?: undefined | number; minGram?: undefined | number; odatatype: "#Microsoft.Azure.Search.EdgeNGramTokenizer"; tokenChars?: TokenCharacterKind[] }

Tokenizes the input from an edge into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

ElisionTokenFilter

ElisionTokenFilter: TokenFilter & { articles?: string[]; odatatype: "#Microsoft.Azure.Search.ElisionTokenFilter" }

Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). This token filter is implemented using Apache Lucene.

EntityCategory

EntityCategory: string

Defines values for EntityCategory.
KnownEntityCategory can be used interchangeably with EntityCategory; this enum contains the known values that the service supports.

Known values supported by the service

location: Entities describing a physical location.
organization: Entities describing an organization.
person: Entities describing a person.
quantity: Entities describing a quantity.
datetime: Entities describing a date and time.
url: Entities describing a URL.
email: Entities describing an email address.

EntityLinkingSkill

EntityLinkingSkill: SearchIndexerSkill & { defaultLanguageCode?: undefined | string; minimumPrecision?: undefined | number; modelVersion?: undefined | string; odatatype: "#Microsoft.Skills.Text.V3.EntityLinkingSkill" }

A skill that uses the Text Analytics API to extract linked entities from text.

EntityRecognitionSkill

EntityRecognitionSkill: SearchIndexerSkill & { categories?: EntityCategory[]; defaultLanguageCode?: EntityRecognitionSkillLanguage; includeTypelessEntities?: undefined | false | true; minimumPrecision?: undefined | number; odatatype: "#Microsoft.Skills.Text.EntityRecognitionSkill" }

Text analytics entity recognition.

EntityRecognitionSkillLanguage

EntityRecognitionSkillLanguage: string

Defines values for EntityRecognitionSkillLanguage.
KnownEntityRecognitionSkillLanguage can be used interchangeably with EntityRecognitionSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

ar: Arabic
cs: Czech
zh-Hans: Chinese-Simplified
zh-Hant: Chinese-Traditional
da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
hu: Hungarian
it: Italian
ja: Japanese
ko: Korean
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
pt-BR: Portuguese (Brazil)
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish

EntityRecognitionSkillV3

EntityRecognitionSkillV3: SearchIndexerSkill & { categories?: string[]; defaultLanguageCode?: undefined | string; minimumPrecision?: undefined | number; modelVersion?: undefined | string; odatatype: "#Microsoft.Skills.Text.V3.EntityRecognitionSkill" }

A skill that uses the Text Analytics API to extract entities of different types from text.

FreshnessScoringFunction

FreshnessScoringFunction: ScoringFunction & { parameters: FreshnessScoringParameters; type: "freshness" }

Defines a function that boosts scores based on the value of a date-time field.

GetDataSourceConnectionOptions

GetDataSourceConnectionOptions: OperationOptions

Options for get datasource operation.

GetIndexOptions

GetIndexOptions: OperationOptions

Options for get index operation.

GetIndexStatisticsOptions

GetIndexStatisticsOptions: OperationOptions

Options for get index statistics operation.

GetIndexerOptions

GetIndexerOptions: OperationOptions

Options for get indexer operation.

GetIndexerStatusOptions

GetIndexerStatusOptions: OperationOptions

Options for get indexer status operation.

GetServiceStatisticsOptions

GetServiceStatisticsOptions: OperationOptions

Options for get service statistics operation.

GetSkillSetOptions

GetSkillSetOptions: OperationOptions

Options for get skillset operation.

GetSynonymMapsOptions

GetSynonymMapsOptions: OperationOptions

Options for get synonymmaps operation.

HighWaterMarkChangeDetectionPolicy

HighWaterMarkChangeDetectionPolicy: DataChangeDetectionPolicy & { highWaterMarkColumnName: string; odatatype: "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy" }

Defines a data change detection policy that captures changes based on the value of a high water mark column.

ImageAnalysisSkill

ImageAnalysisSkill: SearchIndexerSkill & { defaultLanguageCode?: ImageAnalysisSkillLanguage; details?: ImageDetail[]; odatatype: "#Microsoft.Skills.Vision.ImageAnalysisSkill"; visualFeatures?: VisualFeature[] }

A skill that analyzes image files. It extracts a rich set of visual features based on the image content.

ImageAnalysisSkillLanguage

ImageAnalysisSkillLanguage: string

Defines values for ImageAnalysisSkillLanguage.
KnownImageAnalysisSkillLanguage can be used interchangeably with ImageAnalysisSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

en: English
es: Spanish
ja: Japanese
pt: Portuguese
zh: Chinese

ImageDetail

ImageDetail: string

Defines values for ImageDetail.
KnownImageDetail can be used interchangeably with ImageDetail; this enum contains the known values that the service supports.

Known values supported by the service

celebrities: Details recognized as celebrities.
landmarks: Details recognized as landmarks.

IndexActionType

IndexActionType: "upload" | "merge" | "mergeOrUpload" | "delete"

Defines values for IndexActionType.

IndexDocumentsAction

IndexDocumentsAction<T>: { __actionType: IndexActionType } & Partial<T>

Represents an index action that operates on a document.

Type parameters

  • T
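The shape can be sketched directly in TypeScript: the action type rides alongside a partial view of the document itself (the Hotel type and its fields are hypothetical):

```typescript
type IndexActionType = "upload" | "merge" | "mergeOrUpload" | "delete";
type IndexDocumentsAction<T> = { __actionType: IndexActionType } & Partial<T>;

interface Hotel {
  hotelId: string;
  name?: string;
  rating?: number;
}

const actions: IndexDocumentsAction<Hotel>[] = [
  { __actionType: "mergeOrUpload", hotelId: "1", rating: 4 },
  // A delete action only needs the key field to identify the document.
  { __actionType: "delete", hotelId: "2" },
];

console.log(actions.map((a) => a.__actionType).join(","));
```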

IndexIterator

IndexIterator: PagedAsyncIterableIterator<SearchIndex, SearchIndex[], {}>

An iterator for listing the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexNameIterator

IndexNameIterator: PagedAsyncIterableIterator<string, string[], {}>

An iterator for listing the names of the indexes that exist in the Search service. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

IndexerExecutionEnvironment

IndexerExecutionEnvironment: string

Defines values for IndexerExecutionEnvironment.
KnownIndexerExecutionEnvironment can be used interchangeably with IndexerExecutionEnvironment; this enum contains the known values that the service supports.

Known values supported by the service

standard: Indicates that Azure Cognitive Search can determine where the indexer should execute. This is the default environment when nothing is specified and is the recommended value.
private: Indicates that the indexer should run with the environment provisioned specifically for the search service. This should only be specified as the execution environment if the indexer needs to access resources securely over shared private link resources.

IndexerExecutionStatus

IndexerExecutionStatus: "transientFailure" | "success" | "inProgress" | "reset"

Defines values for IndexerExecutionStatus.

IndexerExecutionStatusDetail

IndexerExecutionStatusDetail: string

Defines values for IndexerExecutionStatusDetail.
KnownIndexerExecutionStatusDetail can be used interchangeably with IndexerExecutionStatusDetail; this enum contains the known values that the service supports.

Known values supported by the service

resetDocs: Indicates that the reset that occurred was for a call to ResetDocs.

IndexerStatus

IndexerStatus: "unknown" | "error" | "running"

Defines values for IndexerStatus.

IndexersCreateOrUpdateResponse

IndexersCreateOrUpdateResponse: SearchIndexer

Contains response data for the createOrUpdate operation.

IndexersCreateResponse

IndexersCreateResponse: SearchIndexer

Contains response data for the create operation.

IndexersGetResponse

IndexersGetResponse: SearchIndexer

Contains response data for the get operation.

IndexersGetStatusResponse

IndexersGetStatusResponse: SearchIndexerStatus

Contains response data for the getStatus operation.

IndexersListResponse

IndexersListResponse: ListIndexersResult

Contains response data for the list operation.

IndexesAnalyzeResponse

IndexesAnalyzeResponse: AnalyzeResult

Contains response data for the analyze operation.

IndexesCreateOrUpdateResponse

IndexesCreateOrUpdateResponse: SearchIndex

Contains response data for the createOrUpdate operation.

IndexesCreateResponse

IndexesCreateResponse: SearchIndex

Contains response data for the create operation.

IndexesGetResponse

IndexesGetResponse: SearchIndex

Contains response data for the get operation.

IndexesGetStatisticsResponse

IndexesGetStatisticsResponse: GetIndexStatisticsResult

Contains response data for the getStatistics operation.

IndexesListResponse

IndexesListResponse: ListIndexesResult

Contains response data for the list operation.

IndexingMode

IndexingMode: string

Defines values for IndexingMode.
KnownIndexingMode can be used interchangeably with IndexingMode; this enum contains the known values that the service supports.

Known values supported by the service

indexingAllDocs: The indexer is indexing all documents in the datasource.
indexingResetDocs: The indexer is indexing selective, reset documents in the datasource. The documents being indexed are defined on indexer status.

KeepTokenFilter

KeepTokenFilter: TokenFilter & { keepWords: string[]; lowerCaseKeepWords?: undefined | false | true; odatatype: "#Microsoft.Azure.Search.KeepTokenFilter" }

A token filter that only keeps tokens with text contained in a specified list of words. This token filter is implemented using Apache Lucene.

KeyPhraseExtractionSkill

KeyPhraseExtractionSkill: SearchIndexerSkill & { defaultLanguageCode?: KeyPhraseExtractionSkillLanguage; maxKeyPhraseCount?: undefined | number; modelVersion?: undefined | string; odatatype: "#Microsoft.Skills.Text.KeyPhraseExtractionSkill" }

A skill that uses text analytics for key phrase extraction.

KeyPhraseExtractionSkillLanguage

KeyPhraseExtractionSkillLanguage: string

Defines values for KeyPhraseExtractionSkillLanguage.
KnownKeyPhraseExtractionSkillLanguage can be used interchangeably with KeyPhraseExtractionSkillLanguage; this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
it: Italian
ja: Japanese
ko: Korean
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
pt-BR: Portuguese (Brazil)
ru: Russian
es: Spanish
sv: Swedish

KeywordMarkerTokenFilter

KeywordMarkerTokenFilter: TokenFilter & { ignoreCase?: undefined | false | true; keywords: string[]; odatatype: "#Microsoft.Azure.Search.KeywordMarkerTokenFilter" }

Marks terms as keywords. This token filter is implemented using Apache Lucene.

KeywordTokenizer

KeywordTokenizer: LexicalTokenizer & { bufferSize?: undefined | number; odatatype: "#Microsoft.Azure.Search.KeywordTokenizer" }

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

Optional maxTokenLength

maxTokenLength: undefined | number

The maximum token length. Default is 256. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters. Default value: 256.

name

name: string

The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype

odatatype: "#Microsoft.Azure.Search.KeywordTokenizerV2" | "#Microsoft.Azure.Search.KeywordTokenizer"

Polymorphic Discriminator

KeywordTokenizerV2

KeywordTokenizerV2: LexicalTokenizer & { maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.KeywordTokenizerV2" }

Emits the entire input as a single token. This tokenizer is implemented using Apache Lucene.

LanguageDetectionSkill

LanguageDetectionSkill: SearchIndexerSkill & { defaultCountryHint?: undefined | string; modelVersion?: undefined | string; odatatype: "#Microsoft.Skills.Text.LanguageDetectionSkill" }

A skill that detects the language of input text and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the confidence of the analysis.

LengthTokenFilter

LengthTokenFilter: TokenFilter & { maxLength?: undefined | number; minLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.LengthTokenFilter" }

Removes words that are too long or too short. This token filter is implemented using Apache Lucene.

LexicalAnalyzerName

LexicalAnalyzerName: string

Defines values for LexicalAnalyzerName.
KnownLexicalAnalyzerName can be used interchangeably with LexicalAnalyzerName; this enum contains the known values that the service supports.

Known values supported by the service

ar.microsoft: Microsoft analyzer for Arabic.
ar.lucene: Lucene analyzer for Arabic.
hy.lucene: Lucene analyzer for Armenian.
bn.microsoft: Microsoft analyzer for Bangla.
eu.lucene: Lucene analyzer for Basque.
bg.microsoft: Microsoft analyzer for Bulgarian.
bg.lucene: Lucene analyzer for Bulgarian.
ca.microsoft: Microsoft analyzer for Catalan.
ca.lucene: Lucene analyzer for Catalan.
zh-Hans.microsoft: Microsoft analyzer for Chinese (Simplified).
zh-Hans.lucene: Lucene analyzer for Chinese (Simplified).
zh-Hant.microsoft: Microsoft analyzer for Chinese (Traditional).
zh-Hant.lucene: Lucene analyzer for Chinese (Traditional).
hr.microsoft: Microsoft analyzer for Croatian.
cs.microsoft: Microsoft analyzer for Czech.
cs.lucene: Lucene analyzer for Czech.
da.microsoft: Microsoft analyzer for Danish.
da.lucene: Lucene analyzer for Danish.
nl.microsoft: Microsoft analyzer for Dutch.
nl.lucene: Lucene analyzer for Dutch.
en.microsoft: Microsoft analyzer for English.
en.lucene: Lucene analyzer for English.
et.microsoft: Microsoft analyzer for Estonian.
fi.microsoft: Microsoft analyzer for Finnish.
fi.lucene: Lucene analyzer for Finnish.
fr.microsoft: Microsoft analyzer for French.
fr.lucene: Lucene analyzer for French.
gl.lucene: Lucene analyzer for Galician.
de.microsoft: Microsoft analyzer for German.
de.lucene: Lucene analyzer for German.
el.microsoft: Microsoft analyzer for Greek.
el.lucene: Lucene analyzer for Greek.
gu.microsoft: Microsoft analyzer for Gujarati.
he.microsoft: Microsoft analyzer for Hebrew.
hi.microsoft: Microsoft analyzer for Hindi.
hi.lucene: Lucene analyzer for Hindi.
hu.microsoft: Microsoft analyzer for Hungarian.
hu.lucene: Lucene analyzer for Hungarian.
is.microsoft: Microsoft analyzer for Icelandic.
id.microsoft: Microsoft analyzer for Indonesian (Bahasa).
id.lucene: Lucene analyzer for Indonesian.
ga.lucene: Lucene analyzer for Irish.
it.microsoft: Microsoft analyzer for Italian.
it.lucene: Lucene analyzer for Italian.
ja.microsoft: Microsoft analyzer for Japanese.
ja.lucene: Lucene analyzer for Japanese.
kn.microsoft: Microsoft analyzer for Kannada.
ko.microsoft: Microsoft analyzer for Korean.
ko.lucene: Lucene analyzer for Korean.
lv.microsoft: Microsoft analyzer for Latvian.
lv.lucene: Lucene analyzer for Latvian.
lt.microsoft: Microsoft analyzer for Lithuanian.
ml.microsoft: Microsoft analyzer for Malayalam.
ms.microsoft: Microsoft analyzer for Malay (Latin).
mr.microsoft: Microsoft analyzer for Marathi.
nb.microsoft: Microsoft analyzer for Norwegian (Bokmål).
no.lucene: Lucene analyzer for Norwegian.
fa.lucene: Lucene analyzer for Persian.
pl.microsoft: Microsoft analyzer for Polish.
pl.lucene: Lucene analyzer for Polish.
pt-BR.microsoft: Microsoft analyzer for Portuguese (Brazil).
pt-BR.lucene: Lucene analyzer for Portuguese (Brazil).
pt-PT.microsoft: Microsoft analyzer for Portuguese (Portugal).
pt-PT.lucene: Lucene analyzer for Portuguese (Portugal).
pa.microsoft: Microsoft analyzer for Punjabi.
ro.microsoft: Microsoft analyzer for Romanian.
ro.lucene: Lucene analyzer for Romanian.
ru.microsoft: Microsoft analyzer for Russian.
ru.lucene: Lucene analyzer for Russian.
sr-cyrillic.microsoft: Microsoft analyzer for Serbian (Cyrillic).
sr-latin.microsoft: Microsoft analyzer for Serbian (Latin).
sk.microsoft: Microsoft analyzer for Slovak.
sl.microsoft: Microsoft analyzer for Slovenian.
es.microsoft: Microsoft analyzer for Spanish.
es.lucene: Lucene analyzer for Spanish.
sv.microsoft: Microsoft analyzer for Swedish.
sv.lucene: Lucene analyzer for Swedish.
ta.microsoft: Microsoft analyzer for Tamil.
te.microsoft: Microsoft analyzer for Telugu.
th.microsoft: Microsoft analyzer for Thai.
th.lucene: Lucene analyzer for Thai.
tr.microsoft: Microsoft analyzer for Turkish.
tr.lucene: Lucene analyzer for Turkish.
uk.microsoft: Microsoft analyzer for Ukrainian.
ur.microsoft: Microsoft analyzer for Urdu.
vi.microsoft: Microsoft analyzer for Vietnamese.
standard.lucene: Standard Lucene analyzer.
standardasciifolding.lucene: Standard ASCII Folding Lucene analyzer. See https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#Analyzers
keyword: Treats the entire content of a field as a single token. This is useful for data like zip codes, ids, and some product names. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html
pattern: Flexibly separates text into terms via a regular expression pattern. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/PatternAnalyzer.html
simple: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/SimpleAnalyzer.html
stop: Divides text at non-letters; Applies the lowercase and stopword token filters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopAnalyzer.html
whitespace: An analyzer that uses the whitespace tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceAnalyzer.html
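In an index definition, these analyzer names are assigned to a field's analyzerName property. A minimal field sketch (the field name is hypothetical):

```typescript
// Hypothetical searchable field using a language analyzer name from the list above.
const descriptionField = {
  name: "description",
  type: "Edm.String" as const,
  searchable: true,
  analyzerName: "en.microsoft", // Microsoft analyzer for English
};
```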

LexicalAnalyzerUnion

LexicalNormalizerName

LexicalNormalizerName: string

Defines values for LexicalNormalizerName.
KnownLexicalNormalizerName can be used interchangeably with LexicalNormalizerName, this enum contains the known values that the service supports.

Known values supported by the service

asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
lowercase: Normalizes token text to lowercase. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html
standard: Standard normalizer, which consists of lowercase and asciifolding.
uppercase: Normalizes token text to uppercase. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html

LexicalNormalizerUnion

LexicalNormalizerUnion: LexicalNormalizer | CustomNormalizer

LexicalTokenizerName

LexicalTokenizerName: string

Defines values for LexicalTokenizerName.
KnownLexicalTokenizerName can be used interchangeably with LexicalTokenizerName, this enum contains the known values that the service supports.

Known values supported by the service

classic: Grammar-based tokenizer that is suitable for processing most European-language documents. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicTokenizer.html
edgeNGram: Tokenizes the input from an edge into n-grams of the given size(s). See https://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html
keyword_v2: Emits the entire input as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/KeywordTokenizer.html
letter: Divides text at non-letters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html
lowercase: Divides text at non-letters and converts them to lower case. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html
microsoft_language_tokenizer: Divides text using language-specific rules.
microsoft_language_stemming_tokenizer: Divides text using language-specific rules and reduces words to their base forms.
nGram: Tokenizes the input into n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html
path_hierarchy_v2: Tokenizer for path-like hierarchies. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html
pattern: Tokenizer that uses regex pattern matching to construct distinct tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html
standard_v2: Breaks text following the Unicode Text Segmentation rules. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/StandardTokenizer.html
uax_url_email: Tokenizes urls and emails as one token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html
whitespace: Divides text at whitespace. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html
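Tokenizer names are typically referenced from a custom analyzer, which combines one tokenizer with optional token filters and char filters. A sketch (the analyzer name is hypothetical):

```typescript
// Hypothetical custom analyzer: keyword_v2 tokenizer plus a lowercase token filter,
// useful for fields such as tags that should match as whole, case-insensitive values.
const tagAnalyzer = {
  odatatype: "#Microsoft.Azure.Search.CustomAnalyzer" as const,
  name: "tag_analyzer",
  tokenizerName: "keyword_v2",
  tokenFilters: ["lowercase"],
};
```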

LexicalTokenizerUnion

LimitTokenFilter

LimitTokenFilter: TokenFilter & { consumeAllTokens?: undefined | false | true; maxTokenCount?: undefined | number; odatatype: "#Microsoft.Azure.Search.LimitTokenFilter" }

Limits the number of tokens while indexing. This token filter is implemented using Apache Lucene.

LineEnding

LineEnding: string

Defines values for LineEnding.
KnownLineEnding can be used interchangeably with LineEnding, this enum contains the known values that the service supports.

Known values supported by the service

space: Lines are separated by a single space character.
carriageReturn: Lines are separated by a carriage return ('\r') character.
lineFeed: Lines are separated by a single line feed ('\n') character.
carriageReturnLineFeed: Lines are separated by a carriage return and a line feed ('\r\n') character.

ListDataSourceConnectionsOptions

ListDataSourceConnectionsOptions: OperationOptions

Options for a list data sources operation.

ListIndexersOptions

ListIndexersOptions: OperationOptions

Options for a list indexers operation.

ListIndexesOptions

ListIndexesOptions: OperationOptions

Options for a list indexes operation.

ListSkillsetsOptions

ListSkillsetsOptions: OperationOptions

Options for a list skillsets operation.

ListSynonymMapsOptions

ListSynonymMapsOptions: OperationOptions

Options for a list synonymMaps operation.

LuceneStandardAnalyzer

LuceneStandardAnalyzer: LexicalAnalyzer & { maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.StandardAnalyzer"; stopwords?: string[] }

Standard Apache Lucene analyzer; Composed of the standard tokenizer, lowercase filter and stop filter.

LuceneStandardTokenizer

LuceneStandardTokenizer: LexicalTokenizer & { maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.StandardTokenizer" }

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

Optional maxTokenLength

maxTokenLength: undefined | number

The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters. Default value: 255.

name

name: string

The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype

odatatype: "#Microsoft.Azure.Search.StandardTokenizerV2" | "#Microsoft.Azure.Search.StandardTokenizer"

Polymorphic Discriminator

LuceneStandardTokenizerV2

LuceneStandardTokenizerV2: LexicalTokenizer & { maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.StandardTokenizerV2" }

Breaks text following the Unicode Text Segmentation rules. This tokenizer is implemented using Apache Lucene.

MagnitudeScoringFunction

MagnitudeScoringFunction: ScoringFunction & { parameters: MagnitudeScoringParameters; type: "magnitude" }

Defines a function that boosts scores based on the magnitude of a numeric field.

MappingCharFilter

MappingCharFilter: CharFilter & { mappings: string[]; odatatype: "#Microsoft.Azure.Search.MappingCharFilter" }

A character filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. This character filter is implemented using Apache Lucene.
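Each mapping uses the Lucene `from=>to` format. A sketch that also simulates the substitution locally (the filter name is hypothetical; the real filter matches greedily by longest pattern, whereas this simulation applies the mappings sequentially):

```typescript
// Illustrative MappingCharFilter that strips punctuation from phone numbers.
const phoneFilter = {
  odatatype: "#Microsoft.Azure.Search.MappingCharFilter" as const,
  name: "strip_phone_punct", // hypothetical filter name
  mappings: ["-=>", ".=>", "(=>", ")=>"], // map each character to the empty string
};

// Local simulation: apply each `from=>to` mapping to the input text in order.
function applyMappings(text: string, mappings: string[]): string {
  for (const m of mappings) {
    const [from, to] = m.split("=>");
    text = text.split(from).join(to ?? "");
  }
  return text;
}

const normalized = applyMappings("(425) 555-0100", phoneFilter.mappings);
// normalized === "425 5550100"
```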

MergeDocumentsOptions

MergeDocumentsOptions: IndexDocumentsOptions

Options for the merge documents operation.

MergeOrUploadDocumentsOptions

MergeOrUploadDocumentsOptions: IndexDocumentsOptions

Options for the merge or upload documents operation.

MergeSkill

MergeSkill: SearchIndexerSkill & { insertPostTag?: undefined | string; insertPreTag?: undefined | string; odatatype: "#Microsoft.Skills.Text.MergeSkill" }

A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.
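A common use is merging OCR'd image text back into the document's content at the recorded offsets. A sketch of the skill shape (the input/output paths follow the usual enrichment-tree conventions but are illustrative here):

```typescript
// Hypothetical MergeSkill that reinserts OCR text into page content,
// surrounding each inserted string with spaces.
const mergeSkill = {
  odatatype: "#Microsoft.Skills.Text.MergeSkill" as const,
  insertPreTag: " ",
  insertPostTag: " ",
  inputs: [
    { name: "text", source: "/document/content" },
    { name: "itemsToInsert", source: "/document/normalized_images/*/text" },
    { name: "offsets", source: "/document/normalized_images/*/contentOffset" },
  ],
  outputs: [{ name: "mergedText", targetName: "merged_content" }],
};
```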

MetadataLevel

MetadataLevel: "none" | "minimal"

MicrosoftLanguageStemmingTokenizer

MicrosoftLanguageStemmingTokenizer: LexicalTokenizer & { isSearchTokenizer?: undefined | false | true; language?: MicrosoftStemmingTokenizerLanguage; maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer" }

Divides text using language-specific rules and reduces words to their base forms.

MicrosoftLanguageTokenizer

MicrosoftLanguageTokenizer: LexicalTokenizer & { isSearchTokenizer?: undefined | false | true; language?: MicrosoftTokenizerLanguage; maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer" }

Divides text using language-specific rules.

MicrosoftStemmingTokenizerLanguage

MicrosoftStemmingTokenizerLanguage: "arabic" | "bangla" | "bulgarian" | "catalan" | "croatian" | "czech" | "danish" | "dutch" | "english" | "estonian" | "finnish" | "french" | "german" | "greek" | "gujarati" | "hebrew" | "hindi" | "hungarian" | "icelandic" | "indonesian" | "italian" | "kannada" | "latvian" | "lithuanian" | "malay" | "malayalam" | "marathi" | "norwegianBokmaal" | "polish" | "portuguese" | "portugueseBrazilian" | "punjabi" | "romanian" | "russian" | "serbianCyrillic" | "serbianLatin" | "slovak" | "slovenian" | "spanish" | "swedish" | "tamil" | "telugu" | "turkish" | "ukrainian" | "urdu"

Defines values for MicrosoftStemmingTokenizerLanguage.

MicrosoftTokenizerLanguage

MicrosoftTokenizerLanguage: "bangla" | "bulgarian" | "catalan" | "chineseSimplified" | "chineseTraditional" | "croatian" | "czech" | "danish" | "dutch" | "english" | "french" | "german" | "greek" | "gujarati" | "hindi" | "icelandic" | "indonesian" | "italian" | "japanese" | "kannada" | "korean" | "malay" | "malayalam" | "marathi" | "norwegianBokmaal" | "polish" | "portuguese" | "portugueseBrazilian" | "punjabi" | "romanian" | "russian" | "serbianCyrillic" | "serbianLatin" | "slovenian" | "spanish" | "swedish" | "tamil" | "telugu" | "thai" | "ukrainian" | "urdu" | "vietnamese"

Defines values for MicrosoftTokenizerLanguage.

NGramTokenFilter

NGramTokenFilter: TokenFilter & { maxGram?: undefined | number; minGram?: undefined | number; odatatype: "#Microsoft.Azure.Search.NGramTokenFilter" }

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.

Optional maxGram

maxGram: undefined | number

The maximum n-gram length. Default is 2. Maximum is 300. Default value: 2.

Optional minGram

minGram: undefined | number

The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram. Default value: 1.

name

name: string

The name of the token filter. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype

odatatype: "#Microsoft.Azure.Search.NGramTokenFilterV2" | "#Microsoft.Azure.Search.NGramTokenFilter"

Polymorphic Discriminator

NGramTokenFilterV2

NGramTokenFilterV2: TokenFilter & { maxGram?: undefined | number; minGram?: undefined | number; odatatype: "#Microsoft.Azure.Search.NGramTokenFilterV2" }

Generates n-grams of the given size(s). This token filter is implemented using Apache Lucene.
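The effect of an n-gram filter can be sketched locally: for each token, emit every substring whose length falls between minGram and maxGram (service defaults 1 and 2):

```typescript
// Local sketch of n-gram generation for a single token (defaults: minGram 1, maxGram 2).
function nGrams(token: string, minGram = 1, maxGram = 2): string[] {
  const grams: string[] = [];
  for (let size = minGram; size <= maxGram; size++) {
    for (let i = 0; i + size <= token.length; i++) {
      grams.push(token.slice(i, i + size));
    }
  }
  return grams;
}

const grams = nGrams("abc");
// grams === ["a", "b", "c", "ab", "bc"]
```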

NGramTokenizer

NGramTokenizer: LexicalTokenizer & { maxGram?: undefined | number; minGram?: undefined | number; odatatype: "#Microsoft.Azure.Search.NGramTokenizer"; tokenChars?: TokenCharacterKind[] }

Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.

OcrSkill

OcrSkill: SearchIndexerSkill & { defaultLanguageCode?: OcrSkillLanguage; lineEnding?: LineEnding; odatatype: "#Microsoft.Skills.Vision.OcrSkill"; shouldDetectOrientation?: undefined | false | true }

A skill that extracts text from image files.
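A sketch of an OcrSkill definition, including the lineEnding value used to join recognized lines (the input/output paths are the conventional ones for image enrichment, shown here illustratively):

```typescript
// Hypothetical OcrSkill: detect orientation, join recognized lines with spaces.
const ocrSkill = {
  odatatype: "#Microsoft.Skills.Vision.OcrSkill" as const,
  defaultLanguageCode: "en",
  shouldDetectOrientation: true,
  lineEnding: "space",
  inputs: [{ name: "image", source: "/document/normalized_images/*" }],
  outputs: [{ name: "text", targetName: "text" }],
};
```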

OcrSkillLanguage

OcrSkillLanguage: string

Defines values for OcrSkillLanguage.
KnownOcrSkillLanguage can be used interchangeably with OcrSkillLanguage, this enum contains the known values that the service supports.

Known values supported by the service

zh-Hans: Chinese-Simplified
zh-Hant: Chinese-Traditional
cs: Czech
da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
hu: Hungarian
it: Italian
ja: Japanese
ko: Korean
nb: Norwegian (Bokmaal)
pl: Polish
pt: Portuguese
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish
ar: Arabic
ro: Romanian
sr-Cyrl: Serbian (Cyrillic, Serbia)
sr-Latn: Serbian (Latin, Serbia)
sk: Slovak

PIIDetectionSkill

PIIDetectionSkill: SearchIndexerSkill & { defaultLanguageCode?: undefined | string; domain?: undefined | string; maskingCharacter?: undefined | string; maskingMode?: PIIDetectionSkillMaskingMode; minimumPrecision?: undefined | number; modelVersion?: undefined | string; odatatype: "#Microsoft.Skills.Text.PIIDetectionSkill"; piiCategories?: string[] }

Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it.
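A sketch of a PIIDetectionSkill that masks detected entities (masking parameters taken from the type above; the input/output paths are illustrative):

```typescript
// Hypothetical PIIDetectionSkill that replaces detected entities with '*'.
const piiSkill = {
  odatatype: "#Microsoft.Skills.Text.PIIDetectionSkill" as const,
  defaultLanguageCode: "en",
  minimumPrecision: 0.5,
  maskingMode: "replace",
  maskingCharacter: "*",
  inputs: [{ name: "text", source: "/document/content" }],
  outputs: [{ name: "maskedText", targetName: "maskedContent" }],
};
```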

PIIDetectionSkillMaskingMode

PIIDetectionSkillMaskingMode: string

Defines values for PIIDetectionSkillMaskingMode.
KnownPIIDetectionSkillMaskingMode can be used interchangeably with PIIDetectionSkillMaskingMode, this enum contains the known values that the service supports.

Known values supported by the service

none: No masking occurs and the maskedText output will not be returned.
replace: Replaces the detected entities with the character given in the maskingCharacter parameter. The character will be repeated to the length of the detected entity so that the offsets will correctly correspond to both the input text as well as the output maskedText.

PathHierarchyTokenizerV2

PathHierarchyTokenizerV2: LexicalTokenizer & { delimiter?: undefined | string; maxTokenLength?: undefined | number; numberOfTokensToSkip?: undefined | number; odatatype: "#Microsoft.Azure.Search.PathHierarchyTokenizerV2"; replacement?: undefined | string; reverseTokenOrder?: undefined | false | true }

Tokenizer for path-like hierarchies. This tokenizer is implemented using Apache Lucene.

PatternAnalyzer

PatternAnalyzer: LexicalAnalyzer & { flags?: undefined | string; lowerCaseTerms?: undefined | false | true; odatatype: "#Microsoft.Azure.Search.PatternAnalyzer"; pattern?: undefined | string; stopwords?: string[] }

Flexibly separates text into terms via a regular expression pattern. This analyzer is implemented using Apache Lucene.

Optional flags

flags: RegexFlags[]

Regular expression flags. Possible values include: 'CANON_EQ', 'CASE_INSENSITIVE', 'COMMENTS', 'DOTALL', 'LITERAL', 'MULTILINE', 'UNICODE_CASE', 'UNIX_LINES'

Optional lowerCaseTerms

lowerCaseTerms: undefined | false | true

A value indicating whether terms should be lower-cased. Default is true. Default value: true.

name

name: string

The name of the analyzer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype

odatatype: "#Microsoft.Azure.Search.PatternAnalyzer"

Polymorphic Discriminator

Optional pattern

pattern: undefined | string

A regular expression pattern to match token separators. Default is an expression that matches one or more whitespace characters. Default value: \W+.

Optional stopwords

stopwords: string[]

A list of stopwords.

PatternCaptureTokenFilter

PatternCaptureTokenFilter: TokenFilter & { odatatype: "#Microsoft.Azure.Search.PatternCaptureTokenFilter"; patterns: string[]; preserveOriginal?: undefined | false | true }

Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. This token filter is implemented using Apache Lucene.

PatternReplaceCharFilter

PatternReplaceCharFilter: CharFilter & { odatatype: "#Microsoft.Azure.Search.PatternReplaceCharFilter"; pattern: string; replacement: string }

A character filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This character filter is implemented using Apache Lucene.
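The documented example can be reproduced with an equivalent JavaScript regex (the service uses Java regex syntax, which happens to match here):

```typescript
// The example from the description above: pattern "(aa)\s+(bb)", replacement "$1#$2".
const result = "aa bb aa bb".replace(/(aa)\s+(bb)/g, "$1#$2");
// result === "aa#bb aa#bb"
```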

PatternReplaceTokenFilter

PatternReplaceTokenFilter: TokenFilter & { odatatype: "#Microsoft.Azure.Search.PatternReplaceTokenFilter"; pattern: string; replacement: string }

A token filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, given the input text "aa bb aa bb", pattern "(aa)\s+(bb)", and replacement "$1#$2", the result would be "aa#bb aa#bb". This token filter is implemented using Apache Lucene.

PatternTokenizer

PatternTokenizer: LexicalTokenizer & { flags?: undefined | string; group?: undefined | number; odatatype: "#Microsoft.Azure.Search.PatternTokenizer"; pattern?: undefined | string }

Tokenizer that uses regex pattern matching to construct distinct tokens. This tokenizer is implemented using Apache Lucene.

Optional flags

flags: RegexFlags[]

Regular expression flags. Possible values include: 'CANON_EQ', 'CASE_INSENSITIVE', 'COMMENTS', 'DOTALL', 'LITERAL', 'MULTILINE', 'UNICODE_CASE', 'UNIX_LINES'

Optional group

group: undefined | number

The zero-based ordinal of the matching group in the regular expression pattern to extract into tokens. Use -1 if you want to use the entire pattern to split the input into tokens, irrespective of matching groups. Default is -1. Default value: -1.

name

name: string

The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

odatatype

odatatype: "#Microsoft.Azure.Search.PatternTokenizer"

Polymorphic Discriminator

Optional pattern

pattern: undefined | string

A regular expression pattern to match token separators. Default is an expression that matches one or more whitespace characters. Default value: \W+.

PhoneticEncoder

PhoneticEncoder: "metaphone" | "doubleMetaphone" | "soundex" | "refinedSoundex" | "caverphone1" | "caverphone2" | "cologne" | "nysiis" | "koelnerPhonetik" | "haasePhonetik" | "beiderMorse"

Defines values for PhoneticEncoder.

PhoneticTokenFilter

PhoneticTokenFilter: TokenFilter & { encoder?: PhoneticEncoder; odatatype: "#Microsoft.Azure.Search.PhoneticTokenFilter"; replaceOriginalTokens?: undefined | false | true }

Create tokens for phonetic matches. This token filter is implemented using Apache Lucene.

QueryAnswerType

QueryAnswerType: string

Defines values for QueryAnswerType.
KnownQueryAnswerType can be used interchangeably with QueryAnswerType, this enum contains the known values that the service supports.

Known values supported by the service

none: Do not return answers for the query.
extractive: Extracts answer candidates from the contents of the documents returned in response to a query expressed as a question in natural language.
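In a search request these values appear in the answers option alongside queryType "semantic". A sketch of the options object under the preview API (the count suffix syntax follows the REST parameter format and is shown here as an assumption about this SDK version):

```typescript
// Hypothetical semantic search options requesting extracted answers.
const searchOptions = {
  queryType: "semantic" as const,
  queryLanguage: "en-us",
  answers: "extractive|count-3", // up to three answer candidates; "none" disables answers
};
```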

QueryCaptionType

QueryCaptionType: string

Defines values for QueryCaptionType.
KnownQueryCaptionType can be used interchangeably with QueryCaptionType, this enum contains the known values that the service supports.

Known values supported by the service

none: Do not return captions for the query.
extractive: Extracts captions from the matching documents that contain passages relevant to the search query.

QueryLanguage

QueryLanguage: string

Defines values for QueryLanguage.
KnownQueryLanguage can be used interchangeably with QueryLanguage, this enum contains the known values that the service supports.

Known values supported by the service

none: Query language not specified.
en-us: Query language value for English (United States).
en-gb: Query language value for English (Great Britain).
en-in: Query language value for English (India).
en-ca: Query language value for English (Canada).
en-au: Query language value for English (Australia).
fr-fr: Query language value for French (France).
fr-ca: Query language value for French (Canada).
de-de: Query language value for German (Germany).
es-es: Query language value for Spanish (Spain).
es-mx: Query language value for Spanish (Mexico).
zh-cn: Query language value for Chinese (China).
zh-tw: Query language value for Chinese (Taiwan).
pt-br: Query language value for Portuguese (Brazil).
pt-pt: Query language value for Portuguese (Portugal).
it-it: Query language value for Italian (Italy).
ja-jp: Query language value for Japanese (Japan).
ko-kr: Query language value for Korean (Korea).
ru-ru: Query language value for Russian (Russia).
cs-cz: Query language value for Czech (Czech Republic).
nl-be: Query language value for Dutch (Belgium).
nl-nl: Query language value for Dutch (Netherlands).
hu-hu: Query language value for Hungarian (Hungary).
pl-pl: Query language value for Polish (Poland).
sv-se: Query language value for Swedish (Sweden).
tr-tr: Query language value for Turkish (Turkey).
hi-in: Query language value for Hindi (India).
ar-sa: Query language value for Arabic (Saudi Arabia).
ar-eg: Query language value for Arabic (Egypt).
ar-ma: Query language value for Arabic (Morocco).
ar-kw: Query language value for Arabic (Kuwait).
ar-jo: Query language value for Arabic (Jordan).
da-dk: Query language value for Danish (Denmark).
no-no: Query language value for Norwegian (Norway).
bg-bg: Query language value for Bulgarian (Bulgaria).
hr-hr: Query language value for Croatian (Croatia).
hr-ba: Query language value for Croatian (Bosnia and Herzegovina).
ms-my: Query language value for Malay (Malaysia).
ms-bn: Query language value for Malay (Brunei Darussalam).
sl-sl: Query language value for Slovenian (Slovenia).
ta-in: Query language value for Tamil (India).
vi-vn: Query language value for Vietnamese (Viet Nam).
el-gr: Query language value for Greek (Greece).
ro-ro: Query language value for Romanian (Romania).
is-is: Query language value for Icelandic (Iceland).
id-id: Query language value for Indonesian (Indonesia).
th-th: Query language value for Thai (Thailand).
lt-lt: Query language value for Lithuanian (Lithuania).
uk-ua: Query language value for Ukrainian (Ukraine).
lv-lv: Query language value for Latvian (Latvia).
et-ee: Query language value for Estonian (Estonia).
ca-es: Query language value for Catalan (Spain).
fi-fi: Query language value for Finnish (Finland).
sr-ba: Query language value for Serbian (Bosnia and Herzegovina).
sr-me: Query language value for Serbian (Montenegro).
sr-rs: Query language value for Serbian (Serbia).
sk-sk: Query language value for Slovak (Slovakia).
nb-no: Query language value for Norwegian (Norway).
hy-am: Query language value for Armenian (Armenia).
bn-in: Query language value for Bengali (India).
eu-es: Query language value for Basque (Spain).
gl-es: Query language value for Galician (Spain).
gu-in: Query language value for Gujarati (India).
he-il: Query language value for Hebrew (Israel).
ga-ie: Query language value for Irish (Ireland).
kn-in: Query language value for Kannada (India).
ml-in: Query language value for Malayalam (India).
mr-in: Query language value for Marathi (India).
fa-ae: Query language value for Persian (U.A.E.).
pa-in: Query language value for Punjabi (India).
te-in: Query language value for Telugu (India).
ur-pk: Query language value for Urdu (Pakistan).

QuerySpellerType

QuerySpellerType: string

Defines values for QuerySpellerType.
KnownQuerySpellerType can be used interchangeably with QuerySpellerType, this enum contains the known values that the service supports.

Known values supported by the service

none: Speller not enabled.
lexicon: Speller corrects individual query terms using a static lexicon for the language specified by the queryLanguage parameter.

QueryType

QueryType: "simple" | "full" | "semantic"

Defines values for QueryType.

RegexFlags

RegexFlags: string

Defines values for RegexFlags.
KnownRegexFlags can be used interchangeably with RegexFlags, this enum contains the known values that the service supports.

Known values supported by the service

CANON_EQ: Enables canonical equivalence.
CASE_INSENSITIVE: Enables case-insensitive matching.
COMMENTS: Permits whitespace and comments in the pattern.
DOTALL: Enables dotall mode.
LITERAL: Enables literal parsing of the pattern.
MULTILINE: Enables multiline mode.
UNICODE_CASE: Enables Unicode-aware case folding.
UNIX_LINES: Enables Unix lines mode.

ResetIndexerOptions

ResetIndexerOptions: OperationOptions

Options for reset indexer operation.

RunIndexerOptions

RunIndexerOptions: OperationOptions

Options for run indexer operation.

ScoringFunctionAggregation

ScoringFunctionAggregation: "sum" | "average" | "minimum" | "maximum" | "firstMatching"

Defines values for ScoringFunctionAggregation.

ScoringFunctionInterpolation

ScoringFunctionInterpolation: "linear" | "constant" | "quadratic" | "logarithmic"

Defines values for ScoringFunctionInterpolation.

ScoringFunctionUnion

ScoringStatistics

ScoringStatistics: "local" | "global"

Defines values for ScoringStatistics.

SearchFieldDataType

SearchFieldDataType: "Edm.String" | "Edm.Int32" | "Edm.Int64" | "Edm.Double" | "Edm.Boolean" | "Edm.DateTimeOffset" | "Edm.GeographyPoint" | "Collection(Edm.String)" | "Collection(Edm.Int32)" | "Collection(Edm.Int64)" | "Collection(Edm.Double)" | "Collection(Edm.Boolean)" | "Collection(Edm.DateTimeOffset)" | "Collection(Edm.GeographyPoint)"

Defines values for SearchFieldDataType.
KnownSearchFieldDataType can be used interchangeably with SearchFieldDataType, this enum contains the known values that the service supports.

Known values supported by the service

Edm.String: Indicates that a field contains a string.
Edm.Int32: Indicates that a field contains a 32-bit signed integer.
Edm.Int64: Indicates that a field contains a 64-bit signed integer.
Edm.Double: Indicates that a field contains an IEEE double-precision floating point number.
Edm.Boolean: Indicates that a field contains a Boolean value (true or false).
Edm.DateTimeOffset: Indicates that a field contains a date/time value, including timezone information.
Edm.GeographyPoint: Indicates that a field contains a geo-location in terms of longitude and latitude.
Edm.ComplexType: Indicates that a field contains one or more complex objects that in turn have sub-fields of other types.
Collection(Edm.String)
Collection(Edm.Int32)
Collection(Edm.Int64)
Collection(Edm.Double)
Collection(Edm.Boolean)
Collection(Edm.DateTimeOffset)
Collection(Edm.GeographyPoint)
Collection(Edm.ComplexType)
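These type strings are used in field definitions, with collection types wrapping an element type in Collection(...). A sketch (the field names are hypothetical):

```typescript
// Hypothetical field definitions using the EDM type strings above.
const fields = [
  { name: "id", type: "Edm.String" as const, key: true },
  { name: "rating", type: "Edm.Double" as const, filterable: true, sortable: true },
  { name: "tags", type: "Collection(Edm.String)" as const, searchable: true, facetable: true },
];
```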


SearchIndexerDataIdentityUnion

SearchIndexerDataNoneIdentity

SearchIndexerDataNoneIdentity: SearchIndexerDataIdentity & { odatatype: "#Microsoft.Azure.Search.SearchIndexerDataNoneIdentity" }

Clears the identity property of a datasource.

SearchIndexerDataSourceType

SearchIndexerDataSourceType: string

Defines values for SearchIndexerDataSourceType.
KnownSearchIndexerDataSourceType can be used interchangeably with SearchIndexerDataSourceType, this enum contains the known values that the service supports.

Known values supported by the service

azuresql: Indicates an Azure SQL datasource.
cosmosdb: Indicates a CosmosDB datasource.
azureblob: Indicates an Azure Blob datasource.
azuretable: Indicates an Azure Table datasource.
mysql: Indicates a MySql datasource.
adlsgen2: Indicates an ADLS Gen2 datasource.
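A sketch of a data source connection definition using one of these type values (the name, container, and connection string are placeholders):

```typescript
// Hypothetical data source connection for an Azure Blob container.
const dataSource = {
  name: "hotels-blob-ds",
  type: "azureblob" as const,
  connectionString: "<storage-connection-string>", // placeholder, not a real connection
  container: { name: "hotel-docs" },
};
```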

SearchIndexerDataUserAssignedIdentity

SearchIndexerDataUserAssignedIdentity: SearchIndexerDataIdentity & { odatatype: "#Microsoft.Azure.Search.SearchIndexerDataUserAssignedIdentity"; userAssignedIdentity: string }

Specifies the identity for a datasource to use.

SearchIndexerKnowledgeStoreBlobProjectionSelector

SearchIndexerKnowledgeStoreBlobProjectionSelector: SearchIndexerKnowledgeStoreProjectionSelector & { storageContainer: string }

Abstract class to share properties between concrete selectors.

SearchIndexerKnowledgeStoreFileProjectionSelector

SearchIndexerKnowledgeStoreFileProjectionSelector: SearchIndexerKnowledgeStoreBlobProjectionSelector & {}

Projection definition for what data to store in Azure Files.

SearchIndexerKnowledgeStoreObjectProjectionSelector

SearchIndexerKnowledgeStoreObjectProjectionSelector: SearchIndexerKnowledgeStoreBlobProjectionSelector & {}

Projection definition for what data to store in Azure Blob.

SearchIndexerKnowledgeStoreTableProjectionSelector

SearchIndexerKnowledgeStoreTableProjectionSelector: SearchIndexerKnowledgeStoreProjectionSelector & { tableName: string }

Description for what data to store in Azure Tables.

SearchIndexerSkillUnion

SearchIndexingBufferedSenderDeleteDocumentsOptions

SearchIndexingBufferedSenderDeleteDocumentsOptions: OperationOptions

Options for SearchIndexingBufferedSenderDeleteDocuments.

SearchIndexingBufferedSenderFlushDocumentsOptions

SearchIndexingBufferedSenderFlushDocumentsOptions: OperationOptions

Options for SearchIndexingBufferedSenderFlushDocuments.

SearchIndexingBufferedSenderMergeDocumentsOptions

SearchIndexingBufferedSenderMergeDocumentsOptions: OperationOptions

Options for SearchIndexingBufferedSenderMergeDocuments.

SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions

SearchIndexingBufferedSenderMergeOrUploadDocumentsOptions: OperationOptions

Options for SearchIndexingBufferedSenderMergeOrUploadDocuments.

SearchIndexingBufferedSenderUploadDocumentsOptions

SearchIndexingBufferedSenderUploadDocumentsOptions: OperationOptions

Options for SearchIndexingBufferedSenderUploadDocuments.

SearchIterator

SearchIterator<Fields>: PagedAsyncIterableIterator<SearchResult<Fields>, SearchDocumentsPageResult<Fields>, ListSearchResultsPageSettings>

An iterator for search results of a particular query. Will make requests as needed during iteration. Use .byPage() to make one request to the server per iteration.

Type parameters

  • Fields
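
A SearchIterator is consumed with for await; the sketch below shows the pattern. Since a real iterator requires a live service, a hand-rolled async generator stands in for the results that SearchClient.search() would return; the names fakeSearchIterator and collectIds are illustrative, not part of the SDK.

```typescript
// Sketch of consuming a SearchIterator-style paged async iterable.
// `fakeSearchIterator` stands in for the results a real
// SearchClient.search() call would yield; the shape (an object with a
// `document` property) mirrors SearchResult.

interface FakeResult {
  document: { id: string };
}

async function* fakeSearchIterator(): AsyncIterableIterator<FakeResult> {
  // A real SearchIterator fetches pages lazily from the service.
  yield { document: { id: "1" } };
  yield { document: { id: "2" } };
}

async function collectIds(): Promise<string[]> {
  const ids: string[] = [];
  // Per-result iteration; call .byPage() on a real SearchIterator to
  // get one request per page instead of one logical result at a time.
  for await (const result of fakeSearchIterator()) {
    ids.push(result.document.id);
  }
  return ids;
}
```
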

SearchMode

SearchMode: "any" | "all"

Defines values for SearchMode.

SearchServiceClientGetServiceStatisticsResponse

SearchServiceClientGetServiceStatisticsResponse: ServiceStatistics

Contains response data for the getServiceStatistics operation.

SentimentSkill

SentimentSkill: SearchIndexerSkill & { defaultLanguageCode?: SentimentSkillLanguage; odatatype: "#Microsoft.Skills.Text.SentimentSkill" }

Text analytics positive-negative sentiment analysis, scored as a floating point value in the range 0 to 1.

SentimentSkillLanguage

SentimentSkillLanguage: string

Defines values for SentimentSkillLanguage.
KnownSentimentSkillLanguage can be used interchangeably with SentimentSkillLanguage, this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
nl: Dutch
en: English
fi: Finnish
fr: French
de: German
el: Greek
it: Italian
no: Norwegian (Bokmaal)
pl: Polish
pt-PT: Portuguese (Portugal)
ru: Russian
es: Spanish
sv: Swedish
tr: Turkish

SentimentSkillV3

SentimentSkillV3: SearchIndexerSkill & { defaultLanguageCode?: undefined | string; includeOpinionMining?: undefined | false | true; modelVersion?: undefined | string; odatatype: "#Microsoft.Skills.Text.V3.SentimentSkill" }

Using the Text Analytics API, evaluates unstructured text and, for each record, provides sentiment labels (such as "negative", "neutral", and "positive") based on the highest confidence score found by the service at the sentence and document level.

ShaperSkill

ShaperSkill: SearchIndexerSkill & { odatatype: "#Microsoft.Skills.Util.ShaperSkill" }

A skill for reshaping the outputs. It creates a complex type to support composite fields (also known as multipart fields).

ShingleTokenFilter

ShingleTokenFilter: TokenFilter & { filterToken?: undefined | string; maxShingleSize?: undefined | number; minShingleSize?: undefined | number; odatatype: "#Microsoft.Azure.Search.ShingleTokenFilter"; outputUnigrams?: undefined | false | true; outputUnigramsIfNoShingles?: undefined | false | true; tokenSeparator?: undefined | string }

Creates combinations of tokens as a single token. This token filter is implemented using Apache Lucene.

SimilarityAlgorithm

SimilarityAlgorithm: ClassicSimilarity | BM25Similarity

Contains the possible cases for Similarity.

SimilarityUnion

SkillsetsCreateOrUpdateResponse

SkillsetsCreateOrUpdateResponse: SearchIndexerSkillset

Contains response data for the createOrUpdate operation.

SkillsetsCreateResponse

SkillsetsCreateResponse: SearchIndexerSkillset

Contains response data for the create operation.

SkillsetsGetResponse

SkillsetsGetResponse: SearchIndexerSkillset

Contains response data for the get operation.

SkillsetsListResponse

SkillsetsListResponse: ListSkillsetsResult

Contains response data for the list operation.

SnowballTokenFilter

SnowballTokenFilter: TokenFilter & { language: SnowballTokenFilterLanguage; odatatype: "#Microsoft.Azure.Search.SnowballTokenFilter" }

A filter that stems words using a Snowball-generated stemmer. This token filter is implemented using Apache Lucene.

SnowballTokenFilterLanguage

SnowballTokenFilterLanguage: "armenian" | "basque" | "catalan" | "danish" | "dutch" | "english" | "finnish" | "french" | "german" | "german2" | "hungarian" | "italian" | "kp" | "lovins" | "norwegian" | "porter" | "portuguese" | "romanian" | "russian" | "spanish" | "swedish" | "turkish"

Defines values for SnowballTokenFilterLanguage.

SoftDeleteColumnDeletionDetectionPolicy

SoftDeleteColumnDeletionDetectionPolicy: DataDeletionDetectionPolicy & { odatatype: "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy"; softDeleteColumnName?: undefined | string; softDeleteMarkerValue?: undefined | string }

Defines a data deletion detection policy that implements a soft-deletion strategy. It determines whether an item should be deleted based on the value of a designated 'soft delete' column.

Speller

Speller: string

Defines values for Speller.
KnownSpeller can be used interchangeably with Speller, this enum contains the known values that the service supports.

Known values supported by the service

none: Speller not enabled.
lexicon: Speller corrects individual query terms using a static lexicon for the language specified by the queryLanguage parameter.

SplitSkill

SplitSkill: SearchIndexerSkill & { defaultLanguageCode?: SplitSkillLanguage; maxPageLength?: undefined | number; odatatype: "#Microsoft.Skills.Text.SplitSkill"; textSplitMode?: TextSplitMode }

A skill to split a string into chunks of text.
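
A plausible SplitSkill definition, based on the shape above, might look like the following. The inputs/outputs mappings come from the base SearchIndexerSkill type; the specific mapping names ("text", "textItems") follow common skillset conventions and are illustrative, not required values.

```typescript
// Illustrative SplitSkill definition for a skillset. Property names
// follow the SplitSkill type alias above; the input/output field
// mappings are example values, not SDK requirements.
const splitSkill = {
  odatatype: "#Microsoft.Skills.Text.SplitSkill" as const,
  defaultLanguageCode: "en",
  textSplitMode: "pages",
  maxPageLength: 4000,
  inputs: [{ name: "text", source: "/document/content" }],
  outputs: [{ name: "textItems", targetName: "pages" }],
};
```
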

SplitSkillLanguage

SplitSkillLanguage: string

Defines values for SplitSkillLanguage.
KnownSplitSkillLanguage can be used interchangeably with SplitSkillLanguage, this enum contains the known values that the service supports.

Known values supported by the service

da: Danish
de: German
en: English
es: Spanish
fi: Finnish
fr: French
it: Italian
ko: Korean
pt: Portuguese

SqlIntegratedChangeTrackingPolicy

SqlIntegratedChangeTrackingPolicy: DataChangeDetectionPolicy & { odatatype: "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy" }

Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.

StemmerOverrideTokenFilter

StemmerOverrideTokenFilter: TokenFilter & { odatatype: "#Microsoft.Azure.Search.StemmerOverrideTokenFilter"; rules: string[] }

Provides the ability to override other stemming filters with custom dictionary-based stemming. Any dictionary-stemmed terms will be marked as keywords so that they will not be stemmed with stemmers down the chain. Must be placed before any stemming filters. This token filter is implemented using Apache Lucene.

StemmerTokenFilter

StemmerTokenFilter: TokenFilter & { language: StemmerTokenFilterLanguage; odatatype: "#Microsoft.Azure.Search.StemmerTokenFilter" }

Language specific stemming filter. This token filter is implemented using Apache Lucene.

StemmerTokenFilterLanguage

StemmerTokenFilterLanguage: "arabic" | "armenian" | "basque" | "brazilian" | "bulgarian" | "catalan" | "czech" | "danish" | "dutch" | "dutchKp" | "english" | "lightEnglish" | "minimalEnglish" | "possessiveEnglish" | "porter2" | "lovins" | "finnish" | "lightFinnish" | "french" | "lightFrench" | "minimalFrench" | "galician" | "minimalGalician" | "german" | "german2" | "lightGerman" | "minimalGerman" | "greek" | "hindi" | "hungarian" | "lightHungarian" | "indonesian" | "irish" | "italian" | "lightItalian" | "sorani" | "latvian" | "norwegian" | "lightNorwegian" | "minimalNorwegian" | "lightNynorsk" | "minimalNynorsk" | "portuguese" | "lightPortuguese" | "minimalPortuguese" | "portugueseRslp" | "romanian" | "russian" | "lightRussian" | "spanish" | "lightSpanish" | "swedish" | "lightSwedish" | "turkish"

Defines values for StemmerTokenFilterLanguage.

StopAnalyzer

StopAnalyzer: LexicalAnalyzer & { odatatype: "#Microsoft.Azure.Search.StopAnalyzer"; stopwords?: string[] }

Divides text at non-letters; applies the lowercase and stopword token filters. This analyzer is implemented using Apache Lucene.

StopwordsList

StopwordsList: "arabic" | "armenian" | "basque" | "brazilian" | "bulgarian" | "catalan" | "czech" | "danish" | "dutch" | "english" | "finnish" | "french" | "galician" | "german" | "greek" | "hindi" | "hungarian" | "indonesian" | "irish" | "italian" | "latvian" | "norwegian" | "persian" | "portuguese" | "romanian" | "russian" | "sorani" | "spanish" | "swedish" | "thai" | "turkish"

Defines values for StopwordsList.

StopwordsTokenFilter

StopwordsTokenFilter: TokenFilter & { ignoreCase?: undefined | false | true; odatatype: "#Microsoft.Azure.Search.StopwordsTokenFilter"; removeTrailingStopWords?: undefined | false | true; stopwords?: string[]; stopwordsList?: StopwordsList }

Removes stop words from a token stream. This token filter is implemented using Apache Lucene.

SynonymMapsCreateOrUpdateResponse

SynonymMapsCreateOrUpdateResponse: SynonymMap

Contains response data for the createOrUpdate operation.

SynonymMapsCreateResponse

SynonymMapsCreateResponse: SynonymMap

Contains response data for the create operation.

SynonymMapsGetResponse

SynonymMapsGetResponse: SynonymMap

Contains response data for the get operation.

SynonymMapsListResponse

SynonymMapsListResponse: ListSynonymMapsResult

Contains response data for the list operation.

SynonymTokenFilter

SynonymTokenFilter: TokenFilter & { expand?: undefined | false | true; ignoreCase?: undefined | false | true; odatatype: "#Microsoft.Azure.Search.SynonymTokenFilter"; synonyms: string[] }

Matches single or multi-word synonyms in a token stream. This token filter is implemented using Apache Lucene.

TagScoringFunction

TagScoringFunction: ScoringFunction & { parameters: TagScoringParameters; type: "tag" }

Defines a function that boosts scores of documents with string values matching a given list of tags.

TextSplitMode

TextSplitMode: string

Defines values for TextSplitMode.
KnownTextSplitMode can be used interchangeably with TextSplitMode, this enum contains the known values that the service supports.

Known values supported by the service

pages: Split the text into individual pages.
sentences: Split the text into individual sentences.

TextTranslationSkill

TextTranslationSkill: SearchIndexerSkill & { defaultFromLanguageCode?: TextTranslationSkillLanguage; defaultToLanguageCode: TextTranslationSkillLanguage; odatatype: "#Microsoft.Skills.Text.TranslationSkill"; suggestedFrom?: TextTranslationSkillLanguage }

A skill to translate text from one language to another.

TextTranslationSkillLanguage

TextTranslationSkillLanguage: string

Defines values for TextTranslationSkillLanguage.
KnownTextTranslationSkillLanguage can be used interchangeably with TextTranslationSkillLanguage, this enum contains the known values that the service supports.

Known values supported by the service

af: Afrikaans
ar: Arabic
bn: Bangla
bs: Bosnian (Latin)
bg: Bulgarian
yue: Cantonese (Traditional)
ca: Catalan
zh-Hans: Chinese Simplified
zh-Hant: Chinese Traditional
hr: Croatian
cs: Czech
da: Danish
nl: Dutch
en: English
et: Estonian
fj: Fijian
fil: Filipino
fi: Finnish
fr: French
de: German
el: Greek
ht: Haitian Creole
he: Hebrew
hi: Hindi
mww: Hmong Daw
hu: Hungarian
is: Icelandic
id: Indonesian
it: Italian
ja: Japanese
sw: Kiswahili
tlh: Klingon
tlh-Latn: Klingon (Latin script)
tlh-Piqd: Klingon (Klingon script)
ko: Korean
lv: Latvian
lt: Lithuanian
mg: Malagasy
ms: Malay
mt: Maltese
nb: Norwegian
fa: Persian
pl: Polish
pt: Portuguese
pt-br: Portuguese (Brazil)
pt-PT: Portuguese (Portugal)
otq: Queretaro Otomi
ro: Romanian
ru: Russian
sm: Samoan
sr-Cyrl: Serbian (Cyrillic)
sr-Latn: Serbian (Latin)
sk: Slovak
sl: Slovenian
es: Spanish
sv: Swedish
ty: Tahitian
ta: Tamil
te: Telugu
th: Thai
to: Tongan
tr: Turkish
uk: Ukrainian
ur: Urdu
vi: Vietnamese
cy: Welsh
yua: Yucatec Maya
ga: Irish
kn: Kannada
mi: Maori
ml: Malayalam
pa: Punjabi

TokenCharacterKind

TokenCharacterKind: "letter" | "digit" | "whitespace" | "punctuation" | "symbol"

Defines values for TokenCharacterKind.

TokenFilterName

TokenFilterName: string

Defines values for TokenFilterName.
KnownTokenFilterName can be used interchangeably with TokenFilterName, this enum contains the known values that the service supports.

Known values supported by the service

arabic_normalization: A token filter that applies the Arabic normalizer to normalize the orthography. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html
apostrophe: Strips all characters after an apostrophe (including the apostrophe itself). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/tr/ApostropheFilter.html
asciifolding: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html
cjk_bigram: Forms bigrams of CJK terms that are generated from the standard tokenizer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html
cjk_width: Normalizes CJK width differences. Folds fullwidth ASCII variants into the equivalent basic Latin, and half-width Katakana variants into the equivalent Kana. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html
classic: Removes English possessives, and dots from acronyms. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/standard/ClassicFilter.html
common_grams: Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html
edgeNGram_v2: Generates n-grams of the given size(s) starting from the front or the back of an input token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html
elision: Removes elisions. For example, "l'avion" (the plane) will be converted to "avion" (plane). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html
german_normalization: Normalizes German characters according to the heuristics of the German2 snowball algorithm. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html
hindi_normalization: Normalizes text in Hindi to remove some differences in spelling variations. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html
indic_normalization: Normalizes the Unicode representation of text in Indian languages. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html
keyword_repeat: Emits each incoming token twice, once as keyword and once as non-keyword. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/KeywordRepeatFilter.html
kstem: A high-performance kstem filter for English. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/en/KStemFilter.html
length: Removes words that are too long or too short. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html
limit: Limits the number of tokens while indexing. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html
lowercase: Normalizes token text to lower case. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html
nGram_v2: Generates n-grams of the given size(s). See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenFilter.html
persian_normalization: Applies normalization for Persian. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizationFilter.html
phonetic: Create tokens for phonetic matches. See https://lucene.apache.org/core/4_10_3/analyzers-phonetic/org/apache/lucene/analysis/phonetic/package-tree.html
porter_stem: Uses the Porter stemming algorithm to transform the token stream. See http://tartarus.org/~martin/PorterStemmer
reverse: Reverses the token string. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html
scandinavian_normalization: Normalizes use of the interchangeable Scandinavian characters. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html
scandinavian_folding: Folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o. It also discriminates against use of double vowels aa, ae, ao, oe and oo, leaving just the first one. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html
shingle: Creates combinations of tokens as a single token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html
snowball: A filter that stems words using a Snowball-generated stemmer. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html
sorani_normalization: Normalizes the Unicode representation of Sorani text. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html
stemmer: Language specific stemming filter. See https://docs.microsoft.com/rest/api/searchservice/Custom-analyzers-in-Azure-Search#TokenFilters
stopwords: Removes stop words from a token stream. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html
trim: Trims leading and trailing whitespace from tokens. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html
truncate: Truncates the terms to a specific length. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html
unique: Filters out tokens with same text as the previous token. See http://lucene.apache.org/core/4_10_3/analyzers-common/org/apache/lucene/analysis/miscellaneous/RemoveDuplicatesTokenFilter.html
uppercase: Normalizes token text to upper case. See https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html
word_delimiter: Splits words into subwords and performs optional transformations on subword groups.

TokenFilterUnion

TruncateTokenFilter

TruncateTokenFilter: TokenFilter & { length?: undefined | number; odatatype: "#Microsoft.Azure.Search.TruncateTokenFilter" }

Truncates the terms to a specific length. This token filter is implemented using Apache Lucene.

UaxUrlEmailTokenizer

UaxUrlEmailTokenizer: LexicalTokenizer & { maxTokenLength?: undefined | number; odatatype: "#Microsoft.Azure.Search.UaxUrlEmailTokenizer" }

Tokenizes URLs and emails as one token. This tokenizer is implemented using Apache Lucene.

UniqueTokenFilter

UniqueTokenFilter: TokenFilter & { odatatype: "#Microsoft.Azure.Search.UniqueTokenFilter"; onlyOnSamePosition?: undefined | false | true }

Filters out tokens with same text as the previous token. This token filter is implemented using Apache Lucene.

UploadDocumentsOptions

UploadDocumentsOptions: IndexDocumentsOptions

Options for the upload documents operation.

VisualFeature

VisualFeature: string

Defines values for VisualFeature.
KnownVisualFeature can be used interchangeably with VisualFeature, this enum contains the known values that the service supports.

Known values supported by the service

adult: Visual features recognized as adult persons.
brands: Visual features recognized as commercial brands.
categories: Categories.
description: Description.
faces: Visual features recognized as people faces.
objects: Visual features recognized as objects.
tags: Tags.

WebApiSkill

WebApiSkill: SearchIndexerSkill & { batchSize?: undefined | number; degreeOfParallelism?: undefined | number; httpHeaders?: undefined | {}; httpMethod?: undefined | string; odatatype: "#Microsoft.Skills.Custom.WebApiSkill"; timeout?: undefined | string; uri: string }

A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.

WordDelimiterTokenFilter

WordDelimiterTokenFilter: TokenFilter & { catenateAll?: undefined | false | true; catenateNumbers?: undefined | false | true; catenateWords?: undefined | false | true; generateNumberParts?: undefined | false | true; generateWordParts?: undefined | false | true; odatatype: "#Microsoft.Azure.Search.WordDelimiterTokenFilter"; preserveOriginal?: undefined | false | true; protectedWords?: string[]; splitOnCaseChange?: undefined | false | true; splitOnNumerics?: undefined | false | true; stemEnglishPossessive?: undefined | false | true }

Splits words into subwords and performs optional transformations on subword groups. This token filter is implemented using Apache Lucene.

Variables

Const API_KEY_HEADER_NAME

API_KEY_HEADER_NAME: "api-key" = "api-key"

Const AcceptHeaderName

AcceptHeaderName: "Accept" = "Accept"

Const DEFAULT_BATCH_SIZE

DEFAULT_BATCH_SIZE: number = 512

Default Batch Size

Const DEFAULT_FLUSH_WINDOW

DEFAULT_FLUSH_WINDOW: number = 60000

Default window flush interval

Const DEFAULT_MAX_RETRY_DELAY

DEFAULT_MAX_RETRY_DELAY: number = 60000

Default Max Delay between retries.

Const DEFAULT_RETRY_COUNT

DEFAULT_RETRY_COUNT: number = 3

Default number of times to retry.

Const DEFAULT_RETRY_DELAY

DEFAULT_RETRY_DELAY: number = 800

Default retry delay.
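
The retry constants above might combine into an exponential backoff schedule along these lines. The doubling policy is an assumption for illustration; only the constant values themselves come from the package.

```typescript
// Sketch of an exponential backoff schedule built from the package's
// retry defaults. The doubling policy is assumed, not taken from the
// SDK's actual retry implementation.
const DEFAULT_RETRY_DELAY = 800; // ms
const DEFAULT_MAX_RETRY_DELAY = 60000; // ms
const DEFAULT_RETRY_COUNT = 3;

function backoffDelays(): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < DEFAULT_RETRY_COUNT; attempt++) {
    // Double the base delay each attempt, capped at the max delay.
    delays.push(
      Math.min(DEFAULT_RETRY_DELAY * 2 ** attempt, DEFAULT_MAX_RETRY_DELAY)
    );
  }
  return delays;
}
```
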

Const DEFAULT_SEARCH_SCOPE

DEFAULT_SEARCH_SCOPE: "https://search.azure.com/.default" = "https://search.azure.com/.default"

Const GeoJSONPointTypeName

GeoJSONPointTypeName: "Point" = "Point"

Const ISO8601DateRegex

ISO8601DateRegex: Object = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d{3})?Z$/i
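
The regex above matches UTC timestamps with an optional three-digit millisecond component and a literal Z suffix. It is reproduced below (same pattern as in the declaration) to show which strings pass:

```typescript
// Same pattern as the ISO8601DateRegex constant above: date, "T",
// time, optional ".mmm" milliseconds, literal "Z" (case-insensitive).
const ISO8601DateRegex = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d{3})?Z$/i;

const matches = ISO8601DateRegex.test("2021-04-30T12:34:56.789Z"); // true
const noOffset = ISO8601DateRegex.test("2021-04-30T12:34:56+02:00"); // false: only a literal Z is accepted
```
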

Const SDK_VERSION

SDK_VERSION: string = "11.3.0-beta.5"

Const WorldGeodeticSystem1984

WorldGeodeticSystem1984: "EPSG:4326" = "EPSG:4326"

Const logger

logger: any = createClientLogger("search")

The @azure/logger configuration for this package.

Const odataMetadataPolicy

odataMetadataPolicy: "OdataMetadataPolicy" = "OdataMetadataPolicy"

Const readFileAsync

readFileAsync: any = promisify(fs.readFile)

Const searchApiKeyCredentialPolicy

searchApiKeyCredentialPolicy: "SearchApiKeyCredentialPolicy" = "SearchApiKeyCredentialPolicy"

Const serializer

serializer: any = coreClient.createSerializer(Mappers, /* isXml */ false)

Functions

convertAnalyzersToGenerated

convertAnalyzersToPublic

convertCognitiveServicesAccountToGenerated

convertCognitiveServicesAccountToPublic

convertDataChangeDetectionPolicyToPublic

convertDataDeletionDetectionPolicyToPublic

convertEncryptionKeyToGenerated

convertEncryptionKeyToPublic

  • convertEncryptionKeyToPublic(encryptionKey?: GeneratedSearchResourceEncryptionKey): SearchResourceEncryptionKey | undefined
  • Parameters

    • Optional encryptionKey: GeneratedSearchResourceEncryptionKey

    Returns SearchResourceEncryptionKey | undefined

convertFieldsToGenerated

  • convertFieldsToGenerated(fields: SearchField[]): GeneratedSearchField[]

convertFieldsToPublic

  • convertFieldsToPublic(fields: GeneratedSearchField[]): SearchField[]

convertSearchIndexerDataIdentityToPublic

convertSimilarityToGenerated

convertSimilarityToPublic

convertSkillsToPublic

convertTokenFiltersToGenerated

convertTokenizersToGenerated

convertTokenizersToPublic

createSearchApiKeyCredentialPolicy

  • createSearchApiKeyCredentialPolicy(credential: KeyCredential): PipelinePolicy
  • Create an HTTP pipeline policy to authenticate a request using an AzureKeyCredential for Azure Cognitive Search

    Parameters

    • credential: KeyCredential

    Returns PipelinePolicy

createSynonymMapFromFile

  • createSynonymMapFromFile(name: string, filePath: string): Promise<SynonymMap>
  • Helper method to create a SynonymMap object. This is a Node.js-only method.

    Parameters

    • name: string

      Name of the SynonymMap.

    • filePath: string

      Path of the file that contains the synonyms (separated by new lines)

    Returns Promise<SynonymMap>

    SynonymMap object
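
A minimal sketch of the equivalent logic: read the file, split on newlines, and keep the non-empty rules. This is not the SDK's implementation; the { name, synonyms } return shape follows the SynonymMap type, and synonymMapFromFile is a hypothetical name.

```typescript
import { promises as fs } from "fs";

// Sketch of what createSynonymMapFromFile does: one synonym rule per
// line, blank lines ignored. The real helper may differ in details.
async function synonymMapFromFile(
  name: string,
  filePath: string
): Promise<{ name: string; synonyms: string[] }> {
  const contents = await fs.readFile(filePath, "utf8");
  const synonyms = contents
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
  return { name, synonyms };
}
```
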

decode

  • decode(value: string): string
  • Decodes a base64 string into a regular string.

    Parameters

    • value: string

      The base64 string to decode.

    Returns string

delay

  • delay(timeInMs: number): Promise<void>
  • A wrapper for setTimeout that resolves a promise after timeInMs milliseconds.

    Parameters

    • timeInMs: number

      The number of milliseconds to be delayed.

    Returns Promise<void>

    Promise that is resolved after timeInMs
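
Such a wrapper is typically a one-liner; a minimal sketch:

```typescript
// Minimal sketch of the delay helper: a promisified setTimeout.
function delay(timeInMs: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, timeInMs));
}
```

Awaiting the returned promise (e.g. await delay(1000)) pauses the current async function without blocking the event loop.
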

deserialize

  • deserialize<OutputT>(obj: unknown): OutputT
  • Type parameters

    • OutputT

    Parameters

    • obj: unknown

    Returns OutputT

deserializeDates

  • deserializeDates(input: unknown): Date | unknown

deserializeGeoPoint

deserializeSpecialNumbers

  • deserializeSpecialNumbers(input: unknown): unknown

encode

  • encode(value: string): string
  • Encodes a string in base64 format.

    Parameters

    • value: string

      The string to encode.

    Returns string
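
In Node.js, the encode/decode pair above is most naturally built on Buffer; the sketch below shows the likely shape (browser builds would use btoa/atob instead). This is an illustration, not the SDK's source.

```typescript
// Sketch of base64 encode/decode using Node.js Buffer.
function encode(value: string): string {
  return Buffer.from(value, "utf8").toString("base64");
}

function decode(value: string): string {
  return Buffer.from(value, "base64").toString("utf8");
}
```
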

escapeQuotesIfString

  • escapeQuotesIfString(input: unknown, previous: string): string | unknown
  • Parameters

    • input: unknown
    • previous: string

    Returns string | unknown

formatNullAndUndefined

  • formatNullAndUndefined(input: unknown): string | unknown
  • Parameters

    • input: unknown

    Returns string | unknown

generatedDataSourceToPublicDataSource

generatedIndexToPublicIndex

  • generatedIndexToPublicIndex(generatedIndex: GeneratedSearchIndex): SearchIndex
  • Parameters

    • generatedIndex: GeneratedSearchIndex

    Returns SearchIndex

generatedSearchIndexerToPublicSearchIndexer

  • generatedSearchIndexerToPublicSearchIndexer(indexer: GeneratedSearchIndexer): SearchIndexer
  • Parameters

    • indexer: GeneratedSearchIndexer

    Returns SearchIndexer

generatedSearchResultToPublicSearchResult

  • generatedSearchResultToPublicSearchResult<T>(results: GeneratedSearchResult[]): SearchResult<T>[]

generatedSkillsetToPublicSkillset

  • generatedSkillsetToPublicSkillset(generatedSkillset: GeneratedSearchIndexerSkillset): SearchIndexerSkillset
  • Parameters

    • generatedSkillset: GeneratedSearchIndexerSkillset

    Returns SearchIndexerSkillset

generatedSuggestDocumentsResultToPublicSuggestDocumentsResult

  • generatedSuggestDocumentsResultToPublicSuggestDocumentsResult<T>(searchDocumentsResult: GeneratedSuggestDocumentsResult): SuggestDocumentsResult<T>
  • Type parameters

    • T

    Parameters

    • searchDocumentsResult: GeneratedSuggestDocumentsResult

    Returns SuggestDocumentsResult<T>

generatedSynonymMapToPublicSynonymMap

  • generatedSynonymMapToPublicSynonymMap(synonymMap: GeneratedSynonymMap): SynonymMap

getRandomIntegerInclusive

  • getRandomIntegerInclusive(min: number, max: number): number

isComplexField

  • isComplexField(field: SearchField): field is ComplexField

isCoordinateArray

  • isCoordinateArray(maybeCoordinates: any): boolean

isCrs

  • isCrs(maybeCrs: any): boolean

isCrsProperties

  • isCrsProperties(maybeProperties: any): boolean

isGeoJSONPoint

  • isGeoJSONPoint(obj: any): obj is GeoJSONPoint

isValidObject

  • isValidObject(obj: any, options?: { propertyValidator?: undefined | ((keyName: string) => boolean); requiredKeys?: string[] }): boolean
  • Parameters

    • obj: any
    • Default value options: { propertyValidator?: undefined | ((keyName: string) => boolean); requiredKeys?: string[] } = {}
      • Optional propertyValidator?: undefined | ((keyName: string) => boolean)
      • Optional requiredKeys?: string[]

    Returns boolean

odata

  • odata(strings: TemplateStringsArray, ...values: unknown[]): string
  • Escapes an OData filter expression to avoid errors with quoting string literals. Example usage:

    const baseRateMax = 200;
    const ratingMin = 4;
    const filter = odata`Rooms/any(room: room/BaseRate lt ${baseRateMax}) and Rating ge ${ratingMin}`;

    For more information on supported syntax see: https://docs.microsoft.com/en-us/azure/search/search-query-odata-filter

    Parameters

    • strings: TemplateStringsArray

      Array of strings for the expression

    • Rest ...values: unknown[]

      Array of values for the expression

    Returns string
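
A simplified sketch of what such a template tag might do: double any single quotes inside interpolated string values, which is the OData escape for a quote inside a string literal. The real helper may handle more cases; odataSketch is a hypothetical name.

```typescript
// Simplified odata-style template tag: string values get embedded
// single quotes doubled; other values are stringified as-is.
function odataSketch(
  strings: TemplateStringsArray,
  ...values: unknown[]
): string {
  let result = strings[0];
  for (let i = 0; i < values.length; i++) {
    const value = values[i];
    const escaped =
      typeof value === "string" ? value.replace(/'/g, "''") : String(value);
    result += escaped + strings[i + 1];
  }
  return result;
}

const name = "O'Reilly";
const filter = odataSketch`HotelName eq '${name}'`;
// → "HotelName eq 'O''Reilly'"
```
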

publicDataSourceToGeneratedDataSource

publicIndexToGeneratedIndex

  • publicIndexToGeneratedIndex(index: SearchIndex): GeneratedSearchIndex

publicSearchIndexerToGeneratedSearchIndexer

  • publicSearchIndexerToGeneratedSearchIndexer(indexer: SearchIndexer): GeneratedSearchIndexer
  • Parameters

    • indexer: SearchIndexer

    Returns GeneratedSearchIndexer

publicSkillsetToGeneratedSkillset

  • publicSkillsetToGeneratedSkillset(skillset: SearchIndexerSkillset): GeneratedSearchIndexerSkillset
  • Parameters

    • skillset: SearchIndexerSkillset

    Returns GeneratedSearchIndexerSkillset

publicSynonymMapToGeneratedSynonymMap

  • publicSynonymMapToGeneratedSynonymMap(synonymMap: SynonymMap): GeneratedSynonymMap

serialize

  • serialize<OutputT>(obj: unknown): OutputT
  • Type parameters

    • OutputT

    Parameters

    • obj: unknown

    Returns OutputT

serializeSpecialNumbers

  • serializeSpecialNumbers(input: unknown): unknown

walk

  • walk(start: unknown, mapper: (val: any) => any): any
  • Parameters

    • start: unknown
    • mapper: (val: any) => any
        • (val: any): any
        • Parameters

          • val: any

          Returns any

    Returns any
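
A recursive walker with this signature typically applies the mapper to every value, recursing into arrays and plain objects. The sketch below is illustrative; the real helper's traversal order and edge-case handling may differ.

```typescript
// Sketch of a recursive walk: apply `mapper` to each value, then
// recurse into arrays and plain objects.
function walkSketch(start: unknown, mapper: (val: any) => any): any {
  const mapped = mapper(start);
  if (Array.isArray(mapped)) {
    return mapped.map((item) => walkSketch(item, mapper));
  }
  if (mapped !== null && typeof mapped === "object") {
    const result: Record<string, any> = {};
    for (const [key, value] of Object.entries(mapped)) {
      result[key] = walkSketch(value, mapper);
    }
    return result;
  }
  return mapped;
}
```
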

Object literals

Const AnalyzeRequest

AnalyzeRequest: object

type

type: object

className

className: string = "AnalyzeRequest"

name

name: string = "Composite"

modelProperties

modelProperties: object

analyzer

analyzer: object

serializedName

serializedName: string = "analyzer"

type

type: object

name

name: string = "String"

charFilters

charFilters: object

serializedName

serializedName: string = "charFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

normalizer

normalizer: object

serializedName

serializedName: string = "normalizer"

type

type: object

name

name: string = "String"

text

text: object

required

required: boolean = true

serializedName

serializedName: string = "text"

type

type: object

name

name: string = "String"

tokenFilters

tokenFilters: object

serializedName

serializedName: string = "tokenFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

tokenizer

tokenizer: object

serializedName

serializedName: string = "tokenizer"

type

type: object

name

name: string = "String"
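The flat listing above is how the documentation generator renders a nested mapper. Reassembled into its nested shape, using only the keys and values shown (`stringType` is a local shorthand introduced here):

```typescript
// The AnalyzeRequest composite mapper, reconstructed from the listing above.
const stringType = { name: "String" };

const AnalyzeRequestMapper = {
  type: {
    name: "Composite",
    className: "AnalyzeRequest",
    modelProperties: {
      analyzer: { serializedName: "analyzer", type: stringType },
      charFilters: {
        serializedName: "charFilters",
        type: { name: "Sequence", element: { type: stringType } },
      },
      normalizer: { serializedName: "normalizer", type: stringType },
      text: { required: true, serializedName: "text", type: stringType },
      tokenFilters: {
        serializedName: "tokenFilters",
        type: { name: "Sequence", element: { type: stringType } },
      },
      tokenizer: { serializedName: "tokenizer", type: stringType },
    },
  },
};
```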

Const AnalyzeResult

AnalyzeResult: object

type

type: object

className

className: string = "AnalyzeResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

tokens

tokens: object

required

required: boolean = true

serializedName

serializedName: string = "tokens"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "AnalyzedTokenInfo"

name

name: string = "Composite"

Const AnalyzedTokenInfo

AnalyzedTokenInfo: object

type

type: object

className

className: string = "AnalyzedTokenInfo"

name

name: string = "Composite"

modelProperties

modelProperties: object

endOffset

endOffset: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "endOffset"

type

type: object

name

name: string = "Number"

position

position: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "position"

type

type: object

name

name: string = "Number"

startOffset

startOffset: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "startOffset"

type

type: object

name

name: string = "Number"

token

token: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "token"

type

type: object

name

name: string = "String"

Const AnswerResult

AnswerResult: object

type

type: object

className

className: string = "AnswerResult"

name

name: string = "Composite"

additionalProperties

additionalProperties: object

type

type: object

name

name: string = "Object"

modelProperties

modelProperties: object

highlights

highlights: object

nullable

nullable: boolean = true

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "highlights"

type

type: object

name

name: string = "String"

key

key: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "key"

type

type: object

name

name: string = "String"

score

score: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "score"

type

type: object

name

name: string = "Number"

text

text: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "text"

type

type: object

name

name: string = "String"

Const AsciiFoldingTokenFilter

AsciiFoldingTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.AsciiFoldingTokenFilter"

type

type: object

className

className: string = "AsciiFoldingTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

preserveOriginal

preserveOriginal: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "preserveOriginal"

type

type: object

name

name: string = "Boolean"

Const AutocompleteItem

AutocompleteItem: object

type

type: object

className

className: string = "AutocompleteItem"

name

name: string = "Composite"

modelProperties

modelProperties: object

queryPlusText

queryPlusText: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "queryPlusText"

type

type: object

name

name: string = "String"

text

text: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "text"

type

type: object

name

name: string = "String"

Const AutocompleteRequest

AutocompleteRequest: object

type

type: object

className

className: string = "AutocompleteRequest"

name

name: string = "Composite"

modelProperties

modelProperties: object

autocompleteMode

autocompleteMode: object

serializedName

serializedName: string = "autocompleteMode"

type

type: object

allowedValues

allowedValues: string[] = ["oneTerm", "twoTerms", "oneTermWithContext"]

name

name: string = "Enum"

filter

filter: object

serializedName

serializedName: string = "filter"

type

type: object

name

name: string = "String"

highlightPostTag

highlightPostTag: object

serializedName

serializedName: string = "highlightPostTag"

type

type: object

name

name: string = "String"

highlightPreTag

highlightPreTag: object

serializedName

serializedName: string = "highlightPreTag"

type

type: object

name

name: string = "String"

minimumCoverage

minimumCoverage: object

serializedName

serializedName: string = "minimumCoverage"

type

type: object

name

name: string = "Number"

searchFields

searchFields: object

serializedName

serializedName: string = "searchFields"

type

type: object

name

name: string = "String"

searchText

searchText: object

required

required: boolean = true

serializedName

serializedName: string = "search"

type

type: object

name

name: string = "String"

suggesterName

suggesterName: object

required

required: boolean = true

serializedName

serializedName: string = "suggesterName"

type

type: object

name

name: string = "String"

top

top: object

serializedName

serializedName: string = "top"

type

type: object

name

name: string = "Number"

useFuzzyMatching

useFuzzyMatching: object

serializedName

serializedName: string = "fuzzy"

type

type: object

name

name: string = "Boolean"
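Note that two client-side property names in this mapper differ from their wire names: `searchText` serializes as "search" and `useFuzzyMatching` as "fuzzy". A manual sketch of that rename (this hand-builds the payload for illustration; it is not the generated serializer, and the field values are invented):

```typescript
// Client-side request shape vs. the serialized wire shape declared above.
const clientRequest = {
  searchText: "sea",   // required; serializedName "search"
  suggesterName: "sg", // required
  useFuzzyMatching: true, // serializedName "fuzzy"
  top: 5,
};

const wirePayload = {
  search: clientRequest.searchText,
  suggesterName: clientRequest.suggesterName,
  fuzzy: clientRequest.useFuzzyMatching,
  top: clientRequest.top,
};
```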

Const AutocompleteResult

AutocompleteResult: object

type

type: object

className

className: string = "AutocompleteResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

coverage

coverage: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.coverage"

type

type: object

name

name: string = "Number"

results

results: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "AutocompleteItem"

name

name: string = "Composite"

Const AzureActiveDirectoryApplicationCredentials

AzureActiveDirectoryApplicationCredentials: object

type

type: object

className

className: string = "AzureActiveDirectoryApplicationCredentials"

name

name: string = "Composite"

modelProperties

modelProperties: object

applicationId

applicationId: object

required

required: boolean = true

serializedName

serializedName: string = "applicationId"

type

type: object

name

name: string = "String"

applicationSecret

applicationSecret: object

serializedName

serializedName: string = "applicationSecret"

type

type: object

name

name: string = "String"

Const BM25Similarity

BM25Similarity: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.BM25Similarity"

type

type: object

className

className: string = "BM25Similarity"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = Similarity.type.polymorphicDiscriminator

uberParent

uberParent: string = "Similarity"

modelProperties

modelProperties: object

b

b: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "b"

type

type: object

name

name: string = "Number"

k1

k1: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "k1"

type

type: object

name

name: string = "Number"
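Putting the discriminator and the two nullable parameters together, a BM25 similarity definition as it would appear on an index looks like this (the `b` and `k1` values here are purely illustrative):

```typescript
// A BM25Similarity definition using the serialized names declared above.
const bm25 = {
  "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
  b: 0.75,  // nullable Number
  k1: 1.2,  // nullable Number
};
```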

Const CaptionResult

CaptionResult: object

type

type: object

className

className: string = "CaptionResult"

name

name: string = "Composite"

additionalProperties

additionalProperties: object

type

type: object

name

name: string = "Object"

modelProperties

modelProperties: object

highlights

highlights: object

nullable

nullable: boolean = true

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "highlights"

type

type: object

name

name: string = "String"

text

text: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "text"

type

type: object

name

name: string = "String"

Const CharFilter

CharFilter: object

type

type: object

className

className: string = "CharFilter"

name

name: string = "Composite"

uberParent

uberParent: string = "CharFilter"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"
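The polymorphic discriminator declared here (wire name "@odata.type", client name "odatatype") is how a concrete subtype is selected during deserialization. A hedged sketch of that lookup, with an illustrative subset of registry entries (the real registry is generated; `resolveCharFilterClass` is a hypothetical name):

```typescript
// Sketch: the wire value of "@odata.type" selects the concrete class name;
// anything unrecognized falls back to the base CharFilter.
const charFilterSubtypes: Record<string, string> = {
  "#Microsoft.Azure.Search.MappingCharFilter": "MappingCharFilter",
  "#Microsoft.Azure.Search.PatternReplaceCharFilter": "PatternReplaceCharFilter",
};

function resolveCharFilterClass(wire: { "@odata.type": string }): string {
  return charFilterSubtypes[wire["@odata.type"]] ?? "CharFilter";
}
```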

Const CjkBigramTokenFilter

CjkBigramTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.CjkBigramTokenFilter"

type

type: object

className

className: string = "CjkBigramTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

ignoreScripts

ignoreScripts: object

serializedName

serializedName: string = "ignoreScripts"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

allowedValues

allowedValues: string[] = ["han", "hiragana", "katakana", "hangul"]

name

name: string = "Enum"

outputUnigrams

outputUnigrams: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "outputUnigrams"

type

type: object

name

name: string = "Boolean"

Const ClassicSimilarity

ClassicSimilarity: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.ClassicSimilarity"

type

type: object

className

className: string = "ClassicSimilarity"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = Similarity.type.polymorphicDiscriminator

uberParent

uberParent: string = "Similarity"

modelProperties

modelProperties: object

Const ClassicTokenizer

ClassicTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.ClassicTokenizer"

type

type: object

className

className: string = "ClassicTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 255

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const CognitiveServicesAccount

CognitiveServicesAccount: object

type

type: object

className

className: string = "CognitiveServicesAccount"

name

name: string = "Composite"

uberParent

uberParent: string = "CognitiveServicesAccount"

modelProperties

modelProperties: object

description

description: object

serializedName

serializedName: string = "description"

type

type: object

name

name: string = "String"

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const CognitiveServicesAccountKey

CognitiveServicesAccountKey: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.CognitiveServicesByKey"

type

type: object

className

className: string = "CognitiveServicesAccountKey"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = CognitiveServicesAccount.type.polymorphicDiscriminator

uberParent

uberParent: string = "CognitiveServicesAccount"

modelProperties

modelProperties: object

key

key: object

required

required: boolean = true

serializedName

serializedName: string = "key"

type

type: object

name

name: string = "String"

Const CommonGramTokenFilter

CommonGramTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.CommonGramTokenFilter"

type

type: object

className

className: string = "CommonGramTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

commonWords

commonWords: object

required

required: boolean = true

serializedName

serializedName: string = "commonWords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

ignoreCase

ignoreCase: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "ignoreCase"

type

type: object

name

name: string = "Boolean"

useQueryMode

useQueryMode: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "queryMode"

type

type: object

name

name: string = "Boolean"

Const ConditionalSkill

ConditionalSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Util.ConditionalSkill"

type

type: object

className

className: string = "ConditionalSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

Const CorsOptions

CorsOptions: object

type

type: object

className

className: string = "CorsOptions"

name

name: string = "Composite"

modelProperties

modelProperties: object

allowedOrigins

allowedOrigins: object

required

required: boolean = true

serializedName

serializedName: string = "allowedOrigins"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

maxAgeInSeconds

maxAgeInSeconds: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxAgeInSeconds"

type

type: object

name

name: string = "Number"

Const CustomAnalyzer

CustomAnalyzer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.CustomAnalyzer"

type

type: object

className

className: string = "CustomAnalyzer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalAnalyzer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalAnalyzer"

modelProperties

modelProperties: object

charFilters

charFilters: object

serializedName

serializedName: string = "charFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

tokenFilters

tokenFilters: object

serializedName

serializedName: string = "tokenFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

tokenizerName

tokenizerName: object

required

required: boolean = true

serializedName

serializedName: string = "tokenizer"

type

type: object

name

name: string = "String"
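Per this mapper, `tokenizerName` is required and serializes as "tokenizer", while `charFilters` and `tokenFilters` are optional string sequences. A custom analyzer definition in its serialized form might look like this (the analyzer, tokenizer, and filter names are illustrative):

```typescript
// A CustomAnalyzer definition on the wire, following the mapper above.
const customAnalyzerWire = {
  "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
  name: "my_analyzer",
  tokenizer: "standard_v2",                   // required (client-side: tokenizerName)
  charFilters: ["html_strip"],                // optional Sequence of String
  tokenFilters: ["lowercase", "asciifolding"], // optional Sequence of String
};
```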

Const CustomEntity

CustomEntity: object

type

type: object

className

className: string = "CustomEntity"

name

name: string = "Composite"

modelProperties

modelProperties: object

accentSensitive

accentSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "accentSensitive"

type

type: object

name

name: string = "Boolean"

aliases

aliases: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "aliases"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "CustomEntityAlias"

name

name: string = "Composite"

caseSensitive

caseSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "caseSensitive"

type

type: object

name

name: string = "Boolean"

defaultAccentSensitive

defaultAccentSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultAccentSensitive"

type

type: object

name

name: string = "Boolean"

defaultCaseSensitive

defaultCaseSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultCaseSensitive"

type

type: object

name

name: string = "Boolean"

defaultFuzzyEditDistance

defaultFuzzyEditDistance: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultFuzzyEditDistance"

type

type: object

name

name: string = "Number"

description

description: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "description"

type

type: object

name

name: string = "String"

fuzzyEditDistance

fuzzyEditDistance: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "fuzzyEditDistance"

type

type: object

name

name: string = "Number"

id

id: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "id"

type

type: object

name

name: string = "String"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

subtype

subtype: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "subtype"

type

type: object

name

name: string = "String"

type

type: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "type"

type

type: object

name

name: string = "String"

Const CustomEntityAlias

CustomEntityAlias: object

type

type: object

className

className: string = "CustomEntityAlias"

name

name: string = "Composite"

modelProperties

modelProperties: object

accentSensitive

accentSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "accentSensitive"

type

type: object

name

name: string = "Boolean"

caseSensitive

caseSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "caseSensitive"

type

type: object

name

name: string = "Boolean"

fuzzyEditDistance

fuzzyEditDistance: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "fuzzyEditDistance"

type

type: object

name

name: string = "Number"

text

text: object

required

required: boolean = true

serializedName

serializedName: string = "text"

type

type: object

name

name: string = "String"

Const CustomEntityLookupSkill

CustomEntityLookupSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.CustomEntityLookupSkill"

type

type: object

className

className: string = "CustomEntityLookupSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

entitiesDefinitionUri

entitiesDefinitionUri: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "entitiesDefinitionUri"

type

type: object

name

name: string = "String"

globalDefaultAccentSensitive

globalDefaultAccentSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "globalDefaultAccentSensitive"

type

type: object

name

name: string = "Boolean"

globalDefaultCaseSensitive

globalDefaultCaseSensitive: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "globalDefaultCaseSensitive"

type

type: object

name

name: string = "Boolean"

globalDefaultFuzzyEditDistance

globalDefaultFuzzyEditDistance: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "globalDefaultFuzzyEditDistance"

type

type: object

name

name: string = "Number"

inlineEntitiesDefinition

inlineEntitiesDefinition: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "inlineEntitiesDefinition"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "CustomEntity"

name

name: string = "Composite"

Const CustomNormalizer

CustomNormalizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.CustomNormalizer"

type

type: object

className

className: string = "CustomNormalizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalNormalizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalNormalizer"

modelProperties

modelProperties: object

charFilters

charFilters: object

serializedName

serializedName: string = "charFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

tokenFilters

tokenFilters: object

serializedName

serializedName: string = "tokenFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const DataChangeDetectionPolicy

DataChangeDetectionPolicy: object

type

type: object

className

className: string = "DataChangeDetectionPolicy"

name

name: string = "Composite"

uberParent

uberParent: string = "DataChangeDetectionPolicy"

modelProperties

modelProperties: object

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const DataDeletionDetectionPolicy

DataDeletionDetectionPolicy: object

type

type: object

className

className: string = "DataDeletionDetectionPolicy"

name

name: string = "Composite"

uberParent

uberParent: string = "DataDeletionDetectionPolicy"

modelProperties

modelProperties: object

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const DataSourceCredentials

DataSourceCredentials: object

type

type: object

className

className: string = "DataSourceCredentials"

name

name: string = "Composite"

modelProperties

modelProperties: object

connectionString

connectionString: object

serializedName

serializedName: string = "connectionString"

type

type: object

name

name: string = "String"

Const DefaultCognitiveServicesAccount

DefaultCognitiveServicesAccount: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.DefaultCognitiveServices"

type

type: object

className

className: string = "DefaultCognitiveServicesAccount"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = CognitiveServicesAccount.type.polymorphicDiscriminator

uberParent

uberParent: string = "CognitiveServicesAccount"

modelProperties

modelProperties: object

Const DictionaryDecompounderTokenFilter

DictionaryDecompounderTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter"

type

type: object

className

className: string = "DictionaryDecompounderTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

maxSubwordSize

maxSubwordSize: object

defaultValue

defaultValue: number = 15

serializedName

serializedName: string = "maxSubwordSize"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

minSubwordSize

minSubwordSize: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "minSubwordSize"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

minWordSize

minWordSize: object

defaultValue

defaultValue: number = 5

serializedName

serializedName: string = "minWordSize"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

onlyLongestMatch

onlyLongestMatch: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "onlyLongestMatch"

type

type: object

name

name: string = "Boolean"

wordList

wordList: object

required

required: boolean = true

serializedName

serializedName: string = "wordList"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const DistanceScoringFunction

DistanceScoringFunction: object

serializedName

serializedName: string = "distance"

type

type: object

className

className: string = "DistanceScoringFunction"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = ScoringFunction.type.polymorphicDiscriminator

uberParent

uberParent: string = "ScoringFunction"

modelProperties

modelProperties: object

parameters

parameters: object

serializedName

serializedName: string = "distance"

type

type: object

className

className: string = "DistanceScoringParameters"

name

name: string = "Composite"

Const DistanceScoringParameters

DistanceScoringParameters: object

type

type: object

className

className: string = "DistanceScoringParameters"

name

name: string = "Composite"

modelProperties

modelProperties: object

boostingDistance

boostingDistance: object

required

required: boolean = true

serializedName

serializedName: string = "boostingDistance"

type

type: object

name

name: string = "Number"

referencePointParameter

referencePointParameter: object

required

required: boolean = true

serializedName

serializedName: string = "referencePointParameter"

type

type: object

name

name: string = "String"
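The distance scoring function's parameters object (both members required) serializes under the "distance" key, matching its discriminator value. A sketch of the serialized shape; note that `fieldName` and `boost` belong to the base ScoringFunction, which is not listed in this section, and all field and parameter names here are illustrative:

```typescript
// A DistanceScoringFunction as serialized, following the mappers above.
const distanceFunction = {
  type: "distance",              // discriminator value
  fieldName: "location",         // from the base ScoringFunction (assumed)
  boost: 2,                      // from the base ScoringFunction (assumed)
  distance: {                    // serializedName of `parameters`
    referencePointParameter: "currentLocation", // required
    boostingDistance: 10,                       // required
  },
};
```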

Const DocumentExtractionSkill

DocumentExtractionSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Util.DocumentExtractionSkill"

type

type: object

className

className: string = "DocumentExtractionSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

configuration

configuration: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "configuration"

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "any"

dataToExtract

dataToExtract: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "dataToExtract"

type

type: object

name

name: string = "String"

parsingMode

parsingMode: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "parsingMode"

type

type: object

name

name: string = "String"

Const DocumentKeysOrIds

DocumentKeysOrIds: object

type

type: object

className

className: string = "DocumentKeysOrIds"

name

name: string = "Composite"

modelProperties

modelProperties: object

datasourceDocumentIds

datasourceDocumentIds: object

serializedName

serializedName: string = "datasourceDocumentIds"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

documentKeys

documentKeys: object

serializedName

serializedName: string = "documentKeys"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const EdgeNGramTokenFilter

EdgeNGramTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.EdgeNGramTokenFilter"

type

type: object

className

className: string = "EdgeNGramTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

maxGram

maxGram: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "maxGram"

type

type: object

name

name: string = "Number"

minGram

minGram: object

defaultValue

defaultValue: number = 1

serializedName

serializedName: string = "minGram"

type

type: object

name

name: string = "Number"

side

side: object

serializedName

serializedName: string = "side"

type

type: object

allowedValues

allowedValues: string[] = ["front", "back"]

name

name: string = "Enum"

Const EdgeNGramTokenFilterV2

EdgeNGramTokenFilterV2: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2"

type

type: object

className

className: string = "EdgeNGramTokenFilterV2"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

maxGram

maxGram: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "maxGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

minGram

minGram: object

defaultValue

defaultValue: number = 1

serializedName

serializedName: string = "minGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

side

side: object

serializedName

serializedName: string = "side"

type

type: object

allowedValues

allowedValues: string[] = ["front", "back"]

name

name: string = "Enum"
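Combining the defaults (minGram 1, maxGram 2, both with an inclusive maximum of 300) and the "side" enum, an EdgeNGramTokenFilterV2 definition looks like this (the filter name and chosen values are illustrative):

```typescript
// An EdgeNGramTokenFilterV2 definition using the serialized names above.
const edgeNGramFilter = {
  "@odata.type": "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2",
  name: "my_edge_ngram",
  minGram: 2,     // default 1, InclusiveMaximum 300
  maxGram: 10,    // default 2, InclusiveMaximum 300
  side: "front",  // allowed values: "front" | "back"
};
```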

Const EdgeNGramTokenizer

EdgeNGramTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.EdgeNGramTokenizer"

type

type: object

className

className: string = "EdgeNGramTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

maxGram

maxGram: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "maxGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

minGram

minGram: object

defaultValue

defaultValue: number = 1

serializedName

serializedName: string = "minGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

tokenChars

tokenChars: object

serializedName

serializedName: string = "tokenChars"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

allowedValues

allowedValues: string[] = ["letter", "digit", "whitespace", "punctuation", "symbol"]

name

name: string = "Enum"

Const ElisionTokenFilter

ElisionTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.ElisionTokenFilter"

type

type: object

className

className: string = "ElisionTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

articles

articles: object

serializedName

serializedName: string = "articles"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const EntityLinkingSkill

EntityLinkingSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.V3.EntityLinkingSkill"

type

type: object

className

className: string = "EntityLinkingSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

minimumPrecision

minimumPrecision: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "minimumPrecision"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 1

InclusiveMinimum

InclusiveMinimum: number = 0

type

type: object

name

name: string = "Number"

modelVersion

modelVersion: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "modelVersion"

type

type: object

name

name: string = "String"

Const EntityRecognitionSkill

EntityRecognitionSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.EntityRecognitionSkill"

type

type: object

className

className: string = "EntityRecognitionSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

categories

categories: object

serializedName

serializedName: string = "categories"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

defaultLanguageCode

defaultLanguageCode: object

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

includeTypelessEntities

includeTypelessEntities: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "includeTypelessEntities"

type

type: object

name

name: string = "Boolean"

minimumPrecision

minimumPrecision: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "minimumPrecision"

type

type: object

name

name: string = "Number"

Const EntityRecognitionSkillV3

EntityRecognitionSkillV3: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.V3.EntityRecognitionSkill"

type

type: object

className

className: string = "EntityRecognitionSkillV3"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

categories

categories: object

serializedName

serializedName: string = "categories"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

defaultLanguageCode

defaultLanguageCode: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

minimumPrecision

minimumPrecision: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "minimumPrecision"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 1

InclusiveMinimum

InclusiveMinimum: number = 0

type

type: object

name

name: string = "Number"

modelVersion

modelVersion: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "modelVersion"

type

type: object

name

name: string = "String"

Const FacetResult

FacetResult: object

type

type: object

className

className: string = "FacetResult"

name

name: string = "Composite"

additionalProperties

additionalProperties: object

type

type: object

name

name: string = "Object"

modelProperties

modelProperties: object

count

count: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "count"

type

type: object

name

name: string = "Number"

Const FieldMapping

FieldMapping: object

type

type: object

className

className: string = "FieldMapping"

name

name: string = "Composite"

modelProperties

modelProperties: object

mappingFunction

mappingFunction: object

serializedName

serializedName: string = "mappingFunction"

type

type: object

className

className: string = "FieldMappingFunction"

name

name: string = "Composite"

sourceFieldName

sourceFieldName: object

required

required: boolean = true

serializedName

serializedName: string = "sourceFieldName"

type

type: object

name

name: string = "String"

targetFieldName

targetFieldName: object

serializedName

serializedName: string = "targetFieldName"

type

type: object

name

name: string = "String"

Const FieldMappingFunction

FieldMappingFunction: object

type

type: object

className

className: string = "FieldMappingFunction"

name

name: string = "Composite"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

parameters

parameters: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "parameters"

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "any"
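Taken together, the FieldMapping and FieldMappingFunction mappers describe a payload where `sourceFieldName` and the function's `name` are required, while `targetFieldName` and the nullable `parameters` dictionary are optional. A sketch of that shape; `base64Encode` and its `useHttpServerUtilityUrlTokenEncode` parameter are one documented mapping function, used here purely as an illustration:

```typescript
// Shape implied by the FieldMapping / FieldMappingFunction mappers above.
interface FieldMappingFunctionPayload {
  name: string; // required
  parameters?: { [key: string]: any } | null; // nullable dictionary
}

interface FieldMappingPayload {
  sourceFieldName: string; // required
  targetFieldName?: string;
  mappingFunction?: FieldMappingFunctionPayload;
}

const mapping: FieldMappingPayload = {
  sourceFieldName: "metadata_storage_path",
  targetFieldName: "id",
  mappingFunction: {
    name: "base64Encode", // documented mapping-function name (illustrative here)
    parameters: { useHttpServerUtilityUrlTokenEncode: false },
  },
};
console.log(JSON.stringify(mapping));
```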

Const FreshnessScoringFunction

FreshnessScoringFunction: object

serializedName

serializedName: string = "freshness"

type

type: object

className

className: string = "FreshnessScoringFunction"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = ScoringFunction.type.polymorphicDiscriminator

uberParent

uberParent: string = "ScoringFunction"

modelProperties

modelProperties: object

parameters

parameters: object

serializedName

serializedName: string = "freshness"

type

type: object

className

className: string = "FreshnessScoringParameters"

name

name: string = "Composite"

Const FreshnessScoringParameters

FreshnessScoringParameters: object

type

type: object

className

className: string = "FreshnessScoringParameters"

name

name: string = "Composite"

modelProperties

modelProperties: object

boostingDuration

boostingDuration: object

required

required: boolean = true

serializedName

serializedName: string = "boostingDuration"

type

type: object

name

name: string = "TimeSpan"
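Note in the FreshnessScoringFunction mapper that the client-side `parameters` property is serialized under the wire key `"freshness"`, and that `boostingDuration` is a required TimeSpan (sent as an ISO 8601 duration string). A hedged sketch of the resulting payload; the `type`, `fieldName`, and `boost` properties come from the ScoringFunction base type, which is not shown in this chunk, so they are assumptions here:

```typescript
// Sketch: the client `parameters` property lands on the wire as "freshness".
// fieldName/boost/type are assumed base-type (ScoringFunction) properties.
const freshnessFunction = {
  type: "freshness",
  fieldName: "lastUpdated", // assumed base-type property (illustrative field)
  boost: 2,                 // assumed base-type property
  freshness: {
    boostingDuration: "P7D", // TimeSpan: boost documents updated within 7 days
  },
};
console.log(JSON.stringify(freshnessFunction));
```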

Const GetIndexStatisticsResult

GetIndexStatisticsResult: object

type

type: object

className

className: string = "GetIndexStatisticsResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

documentCount

documentCount: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "documentCount"

type

type: object

name

name: string = "Number"

storageSize

storageSize: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "storageSize"

type

type: object

name

name: string = "Number"

Const HighWaterMarkChangeDetectionPolicy

HighWaterMarkChangeDetectionPolicy: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy"

type

type: object

className

className: string = "HighWaterMarkChangeDetectionPolicy"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = DataChangeDetectionPolicy.type.polymorphicDiscriminator

uberParent

uberParent: string = "DataChangeDetectionPolicy"

modelProperties

modelProperties: object

highWaterMarkColumnName

highWaterMarkColumnName: object

required

required: boolean = true

serializedName

serializedName: string = "highWaterMarkColumnName"

type

type: object

name

name: string = "String"
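The HighWaterMarkChangeDetectionPolicy mapper is polymorphic on the DataChangeDetectionPolicy discriminator and requires only `highWaterMarkColumnName`. A minimal sketch of the payload, with an illustrative column name:

```typescript
// Payload implied by the HighWaterMarkChangeDetectionPolicy mapper; the
// "@odata.type" key is the DataChangeDetectionPolicy discriminator.
const policy = {
  "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
  highWaterMarkColumnName: "_ts", // illustrative high-water-mark column
};
console.log(JSON.stringify(policy));
```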

Const ImageAnalysisSkill

ImageAnalysisSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Vision.ImageAnalysisSkill"

type

type: object

className

className: string = "ImageAnalysisSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

details

details: object

serializedName

serializedName: string = "details"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

visualFeatures

visualFeatures: object

serializedName

serializedName: string = "visualFeatures"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const IndexAction

IndexAction: object

type

type: object

className

className: string = "IndexAction"

name

name: string = "Composite"

additionalProperties

additionalProperties: object

type

type: object

name

name: string = "Object"

modelProperties

modelProperties: object

__actionType

__actionType: object

required

required: boolean = true

serializedName

serializedName: string = "@search\.action"

type

type: object

allowedValues

allowedValues: string[] = ["upload", "merge", "mergeOrUpload", "delete"]

name

name: string = "Enum"

Const IndexBatch

IndexBatch: object

type

type: object

className

className: string = "IndexBatch"

name

name: string = "Composite"

modelProperties

modelProperties: object

actions

actions: object

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "IndexAction"

name

name: string = "Composite"
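Two renames are worth noticing in the IndexAction and IndexBatch mappers: the client-side `__actionType` property is serialized as `"@search.action"` (one of `upload`, `merge`, `mergeOrUpload`, `delete`), and the batch's `actions` property is serialized as `"value"`. The document's own fields travel through `additionalProperties`. A sketch of the resulting batch payload, with illustrative document fields:

```typescript
// Wire shape implied by the IndexAction and IndexBatch mappers above.
type SearchAction = "upload" | "merge" | "mergeOrUpload" | "delete";

interface IndexActionPayload {
  "@search.action": SearchAction; // client name: __actionType
  [field: string]: unknown;       // additionalProperties: the document fields
}

// IndexBatch's `actions` property serializes to the key "value".
const batch: { value: IndexActionPayload[] } = {
  value: [
    { "@search.action": "mergeOrUpload", id: "1", title: "Hello" },
    { "@search.action": "delete", id: "2" },
  ],
};
console.log(JSON.stringify(batch));
```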

Const IndexDocumentsResult

IndexDocumentsResult: object

type

type: object

className

className: string = "IndexDocumentsResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

results

results: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "IndexingResult"

name

name: string = "Composite"

Const IndexerExecutionResult

IndexerExecutionResult: object

type

type: object

className

className: string = "IndexerExecutionResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

currentState

currentState: object

serializedName

serializedName: string = "currentState"

type

type: object

className

className: string = "IndexerState"

name

name: string = "Composite"

endTime

endTime: object

nullable

nullable: boolean = true

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "endTime"

type

type: object

name

name: string = "DateTime"

errorMessage

errorMessage: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "errorMessage"

type

type: object

name

name: string = "String"

errors

errors: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "errors"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerError"

name

name: string = "Composite"

failedItemCount

failedItemCount: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "itemsFailed"

type

type: object

name

name: string = "Number"

finalTrackingState

finalTrackingState: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "finalTrackingState"

type

type: object

name

name: string = "String"

initialTrackingState

initialTrackingState: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "initialTrackingState"

type

type: object

name

name: string = "String"

itemCount

itemCount: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "itemsProcessed"

type

type: object

name

name: string = "Number"

startTime

startTime: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "startTime"

type

type: object

name

name: string = "DateTime"

status

status: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "status"

type

type: object

allowedValues

allowedValues: string[] = ["transientFailure", "success", "inProgress", "reset"]

name

name: string = "Enum"

statusDetail

statusDetail: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "statusDetail"

type

type: object

name

name: string = "String"

warnings

warnings: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "warnings"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerWarning"

name

name: string = "Composite"

Const IndexerState

IndexerState: object

type

type: object

className

className: string = "IndexerState"

name

name: string = "Composite"

modelProperties

modelProperties: object

allDocumentsFinalChangeTrackingState

allDocumentsFinalChangeTrackingState: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "allDocsFinalChangeTrackingState"

type

type: object

name

name: string = "String"

allDocumentsInitialChangeTrackingState

allDocumentsInitialChangeTrackingState: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "allDocsInitialChangeTrackingState"

type

type: object

name

name: string = "String"

mode

mode: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "mode"

type

type: object

name

name: string = "String"

resetDatasourceDocumentIds

resetDatasourceDocumentIds: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "resetDatasourceDocumentIds"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

resetDocumentKeys

resetDocumentKeys: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "resetDocumentKeys"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

resetDocumentsFinalChangeTrackingState

resetDocumentsFinalChangeTrackingState: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "resetDocsFinalChangeTrackingState"

type

type: object

name

name: string = "String"

resetDocumentsInitialChangeTrackingState

resetDocumentsInitialChangeTrackingState: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "resetDocsInitialChangeTrackingState"

type

type: object

name

name: string = "String"

Const IndexingParameters

IndexingParameters: object

type

type: object

className

className: string = "IndexingParameters"

name

name: string = "Composite"

modelProperties

modelProperties: object

batchSize

batchSize: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "batchSize"

type

type: object

name

name: string = "Number"

configuration

configuration: object

serializedName

serializedName: string = "configuration"

type

type: object

className

className: string = "IndexingParametersConfiguration"

name

name: string = "Composite"

maxFailedItems

maxFailedItems: object

defaultValue

defaultValue: number = 0

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxFailedItems"

type

type: object

name

name: string = "Number"

maxFailedItemsPerBatch

maxFailedItemsPerBatch: object

defaultValue

defaultValue: number = 0

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxFailedItemsPerBatch"

type

type: object

name

name: string = "Number"

Const IndexingParametersConfiguration

IndexingParametersConfiguration: object

type

type: object

className

className: string = "IndexingParametersConfiguration"

name

name: string = "Composite"

additionalProperties

additionalProperties: object

type

type: object

name

name: string = "Object"

modelProperties

modelProperties: object

allowSkillsetToReadFileData

allowSkillsetToReadFileData: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "allowSkillsetToReadFileData"

type

type: object

name

name: string = "Boolean"

dataToExtract

dataToExtract: object

defaultValue

defaultValue: string = "contentAndMetadata"

serializedName

serializedName: string = "dataToExtract"

type

type: object

name

name: string = "String"

delimitedTextDelimiter

delimitedTextDelimiter: object

serializedName

serializedName: string = "delimitedTextDelimiter"

type

type: object

name

name: string = "String"

delimitedTextHeaders

delimitedTextHeaders: object

serializedName

serializedName: string = "delimitedTextHeaders"

type

type: object

name

name: string = "String"

documentRoot

documentRoot: object

serializedName

serializedName: string = "documentRoot"

type

type: object

name

name: string = "String"

excludedFileNameExtensions

excludedFileNameExtensions: object

defaultValue

defaultValue: string = ""

serializedName

serializedName: string = "excludedFileNameExtensions"

type

type: object

name

name: string = "String"

executionEnvironment

executionEnvironment: object

defaultValue

defaultValue: string = "standard"

serializedName

serializedName: string = "executionEnvironment"

type

type: object

name

name: string = "String"

failOnUnprocessableDocument

failOnUnprocessableDocument: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "failOnUnprocessableDocument"

type

type: object

name

name: string = "Boolean"

failOnUnsupportedContentType

failOnUnsupportedContentType: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "failOnUnsupportedContentType"

type

type: object

name

name: string = "Boolean"

firstLineContainsHeaders

firstLineContainsHeaders: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "firstLineContainsHeaders"

type

type: object

name

name: string = "Boolean"

imageAction

imageAction: object

defaultValue

defaultValue: string = "none"

serializedName

serializedName: string = "imageAction"

type

type: object

name

name: string = "String"

indexStorageMetadataOnlyForOversizedDocuments

indexStorageMetadataOnlyForOversizedDocuments: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "indexStorageMetadataOnlyForOversizedDocuments"

type

type: object

name

name: string = "Boolean"

indexedFileNameExtensions

indexedFileNameExtensions: object

defaultValue

defaultValue: string = ""

serializedName

serializedName: string = "indexedFileNameExtensions"

type

type: object

name

name: string = "String"

parsingMode

parsingMode: object

defaultValue

defaultValue: string = "default"

serializedName

serializedName: string = "parsingMode"

type

type: object

name

name: string = "String"

pdfTextRotationAlgorithm

pdfTextRotationAlgorithm: object

defaultValue

defaultValue: string = "none"

serializedName

serializedName: string = "pdfTextRotationAlgorithm"

type

type: object

name

name: string = "String"

queryTimeout

queryTimeout: object

defaultValue

defaultValue: string = "00:05:00"

serializedName

serializedName: string = "queryTimeout"

type

type: object

name

name: string = "String"
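The IndexingParametersConfiguration mapper carries an unusually large set of `defaultValue` entries. Collecting them in one place makes the effective configuration easier to reason about; the helper below is only a sketch of how a caller might merge overrides onto those documented defaults:

```typescript
// Defaults taken verbatim from the IndexingParametersConfiguration mapper above.
const configurationDefaults = {
  allowSkillsetToReadFileData: false,
  dataToExtract: "contentAndMetadata",
  excludedFileNameExtensions: "",
  executionEnvironment: "standard",
  failOnUnprocessableDocument: false,
  failOnUnsupportedContentType: false,
  firstLineContainsHeaders: true,
  imageAction: "none",
  indexStorageMetadataOnlyForOversizedDocuments: false,
  indexedFileNameExtensions: "",
  parsingMode: "default",
  pdfTextRotationAlgorithm: "none",
  queryTimeout: "00:05:00",
};

// Sketch: merge caller overrides onto the documented defaults.
function withConfigurationDefaults(
  overrides: Record<string, unknown>
): Record<string, unknown> {
  return { ...configurationDefaults, ...overrides };
}

const config = withConfigurationDefaults({ parsingMode: "jsonArray" });
console.log(config.parsingMode, config.queryTimeout);
```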

Const IndexingResult

IndexingResult: object

type

type: object

className

className: string = "IndexingResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

errorMessage

errorMessage: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "errorMessage"

type

type: object

name

name: string = "String"

key

key: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "key"

type

type: object

name

name: string = "String"

statusCode

statusCode: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "statusCode"

type

type: object

name

name: string = "Number"

succeeded

succeeded: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "status"

type

type: object

name

name: string = "Boolean"

Const IndexingSchedule

IndexingSchedule: object

type

type: object

className

className: string = "IndexingSchedule"

name

name: string = "Composite"

modelProperties

modelProperties: object

interval

interval: object

required

required: boolean = true

serializedName

serializedName: string = "interval"

type

type: object

name

name: string = "TimeSpan"

startTime

startTime: object

serializedName

serializedName: string = "startTime"

type

type: object

name

name: string = "DateTime"
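Per the IndexingSchedule mapper, `interval` is a required TimeSpan (an ISO 8601 duration string on the wire) and `startTime` is an optional DateTime. A minimal sketch with illustrative values:

```typescript
// Shape implied by the IndexingSchedule mapper above.
const schedule = {
  interval: "PT2H",                  // required TimeSpan: run every two hours
  startTime: "2021-06-01T00:00:00Z", // optional DateTime (illustrative)
};
console.log(JSON.stringify(schedule));
```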

Const InputFieldMappingEntry

InputFieldMappingEntry: object

type

type: object

className

className: string = "InputFieldMappingEntry"

name

name: string = "Composite"

modelProperties

modelProperties: object

inputs

inputs: object

serializedName

serializedName: string = "inputs"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "InputFieldMappingEntry"

name

name: string = "Composite"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

source

source: object

serializedName

serializedName: string = "source"

type

type: object

name

name: string = "String"

sourceContext

sourceContext: object

serializedName

serializedName: string = "sourceContext"

type

type: object

name

name: string = "String"
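InputFieldMappingEntry is recursive: its `inputs` property is a sequence of the same type, with `name` required and `source`/`sourceContext` optional. A sketch of a flat skill input and a nested (shaped) one; the `/document/content` path is the conventional enrichment-tree reference:

```typescript
// Recursive shape implied by the InputFieldMappingEntry mapper above.
interface InputEntry {
  name: string; // required
  source?: string;
  sourceContext?: string;
  inputs?: InputEntry[]; // sequence of the same type
}

const skillInput: InputEntry = {
  name: "text",
  source: "/document/content",
};

// A shaped (nested) input built from child entries:
const shapedInput: InputEntry = {
  name: "document",
  sourceContext: "/document",
  inputs: [skillInput],
};
console.log(JSON.stringify(shapedInput));
```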

Const KeepTokenFilter

KeepTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.KeepTokenFilter"

type

type: object

className

className: string = "KeepTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

keepWords

keepWords: object

required

required: boolean = true

serializedName

serializedName: string = "keepWords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

lowerCaseKeepWords

lowerCaseKeepWords: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "keepWordsCase"

type

type: object

name

name: string = "Boolean"

Const KeyPhraseExtractionSkill

KeyPhraseExtractionSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.KeyPhraseExtractionSkill"

type

type: object

className

className: string = "KeyPhraseExtractionSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

maxKeyPhraseCount

maxKeyPhraseCount: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxKeyPhraseCount"

type

type: object

name

name: string = "Number"

modelVersion

modelVersion: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "modelVersion"

type

type: object

name

name: string = "String"

Const KeywordMarkerTokenFilter

KeywordMarkerTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.KeywordMarkerTokenFilter"

type

type: object

className

className: string = "KeywordMarkerTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

ignoreCase

ignoreCase: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "ignoreCase"

type

type: object

name

name: string = "Boolean"

keywords

keywords: object

required

required: boolean = true

serializedName

serializedName: string = "keywords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const KeywordTokenizer

KeywordTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.KeywordTokenizer"

type

type: object

className

className: string = "KeywordTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

bufferSize

bufferSize: object

defaultValue

defaultValue: number = 256

serializedName

serializedName: string = "bufferSize"

type

type: object

name

name: string = "Number"

Const KeywordTokenizerV2

KeywordTokenizerV2: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.KeywordTokenizerV2"

type

type: object

className

className: string = "KeywordTokenizerV2"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 256

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const LanguageDetectionSkill

LanguageDetectionSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.LanguageDetectionSkill"

type

type: object

className

className: string = "LanguageDetectionSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultCountryHint

defaultCountryHint: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultCountryHint"

type

type: object

name

name: string = "String"

modelVersion

modelVersion: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "modelVersion"

type

type: object

name

name: string = "String"

Const LengthTokenFilter

LengthTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.LengthTokenFilter"

type

type: object

className

className: string = "LengthTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

maxLength

maxLength: object

defaultValue

defaultValue: number = 300

serializedName

serializedName: string = "max"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

minLength

minLength: object

defaultValue

defaultValue: number = 0

serializedName

serializedName: string = "min"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"
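The LengthTokenFilter mapper is a good example of client-name vs. wire-name renaming: `minLength`/`maxLength` serialize to `"min"`/`"max"` (defaults 0 and 300, each capped at 300). A sketch of the rename step the serializer performs for this mapper:

```typescript
// Sketch of serialization per the LengthTokenFilter mapper: client properties
// minLength/maxLength become the wire keys "min"/"max".
function serializeLengthTokenFilter(filter: {
  name: string;
  minLength?: number;
  maxLength?: number;
}) {
  return {
    "@odata.type": "#Microsoft.Azure.Search.LengthTokenFilter",
    name: filter.name,
    min: filter.minLength ?? 0,   // defaultValue 0
    max: filter.maxLength ?? 300, // defaultValue 300
  };
}

const wire = serializeLengthTokenFilter({ name: "keep_short", maxLength: 20 });
console.log(JSON.stringify(wire));
```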

Const LexicalAnalyzer

LexicalAnalyzer: object

type

type: object

className

className: string = "LexicalAnalyzer"

name

name: string = "Composite"

uberParent

uberParent: string = "LexicalAnalyzer"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const LexicalNormalizer

LexicalNormalizer: object

type

type: object

className

className: string = "LexicalNormalizer"

name

name: string = "Composite"

uberParent

uberParent: string = "LexicalNormalizer"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const LexicalTokenizer

LexicalTokenizer: object

type

type: object

className

className: string = "LexicalTokenizer"

name

name: string = "Composite"

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const LimitTokenFilter

LimitTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.LimitTokenFilter"

type

type: object

className

className: string = "LimitTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

consumeAllTokens

consumeAllTokens: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "consumeAllTokens"

type

type: object

name

name: string = "Boolean"

maxTokenCount

maxTokenCount: object

defaultValue

defaultValue: number = 1

serializedName

serializedName: string = "maxTokenCount"

type

type: object

name

name: string = "Number"
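A sketch of how the LimitTokenFilter defaults recorded above (`maxTokenCount = 1`, `consumeAllTokens = false`) would be applied before serialization; `applyLimitFilterDefaults` is an illustrative helper, not SDK API:

```typescript
// Apply the mapper's recorded defaults when the caller omits a value.
interface LimitTokenFilterOptions {
  name: string;
  maxTokenCount?: number;
  consumeAllTokens?: boolean;
}

function applyLimitFilterDefaults(f: LimitTokenFilterOptions) {
  return {
    "@odata.type": "#Microsoft.Azure.Search.LimitTokenFilter",
    name: f.name,
    maxTokenCount: f.maxTokenCount ?? 1, // defaultValue from the mapper
    consumeAllTokens: f.consumeAllTokens ?? false,
  };
}
```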

Const ListDataSourcesResult

ListDataSourcesResult: object

type

type: object

className

className: string = "ListDataSourcesResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

dataSources

dataSources: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerDataSource"

name

name: string = "Composite"

Const ListIndexersResult

ListIndexersResult: object

type

type: object

className

className: string = "ListIndexersResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

indexers

indexers: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexer"

name

name: string = "Composite"

Const ListIndexesResult

ListIndexesResult: object

type

type: object

className

className: string = "ListIndexesResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

indexes

indexes: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndex"

name

name: string = "Composite"

Const ListSkillsetsResult

ListSkillsetsResult: object

type

type: object

className

className: string = "ListSkillsetsResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

skillsets

skillsets: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerSkillset"

name

name: string = "Composite"

Const ListSynonymMapsResult

ListSynonymMapsResult: object

type

type: object

className

className: string = "ListSynonymMapsResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

synonymMaps

synonymMaps: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SynonymMap"

name

name: string = "Composite"

Const LuceneStandardAnalyzer

LuceneStandardAnalyzer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.StandardAnalyzer"

type

type: object

className

className: string = "LuceneStandardAnalyzer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalAnalyzer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalAnalyzer"

modelProperties

modelProperties: object

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 255

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

stopwords

stopwords: object

serializedName

serializedName: string = "stopwords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"
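The `maxTokenLength` property above carries both a default (255) and an `InclusiveMaximum` constraint (300). A sketch of the constraint check a serializer would perform (`validateMaxTokenLength` is an illustrative stand-in, not SDK API):

```typescript
// Enforce the LuceneStandardAnalyzer mapper's maxTokenLength rules:
// defaultValue 255, InclusiveMaximum 300.
function validateMaxTokenLength(value: number = 255): number {
  if (value > 300) {
    throw new Error(`maxTokenLength ${value} violates InclusiveMaximum 300`);
  }
  return value;
}
```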

Const LuceneStandardTokenizer

LuceneStandardTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.StandardTokenizer"

type

type: object

className

className: string = "LuceneStandardTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 255

serializedName

serializedName: string = "maxTokenLength"

type

type: object

name

name: string = "Number"

Const LuceneStandardTokenizerV2

LuceneStandardTokenizerV2: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.StandardTokenizerV2"

type

type: object

className

className: string = "LuceneStandardTokenizerV2"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 255

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const MagnitudeScoringFunction

MagnitudeScoringFunction: object

serializedName

serializedName: string = "magnitude"

type

type: object

className

className: string = "MagnitudeScoringFunction"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = ScoringFunction.type.polymorphicDiscriminator

uberParent

uberParent: string = "ScoringFunction"

modelProperties

modelProperties: object

parameters

parameters: object

serializedName

serializedName: string = "magnitude"

type

type: object

className

className: string = "MagnitudeScoringParameters"

name

name: string = "Composite"

Const MagnitudeScoringParameters

MagnitudeScoringParameters: object

type

type: object

className

className: string = "MagnitudeScoringParameters"

name

name: string = "Composite"

modelProperties

modelProperties: object

boostingRangeEnd

boostingRangeEnd: object

required

required: boolean = true

serializedName

serializedName: string = "boostingRangeEnd"

type

type: object

name

name: string = "Number"

boostingRangeStart

boostingRangeStart: object

required

required: boolean = true

serializedName

serializedName: string = "boostingRangeStart"

type

type: object

name

name: string = "Number"

shouldBoostBeyondRangeByConstant

shouldBoostBeyondRangeByConstant: object

serializedName

serializedName: string = "constantBoostBeyondRange"

type

type: object

name

name: string = "Boolean"
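Note that `shouldBoostBeyondRangeByConstant` serializes under a different wire name, `constantBoostBeyondRange`. A sketch of that rename (the serializer function is illustrative, not SDK API):

```typescript
// Rename client-side properties to the wire names the
// MagnitudeScoringParameters mapper records.
interface MagnitudeScoringParameters {
  boostingRangeStart: number;
  boostingRangeEnd: number;
  shouldBoostBeyondRangeByConstant?: boolean;
}

function serializeMagnitudeParams(p: MagnitudeScoringParameters): Record<string, unknown> {
  const wire: Record<string, unknown> = {
    boostingRangeStart: p.boostingRangeStart, // serializedName matches client name
    boostingRangeEnd: p.boostingRangeEnd,
  };
  if (p.shouldBoostBeyondRangeByConstant !== undefined) {
    // client name and serializedName differ for this property
    wire.constantBoostBeyondRange = p.shouldBoostBeyondRangeByConstant;
  }
  return wire;
}
```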

Const MappingCharFilter

MappingCharFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.MappingCharFilter"

type

type: object

className

className: string = "MappingCharFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = CharFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "CharFilter"

modelProperties

modelProperties: object

mappings

mappings: object

required

required: boolean = true

serializedName

serializedName: string = "mappings"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"
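A MappingCharFilter definition shaped like the mapper above: `mappings` is a required sequence of strings, using the service's documented `"a=>b"` mapping syntax. The filter name and mappings below are example values:

```typescript
// Example char filter definition matching the MappingCharFilter mapper's
// shape: a required "mappings" string array plus the @odata.type discriminator.
const phoneFilter = {
  "@odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
  name: "phone_normalizer",
  mappings: ["-=>", "(=>", ")=>"], // strip dashes and parentheses
};
```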

Const MergeSkill

MergeSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.MergeSkill"

type

type: object

className

className: string = "MergeSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

insertPostTag

insertPostTag: object

defaultValue

defaultValue: string = " "

serializedName

serializedName: string = "insertPostTag"

type

type: object

name

name: string = "String"

insertPreTag

insertPreTag: object

defaultValue

defaultValue: string = " "

serializedName

serializedName: string = "insertPreTag"

type

type: object

name

name: string = "String"

Const MicrosoftLanguageStemmingTokenizer

MicrosoftLanguageStemmingTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer"

type

type: object

className

className: string = "MicrosoftLanguageStemmingTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

isSearchTokenizer

isSearchTokenizer: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "isSearchTokenizer"

type

type: object

name

name: string = "Boolean"

language

language: object

serializedName

serializedName: string = "language"

type

type: object

allowedValues

allowedValues: string[] = ["arabic","bangla","bulgarian","catalan","croatian","czech","danish","dutch","english","estonian","finnish","french","german","greek","gujarati","hebrew","hindi","hungarian","icelandic","indonesian","italian","kannada","latvian","lithuanian","malay","malayalam","marathi","norwegianBokmaal","polish","portuguese","portugueseBrazilian","punjabi","romanian","russian","serbianCyrillic","serbianLatin","slovak","slovenian","spanish","swedish","tamil","telugu","turkish","ukrainian","urdu"]

name

name: string = "Enum"

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 255

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const MicrosoftLanguageTokenizer

MicrosoftLanguageTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer"

type

type: object

className

className: string = "MicrosoftLanguageTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

isSearchTokenizer

isSearchTokenizer: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "isSearchTokenizer"

type

type: object

name

name: string = "Boolean"

language

language: object

serializedName

serializedName: string = "language"

type

type: object

allowedValues

allowedValues: string[] = ["bangla","bulgarian","catalan","chineseSimplified","chineseTraditional","croatian","czech","danish","dutch","english","french","german","greek","gujarati","hindi","icelandic","indonesian","italian","japanese","kannada","korean","malay","malayalam","marathi","norwegianBokmaal","polish","portuguese","portugueseBrazilian","punjabi","romanian","russian","serbianCyrillic","serbianLatin","slovenian","spanish","swedish","tamil","telugu","thai","ukrainian","urdu","vietnamese"]

name

name: string = "Enum"

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 255

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const NGramTokenFilter

NGramTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.NGramTokenFilter"

type

type: object

className

className: string = "NGramTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

maxGram

maxGram: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "maxGram"

type

type: object

name

name: string = "Number"

minGram

minGram: object

defaultValue

defaultValue: number = 1

serializedName

serializedName: string = "minGram"

type

type: object

name

name: string = "Number"

Const NGramTokenFilterV2

NGramTokenFilterV2: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.NGramTokenFilterV2"

type

type: object

className

className: string = "NGramTokenFilterV2"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

maxGram

maxGram: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "maxGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

minGram

minGram: object

defaultValue

defaultValue: number = 1

serializedName

serializedName: string = "minGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const NGramTokenizer

NGramTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.NGramTokenizer"

type

type: object

className

className: string = "NGramTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

maxGram

maxGram: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "maxGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

minGram

minGram: object

defaultValue

defaultValue: number = 1

serializedName

serializedName: string = "minGram"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

tokenChars

tokenChars: object

serializedName

serializedName: string = "tokenChars"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

allowedValues

allowedValues: string[] = ["letter","digit","whitespace","punctuation","symbol"]

name

name: string = "Enum"
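The `tokenChars` element above is an enum with a fixed set of allowed values. A sketch of validating against that list (`isValidTokenChar` is an illustrative helper, not SDK API):

```typescript
// The allowedValues recorded in the NGramTokenizer mapper's tokenChars element.
const allowedTokenChars = ["letter", "digit", "whitespace", "punctuation", "symbol"];

function isValidTokenChar(v: string): boolean {
  return allowedTokenChars.includes(v);
}
```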

Const OcrSkill

OcrSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Vision.OcrSkill"

type

type: object

className

className: string = "OcrSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

lineEnding

lineEnding: object

serializedName

serializedName: string = "lineEnding"

type

type: object

name

name: string = "String"

shouldDetectOrientation

shouldDetectOrientation: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "detectOrientation"

type

type: object

name

name: string = "Boolean"

Const OutputFieldMappingEntry

OutputFieldMappingEntry: object

type

type: object

className

className: string = "OutputFieldMappingEntry"

name

name: string = "Composite"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

targetName

targetName: object

serializedName

serializedName: string = "targetName"

type

type: object

name

name: string = "String"

Const PIIDetectionSkill

PIIDetectionSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.PIIDetectionSkill"

type

type: object

className

className: string = "PIIDetectionSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

domain

domain: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "domain"

type

type: object

name

name: string = "String"

maskingCharacter

maskingCharacter: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maskingCharacter"

constraints

constraints: object

MaxLength

MaxLength: number = 1

type

type: object

name

name: string = "String"

maskingMode

maskingMode: object

serializedName

serializedName: string = "maskingMode"

type

type: object

name

name: string = "String"

minimumPrecision

minimumPrecision: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "minimumPrecision"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 1

InclusiveMinimum

InclusiveMinimum: number = 0

type

type: object

name

name: string = "Number"

modelVersion

modelVersion: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "modelVersion"

type

type: object

name

name: string = "String"

piiCategories

piiCategories: object

serializedName

serializedName: string = "piiCategories"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"
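The PIIDetectionSkill mapper above records two constraint styles: a numeric range on `minimumPrecision` (`InclusiveMinimum` 0, `InclusiveMaximum` 1) and a string length cap on `maskingCharacter` (`MaxLength` 1). A sketch of those checks (`validatePiiOptions` is an illustrative helper, not SDK API):

```typescript
// Enforce the constraints the PIIDetectionSkill mapper records.
function validatePiiOptions(opts: { minimumPrecision?: number; maskingCharacter?: string }) {
  if (
    opts.minimumPrecision !== undefined &&
    (opts.minimumPrecision < 0 || opts.minimumPrecision > 1)
  ) {
    throw new Error("minimumPrecision must be between 0 and 1 inclusive");
  }
  if (opts.maskingCharacter !== undefined && opts.maskingCharacter.length > 1) {
    throw new Error("maskingCharacter must be at most one character");
  }
  return opts;
}
```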

Const PathHierarchyTokenizerV2

PathHierarchyTokenizerV2: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.PathHierarchyTokenizerV2"

type

type: object

className

className: string = "PathHierarchyTokenizerV2"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

delimiter

delimiter: object

defaultValue

defaultValue: string = "/"

serializedName

serializedName: string = "delimiter"

type

type: object

name

name: string = "String"

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 300

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

numberOfTokensToSkip

numberOfTokensToSkip: object

defaultValue

defaultValue: number = 0

serializedName

serializedName: string = "skip"

type

type: object

name

name: string = "Number"

replacement

replacement: object

defaultValue

defaultValue: string = "/"

serializedName

serializedName: string = "replacement"

type

type: object

name

name: string = "String"

reverseTokenOrder

reverseTokenOrder: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "reverse"

type

type: object

name

name: string = "Boolean"
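With the default `/` delimiter above, a path-hierarchy tokenizer emits one token per ancestor path. A sketch of that behavior, mirroring Lucene's PathHierarchyTokenizer (the function is an illustration, not the SDK implementation):

```typescript
// Emit each ancestor path of the input as a token, e.g. "/a/b/c"
// yields "/a", "/a/b", "/a/b/c".
function pathHierarchyTokens(input: string, delimiter = "/"): string[] {
  const parts = input.split(delimiter).filter((p) => p.length > 0);
  const tokens: string[] = [];
  let current = "";
  for (const part of parts) {
    current += delimiter + part;
    tokens.push(current);
  }
  return tokens;
}
```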

Const PatternAnalyzer

PatternAnalyzer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.PatternAnalyzer"

type

type: object

className

className: string = "PatternAnalyzer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalAnalyzer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalAnalyzer"

modelProperties

modelProperties: object

flags

flags: object

serializedName

serializedName: string = "flags"

type

type: object

name

name: string = "String"

lowerCaseTerms

lowerCaseTerms: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "lowercase"

type

type: object

name

name: string = "Boolean"

pattern

pattern: object

defaultValue

defaultValue: string = "W+"

serializedName

serializedName: string = "pattern"

type

type: object

name

name: string = "String"

stopwords

stopwords: object

serializedName

serializedName: string = "stopwords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const PatternCaptureTokenFilter

PatternCaptureTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.PatternCaptureTokenFilter"

type

type: object

className

className: string = "PatternCaptureTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

patterns

patterns: object

required

required: boolean = true

serializedName

serializedName: string = "patterns"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

preserveOriginal

preserveOriginal: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "preserveOriginal"

type

type: object

name

name: string = "Boolean"

Const PatternReplaceCharFilter

PatternReplaceCharFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.PatternReplaceCharFilter"

type

type: object

className

className: string = "PatternReplaceCharFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = CharFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "CharFilter"

modelProperties

modelProperties: object

pattern

pattern: object

required

required: boolean = true

serializedName

serializedName: string = "pattern"

type

type: object

name

name: string = "String"

replacement

replacement: object

required

required: boolean = true

serializedName

serializedName: string = "replacement"

type

type: object

name

name: string = "String"

Const PatternReplaceTokenFilter

PatternReplaceTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.PatternReplaceTokenFilter"

type

type: object

className

className: string = "PatternReplaceTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

pattern

pattern: object

required

required: boolean = true

serializedName

serializedName: string = "pattern"

type

type: object

name

name: string = "String"

replacement

replacement: object

required

required: boolean = true

serializedName

serializedName: string = "replacement"

type

type: object

name

name: string = "String"

Const PatternTokenizer

PatternTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.PatternTokenizer"

type

type: object

className

className: string = "PatternTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

flags

flags: object

serializedName

serializedName: string = "flags"

type

type: object

name

name: string = "String"

group

group: object

defaultValue

defaultValue: number = -1

serializedName

serializedName: string = "group"

type

type: object

name

name: string = "Number"

pattern

pattern: object

defaultValue

defaultValue: string = "W+"

serializedName

serializedName: string = "pattern"

type

type: object

name

name: string = "String"

Const PhoneticTokenFilter

PhoneticTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.PhoneticTokenFilter"

type

type: object

className

className: string = "PhoneticTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

encoder

encoder: object

serializedName

serializedName: string = "encoder"

type

type: object

allowedValues

allowedValues: string[] = ["metaphone","doubleMetaphone","soundex","refinedSoundex","caverphone1","caverphone2","cologne","nysiis","koelnerPhonetik","haasePhonetik","beiderMorse"]

name

name: string = "Enum"

replaceOriginalTokens

replaceOriginalTokens: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "replace"

type

type: object

name

name: string = "Boolean"

Const PrioritizedFields

PrioritizedFields: object

type

type: object

className

className: string = "PrioritizedFields"

name

name: string = "Composite"

modelProperties

modelProperties: object

prioritizedContentFields

prioritizedContentFields: object

serializedName

serializedName: string = "prioritizedContentFields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SemanticField"

name

name: string = "Composite"

prioritizedKeywordsFields

prioritizedKeywordsFields: object

serializedName

serializedName: string = "prioritizedKeywordsFields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SemanticField"

name

name: string = "Composite"

titleField

titleField: object

serializedName

serializedName: string = "titleField"

type

type: object

className

className: string = "SemanticField"

name

name: string = "Composite"

Const ResourceCounter

ResourceCounter: object

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

modelProperties

modelProperties: object

quota

quota: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "quota"

type

type: object

name

name: string = "Number"

usage

usage: object

required

required: boolean = true

serializedName

serializedName: string = "usage"

type

type: object

name

name: string = "Number"

Const ScoringFunction

ScoringFunction: object

type

type: object

className

className: string = "ScoringFunction"

name

name: string = "Composite"

uberParent

uberParent: string = "ScoringFunction"

modelProperties

modelProperties: object

boost

boost: object

required

required: boolean = true

serializedName

serializedName: string = "boost"

type

type: object

name

name: string = "Number"

fieldName

fieldName: object

required

required: boolean = true

serializedName

serializedName: string = "fieldName"

type

type: object

name

name: string = "String"

interpolation

interpolation: object

serializedName

serializedName: string = "interpolation"

type

type: object

allowedValues

allowedValues: string[] = ["linear", "constant", "quadratic", "logarithmic"]

name

name: string = "Enum"

type

type: object

required

required: boolean = true

serializedName

serializedName: string = "type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "type"

serializedName

serializedName: string = "type"
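Unlike analyzers and token filters, ScoringFunction discriminates on a plain `type` property rather than `@odata.type`; the MagnitudeScoringFunction mapper above registers under the value `"magnitude"`. A sketch of that lookup (the helper is illustrative, not SDK API):

```typescript
// Resolve a concrete scoring function class from the "type" discriminator,
// as the ScoringFunction mapper's polymorphicDiscriminator describes.
const scoringFunctionClassByType: Record<string, string> = {
  magnitude: "MagnitudeScoringFunction",
};

function resolveScoringFunctionClass(wire: { type: string }): string {
  const className = scoringFunctionClassByType[wire.type];
  if (!className) {
    throw new Error(`Unknown scoring function type: ${wire.type}`);
  }
  return className;
}
```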

Const ScoringProfile

ScoringProfile: object

type

type: object

className

className: string = "ScoringProfile"

name

name: string = "Composite"

modelProperties

modelProperties: object

functionAggregation

functionAggregation: object

serializedName

serializedName: string = "functionAggregation"

type

type: object

allowedValues

allowedValues: string[] = ["sum","average","minimum","maximum","firstMatching"]

name

name: string = "Enum"

functions

functions: object

serializedName

serializedName: string = "functions"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "ScoringFunction"

name

name: string = "Composite"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

textWeights

textWeights: object

serializedName

serializedName: string = "text"

type

type: object

className

className: string = "TextWeights"

name

name: string = "Composite"

Const SearchDocumentsResult

SearchDocumentsResult: object

type

type: object

className

className: string = "SearchDocumentsResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

answers

answers: object

nullable

nullable: boolean = true

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.answers"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "AnswerResult"

name

name: string = "Composite"

count

count: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@odata\.count"

type

type: object

name

name: string = "Number"

coverage

coverage: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.coverage"

type

type: object

name

name: string = "Number"

facets

facets: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.facets"

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "FacetResult"

name

name: string = "Composite"

nextLink

nextLink: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@odata\.nextLink"

type

type: object

name

name: string = "String"

nextPageParameters

nextPageParameters: object

serializedName

serializedName: string = "@search\.nextPageParameters"

type

type: object

className

className: string = "SearchRequest"

name

name: string = "Composite"

results

results: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchResult"

name

name: string = "Composite"
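The SearchDocumentsResult mapper above maps `@`-prefixed wire properties to plain client names: `@odata.count` becomes `count`, `@search.coverage` becomes `coverage`, and `value` becomes `results`. A sketch of that deserialization step (illustrative, not SDK code):

```typescript
// Rename wire properties to the client-side names the mapper records.
function deserializeSearchResult(wire: Record<string, unknown>) {
  return {
    count: wire["@odata.count"],
    coverage: wire["@search.coverage"],
    results: wire["value"],
  };
}
```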

Const SearchError

SearchError: object

type

type: object

className

className: string = "SearchError"

name

name: string = "Composite"

modelProperties

modelProperties: object

code

code: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "code"

type

type: object

name

name: string = "String"

details

details: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "details"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchError"

name

name: string = "Composite"

message

message: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "message"

type

type: object

name

name: string = "String"
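Note that `SearchError.details` is a Sequence whose element is again a `SearchError` Composite, so service errors form a tree with `message` as the only required property. A small sketch (hypothetical shape mirroring the mapper above) that flattens every message in such a tree:

```typescript
// Hypothetical client-side shape of the recursive SearchError mapper.
interface SearchErrorLike {
  code?: string;
  message: string; // required per the mapper
  details?: SearchErrorLike[];
}

// Depth-first collection of all messages in the error tree.
function collectMessages(err: SearchErrorLike): string[] {
  const out = [err.message];
  for (const child of err.details ?? []) {
    out.push(...collectMessages(child));
  }
  return out;
}

const messages = collectMessages({
  message: "Invalid request",
  details: [
    { code: "InvalidField", message: "Unknown field 'foo'" },
    { code: "InvalidSyntax", message: "Unbalanced quote" },
  ],
});
```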

Const SearchField

SearchField: object

type

type: object

className

className: string = "SearchField"

name

name: string = "Composite"

modelProperties

modelProperties: object

analyzer

analyzer: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "analyzer"

type

type: object

name

name: string = "String"

facetable

facetable: object

serializedName

serializedName: string = "facetable"

type

type: object

name

name: string = "Boolean"

fields

fields: object

serializedName

serializedName: string = "fields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchField"

name

name: string = "Composite"

filterable

filterable: object

serializedName

serializedName: string = "filterable"

type

type: object

name

name: string = "Boolean"

indexAnalyzer

indexAnalyzer: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "indexAnalyzer"

type

type: object

name

name: string = "String"

key

key: object

serializedName

serializedName: string = "key"

type

type: object

name

name: string = "Boolean"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

normalizer

normalizer: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "normalizer"

type

type: object

name

name: string = "String"

retrievable

retrievable: object

serializedName

serializedName: string = "retrievable"

type

type: object

name

name: string = "Boolean"

searchAnalyzer

searchAnalyzer: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "searchAnalyzer"

type

type: object

name

name: string = "String"

searchable

searchable: object

serializedName

serializedName: string = "searchable"

type

type: object

name

name: string = "Boolean"

sortable

sortable: object

serializedName

serializedName: string = "sortable"

type

type: object

name

name: string = "Boolean"

synonymMaps

synonymMaps: object

serializedName

serializedName: string = "synonymMaps"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

type

type: object

required

required: boolean = true

serializedName

serializedName: string = "type"

type

type: object

name

name: string = "String"
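Per the mapper, only `name` and `type` are required on a `SearchField`, and `fields` is a Sequence of `SearchField`, so complex fields nest recursively. A sketch walking such a field tree — `FieldLike` and `countLeaves` are hypothetical helpers, and the `Edm.*` type strings are illustrative values:

```typescript
// Hypothetical subset of the SearchField shape implied by the mapper.
interface FieldLike {
  name: string;
  type: string;
  key?: boolean;
  fields?: FieldLike[];
}

// Count leaf (non-complex) fields in a recursively nested schema.
function countLeaves(fields: FieldLike[]): number {
  let n = 0;
  for (const f of fields) {
    n += f.fields?.length ? countLeaves(f.fields) : 1;
  }
  return n;
}

const schema: FieldLike[] = [
  { name: "id", type: "Edm.String", key: true },
  {
    name: "address",
    type: "Edm.ComplexType",
    fields: [
      { name: "city", type: "Edm.String" },
      { name: "zip", type: "Edm.String" },
    ],
  },
];
```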

Const SearchIndex

SearchIndex: object

type

type: object

className

className: string = "SearchIndex"

name

name: string = "Composite"

modelProperties

modelProperties: object

analyzers

analyzers: object

serializedName

serializedName: string = "analyzers"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "LexicalAnalyzer"

name

name: string = "Composite"

charFilters

charFilters: object

serializedName

serializedName: string = "charFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "CharFilter"

name

name: string = "Composite"

corsOptions

corsOptions: object

serializedName

serializedName: string = "corsOptions"

type

type: object

className

className: string = "CorsOptions"

name

name: string = "Composite"

defaultScoringProfile

defaultScoringProfile: object

serializedName

serializedName: string = "defaultScoringProfile"

type

type: object

name

name: string = "String"

encryptionKey

encryptionKey: object

serializedName

serializedName: string = "encryptionKey"

type

type: object

className

className: string = "SearchResourceEncryptionKey"

name

name: string = "Composite"

etag

etag: object

serializedName

serializedName: string = "@odata\.etag"

type

type: object

name

name: string = "String"

fields

fields: object

required

required: boolean = true

serializedName

serializedName: string = "fields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchField"

name

name: string = "Composite"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

normalizers

normalizers: object

serializedName

serializedName: string = "normalizers"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "LexicalNormalizer"

name

name: string = "Composite"

scoringProfiles

scoringProfiles: object

serializedName

serializedName: string = "scoringProfiles"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "ScoringProfile"

name

name: string = "Composite"

semanticSettings

semanticSettings: object

serializedName

serializedName: string = "semantic"

type

type: object

className

className: string = "SemanticSettings"

name

name: string = "Composite"

similarity

similarity: object

serializedName

serializedName: string = "similarity"

type

type: object

className

className: string = "Similarity"

name

name: string = "Composite"

suggesters

suggesters: object

serializedName

serializedName: string = "suggesters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "Suggester"

name

name: string = "Composite"

tokenFilters

tokenFilters: object

serializedName

serializedName: string = "tokenFilters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "TokenFilter"

name

name: string = "Composite"

tokenizers

tokenizers: object

serializedName

serializedName: string = "tokenizers"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "LexicalTokenizer"

name

name: string = "Composite"
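In the `SearchIndex` mapper, `name` and `fields` are the only required properties; analyzers, tokenizers, suggesters, scoring profiles, and the rest are optional sequences or composites. A minimal, hypothetical index definition consistent with that shape (the `Edm.String` type string is an illustrative value):

```typescript
// Hypothetical minimal shape: only `name` and `fields` are required
// according to the SearchIndex mapper above.
interface IndexLike {
  name: string;
  fields: { name: string; type: string; key?: boolean }[];
}

const hotelsIndex: IndexLike = {
  name: "hotels",
  fields: [
    { name: "hotelId", type: "Edm.String", key: true },
    { name: "description", type: "Edm.String" },
  ],
};
```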

Const SearchIndexer

SearchIndexer: object

type

type: object

className

className: string = "SearchIndexer"

name

name: string = "Composite"

modelProperties

modelProperties: object

cache

cache: object

serializedName

serializedName: string = "cache"

type

type: object

className

className: string = "SearchIndexerCache"

name

name: string = "Composite"

dataSourceName

dataSourceName: object

required

required: boolean = true

serializedName

serializedName: string = "dataSourceName"

type

type: object

name

name: string = "String"

description

description: object

serializedName

serializedName: string = "description"

type

type: object

name

name: string = "String"

encryptionKey

encryptionKey: object

serializedName

serializedName: string = "encryptionKey"

type

type: object

className

className: string = "SearchResourceEncryptionKey"

name

name: string = "Composite"

etag

etag: object

serializedName

serializedName: string = "@odata\.etag"

type

type: object

name

name: string = "String"

fieldMappings

fieldMappings: object

serializedName

serializedName: string = "fieldMappings"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "FieldMapping"

name

name: string = "Composite"

isDisabled

isDisabled: object

defaultValue

defaultValue: boolean = false

nullable

nullable: boolean = true

serializedName

serializedName: string = "disabled"

type

type: object

name

name: string = "Boolean"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

outputFieldMappings

outputFieldMappings: object

serializedName

serializedName: string = "outputFieldMappings"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "FieldMapping"

name

name: string = "Composite"

parameters

parameters: object

serializedName

serializedName: string = "parameters"

type

type: object

className

className: string = "IndexingParameters"

name

name: string = "Composite"

schedule

schedule: object

serializedName

serializedName: string = "schedule"

type

type: object

className

className: string = "IndexingSchedule"

name

name: string = "Composite"

skillsetName

skillsetName: object

serializedName

serializedName: string = "skillsetName"

type

type: object

name

name: string = "String"

targetIndexName

targetIndexName: object

required

required: boolean = true

serializedName

serializedName: string = "targetIndexName"

type

type: object

name

name: string = "String"

Const SearchIndexerCache

SearchIndexerCache: object

type

type: object

className

className: string = "SearchIndexerCache"

name

name: string = "Composite"

modelProperties

modelProperties: object

enableReprocessing

enableReprocessing: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "enableReprocessing"

type

type: object

name

name: string = "Boolean"

storageConnectionString

storageConnectionString: object

serializedName

serializedName: string = "storageConnectionString"

type

type: object

name

name: string = "String"

Const SearchIndexerDataContainer

SearchIndexerDataContainer: object

type

type: object

className

className: string = "SearchIndexerDataContainer"

name

name: string = "Composite"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

query

query: object

serializedName

serializedName: string = "query"

type

type: object

name

name: string = "String"

Const SearchIndexerDataIdentity

SearchIndexerDataIdentity: object

type

type: object

className

className: string = "SearchIndexerDataIdentity"

name

name: string = "Composite"

uberParent

uberParent: string = "SearchIndexerDataIdentity"

modelProperties

modelProperties: object

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"
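`SearchIndexerDataIdentity` is polymorphic: the wire property `@odata.type` (client name `odatatype`) acts as the discriminator that selects the concrete model. A sketch of discriminator-based dispatch over the two subtype names defined in this section — `identityKind` is a hypothetical helper, not SDK code:

```typescript
// Wire shape carrying the polymorphic discriminator.
type IdentityWire = { "@odata.type": string };

// Dispatch on the discriminator value, as the mapper's
// polymorphicDiscriminator implies.
function identityKind(wire: IdentityWire): "none" | "userAssigned" | "unknown" {
  switch (wire["@odata.type"]) {
    case "#Microsoft.Azure.Search.SearchIndexerDataNoneIdentity":
      return "none";
    case "#Microsoft.Azure.Search.SearchIndexerDataUserAssignedIdentity":
      return "userAssigned";
    default:
      return "unknown";
  }
}
```

Both subtype mappers reuse `SearchIndexerDataIdentity.type.polymorphicDiscriminator`, which is why dispatch happens on the parent's `@odata.type` key.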

Const SearchIndexerDataNoneIdentity

SearchIndexerDataNoneIdentity: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.SearchIndexerDataNoneIdentity"

type

type: object

className

className: string = "SearchIndexerDataNoneIdentity"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerDataIdentity.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerDataIdentity"

modelProperties

modelProperties: object

Const SearchIndexerDataSource

SearchIndexerDataSource: object

type

type: object

className

className: string = "SearchIndexerDataSource"

name

name: string = "Composite"

modelProperties

modelProperties: object

container

container: object

serializedName

serializedName: string = "container"

type

type: object

className

className: string = "SearchIndexerDataContainer"

name

name: string = "Composite"

credentials

credentials: object

serializedName

serializedName: string = "credentials"

type

type: object

className

className: string = "DataSourceCredentials"

name

name: string = "Composite"

dataChangeDetectionPolicy

dataChangeDetectionPolicy: object

serializedName

serializedName: string = "dataChangeDetectionPolicy"

type

type: object

className

className: string = "DataChangeDetectionPolicy"

name

name: string = "Composite"

dataDeletionDetectionPolicy

dataDeletionDetectionPolicy: object

serializedName

serializedName: string = "dataDeletionDetectionPolicy"

type

type: object

className

className: string = "DataDeletionDetectionPolicy"

name

name: string = "Composite"

description

description: object

serializedName

serializedName: string = "description"

type

type: object

name

name: string = "String"

encryptionKey

encryptionKey: object

serializedName

serializedName: string = "encryptionKey"

type

type: object

className

className: string = "SearchResourceEncryptionKey"

name

name: string = "Composite"

etag

etag: object

serializedName

serializedName: string = "@odata\.etag"

type

type: object

name

name: string = "String"

identity

identity: object

serializedName

serializedName: string = "identity"

type

type: object

className

className: string = "SearchIndexerDataIdentity"

name

name: string = "Composite"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

type

type: object

required

required: boolean = true

serializedName

serializedName: string = "type"

type

type: object

name

name: string = "String"

Const SearchIndexerDataUserAssignedIdentity

SearchIndexerDataUserAssignedIdentity: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.SearchIndexerDataUserAssignedIdentity"

type

type: object

className

className: string = "SearchIndexerDataUserAssignedIdentity"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerDataIdentity.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerDataIdentity"

modelProperties

modelProperties: object

userAssignedIdentity

userAssignedIdentity: object

required

required: boolean = true

serializedName

serializedName: string = "userAssignedIdentity"

type

type: object

name

name: string = "String"

Const SearchIndexerError

SearchIndexerError: object

type

type: object

className

className: string = "SearchIndexerError"

name

name: string = "Composite"

modelProperties

modelProperties: object

details

details: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "details"

type

type: object

name

name: string = "String"

documentationLink

documentationLink: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "documentationLink"

type

type: object

name

name: string = "String"

errorMessage

errorMessage: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "errorMessage"

type

type: object

name

name: string = "String"

key

key: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "key"

type

type: object

name

name: string = "String"

name

name: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

statusCode

statusCode: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "statusCode"

type

type: object

name

name: string = "Number"

Const SearchIndexerKnowledgeStore

SearchIndexerKnowledgeStore: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStore"

name

name: string = "Composite"

modelProperties

modelProperties: object

projections

projections: object

required

required: boolean = true

serializedName

serializedName: string = "projections"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreProjection"

name

name: string = "Composite"

storageConnectionString

storageConnectionString: object

required

required: boolean = true

serializedName

serializedName: string = "storageConnectionString"

type

type: object

name

name: string = "String"

Const SearchIndexerKnowledgeStoreBlobProjectionSelector

SearchIndexerKnowledgeStoreBlobProjectionSelector: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreBlobProjectionSelector"

name

name: string = "Composite"

modelProperties

modelProperties: object

storageContainer

storageContainer: object

required

required: boolean = true

serializedName

serializedName: string = "storageContainer"

type

type: object

name

name: string = "String"

Const SearchIndexerKnowledgeStoreFileProjectionSelector

SearchIndexerKnowledgeStoreFileProjectionSelector: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreFileProjectionSelector"

name

name: string = "Composite"

modelProperties

modelProperties: object

Const SearchIndexerKnowledgeStoreObjectProjectionSelector

SearchIndexerKnowledgeStoreObjectProjectionSelector: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreObjectProjectionSelector"

name

name: string = "Composite"

modelProperties

modelProperties: object

Const SearchIndexerKnowledgeStoreProjection

SearchIndexerKnowledgeStoreProjection: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreProjection"

name

name: string = "Composite"

modelProperties

modelProperties: object

files

files: object

serializedName

serializedName: string = "files"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreFileProjectionSelector"

name

name: string = "Composite"

objects

objects: object

serializedName

serializedName: string = "objects"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreObjectProjectionSelector"

name

name: string = "Composite"

tables

tables: object

serializedName

serializedName: string = "tables"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreTableProjectionSelector"

name

name: string = "Composite"

Const SearchIndexerKnowledgeStoreProjectionSelector

SearchIndexerKnowledgeStoreProjectionSelector: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreProjectionSelector"

name

name: string = "Composite"

modelProperties

modelProperties: object

generatedKeyName

generatedKeyName: object

serializedName

serializedName: string = "generatedKeyName"

type

type: object

name

name: string = "String"

inputs

inputs: object

serializedName

serializedName: string = "inputs"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "InputFieldMappingEntry"

name

name: string = "Composite"

referenceKeyName

referenceKeyName: object

serializedName

serializedName: string = "referenceKeyName"

type

type: object

name

name: string = "String"

source

source: object

serializedName

serializedName: string = "source"

type

type: object

name

name: string = "String"

sourceContext

sourceContext: object

serializedName

serializedName: string = "sourceContext"

type

type: object

name

name: string = "String"

Const SearchIndexerKnowledgeStoreTableProjectionSelector

SearchIndexerKnowledgeStoreTableProjectionSelector: object

type

type: object

className

className: string = "SearchIndexerKnowledgeStoreTableProjectionSelector"

name

name: string = "Composite"

modelProperties

modelProperties: object

tableName

tableName: object

required

required: boolean = true

serializedName

serializedName: string = "tableName"

type

type: object

name

name: string = "String"

Const SearchIndexerLimits

SearchIndexerLimits: object

type

type: object

className

className: string = "SearchIndexerLimits"

name

name: string = "Composite"

modelProperties

modelProperties: object

maxDocumentContentCharactersToExtract

maxDocumentContentCharactersToExtract: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "maxDocumentContentCharactersToExtract"

type

type: object

name

name: string = "Number"

maxDocumentExtractionSize

maxDocumentExtractionSize: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "maxDocumentExtractionSize"

type

type: object

name

name: string = "Number"

maxRunTime

maxRunTime: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "maxRunTime"

type

type: object

name

name: string = "TimeSpan"

Const SearchIndexerSkill

SearchIndexerSkill: object

type

type: object

className

className: string = "SearchIndexerSkill"

name

name: string = "Composite"

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

context

context: object

serializedName

serializedName: string = "context"

type

type: object

name

name: string = "String"

description

description: object

serializedName

serializedName: string = "description"

type

type: object

name

name: string = "String"

inputs

inputs: object

required

required: boolean = true

serializedName

serializedName: string = "inputs"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "InputFieldMappingEntry"

name

name: string = "Composite"

name

name: object

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

outputs

outputs: object

required

required: boolean = true

serializedName

serializedName: string = "outputs"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "OutputFieldMappingEntry"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const SearchIndexerSkillset

SearchIndexerSkillset: object

type

type: object

className

className: string = "SearchIndexerSkillset"

name

name: string = "Composite"

modelProperties

modelProperties: object

cognitiveServicesAccount

cognitiveServicesAccount: object

serializedName

serializedName: string = "cognitiveServices"

type

type: object

className

className: string = "CognitiveServicesAccount"

name

name: string = "Composite"

description

description: object

serializedName

serializedName: string = "description"

type

type: object

name

name: string = "String"

encryptionKey

encryptionKey: object

serializedName

serializedName: string = "encryptionKey"

type

type: object

className

className: string = "SearchResourceEncryptionKey"

name

name: string = "Composite"

etag

etag: object

serializedName

serializedName: string = "@odata\.etag"

type

type: object

name

name: string = "String"

knowledgeStore

knowledgeStore: object

serializedName

serializedName: string = "knowledgeStore"

type

type: object

className

className: string = "SearchIndexerKnowledgeStore"

name

name: string = "Composite"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

skills

skills: object

required

required: boolean = true

serializedName

serializedName: string = "skills"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SearchIndexerSkill"

name

name: string = "Composite"

Const SearchIndexerStatus

SearchIndexerStatus: object

type

type: object

className

className: string = "SearchIndexerStatus"

name

name: string = "Composite"

modelProperties

modelProperties: object

executionHistory

executionHistory: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "executionHistory"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "IndexerExecutionResult"

name

name: string = "Composite"

lastResult

lastResult: object

serializedName

serializedName: string = "lastResult"

type

type: object

className

className: string = "IndexerExecutionResult"

name

name: string = "Composite"

limits

limits: object

serializedName

serializedName: string = "limits"

type

type: object

className

className: string = "SearchIndexerLimits"

name

name: string = "Composite"

status

status: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "status"

type

type: object

allowedValues

allowedValues: string[] = ["unknown", "error", "running"]

name

name: string = "Enum"
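The `status` property is an Enum mapper restricted to the `allowedValues` listed above. A sketch of the validation such a mapper implies — `parseStatus` is a hypothetical helper, not part of the SDK:

```typescript
// The allowed values come straight from the SearchIndexerStatus mapper.
const allowedStatus = ["unknown", "error", "running"] as const;
type IndexerStatus = (typeof allowedStatus)[number];

// Narrow a raw wire string to the enum, rejecting anything else.
function parseStatus(raw: string): IndexerStatus {
  if ((allowedStatus as readonly string[]).includes(raw)) {
    return raw as IndexerStatus;
  }
  throw new Error(`Unexpected indexer status: ${raw}`);
}
```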

Const SearchIndexerWarning

SearchIndexerWarning: object

type

type: object

className

className: string = "SearchIndexerWarning"

name

name: string = "Composite"

modelProperties

modelProperties: object

details

details: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "details"

type

type: object

name

name: string = "String"

documentationLink

documentationLink: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "documentationLink"

type

type: object

name

name: string = "String"

key

key: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "key"

type

type: object

name

name: string = "String"

message

message: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "message"

type

type: object

name

name: string = "String"

name

name: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

Const SearchRequest

SearchRequest: object

type

type: object

className

className: string = "SearchRequest"

name

name: string = "Composite"

modelProperties

modelProperties: object

answers

answers: object

serializedName

serializedName: string = "answers"

type

type: object

name

name: string = "String"

captions

captions: object

serializedName

serializedName: string = "captions"

type

type: object

name

name: string = "String"

facets

facets: object

serializedName

serializedName: string = "facets"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

filter

filter: object

serializedName

serializedName: string = "filter"

type

type: object

name

name: string = "String"

highlightFields

highlightFields: object

serializedName

serializedName: string = "highlight"

type

type: object

name

name: string = "String"

highlightPostTag

highlightPostTag: object

serializedName

serializedName: string = "highlightPostTag"

type

type: object

name

name: string = "String"

highlightPreTag

highlightPreTag: object

serializedName

serializedName: string = "highlightPreTag"

type

type: object

name

name: string = "String"

includeTotalResultCount

includeTotalResultCount: object

serializedName

serializedName: string = "count"

type

type: object

name

name: string = "Boolean"

minimumCoverage

minimumCoverage: object

serializedName

serializedName: string = "minimumCoverage"

type

type: object

name

name: string = "Number"

orderBy

orderBy: object

serializedName

serializedName: string = "orderby"

type

type: object

name

name: string = "String"

queryLanguage

queryLanguage: object

serializedName

serializedName: string = "queryLanguage"

type

type: object

name

name: string = "String"

queryType

queryType: object

serializedName

serializedName: string = "queryType"

type

type: object

allowedValues

allowedValues: string[] = ["simple", "full", "semantic"]

name

name: string = "Enum"

scoringParameters

scoringParameters: object

serializedName

serializedName: string = "scoringParameters"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

scoringProfile

scoringProfile: object

serializedName

serializedName: string = "scoringProfile"

type

type: object

name

name: string = "String"

scoringStatistics

scoringStatistics: object

serializedName

serializedName: string = "scoringStatistics"

type

type: object

allowedValues

allowedValues: string[] = ["local", "global"]

name

name: string = "Enum"

searchFields

searchFields: object

serializedName

serializedName: string = "searchFields"

type

type: object

name

name: string = "String"

searchMode

searchMode: object

serializedName

serializedName: string = "searchMode"

type

type: object

allowedValues

allowedValues: string[] = ["any", "all"]

name

name: string = "Enum"

searchText

searchText: object

serializedName

serializedName: string = "search"

type

type: object

name

name: string = "String"

select

select: object

serializedName

serializedName: string = "select"

type

type: object

name

name: string = "String"

semanticConfiguration

semanticConfiguration: object

serializedName

serializedName: string = "semanticConfiguration"

type

type: object

name

name: string = "String"

semanticFields

semanticFields: object

serializedName

serializedName: string = "semanticFields"

type

type: object

name

name: string = "String"

sessionId

sessionId: object

serializedName

serializedName: string = "sessionId"

type

type: object

name

name: string = "String"

skip

skip: object

serializedName

serializedName: string = "skip"

type

type: object

name

name: string = "Number"

speller

speller: object

serializedName

serializedName: string = "speller"

type

type: object

name

name: string = "String"

top

top: object

serializedName

serializedName: string = "top"

type

type: object

name

name: string = "Number"
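Several `SearchRequest` properties are renamed on the wire by the mapper above: `includeTotalResultCount` is sent as `count`, `orderBy` as `orderby`, `highlightFields` as `highlight`, and `searchText` as `search`. A sketch of that client-to-wire renaming — `toWire` is a hypothetical helper covering only these four renames:

```typescript
// Client property name -> wire name, per the SearchRequest mapper.
const searchRequestWireNames: Record<string, string> = {
  includeTotalResultCount: "count",
  orderBy: "orderby",
  highlightFields: "highlight",
  searchText: "search",
};

// Rename mapped properties; pass everything else through unchanged.
function toWire(request: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [clientName, value] of Object.entries(request)) {
    out[searchRequestWireNames[clientName] ?? clientName] = value;
  }
  return out;
}

const body = toWire({
  searchText: "hotel",
  includeTotalResultCount: true,
  top: 5,
});
```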

Const SearchResourceEncryptionKey

SearchResourceEncryptionKey: object

type

type: object

className

className: string = "SearchResourceEncryptionKey"

name

name: string = "Composite"

modelProperties

modelProperties: object

accessCredentials

accessCredentials: object

serializedName

serializedName: string = "accessCredentials"

type

type: object

className

className: string = "AzureActiveDirectoryApplicationCredentials"

name

name: string = "Composite"

identity

identity: object

serializedName

serializedName: string = "identity"

type

type: object

className

className: string = "SearchIndexerDataIdentity"

name

name: string = "Composite"

keyName

keyName: object

required

required: boolean = true

serializedName

serializedName: string = "keyVaultKeyName"

type

type: object

name

name: string = "String"

keyVersion

keyVersion: object

required

required: boolean = true

serializedName

serializedName: string = "keyVaultKeyVersion"

type

type: object

name

name: string = "String"

vaultUri

vaultUri: object

required

required: boolean = true

serializedName

serializedName: string = "keyVaultUri"

type

type: object

name

name: string = "String"
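The SearchResourceEncryptionKey mapper above marks the three `keyVault*` properties as required, while `accessCredentials` and `identity` are optional composites. A minimal sketch of the wire payload this mapper describes (the vault name, key name, and version are hypothetical example values):

```typescript
// Hypothetical payload matching the SearchResourceEncryptionKey mapper:
// keyVaultKeyName, keyVaultKeyVersion, and keyVaultUri are required;
// accessCredentials (AzureActiveDirectoryApplicationCredentials) is optional.
interface EncryptionKeyPayload {
  keyVaultKeyName: string;
  keyVaultKeyVersion: string;
  keyVaultUri: string;
  accessCredentials?: {
    applicationId: string;
    applicationSecret?: string;
  };
}

const encryptionKey: EncryptionKeyPayload = {
  keyVaultKeyName: "my-key",                        // required
  keyVaultKeyVersion: "1",                          // required
  keyVaultUri: "https://my-vault.vault.azure.net",  // required
};

console.log(JSON.stringify(encryptionKey));
```

Note that the client property names (`keyName`, `keyVersion`, `vaultUri`) differ from the serialized names; the mapper performs that rename during serialization.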

Const SearchResult

SearchResult: object

type

type: object

className

className: string = "SearchResult"

name

name: string = "Composite"

additionalProperties

additionalProperties: object

type

type: object

name

name: string = "Object"

modelProperties

modelProperties: object

_highlights

_highlights: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.highlights"

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

_score

_score: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "@search\.score"

type

type: object

name

name: string = "Number"

captions

captions: object

nullable

nullable: boolean = true

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.captions"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "CaptionResult"

name

name: string = "Composite"

rerankerScore

rerankerScore: object

nullable

nullable: boolean = true

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.rerankerScore"

type

type: object

name

name: string = "Number"

Const SemanticConfiguration

SemanticConfiguration: object

type

type: object

className

className: string = "SemanticConfiguration"

name

name: string = "Composite"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

prioritizedFields

prioritizedFields: object

serializedName

serializedName: string = "prioritizedFields"

type

type: object

className

className: string = "PrioritizedFields"

name

name: string = "Composite"

Const SemanticField

SemanticField: object

type

type: object

className

className: string = "SemanticField"

name

name: string = "Composite"

modelProperties

modelProperties: object

name

name: object

serializedName

serializedName: string = "fieldName"

type

type: object

name

name: string = "String"

Const SemanticSettings

SemanticSettings: object

type

type: object

className

className: string = "SemanticSettings"

name

name: string = "Composite"

modelProperties

modelProperties: object

configurations

configurations: object

serializedName

serializedName: string = "configurations"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SemanticConfiguration"

name

name: string = "Composite"
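Taken together, the SemanticSettings, SemanticConfiguration, and SemanticField mappers above describe a `configurations` array in which each entry has a required `name` and a `prioritizedFields` composite, and each semantic field serializes as `{ fieldName }`. A sketch of that shape (the configuration name and field names are illustrative, and the inner `prioritizedFields` property names are assumptions based on the referenced PrioritizedFields composite):

```typescript
// Sketch of a serialized semantic settings block per the mappers above.
// "name" is required on each SemanticConfiguration; a SemanticField
// serializes its client "name" property as "fieldName".
const semanticSettings = {
  configurations: [
    {
      name: "default-config", // required
      prioritizedFields: {
        titleField: { fieldName: "title" },                // assumed property names
        prioritizedContentFields: [{ fieldName: "content" }],
      },
    },
  ],
};

console.log(semanticSettings.configurations.length);
```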

Const SentimentSkill

SentimentSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.SentimentSkill"

type

type: object

className

className: string = "SentimentSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

Const SentimentSkillV3

SentimentSkillV3: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.V3.SentimentSkill"

type

type: object

className

className: string = "SentimentSkillV3"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

includeOpinionMining

includeOpinionMining: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "includeOpinionMining"

type

type: object

name

name: string = "Boolean"

modelVersion

modelVersion: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "modelVersion"

type

type: object

name

name: string = "String"
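The SentimentSkillV3 mapper is polymorphic on the `@odata.type` discriminator inherited from SearchIndexerSkill, and `includeOpinionMining` defaults to `false`. A minimal sketch of a serialized skill definition (the language code and model version values are illustrative):

```typescript
// A serialized SentimentSkillV3 definition: the "@odata.type" value selects
// this mapper; includeOpinionMining defaults to false when omitted.
const sentimentSkill = {
  "@odata.type": "#Microsoft.Skills.Text.V3.SentimentSkill",
  defaultLanguageCode: "en",   // nullable
  includeOpinionMining: true,  // default false
  modelVersion: "latest",      // nullable
};

console.log(sentimentSkill["@odata.type"]);
```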

Const ServiceCounters

ServiceCounters: object

type

type: object

className

className: string = "ServiceCounters"

name

name: string = "Composite"

modelProperties

modelProperties: object

dataSourceCounter

dataSourceCounter: object

serializedName

serializedName: string = "dataSourcesCount"

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

documentCounter

documentCounter: object

serializedName

serializedName: string = "documentCount"

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

indexCounter

indexCounter: object

serializedName

serializedName: string = "indexesCount"

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

indexerCounter

indexerCounter: object

serializedName

serializedName: string = "indexersCount"

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

skillsetCounter

skillsetCounter: object

serializedName

serializedName: string = "skillsetCount"

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

storageSizeCounter

storageSizeCounter: object

serializedName

serializedName: string = "storageSize"

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

synonymMapCounter

synonymMapCounter: object

serializedName

serializedName: string = "synonymMaps"

type

type: object

className

className: string = "ResourceCounter"

name

name: string = "Composite"

Const ServiceLimits

ServiceLimits: object

type

type: object

className

className: string = "ServiceLimits"

name

name: string = "Composite"

modelProperties

modelProperties: object

maxComplexCollectionFieldsPerIndex

maxComplexCollectionFieldsPerIndex: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxComplexCollectionFieldsPerIndex"

type

type: object

name

name: string = "Number"

maxComplexObjectsInCollectionsPerDocument

maxComplexObjectsInCollectionsPerDocument: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxComplexObjectsInCollectionsPerDocument"

type

type: object

name

name: string = "Number"

maxFieldNestingDepthPerIndex

maxFieldNestingDepthPerIndex: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxFieldNestingDepthPerIndex"

type

type: object

name

name: string = "Number"

maxFieldsPerIndex

maxFieldsPerIndex: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maxFieldsPerIndex"

type

type: object

name

name: string = "Number"

Const ServiceStatistics

ServiceStatistics: object

type

type: object

className

className: string = "ServiceStatistics"

name

name: string = "Composite"

modelProperties

modelProperties: object

counters

counters: object

serializedName

serializedName: string = "counters"

type

type: object

className

className: string = "ServiceCounters"

name

name: string = "Composite"

limits

limits: object

serializedName

serializedName: string = "limits"

type

type: object

className

className: string = "ServiceLimits"

name

name: string = "Composite"

Const ShaperSkill

ShaperSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Util.ShaperSkill"

type

type: object

className

className: string = "ShaperSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

Const ShingleTokenFilter

ShingleTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.ShingleTokenFilter"

type

type: object

className

className: string = "ShingleTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

filterToken

filterToken: object

defaultValue

defaultValue: string = "_"

serializedName

serializedName: string = "filterToken"

type

type: object

name

name: string = "String"

maxShingleSize

maxShingleSize: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "maxShingleSize"

constraints

constraints: object

InclusiveMinimum

InclusiveMinimum: number = 2

type

type: object

name

name: string = "Number"

minShingleSize

minShingleSize: object

defaultValue

defaultValue: number = 2

serializedName

serializedName: string = "minShingleSize"

constraints

constraints: object

InclusiveMinimum

InclusiveMinimum: number = 2

type

type: object

name

name: string = "Number"

outputUnigrams

outputUnigrams: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "outputUnigrams"

type

type: object

name

name: string = "Boolean"

outputUnigramsIfNoShingles

outputUnigramsIfNoShingles: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "outputUnigramsIfNoShingles"

type

type: object

name

name: string = "Boolean"

tokenSeparator

tokenSeparator: object

defaultValue

defaultValue: string = " "

serializedName

serializedName: string = "tokenSeparator"

type

type: object

name

name: string = "String"
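The ShingleTokenFilter mapper declares defaults (`minShingleSize` and `maxShingleSize` both 2, `outputUnigrams` true, `tokenSeparator` `" "`) and an `InclusiveMinimum` of 2 on both shingle sizes. A sketch of a filter definition with a client-side check mirroring those constraints (the filter name is hypothetical):

```typescript
// ShingleTokenFilter wire shape; both shingle sizes carry InclusiveMinimum = 2.
const shingleFilter = {
  "@odata.type": "#Microsoft.Azure.Search.ShingleTokenFilter",
  name: "my-shingles",   // hypothetical filter name
  minShingleSize: 2,     // default 2, must be >= 2
  maxShingleSize: 3,     // default 2, must be >= 2
  outputUnigrams: true,  // default true
  tokenSeparator: " ",   // default " "
};

// Mirror the mapper's InclusiveMinimum constraints before sending.
if (shingleFilter.minShingleSize < 2 || shingleFilter.maxShingleSize < 2) {
  throw new Error("shingle sizes must be >= 2");
}
console.log("valid");
```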

Const Similarity

Similarity: object

type

type: object

className

className: string = "Similarity"

name

name: string = "Composite"

uberParent

uberParent: string = "Similarity"

modelProperties

modelProperties: object

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"
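The Similarity base mapper maps the serialized `@odata.type` property to the client name `odatatype` and uses it as the polymorphic discriminator. A sketch of discriminator-based dispatch over the serialized value (BM25Similarity's type string appears in this package; ClassicSimilarity is the other known subtype):

```typescript
// Dispatch on the "@odata.type" discriminator declared by the Similarity mapper.
type SimilarityWire = { "@odata.type": string; k1?: number; b?: number };

function similarityKind(s: SimilarityWire): string {
  switch (s["@odata.type"]) {
    case "#Microsoft.Azure.Search.BM25Similarity":
      return "BM25";
    case "#Microsoft.Azure.Search.ClassicSimilarity":
      return "Classic";
    default:
      return "Unknown";
  }
}

console.log(similarityKind({ "@odata.type": "#Microsoft.Azure.Search.BM25Similarity", k1: 1.2, b: 0.75 }));
```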

Const SkillNames

SkillNames: object

type

type: object

className

className: string = "SkillNames"

name

name: string = "Composite"

modelProperties

modelProperties: object

skillNames

skillNames: object

serializedName

serializedName: string = "skillNames"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const SnowballTokenFilter

SnowballTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.SnowballTokenFilter"

type

type: object

className

className: string = "SnowballTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

language

language: object

required

required: boolean = true

serializedName

serializedName: string = "language"

type

type: object

allowedValues

allowedValues: string[] = ["armenian","basque","catalan","danish","dutch","english","finnish","french","german","german2","hungarian","italian","kp","lovins","norwegian","porter","portuguese","romanian","russian","spanish","swedish","turkish"]

name

name: string = "Enum"

Const SoftDeleteColumnDeletionDetectionPolicy

SoftDeleteColumnDeletionDetectionPolicy: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy"

type

type: object

className

className: string = "SoftDeleteColumnDeletionDetectionPolicy"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = DataDeletionDetectionPolicy.type.polymorphicDiscriminator

uberParent

uberParent: string = "DataDeletionDetectionPolicy"

modelProperties

modelProperties: object

softDeleteColumnName

softDeleteColumnName: object

serializedName

serializedName: string = "softDeleteColumnName"

type

type: object

name

name: string = "String"

softDeleteMarkerValue

softDeleteMarkerValue: object

serializedName

serializedName: string = "softDeleteMarkerValue"

type

type: object

name

name: string = "String"
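The SoftDeleteColumnDeletionDetectionPolicy mapper discriminates on `@odata.type` under the DataDeletionDetectionPolicy parent, with two optional string properties. A sketch of the serialized policy (the column name and marker value are hypothetical):

```typescript
// Serialized soft-delete deletion detection policy per the mapper above.
const deletionPolicy = {
  "@odata.type": "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy",
  softDeleteColumnName: "IsDeleted", // hypothetical source column
  softDeleteMarkerValue: "true",     // value that marks a row as deleted
};

console.log(deletionPolicy.softDeleteColumnName);
```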

Const SplitSkill

SplitSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.SplitSkill"

type

type: object

className

className: string = "SplitSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultLanguageCode

defaultLanguageCode: object

serializedName

serializedName: string = "defaultLanguageCode"

type

type: object

name

name: string = "String"

maxPageLength

maxPageLength: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "maximumPageLength"

type

type: object

name

name: string = "Number"

textSplitMode

textSplitMode: object

serializedName

serializedName: string = "textSplitMode"

type

type: object

name

name: string = "String"

Const SqlIntegratedChangeTrackingPolicy

SqlIntegratedChangeTrackingPolicy: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"

type

type: object

className

className: string = "SqlIntegratedChangeTrackingPolicy"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = DataChangeDetectionPolicy.type.polymorphicDiscriminator

uberParent

uberParent: string = "DataChangeDetectionPolicy"

modelProperties

modelProperties: object

Const StemmerOverrideTokenFilter

StemmerOverrideTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.StemmerOverrideTokenFilter"

type

type: object

className

className: string = "StemmerOverrideTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

rules

rules: object

required

required: boolean = true

serializedName

serializedName: string = "rules"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const StemmerTokenFilter

StemmerTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.StemmerTokenFilter"

type

type: object

className

className: string = "StemmerTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

language

language: object

required

required: boolean = true

serializedName

serializedName: string = "language"

type

type: object

allowedValues

allowedValues: string[] = ["arabic","armenian","basque","brazilian","bulgarian","catalan","czech","danish","dutch","dutchKp","english","lightEnglish","minimalEnglish","possessiveEnglish","porter2","lovins","finnish","lightFinnish","french","lightFrench","minimalFrench","galician","minimalGalician","german","german2","lightGerman","minimalGerman","greek","hindi","hungarian","lightHungarian","indonesian","irish","italian","lightItalian","sorani","latvian","norwegian","lightNorwegian","minimalNorwegian","lightNynorsk","minimalNynorsk","portuguese","lightPortuguese","minimalPortuguese","portugueseRslp","romanian","russian","lightRussian","spanish","lightSpanish","swedish","lightSwedish","turkish"]

name

name: string = "Enum"
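The StemmerTokenFilter mapper types its required `language` property as an Enum with an explicit `allowedValues` list. A sketch of a filter definition with a client-side check against a subset of those values (the filter name is hypothetical, and `allowed` below is only a sample of the mapper's full list):

```typescript
// StemmerTokenFilter's "language" is a required enum; check a candidate value
// against a subset of the allowedValues listed in the mapper above.
const allowed = ["english", "lightEnglish", "french", "german"]; // sample subset
const stemmer = {
  "@odata.type": "#Microsoft.Azure.Search.StemmerTokenFilter",
  name: "en-stemmer",  // hypothetical filter name
  language: "english", // required, must be one of allowedValues
};

if (!allowed.includes(stemmer.language)) {
  throw new Error("unsupported stemmer language");
}
console.log("language ok");
```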

Const StopAnalyzer

StopAnalyzer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.StopAnalyzer"

type

type: object

className

className: string = "StopAnalyzer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalAnalyzer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalAnalyzer"

modelProperties

modelProperties: object

stopwords

stopwords: object

serializedName

serializedName: string = "stopwords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const StopwordsTokenFilter

StopwordsTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.StopwordsTokenFilter"

type

type: object

className

className: string = "StopwordsTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

ignoreCase

ignoreCase: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "ignoreCase"

type

type: object

name

name: string = "Boolean"

removeTrailingStopWords

removeTrailingStopWords: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "removeTrailing"

type

type: object

name

name: string = "Boolean"

stopwords

stopwords: object

serializedName

serializedName: string = "stopwords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

stopwordsList

stopwordsList: object

serializedName

serializedName: string = "stopwordsList"

type

type: object

allowedValues

allowedValues: string[] = ["arabic","armenian","basque","brazilian","bulgarian","catalan","czech","danish","dutch","english","finnish","french","galician","german","greek","hindi","hungarian","indonesian","irish","italian","latvian","norwegian","persian","portuguese","romanian","russian","sorani","spanish","swedish","thai","turkish"]

name

name: string = "Enum"

Const SuggestDocumentsResult

SuggestDocumentsResult: object

type

type: object

className

className: string = "SuggestDocumentsResult"

name

name: string = "Composite"

modelProperties

modelProperties: object

coverage

coverage: object

readOnly

readOnly: boolean = true

serializedName

serializedName: string = "@search\.coverage"

type

type: object

name

name: string = "Number"

results

results: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "value"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

className

className: string = "SuggestResult"

name

name: string = "Composite"

Const SuggestRequest

SuggestRequest: object

type

type: object

className

className: string = "SuggestRequest"

name

name: string = "Composite"

modelProperties

modelProperties: object

filter

filter: object

serializedName

serializedName: string = "filter"

type

type: object

name

name: string = "String"

highlightPostTag

highlightPostTag: object

serializedName

serializedName: string = "highlightPostTag"

type

type: object

name

name: string = "String"

highlightPreTag

highlightPreTag: object

serializedName

serializedName: string = "highlightPreTag"

type

type: object

name

name: string = "String"

minimumCoverage

minimumCoverage: object

serializedName

serializedName: string = "minimumCoverage"

type

type: object

name

name: string = "Number"

orderBy

orderBy: object

serializedName

serializedName: string = "orderby"

type

type: object

name

name: string = "String"

searchFields

searchFields: object

serializedName

serializedName: string = "searchFields"

type

type: object

name

name: string = "String"

searchText

searchText: object

required

required: boolean = true

serializedName

serializedName: string = "search"

type

type: object

name

name: string = "String"

select

select: object

serializedName

serializedName: string = "select"

type

type: object

name

name: string = "String"

suggesterName

suggesterName: object

required

required: boolean = true

serializedName

serializedName: string = "suggesterName"

type

type: object

name

name: string = "String"

top

top: object

serializedName

serializedName: string = "top"

type

type: object

name

name: string = "Number"

useFuzzyMatching

useFuzzyMatching: object

serializedName

serializedName: string = "fuzzy"

type

type: object

name

name: string = "Boolean"
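Two renames in the SuggestRequest mapper are easy to miss: the client property `orderBy` serializes as `orderby`, and `useFuzzyMatching` serializes as `fuzzy`; `searchText` and `suggesterName` are the only required properties. A sketch of the serialized request body (the search text, suggester name, and sort expression are hypothetical):

```typescript
// Serialized SuggestRequest body; note the renamed properties.
const suggestBody = {
  search: "hote",          // required (client name: searchText)
  suggesterName: "sg",     // required
  orderby: "rating desc",  // serialized name of orderBy
  fuzzy: true,             // serialized name of useFuzzyMatching
  top: 5,
};

console.log(JSON.stringify(suggestBody));
```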

Const SuggestResult

SuggestResult: object

type

type: object

className

className: string = "SuggestResult"

name

name: string = "Composite"

additionalProperties

additionalProperties: object

type

type: object

name

name: string = "Object"

modelProperties

modelProperties: object

_text

_text: object

readOnly

readOnly: boolean = true

required

required: boolean = true

serializedName

serializedName: string = "@search\.text"

type

type: object

name

name: string = "String"

Const Suggester

Suggester: object

type

type: object

className

className: string = "Suggester"

name

name: string = "Composite"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

searchMode

searchMode: object

defaultValue

defaultValue: string = "analyzingInfixMatching"

isConstant

isConstant: boolean = true

serializedName

serializedName: string = "searchMode"

type

type: object

name

name: string = "String"

sourceFields

sourceFields: object

required

required: boolean = true

serializedName

serializedName: string = "sourceFields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const SynonymMap

SynonymMap: object

type

type: object

className

className: string = "SynonymMap"

name

name: string = "Composite"

modelProperties

modelProperties: object

encryptionKey

encryptionKey: object

serializedName

serializedName: string = "encryptionKey"

type

type: object

className

className: string = "SearchResourceEncryptionKey"

name

name: string = "Composite"

etag

etag: object

serializedName

serializedName: string = "@odata\.etag"

type

type: object

name

name: string = "String"

format

format: object

defaultValue

defaultValue: string = "solr"

isConstant

isConstant: boolean = true

serializedName

serializedName: string = "format"

type

type: object

name

name: string = "String"

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

synonyms

synonyms: object

required

required: boolean = true

serializedName

serializedName: string = "synonyms"

type

type: object

name

name: string = "String"
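Per the SynonymMap mapper above, `format` is the constant `"solr"` and `synonyms` is typed as a single String, so rules travel as one newline-delimited string rather than an array. A sketch of the serialized map (the map name and rules are hypothetical):

```typescript
// SynonymMap wire shape: "synonyms" is one solr-format string, not an array.
const synonymMap = {
  name: "country-synonyms",  // required
  format: "solr",            // constant value
  synonyms: "USA, United States, United States of America\nUK, United Kingdom",
};

console.log(synonymMap.synonyms.split("\n").length);
```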

Const SynonymTokenFilter

SynonymTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.SynonymTokenFilter"

type

type: object

className

className: string = "SynonymTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

expand

expand: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "expand"

type

type: object

name

name: string = "Boolean"

ignoreCase

ignoreCase: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "ignoreCase"

type

type: object

name

name: string = "Boolean"

synonyms

synonyms: object

required

required: boolean = true

serializedName

serializedName: string = "synonyms"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"
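In contrast to SynonymMap, the SynonymTokenFilter mapper types its required `synonyms` property as a Sequence of strings, with `expand` defaulting to `true` and `ignoreCase` to `false`. A sketch of the serialized filter (the filter name and rules are hypothetical):

```typescript
// SynonymTokenFilter wire shape: "synonyms" is a required array of rule strings.
const synonymFilter = {
  "@odata.type": "#Microsoft.Azure.Search.SynonymTokenFilter",
  name: "my-synonyms",  // hypothetical filter name
  synonyms: ["USA, United States", "UK => United Kingdom"],
  ignoreCase: false,    // default false
  expand: true,         // default true
};

console.log(synonymFilter.synonyms.length);
```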

Const TagScoringFunction

TagScoringFunction: object

serializedName

serializedName: string = "tag"

type

type: object

className

className: string = "TagScoringFunction"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = ScoringFunction.type.polymorphicDiscriminator

uberParent

uberParent: string = "ScoringFunction"

modelProperties

modelProperties: object

parameters

parameters: object

serializedName

serializedName: string = "tag"

type

type: object

className

className: string = "TagScoringParameters"

name

name: string = "Composite"

Const TagScoringParameters

TagScoringParameters: object

type

type: object

className

className: string = "TagScoringParameters"

name

name: string = "Composite"

modelProperties

modelProperties: object

tagsParameter

tagsParameter: object

required

required: boolean = true

serializedName

serializedName: string = "tagsParameter"

type

type: object

name

name: string = "String"

Const TextTranslationSkill

TextTranslationSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Text.TranslationSkill"

type

type: object

className

className: string = "TextTranslationSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

defaultFromLanguageCode

defaultFromLanguageCode: object

serializedName

serializedName: string = "defaultFromLanguageCode"

type

type: object

name

name: string = "String"

defaultToLanguageCode

defaultToLanguageCode: object

required

required: boolean = true

serializedName

serializedName: string = "defaultToLanguageCode"

type

type: object

name

name: string = "String"

suggestedFrom

suggestedFrom: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "suggestedFrom"

type

type: object

name

name: string = "String"

Const TextWeights

TextWeights: object

type

type: object

className

className: string = "TextWeights"

name

name: string = "Composite"

modelProperties

modelProperties: object

weights

weights: object

required

required: boolean = true

serializedName

serializedName: string = "weights"

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "Number"

Const TokenFilter

TokenFilter: object

type

type: object

className

className: string = "TokenFilter"

name

name: string = "Composite"

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

name

name: object

required

required: boolean = true

serializedName

serializedName: string = "name"

type

type: object

name

name: string = "String"

odatatype

odatatype: object

required

required: boolean = true

serializedName

serializedName: string = "@odata\.type"

type

type: object

name

name: string = "String"

polymorphicDiscriminator

polymorphicDiscriminator: object

clientName

clientName: string = "odatatype"

serializedName

serializedName: string = "@odata\.type"

Const TruncateTokenFilter

TruncateTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.TruncateTokenFilter"

type

type: object

className

className: string = "TruncateTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

length

length: object

defaultValue

defaultValue: number = 300

serializedName

serializedName: string = "length"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const UaxUrlEmailTokenizer

UaxUrlEmailTokenizer: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.UaxUrlEmailTokenizer"

type

type: object

className

className: string = "UaxUrlEmailTokenizer"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = LexicalTokenizer.type.polymorphicDiscriminator

uberParent

uberParent: string = "LexicalTokenizer"

modelProperties

modelProperties: object

maxTokenLength

maxTokenLength: object

defaultValue

defaultValue: number = 255

serializedName

serializedName: string = "maxTokenLength"

constraints

constraints: object

InclusiveMaximum

InclusiveMaximum: number = 300

type

type: object

name

name: string = "Number"

Const UniqueTokenFilter

UniqueTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.UniqueTokenFilter"

type

type: object

className

className: string = "UniqueTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

onlyOnSamePosition

onlyOnSamePosition: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "onlyOnSamePosition"

type

type: object

name

name: string = "Boolean"

Const WebApiSkill

WebApiSkill: object

serializedName

serializedName: string = "#Microsoft.Skills.Custom.WebApiSkill"

type

type: object

className

className: string = "WebApiSkill"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = SearchIndexerSkill.type.polymorphicDiscriminator

uberParent

uberParent: string = "SearchIndexerSkill"

modelProperties

modelProperties: object

batchSize

batchSize: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "batchSize"

type

type: object

name

name: string = "Number"

degreeOfParallelism

degreeOfParallelism: object

nullable

nullable: boolean = true

serializedName

serializedName: string = "degreeOfParallelism"

type

type: object

name

name: string = "Number"

httpHeaders

httpHeaders: object

serializedName

serializedName: string = "httpHeaders"

type

type: object

name

name: string = "Dictionary"

value

value: object

type

type: object

name

name: string = "String"

httpMethod

httpMethod: object

serializedName

serializedName: string = "httpMethod"

type

type: object

name

name: string = "String"

timeout

timeout: object

serializedName

serializedName: string = "timeout"

type

type: object

name

name: string = "TimeSpan"

uri

uri: object

required

required: boolean = true

serializedName

serializedName: string = "uri"

type

type: object

name

name: string = "String"
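In the WebApiSkill mapper, only `uri` is required; `timeout` is typed as TimeSpan, which serializes as an ISO 8601 duration string. A sketch of the serialized skill (the endpoint, header, and duration values are hypothetical):

```typescript
// Serialized WebApiSkill per the mapper above; "uri" is the only required field.
const webApiSkill = {
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  uri: "https://example.com/enrich",       // required, hypothetical endpoint
  httpMethod: "POST",
  timeout: "PT30S",                        // TimeSpan as ISO 8601 duration
  batchSize: 4,                            // nullable
  httpHeaders: { "x-functions-key": "<key>" }, // Dictionary of string values
};

console.log(webApiSkill.uri);
```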

Const WordDelimiterTokenFilter

WordDelimiterTokenFilter: object

serializedName

serializedName: string = "#Microsoft.Azure.Search.WordDelimiterTokenFilter"

type

type: object

className

className: string = "WordDelimiterTokenFilter"

name

name: string = "Composite"

polymorphicDiscriminator

polymorphicDiscriminator: any = TokenFilter.type.polymorphicDiscriminator

uberParent

uberParent: string = "TokenFilter"

modelProperties

modelProperties: object

catenateAll

catenateAll: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "catenateAll"

type

type: object

name

name: string = "Boolean"

catenateNumbers

catenateNumbers: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "catenateNumbers"

type

type: object

name

name: string = "Boolean"

catenateWords

catenateWords: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "catenateWords"

type

type: object

name

name: string = "Boolean"

generateNumberParts

generateNumberParts: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "generateNumberParts"

type

type: object

name

name: string = "Boolean"

generateWordParts

generateWordParts: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "generateWordParts"

type

type: object

name

name: string = "Boolean"

preserveOriginal

preserveOriginal: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "preserveOriginal"

type

type: object

name

name: string = "Boolean"

protectedWords

protectedWords: object

serializedName

serializedName: string = "protectedWords"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

splitOnCaseChange

splitOnCaseChange: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "splitOnCaseChange"

type

type: object

name

name: string = "Boolean"

splitOnNumerics

splitOnNumerics: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "splitOnNumerics"

type

type: object

name

name: string = "Boolean"

stemEnglishPossessive

stemEnglishPossessive: object

defaultValue

defaultValue: boolean = true

serializedName

serializedName: string = "stemEnglishPossessive"

type

type: object

name

name: string = "Boolean"
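The flat listing above describes a single composite mapper. As a minimal sketch (assuming the mapper conventions used by @azure/core-http; only a few of the properties are shown, and the object below is illustrative, not the SDK's actual export):

```typescript
// Hypothetical reconstruction of the WordDelimiterTokenFilter mapper shape.
// Defaults mirror the listing: catenate* default to false, while
// generate*/splitOn*/stemEnglishPossessive default to true.
const wordDelimiterTokenFilterMapper = {
  serializedName: "#Microsoft.Azure.Search.WordDelimiterTokenFilter",
  type: { name: "Composite", className: "WordDelimiterTokenFilter" },
  uberParent: "TokenFilter",
  modelProperties: {
    catenateAll: { serializedName: "catenateAll", defaultValue: false, type: { name: "Boolean" } },
    generateWordParts: { serializedName: "generateWordParts", defaultValue: true, type: { name: "Boolean" } },
    protectedWords: {
      serializedName: "protectedWords",
      type: { name: "Sequence", element: { type: { name: "String" } } }
    }
  }
};
```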

Const accept

accept: object

parameterPath

parameterPath: string = "accept"

mapper

mapper: object

defaultValue

defaultValue: string = "application/json"

isConstant

isConstant: boolean = true

serializedName

serializedName: string = "Accept"

type

type: object

name

name: string = "String"
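Because `isConstant` is true, a parameter like `accept` always serializes its `defaultValue` regardless of caller input. A hypothetical helper illustrating that behavior (not the SDK's serializer):

```typescript
// Sketch: an isConstant mapper always yields its defaultValue
// ("application/json" for the Accept header here).
const acceptMapper = {
  defaultValue: "application/json",
  isConstant: true,
  serializedName: "Accept"
};

function headerValue(mapper: typeof acceptMapper, provided?: string): string {
  return mapper.isConstant ? mapper.defaultValue : provided ?? mapper.defaultValue;
}
```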

Const allowIndexDowntime

allowIndexDowntime: object

parameterPath

parameterPath: string[] = ["options", "allowIndexDowntime"]

mapper

mapper: object

serializedName

serializedName: string = "allowIndexDowntime"

type

type: object

name

name: string = "Boolean"

Const analyzeOperationSpec

analyzeOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.contentType,Parameters.accept,Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/indexes('{indexName}')/search.analyze"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

requestBody

requestBody: any = Parameters.request

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.AnalyzeResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
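The spec above describes a POST to a templated path, with `urlParameters` supplying the values substituted into `{indexName}`. A hypothetical path expander showing how such a template would resolve (not the SDK's pipeline code):

```typescript
// Sketch: expand an operation spec's templated path with URL parameters.
function expandPath(path: string, urlParams: Record<string, string>): string {
  return path.replace(/\{(\w+)\}/g, (_, name) => encodeURIComponent(urlParams[name]));
}

const requestPath = expandPath("/indexes('{indexName}')/search.analyze", {
  indexName: "hotels"
});
```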

Const answers

answers: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "answers"]

mapper

mapper: object

serializedName

serializedName: string = "answers"

type

type: object

name

name: string = "String"

Const apiVersion

apiVersion: object

parameterPath

parameterPath: string = "apiVersion"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "api-version"

type

type: object

name

name: string = "String"

Const autocompleteGetOperationSpec

autocompleteGetOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/docs/search.autocomplete"

queryParameters

queryParameters: any[] = [Parameters.apiVersion, Parameters.searchText1, Parameters.suggesterName, Parameters.autocompleteMode, Parameters.filter2, Parameters.useFuzzyMatching1, Parameters.highlightPostTag2, Parameters.highlightPreTag2, Parameters.minimumCoverage2, Parameters.searchFields2, Parameters.top2]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.AutocompleteResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const autocompleteMode

autocompleteMode: object

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "autocompleteMode"]

mapper

mapper: object

serializedName

serializedName: string = "autocompleteMode"

type

type: object

allowedValues

allowedValues: string[] = ["oneTerm", "twoTerms", "oneTermWithContext"]

name

name: string = "Enum"
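An `Enum` mapper carries `allowedValues` that the serializer checks before sending the request. A sketch of that validation for the `autocompleteMode` values above (illustrative helper, not the SDK's serializer):

```typescript
// Sketch: reject values outside the Enum mapper's allowedValues.
const autocompleteModeValues = ["oneTerm", "twoTerms", "oneTermWithContext"];

function validateAutocompleteMode(value: string): string {
  if (!autocompleteModeValues.includes(value)) {
    throw new Error(`${value} is not one of: ${autocompleteModeValues.join(", ")}`);
  }
  return value;
}
```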

Const autocompletePostOperationSpec

autocompletePostOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept,Parameters.xMsClientRequestId,Parameters.contentType]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/docs/search.post.autocomplete"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

requestBody

requestBody: any = Parameters.autocompleteRequest

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.AutocompleteResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const autocompleteRequest

autocompleteRequest: object

mapper

mapper: any = AutocompleteRequestMapper

parameterPath

parameterPath: string = "autocompleteRequest"

Const batch

batch: object

mapper

mapper: any = IndexBatchMapper

parameterPath

parameterPath: string = "batch"

Const captions

captions: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "captions"]

mapper

mapper: object

serializedName

serializedName: string = "captions"

type

type: object

name

name: string = "String"

Const contentType

contentType: object

parameterPath

parameterPath: string[] = ["options", "contentType"]

mapper

mapper: object

defaultValue

defaultValue: string = "application/json"

isConstant

isConstant: boolean = true

serializedName

serializedName: string = "Content-Type"

type

type: object

name

name: string = "String"

Const countOperationSpec

countOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/docs/$count"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: object

type

type: object

name

name: string = "Number"

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const createOperationSpec

createOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.contentType,Parameters.accept,Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/indexes"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

requestBody

requestBody: any = Parameters.index

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint]

responses

responses: object

201

201: object

bodyMapper

bodyMapper: any = Mappers.SearchIndex

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const createOrUpdateOperationSpec

createOrUpdateOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.contentType, Parameters.accept, Parameters.xMsClientRequestId, Parameters.ifMatch, Parameters.ifNoneMatch, Parameters.prefer]

httpMethod

httpMethod: string = "PUT"

mediaType

mediaType: string = "json"

path

path: string = "/indexes('{indexName}')"

queryParameters

queryParameters: any[] = [Parameters.apiVersion, Parameters.allowIndexDowntime]

requestBody

requestBody: any = Parameters.index

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.SearchIndex

201

201: object

bodyMapper

bodyMapper: any = Mappers.SearchIndex

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
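The `ifMatch`/`ifNoneMatch` header parameters in this spec implement optimistic concurrency: an `If-Match` header with the resource's ETag makes the PUT fail if the index changed since it was read. A hypothetical helper sketching that header construction (not the SDK's API):

```typescript
// Sketch: build concurrency headers from an ETag, as the createOrUpdate
// spec's ifMatch parameter would carry them.
function concurrencyHeaders(etag?: string, onlyIfUnchanged?: boolean): Record<string, string> {
  const headers: Record<string, string> = {};
  if (onlyIfUnchanged && etag) {
    headers["If-Match"] = etag; // server rejects with 412 if ETag differs
  }
  return headers;
}
```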

Const dataSource

dataSource: object

mapper

mapper: any = SearchIndexerDataSourceMapper

parameterPath

parameterPath: string = "dataSource"

Const dataSourceName

dataSourceName: object

parameterPath

parameterPath: string = "dataSourceName"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "dataSourceName"

type

type: object

name

name: string = "String"

Const deleteOperationSpec

deleteOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept,Parameters.xMsClientRequestId,Parameters.ifMatch,Parameters.ifNoneMatch]

httpMethod

httpMethod: string = "DELETE"

path

path: string = "/indexes('{indexName}')"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

204

204: {}


404

404: {}


default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
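Note that the delete spec lists both 204 (deleted) and 404 (already absent) as terminal responses with empty bodies; any other status falls through to the default `SearchError` mapper. A sketch of that classification (hypothetical helper):

```typescript
// Sketch: 204 and 404 are both treated as successful deletion outcomes.
function isDeleteSuccess(statusCode: number): boolean {
  return statusCode === 204 || statusCode === 404;
}
```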

Const disableCacheReprocessingChangeDetection

disableCacheReprocessingChangeDetection: object

parameterPath

parameterPath: string[] = ["options", "disableCacheReprocessingChangeDetection"]

mapper

mapper: object

serializedName

serializedName: string = "disableCacheReprocessingChangeDetection"

type

type: object

name

name: string = "Boolean"

Let discriminators

discriminators: object

CharFilter

CharFilter: any = CharFilter

CharFilter.#Microsoft.Azure.Search.MappingCharFilter

CharFilter.#Microsoft.Azure.Search.MappingCharFilter: any = MappingCharFilter

CharFilter.#Microsoft.Azure.Search.PatternReplaceCharFilter

CharFilter.#Microsoft.Azure.Search.PatternReplaceCharFilter: any = PatternReplaceCharFilter

CognitiveServicesAccount

CognitiveServicesAccount: any = CognitiveServicesAccount

CognitiveServicesAccount.#Microsoft.Azure.Search.CognitiveServicesByKey

CognitiveServicesAccount.#Microsoft.Azure.Search.CognitiveServicesByKey: any = CognitiveServicesAccountKey

CognitiveServicesAccount.#Microsoft.Azure.Search.DefaultCognitiveServices

CognitiveServicesAccount.#Microsoft.Azure.Search.DefaultCognitiveServices: any = DefaultCognitiveServicesAccount

DataChangeDetectionPolicy

DataChangeDetectionPolicy: any = DataChangeDetectionPolicy

DataChangeDetectionPolicy.#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy

DataChangeDetectionPolicy.#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy: any = HighWaterMarkChangeDetectionPolicy

DataChangeDetectionPolicy.#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy

DataChangeDetectionPolicy.#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy: any = SqlIntegratedChangeTrackingPolicy

DataDeletionDetectionPolicy

DataDeletionDetectionPolicy: any = DataDeletionDetectionPolicy

DataDeletionDetectionPolicy.#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy

DataDeletionDetectionPolicy.#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy: any = SoftDeleteColumnDeletionDetectionPolicy

LexicalAnalyzer

LexicalAnalyzer: any = LexicalAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.CustomAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.CustomAnalyzer: any = CustomAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.PatternAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.PatternAnalyzer: any = PatternAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.StandardAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.StandardAnalyzer: any = LuceneStandardAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.StopAnalyzer

LexicalAnalyzer.#Microsoft.Azure.Search.StopAnalyzer: any = StopAnalyzer

LexicalNormalizer

LexicalNormalizer: any = LexicalNormalizer

LexicalNormalizer.#Microsoft.Azure.Search.CustomNormalizer

LexicalNormalizer.#Microsoft.Azure.Search.CustomNormalizer: any = CustomNormalizer

LexicalTokenizer

LexicalTokenizer: any = LexicalTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.ClassicTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.ClassicTokenizer: any = ClassicTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.EdgeNGramTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.EdgeNGramTokenizer: any = EdgeNGramTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.KeywordTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.KeywordTokenizer: any = KeywordTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.KeywordTokenizerV2

LexicalTokenizer.#Microsoft.Azure.Search.KeywordTokenizerV2: any = KeywordTokenizerV2

LexicalTokenizer.#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer: any = MicrosoftLanguageStemmingTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.MicrosoftLanguageTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.MicrosoftLanguageTokenizer: any = MicrosoftLanguageTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.NGramTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.NGramTokenizer: any = NGramTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.PathHierarchyTokenizerV2

LexicalTokenizer.#Microsoft.Azure.Search.PathHierarchyTokenizerV2: any = PathHierarchyTokenizerV2

LexicalTokenizer.#Microsoft.Azure.Search.PatternTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.PatternTokenizer: any = PatternTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.StandardTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.StandardTokenizer: any = LuceneStandardTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.StandardTokenizerV2

LexicalTokenizer.#Microsoft.Azure.Search.StandardTokenizerV2: any = LuceneStandardTokenizerV2

LexicalTokenizer.#Microsoft.Azure.Search.UaxUrlEmailTokenizer

LexicalTokenizer.#Microsoft.Azure.Search.UaxUrlEmailTokenizer: any = UaxUrlEmailTokenizer

ScoringFunction

ScoringFunction: any = ScoringFunction

ScoringFunction.distance

ScoringFunction.distance: any = DistanceScoringFunction

ScoringFunction.freshness

ScoringFunction.freshness: any = FreshnessScoringFunction

ScoringFunction.magnitude

ScoringFunction.magnitude: any = MagnitudeScoringFunction

ScoringFunction.tag

ScoringFunction.tag: any = TagScoringFunction

SearchIndexerDataIdentity

SearchIndexerDataIdentity: any = SearchIndexerDataIdentity

SearchIndexerDataIdentity.#Microsoft.Azure.Search.SearchIndexerDataNoneIdentity

SearchIndexerDataIdentity.#Microsoft.Azure.Search.SearchIndexerDataNoneIdentity: any = SearchIndexerDataNoneIdentity

SearchIndexerDataIdentity.#Microsoft.Azure.Search.SearchIndexerDataUserAssignedIdentity

SearchIndexerDataIdentity.#Microsoft.Azure.Search.SearchIndexerDataUserAssignedIdentity: any = SearchIndexerDataUserAssignedIdentity

SearchIndexerSkill

SearchIndexerSkill: any = SearchIndexerSkill

SearchIndexerSkill.#Microsoft.Skills.Custom.WebApiSkill

SearchIndexerSkill.#Microsoft.Skills.Custom.WebApiSkill: any = WebApiSkill

SearchIndexerSkill.#Microsoft.Skills.Text.CustomEntityLookupSkill

SearchIndexerSkill.#Microsoft.Skills.Text.CustomEntityLookupSkill: any = CustomEntityLookupSkill

SearchIndexerSkill.#Microsoft.Skills.Text.EntityRecognitionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.EntityRecognitionSkill: any = EntityRecognitionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.KeyPhraseExtractionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.KeyPhraseExtractionSkill: any = KeyPhraseExtractionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.LanguageDetectionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.LanguageDetectionSkill: any = LanguageDetectionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.MergeSkill

SearchIndexerSkill.#Microsoft.Skills.Text.MergeSkill: any = MergeSkill

SearchIndexerSkill.#Microsoft.Skills.Text.PIIDetectionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.PIIDetectionSkill: any = PIIDetectionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.SentimentSkill

SearchIndexerSkill.#Microsoft.Skills.Text.SentimentSkill: any = SentimentSkill

SearchIndexerSkill.#Microsoft.Skills.Text.SplitSkill

SearchIndexerSkill.#Microsoft.Skills.Text.SplitSkill: any = SplitSkill

SearchIndexerSkill.#Microsoft.Skills.Text.TranslationSkill

SearchIndexerSkill.#Microsoft.Skills.Text.TranslationSkill: any = TextTranslationSkill

SearchIndexerSkill.#Microsoft.Skills.Text.V3.EntityLinkingSkill

SearchIndexerSkill.#Microsoft.Skills.Text.V3.EntityLinkingSkill: any = EntityLinkingSkill

SearchIndexerSkill.#Microsoft.Skills.Text.V3.EntityRecognitionSkill

SearchIndexerSkill.#Microsoft.Skills.Text.V3.EntityRecognitionSkill: any = EntityRecognitionSkillV3

SearchIndexerSkill.#Microsoft.Skills.Text.V3.SentimentSkill

SearchIndexerSkill.#Microsoft.Skills.Text.V3.SentimentSkill: any = SentimentSkillV3

SearchIndexerSkill.#Microsoft.Skills.Util.ConditionalSkill

SearchIndexerSkill.#Microsoft.Skills.Util.ConditionalSkill: any = ConditionalSkill

SearchIndexerSkill.#Microsoft.Skills.Util.DocumentExtractionSkill

SearchIndexerSkill.#Microsoft.Skills.Util.DocumentExtractionSkill: any = DocumentExtractionSkill

SearchIndexerSkill.#Microsoft.Skills.Util.ShaperSkill

SearchIndexerSkill.#Microsoft.Skills.Util.ShaperSkill: any = ShaperSkill

SearchIndexerSkill.#Microsoft.Skills.Vision.ImageAnalysisSkill

SearchIndexerSkill.#Microsoft.Skills.Vision.ImageAnalysisSkill: any = ImageAnalysisSkill

SearchIndexerSkill.#Microsoft.Skills.Vision.OcrSkill

SearchIndexerSkill.#Microsoft.Skills.Vision.OcrSkill: any = OcrSkill

Similarity

Similarity: any = Similarity

Similarity.#Microsoft.Azure.Search.BM25Similarity

Similarity.#Microsoft.Azure.Search.BM25Similarity: any = BM25Similarity

Similarity.#Microsoft.Azure.Search.ClassicSimilarity

Similarity.#Microsoft.Azure.Search.ClassicSimilarity: any = ClassicSimilarity

TokenFilter

TokenFilter: any = TokenFilter

TokenFilter.#Microsoft.Azure.Search.AsciiFoldingTokenFilter

TokenFilter.#Microsoft.Azure.Search.AsciiFoldingTokenFilter: any = AsciiFoldingTokenFilter

TokenFilter.#Microsoft.Azure.Search.CjkBigramTokenFilter

TokenFilter.#Microsoft.Azure.Search.CjkBigramTokenFilter: any = CjkBigramTokenFilter

TokenFilter.#Microsoft.Azure.Search.CommonGramTokenFilter

TokenFilter.#Microsoft.Azure.Search.CommonGramTokenFilter: any = CommonGramTokenFilter

TokenFilter.#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter

TokenFilter.#Microsoft.Azure.Search.DictionaryDecompounderTokenFilter: any = DictionaryDecompounderTokenFilter

TokenFilter.#Microsoft.Azure.Search.EdgeNGramTokenFilter

TokenFilter.#Microsoft.Azure.Search.EdgeNGramTokenFilter: any = EdgeNGramTokenFilter

TokenFilter.#Microsoft.Azure.Search.EdgeNGramTokenFilterV2

TokenFilter.#Microsoft.Azure.Search.EdgeNGramTokenFilterV2: any = EdgeNGramTokenFilterV2

TokenFilter.#Microsoft.Azure.Search.ElisionTokenFilter

TokenFilter.#Microsoft.Azure.Search.ElisionTokenFilter: any = ElisionTokenFilter

TokenFilter.#Microsoft.Azure.Search.KeepTokenFilter

TokenFilter.#Microsoft.Azure.Search.KeepTokenFilter: any = KeepTokenFilter

TokenFilter.#Microsoft.Azure.Search.KeywordMarkerTokenFilter

TokenFilter.#Microsoft.Azure.Search.KeywordMarkerTokenFilter: any = KeywordMarkerTokenFilter

TokenFilter.#Microsoft.Azure.Search.LengthTokenFilter

TokenFilter.#Microsoft.Azure.Search.LengthTokenFilter: any = LengthTokenFilter

TokenFilter.#Microsoft.Azure.Search.LimitTokenFilter

TokenFilter.#Microsoft.Azure.Search.LimitTokenFilter: any = LimitTokenFilter

TokenFilter.#Microsoft.Azure.Search.NGramTokenFilter

TokenFilter.#Microsoft.Azure.Search.NGramTokenFilter: any = NGramTokenFilter

TokenFilter.#Microsoft.Azure.Search.NGramTokenFilterV2

TokenFilter.#Microsoft.Azure.Search.NGramTokenFilterV2: any = NGramTokenFilterV2

TokenFilter.#Microsoft.Azure.Search.PatternCaptureTokenFilter

TokenFilter.#Microsoft.Azure.Search.PatternCaptureTokenFilter: any = PatternCaptureTokenFilter

TokenFilter.#Microsoft.Azure.Search.PatternReplaceTokenFilter

TokenFilter.#Microsoft.Azure.Search.PatternReplaceTokenFilter: any = PatternReplaceTokenFilter

TokenFilter.#Microsoft.Azure.Search.PhoneticTokenFilter

TokenFilter.#Microsoft.Azure.Search.PhoneticTokenFilter: any = PhoneticTokenFilter

TokenFilter.#Microsoft.Azure.Search.ShingleTokenFilter

TokenFilter.#Microsoft.Azure.Search.ShingleTokenFilter: any = ShingleTokenFilter

TokenFilter.#Microsoft.Azure.Search.SnowballTokenFilter

TokenFilter.#Microsoft.Azure.Search.SnowballTokenFilter: any = SnowballTokenFilter

TokenFilter.#Microsoft.Azure.Search.StemmerOverrideTokenFilter

TokenFilter.#Microsoft.Azure.Search.StemmerOverrideTokenFilter: any = StemmerOverrideTokenFilter

TokenFilter.#Microsoft.Azure.Search.StemmerTokenFilter

TokenFilter.#Microsoft.Azure.Search.StemmerTokenFilter: any = StemmerTokenFilter

TokenFilter.#Microsoft.Azure.Search.StopwordsTokenFilter

TokenFilter.#Microsoft.Azure.Search.StopwordsTokenFilter: any = StopwordsTokenFilter

TokenFilter.#Microsoft.Azure.Search.SynonymTokenFilter

TokenFilter.#Microsoft.Azure.Search.SynonymTokenFilter: any = SynonymTokenFilter

TokenFilter.#Microsoft.Azure.Search.TruncateTokenFilter

TokenFilter.#Microsoft.Azure.Search.TruncateTokenFilter: any = TruncateTokenFilter

TokenFilter.#Microsoft.Azure.Search.UniqueTokenFilter

TokenFilter.#Microsoft.Azure.Search.UniqueTokenFilter: any = UniqueTokenFilter

TokenFilter.#Microsoft.Azure.Search.WordDelimiterTokenFilter

TokenFilter.#Microsoft.Azure.Search.WordDelimiterTokenFilter: any = WordDelimiterTokenFilter
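The `discriminators` table above is how polymorphic payloads are deserialized: the `@odata.type` string in the wire format selects the concrete mapper under its uber-parent. A sketch of that lookup over an illustrative subset of the TokenFilter entries (not the SDK's serializer):

```typescript
// Sketch: resolve an odata.type discriminator to a concrete mapper name.
const tokenFilterDiscriminators: Record<string, string> = {
  "#Microsoft.Azure.Search.AsciiFoldingTokenFilter": "AsciiFoldingTokenFilter",
  "#Microsoft.Azure.Search.WordDelimiterTokenFilter": "WordDelimiterTokenFilter"
};

function resolveTokenFilter(odataType: string): string {
  const mapperName = tokenFilterDiscriminators[odataType];
  if (!mapperName) throw new Error(`Unknown token filter type: ${odataType}`);
  return mapperName;
}
```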

Const endpoint

endpoint: object

parameterPath

parameterPath: string = "endpoint"

skipEncoding

skipEncoding: boolean = true

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "endpoint"

type

type: object

name

name: string = "String"

Const facets

facets: object

collectionFormat

collectionFormat: string = "Multi"

parameterPath

parameterPath: string[] = ["options", "searchOptions", "facets"]

mapper

mapper: object

serializedName

serializedName: string = "facet"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"
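The `facets` parameter uses `collectionFormat: "Multi"`, which repeats the query key once per element (`facet=a&facet=b`), while parameters such as `orderBy` use `"CSV"`, joining values with commas. A hypothetical serializer illustrating the difference (not the SDK's query builder):

```typescript
// Sketch: serialize a collection query parameter in Multi vs CSV format.
function serializeCollection(
  key: string,
  values: string[],
  format: "Multi" | "CSV"
): string {
  return format === "Multi"
    ? values.map((v) => `${key}=${encodeURIComponent(v)}`).join("&")
    : `${key}=${values.map(encodeURIComponent).join(",")}`;
}
```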

Const filter

filter: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "filter"]

mapper

mapper: object

serializedName

serializedName: string = "$filter"

type

type: object

name

name: string = "String"

Const filter1

filter1: object

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "filter"]

mapper

mapper: object

serializedName

serializedName: string = "$filter"

type

type: object

name

name: string = "String"

Const filter2

filter2: object

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "filter"]

mapper

mapper: object

serializedName

serializedName: string = "$filter"

type

type: object

name

name: string = "String"
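A `parameterPath` given as an array, such as `["options", "autocompleteOptions", "filter"]`, walks the caller's nested options object to find the value to serialize; a plain string path reads a top-level argument. A sketch of that resolution (hypothetical helper, not the SDK's implementation):

```typescript
// Sketch: resolve a parameterPath against the operation arguments.
function resolveParameterPath(args: unknown, path: string | string[]): unknown {
  const segments = typeof path === "string" ? [path] : path;
  return segments.reduce<any>(
    (obj, segment) => (obj == null ? undefined : obj[segment]),
    args
  );
}
```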

Const getOperationSpec

getOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/indexes('{indexName}')"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.SearchIndex

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const getServiceStatisticsOperationSpec

getServiceStatisticsOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/servicestats"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.ServiceStatistics

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const getStatisticsOperationSpec

getStatisticsOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/indexes('{indexName}')/search.stats"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.GetIndexStatisticsResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const getStatusOperationSpec

getStatusOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/indexers('{indexerName}')/search.status"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexerName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.SearchIndexerStatus

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const highlightFields

highlightFields: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "searchOptions", "highlightFields"]

mapper

mapper: object

serializedName

serializedName: string = "highlight"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const highlightPostTag

highlightPostTag: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "highlightPostTag"]

mapper

mapper: object

serializedName

serializedName: string = "highlightPostTag"

type

type: object

name

name: string = "String"

Const highlightPostTag1

highlightPostTag1: object

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "highlightPostTag"]

mapper

mapper: object

serializedName

serializedName: string = "highlightPostTag"

type

type: object

name

name: string = "String"

Const highlightPostTag2

highlightPostTag2: object

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "highlightPostTag"]

mapper

mapper: object

serializedName

serializedName: string = "highlightPostTag"

type

type: object

name

name: string = "String"

Const highlightPreTag

highlightPreTag: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "highlightPreTag"]

mapper

mapper: object

serializedName

serializedName: string = "highlightPreTag"

type

type: object

name

name: string = "String"

Const highlightPreTag1

highlightPreTag1: object

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "highlightPreTag"]

mapper

mapper: object

serializedName

serializedName: string = "highlightPreTag"

type

type: object

name

name: string = "String"

Const highlightPreTag2

highlightPreTag2: object

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "highlightPreTag"]

mapper

mapper: object

serializedName

serializedName: string = "highlightPreTag"

type

type: object

name

name: string = "String"

Const ifMatch

ifMatch: object

parameterPath

parameterPath: string[] = ["options", "ifMatch"]

mapper

mapper: object

serializedName

serializedName: string = "If-Match"

type

type: object

name

name: string = "String"

Const ifNoneMatch

ifNoneMatch: object

parameterPath

parameterPath: string[] = ["options", "ifNoneMatch"]

mapper

mapper: object

serializedName

serializedName: string = "If-None-Match"

type

type: object

name

name: string = "String"

Const includeTotalResultCount

includeTotalResultCount: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "includeTotalResultCount"]

mapper

mapper: object

serializedName

serializedName: string = "$count"

type

type: object

name

name: string = "Boolean"

Const index

index: object

mapper

mapper: any = SearchIndexMapper

parameterPath

parameterPath: string = "index"

Const indexName

indexName: object

parameterPath

parameterPath: string = "indexName"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "indexName"

type

type: object

name

name: string = "String"

Const indexOperationSpec

indexOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept,Parameters.xMsClientRequestId,Parameters.contentType]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/docs/search.index"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

requestBody

requestBody: any = Parameters.batch

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.IndexDocumentsResult

207

207: object

bodyMapper

bodyMapper: any = Mappers.IndexDocumentsResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
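The index operation distinguishes 200 (every batch action succeeded) from 207 (only some did); both carry an `IndexDocumentsResult` body, so callers must inspect the status to detect partial failure. A sketch of that distinction (hypothetical helper, not the SDK's response handling):

```typescript
// Sketch: 200 means the whole batch succeeded; 207 means partial success.
function allDocumentsSucceeded(statusCode: number): boolean {
  if (statusCode !== 200 && statusCode !== 207) {
    throw new Error(`Unexpected status for index operation: ${statusCode}`);
  }
  return statusCode === 200;
}
```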

Const indexer

indexer: object

mapper

mapper: any = SearchIndexerMapper

parameterPath

parameterPath: string = "indexer"

Const indexerName

indexerName: object

parameterPath

parameterPath: string = "indexerName"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "indexerName"

type

type: object

name

name: string = "String"

Const key

key: object

parameterPath

parameterPath: string = "key"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "key"

type

type: object

name

name: string = "String"

Const keysOrIds

keysOrIds: object

mapper

mapper: any = DocumentKeysOrIdsMapper

parameterPath

parameterPath: string[] = ["options", "keysOrIds"]

Const listOperationSpec

listOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/indexes"

queryParameters

queryParameters: any[] = [Parameters.apiVersion, Parameters.select]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.ListIndexesResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const minimumCoverage

minimumCoverage: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "minimumCoverage"]

mapper

mapper: object

serializedName

serializedName: string = "minimumCoverage"

type

type: object

name

name: string = "Number"

Const minimumCoverage1

minimumCoverage1: object

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "minimumCoverage"]

mapper

mapper: object

serializedName

serializedName: string = "minimumCoverage"

type

type: object

name

name: string = "Number"

Const minimumCoverage2

minimumCoverage2: object

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "minimumCoverage"]

mapper

mapper: object

serializedName

serializedName: string = "minimumCoverage"

type

type: object

name

name: string = "Number"

Const orderBy

orderBy: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "searchOptions", "orderBy"]

mapper

mapper: object

serializedName

serializedName: string = "$orderby"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const orderBy1

orderBy1: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "orderBy"]

mapper

mapper: object

serializedName

serializedName: string = "$orderby"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const overwrite

overwrite: object

parameterPath

parameterPath: string[] = ["options", "overwrite"]

mapper

mapper: object

defaultValue

defaultValue: boolean = false

serializedName

serializedName: string = "overwrite"

type

type: object

name

name: string = "Boolean"

Const prefer

prefer: object

parameterPath

parameterPath: string = "prefer"

mapper

mapper: object

defaultValue

defaultValue: string = "return=representation"

isConstant

isConstant: boolean = true

serializedName

serializedName: string = "Prefer"

type

type: object

name

name: string = "String"

Const queryLanguage

queryLanguage: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "queryLanguage"]

mapper

mapper: object

serializedName

serializedName: string = "queryLanguage"

type

type: object

name

name: string = "String"

Const queryType

queryType: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "queryType"]

mapper

mapper: object

serializedName

serializedName: string = "queryType"

type

type: object

allowedValues

allowedValues: string[] = ["simple", "full", "semantic"]

name

name: string = "Enum"
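The queryType mapper is an Enum restricted to its allowedValues. A sketch of the equivalent client-side check, using a type guard over the same value list:

```typescript
// Sketch: validating a queryType value against the mapper's allowedValues.
const queryTypeValues = ["simple", "full", "semantic"] as const;
type QueryType = (typeof queryTypeValues)[number];

function isQueryType(value: string): value is QueryType {
  return (queryTypeValues as readonly string[]).includes(value);
}
```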

Const request

request: object

mapper

mapper: any = AnalyzeRequestMapper

parameterPath

parameterPath: string = "request"

Const resetDocsOperationSpec

resetDocsOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.contentType,Parameters.accept,Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/indexers('{indexerName}')/search.resetdocs"

queryParameters

queryParameters: any[] = [Parameters.apiVersion, Parameters.overwrite]

requestBody

requestBody: any = Parameters.keysOrIds

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexerName]

responses

responses: object

204

204: {}


default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
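resetDocsOperationSpec's path template contains the indexerName URL parameter inside an OData-style segment. A minimal sketch of filling it in (illustrative only; the SDK's URL construction is more involved):

```typescript
// Sketch: substituting the indexerName urlParameter into
// resetDocsOperationSpec's path template.
function buildResetDocsPath(indexerName: string): string {
  return `/indexers('${encodeURIComponent(indexerName)}')/search.resetdocs`;
}
```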

Const resetOperationSpec

resetOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "POST"

path

path: string = "/indexers('{indexerName}')/search.reset"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexerName]

responses

responses: object

204

204: {}


default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const resetSkillsOperationSpec

resetSkillsOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.contentType,Parameters.accept,Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/skillsets('{skillsetName}')/search.resetskills"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

requestBody

requestBody: any = Parameters.skillNames

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.skillsetName]

responses

responses: object

204

204: {}


default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const runOperationSpec

runOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "POST"

path

path: string = "/indexers('{indexerName}')/search.run"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexerName]

responses

responses: object

202

202: {}


default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const scoringParameters

scoringParameters: object

collectionFormat

collectionFormat: string = "Multi"

parameterPath

parameterPath: string[] = ["options", "searchOptions", "scoringParameters"]

mapper

mapper: object

serializedName

serializedName: string = "scoringParameter"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"
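Note that scoringParameters uses collectionFormat "Multi", unlike the CSV parameters above: each value becomes its own scoringParameter=... query pair rather than a single comma-joined pair. A sketch contrasting the two formats:

```typescript
// Sketch: "Multi" emits one name=value pair per element;
// "CSV" joins the elements with commas into a single pair.
function multiFormat(name: string, values: string[]): string {
  return values.map((v) => `${name}=${encodeURIComponent(v)}`).join("&");
}

function csvFormat(name: string, values: string[]): string {
  return `${name}=${values.map(encodeURIComponent).join(",")}`;
}
```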

Const scoringProfile

scoringProfile: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "scoringProfile"]

mapper

mapper: object

serializedName

serializedName: string = "scoringProfile"

type

type: object

name

name: string = "String"

Const scoringStatistics

scoringStatistics: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "scoringStatistics"]

mapper

mapper: object

serializedName

serializedName: string = "scoringStatistics"

type

type: object

allowedValues

allowedValues: string[] = ["local", "global"]

name

name: string = "Enum"

Const searchFields

searchFields: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "searchOptions", "searchFields"]

mapper

mapper: object

serializedName

serializedName: string = "searchFields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const searchFields1

searchFields1: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "searchFields"]

mapper

mapper: object

serializedName

serializedName: string = "searchFields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const searchFields2

searchFields2: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "searchFields"]

mapper

mapper: object

serializedName

serializedName: string = "searchFields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const searchGetOperationSpec

searchGetOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/docs"

queryParameters

queryParameters: any[] = [Parameters.apiVersion, Parameters.searchText, Parameters.includeTotalResultCount, Parameters.facets, Parameters.filter, Parameters.highlightFields, Parameters.highlightPostTag, Parameters.highlightPreTag, Parameters.minimumCoverage, Parameters.orderBy, Parameters.queryType, Parameters.scoringParameters, Parameters.scoringProfile, Parameters.semanticConfiguration, Parameters.searchFields, Parameters.queryLanguage, Parameters.speller, Parameters.answers, Parameters.searchMode, Parameters.scoringStatistics, Parameters.sessionId, Parameters.select, Parameters.skip, Parameters.top, Parameters.captions, Parameters.semanticFields]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.SearchDocumentsResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
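searchGetOperationSpec mixes several parameter kinds in one query string. A sketch combining a scalar (search), a CSV sequence ($orderby), and an enum (searchMode); the api-version value is an assumption for illustration, and this is not the SDK's actual serializer:

```typescript
// Sketch: building a GET /docs query string from a few of the
// searchGetOperationSpec queryParameters.
function buildDocsQuery(options: {
  searchText?: string;
  orderBy?: string[];
  searchMode?: "any" | "all";
}): string {
  const parts = ["api-version=2021-04-30-Preview"]; // assumed version
  if (options.searchText !== undefined) {
    parts.push(`search=${encodeURIComponent(options.searchText)}`);
  }
  if (options.orderBy && options.orderBy.length > 0) {
    // $orderby is a CSV-formatted Sequence of Strings.
    parts.push(`$orderby=${options.orderBy.map(encodeURIComponent).join(",")}`);
  }
  if (options.searchMode !== undefined) {
    parts.push(`searchMode=${options.searchMode}`);
  }
  return `/docs?${parts.join("&")}`;
}
```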

Const searchMode

searchMode: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "searchMode"]

mapper

mapper: object

serializedName

serializedName: string = "searchMode"

type

type: object

allowedValues

allowedValues: string[] = ["any", "all"]

name

name: string = "Enum"

Const searchPostOperationSpec

searchPostOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept,Parameters.xMsClientRequestId,Parameters.contentType]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/docs/search.post.search"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

requestBody

requestBody: any = Parameters.searchRequest

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.SearchDocumentsResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
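searchPostOperationSpec sends the same search options as a JSON request body (mediaType "json", requestBody from SearchRequestMapper) instead of query parameters. A sketch of the body shape; the camelCase field names below are assumptions for illustration, not taken from this page:

```typescript
// Sketch: a JSON body for POST /docs/search.post.search.
// Field names are assumed; the real shape comes from SearchRequestMapper.
interface SearchRequestSketch {
  search?: string;
  queryType?: "simple" | "full" | "semantic";
  searchMode?: "any" | "all";
  top?: number;
}

function buildSearchPostBody(request: SearchRequestSketch): string {
  return JSON.stringify(request);
}
```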

Const searchRequest

searchRequest: object

mapper

mapper: any = SearchRequestMapper

parameterPath

parameterPath: string = "searchRequest"

Const searchText

searchText: object

parameterPath

parameterPath: string[] = ["options", "searchText"]

mapper

mapper: object

serializedName

serializedName: string = "search"

type

type: object

name

name: string = "String"

Const searchText1

searchText1: object

parameterPath

parameterPath: string = "searchText"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "search"

type

type: object

name

name: string = "String"

Const select

select: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "select"]

mapper

mapper: object

serializedName

serializedName: string = "$select"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const select1

select1: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "select"]

mapper

mapper: object

serializedName

serializedName: string = "$select"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const selectedFields

selectedFields: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "selectedFields"]

mapper

mapper: object

serializedName

serializedName: string = "$select"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const semanticConfiguration

semanticConfiguration: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "semanticConfiguration"]

mapper

mapper: object

serializedName

serializedName: string = "semanticConfiguration"

type

type: object

name

name: string = "String"

Const semanticFields

semanticFields: object

collectionFormat

collectionFormat: string = "CSV"

parameterPath

parameterPath: string[] = ["options", "searchOptions", "semanticFields"]

mapper

mapper: object

serializedName

serializedName: string = "semanticFields"

type

type: object

name

name: string = "Sequence"

element

element: object

type

type: object

name

name: string = "String"

Const sessionId

sessionId: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "sessionId"]

mapper

mapper: object

serializedName

serializedName: string = "sessionId"

type

type: object

name

name: string = "String"

Const skillNames

skillNames: object

mapper

mapper: any = SkillNamesMapper

parameterPath

parameterPath: string = "skillNames"

Const skillset

skillset: object

mapper

mapper: any = SearchIndexerSkillsetMapper

parameterPath

parameterPath: string = "skillset"

Const skillsetName

skillsetName: object

parameterPath

parameterPath: string = "skillsetName"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "skillsetName"

type

type: object

name

name: string = "String"

Const skip

skip: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "skip"]

mapper

mapper: object

serializedName

serializedName: string = "$skip"

type

type: object

name

name: string = "Number"

Const skipIndexerResetRequirementForCache

skipIndexerResetRequirementForCache: object

parameterPath

parameterPath: string[] = ["options", "skipIndexerResetRequirementForCache"]

mapper

mapper: object

serializedName

serializedName: string = "ignoreResetRequirements"

type

type: object

name

name: string = "Boolean"

Const speller

speller: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "speller"]

mapper

mapper: object

serializedName

serializedName: string = "speller"

type

type: object

name

name: string = "String"

Const suggestGetOperationSpec

suggestGetOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept, Parameters.xMsClientRequestId]

httpMethod

httpMethod: string = "GET"

path

path: string = "/docs/search.suggest"

queryParameters

queryParameters: any[] = [Parameters.apiVersion, Parameters.searchText1, Parameters.suggesterName, Parameters.filter1, Parameters.useFuzzyMatching, Parameters.highlightPostTag1, Parameters.highlightPreTag1, Parameters.minimumCoverage1, Parameters.orderBy1, Parameters.searchFields1, Parameters.select1, Parameters.top1]

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.SuggestDocumentsResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError
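suggestGetOperationSpec requires the searchText1 and suggesterName parameters; the useFuzzyMatching option serializes as the "fuzzy" query parameter. A minimal sketch (the api-version and other optional parameters are omitted for brevity):

```typescript
// Sketch: the required suggest parameters plus the optional fuzzy flag.
function buildSuggestQuery(
  searchText: string,
  suggesterName: string,
  fuzzy?: boolean
): string {
  const parts = [
    `search=${encodeURIComponent(searchText)}`,
    `suggesterName=${encodeURIComponent(suggesterName)}`,
  ];
  if (fuzzy !== undefined) {
    parts.push(`fuzzy=${fuzzy}`);
  }
  return `/docs/search.suggest?${parts.join("&")}`;
}
```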

Const suggestPostOperationSpec

suggestPostOperationSpec: object

headerParameters

headerParameters: any[] = [Parameters.accept,Parameters.xMsClientRequestId,Parameters.contentType]

httpMethod

httpMethod: string = "POST"

mediaType

mediaType: string = "json"

path

path: string = "/docs/search.post.suggest"

queryParameters

queryParameters: any[] = [Parameters.apiVersion]

requestBody

requestBody: any = Parameters.suggestRequest

serializer

serializer: any

urlParameters

urlParameters: any[] = [Parameters.endpoint, Parameters.indexName]

responses

responses: object

200

200: object

bodyMapper

bodyMapper: any = Mappers.SuggestDocumentsResult

default

default: object

bodyMapper

bodyMapper: any = Mappers.SearchError

Const suggestRequest

suggestRequest: object

mapper

mapper: any = SuggestRequestMapper

parameterPath

parameterPath: string = "suggestRequest"

Const suggesterName

suggesterName: object

parameterPath

parameterPath: string = "suggesterName"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "suggesterName"

type

type: object

name

name: string = "String"

Const synonymMap

synonymMap: object

mapper

mapper: any = SynonymMapMapper

parameterPath

parameterPath: string = "synonymMap"

Const synonymMapName

synonymMapName: object

parameterPath

parameterPath: string = "synonymMapName"

mapper

mapper: object

required

required: boolean = true

serializedName

serializedName: string = "synonymMapName"

type

type: object

name

name: string = "String"

Const top

top: object

parameterPath

parameterPath: string[] = ["options", "searchOptions", "top"]

mapper

mapper: object

serializedName

serializedName: string = "$top"

type

type: object

name

name: string = "Number"

Const top1

top1: object

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "top"]

mapper

mapper: object

serializedName

serializedName: string = "$top"

type

type: object

name

name: string = "Number"

Const top2

top2: object

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "top"]

mapper

mapper: object

serializedName

serializedName: string = "$top"

type

type: object

name

name: string = "Number"

Const useFuzzyMatching

useFuzzyMatching: object

parameterPath

parameterPath: string[] = ["options", "suggestOptions", "useFuzzyMatching"]

mapper

mapper: object

serializedName

serializedName: string = "fuzzy"

type

type: object

name

name: string = "Boolean"

Const useFuzzyMatching1

useFuzzyMatching1: object

parameterPath

parameterPath: string[] = ["options", "autocompleteOptions", "useFuzzyMatching"]

mapper

mapper: object

serializedName

serializedName: string = "fuzzy"

type

type: object

name

name: string = "Boolean"

Const xMsClientRequestId

xMsClientRequestId: object

parameterPath

parameterPath: string[] = ["options", "requestOptionsParam", "xMsClientRequestId"]

mapper

mapper: object

serializedName

serializedName: string = "x-ms-client-request-id"

type

type: object

name

name: string = "Uuid"
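The x-ms-client-request-id header mapper has type Uuid. A sketch of validating a candidate value client-side before sending it (a simple pattern check, not the SDK's behavior):

```typescript
// Sketch: checking that a client request id is a well-formed UUID,
// matching the header mapper's Uuid type.
const uuidPattern =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isValidClientRequestId(value: string): boolean {
  return uuidPattern.test(value);
}
```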

Generated using TypeDoc