All Classes and Interfaces

Information about a token returned by an analyzer.
Specifies some text and analysis components used to break that text into tokens.
An answer is a text passage extracted from the contents of the most relevant documents that matched the query.
Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist.
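The folding described above can be sketched with the JDK's Unicode normalizer: decomposing accented characters (NFD) splits off their combining marks, which can then be stripped to leave Basic Latin base letters. This is an illustrative sketch of the idea only; the actual ASCII-folding filter covers many more mappings than combining-mark removal.

```java
import java.text.Normalizer;

public class AsciiFoldSketch {
    // Decompose accented characters (NFD), then strip the combining
    // marks (\p{M}) that the decomposition split off, leaving the
    // ASCII base letters. Hypothetical helper name, for illustration.
    public static String foldToAscii(String input) {
        String decomposed = Normalizer.normalize(input, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}", "");
    }

    public static void main(String[] args) {
        System.out.println(foldToAscii("café déjà-vu")); // prints "cafe deja-vu"
    }
}
```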
The result of Autocomplete requests.
Specifies the mode for Autocomplete.
Parameter group.
Implementation of PagedFluxBase where the element type is AutocompleteItem and the page type is AutocompletePagedResponse.
Implementation of PagedIterableBase where the element type is AutocompleteItem and the page type is AutocompletePagedResponse.
This class represents a response from the autocomplete API.
The result of Autocomplete query.
The AML skill allows you to extend AI enrichment with a custom Azure Machine Learning (AML) model.
Specifies the data to extract from Azure blob storage and tells the indexer which data to extract from image content when "imageAction" is set to a value other than "none".
Determines how to process embedded images and image files in Azure blob storage.
Represents the parsing mode for indexing from an Azure blob data source.
Determines algorithm for text extraction from PDF files in Azure blob storage.
Ranking function based on the Okapi BM25 similarity algorithm.
Captions are the most representative passages from the document relative to the search query.
Base type for character filters.
Defines the names of all character filters supported by Azure Cognitive Search.
Forms bigrams of CJK terms that are generated from the standard tokenizer.
Scripts that can be ignored by CjkBigramTokenFilter.
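The bigramming above can be sketched as pairing each adjacent run of characters into one token, which is the core of how CJK text (where words are not space-delimited) becomes searchable units. This is a concept sketch with a hypothetical helper name; the real filter also handles script detection, unigram output modes, and surrogate pairs, all omitted here for brevity.

```java
import java.util.ArrayList;
import java.util.List;

public class CjkBigramSketch {
    // Emit every pair of adjacent characters as one token.
    // (Simplified: treats the string as one char per code point.)
    public static List<String> bigrams(String text) {
        List<String> tokens = new ArrayList<>();
        for (int i = 0; i + 1 < text.length(); i++) {
            tokens.add(text.substring(i, i + 2));
        }
        return tokens;
    }
}
```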
Legacy similarity algorithm which uses the Lucene TFIDFSimilarity implementation of TF-IDF.
Grammar-based tokenizer that is suitable for processing most European-language documents.
Base type for describing any cognitive service resource attached to a skillset.
A cognitive service resource provisioned with a key that is attached to a skillset.
Construct bigrams for frequently occurring terms while indexing.
A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output.
Defines options to control Cross-Origin Resource Sharing (CORS) for an index.
This model represents a property bag containing all options for creating or updating a data source connection.
This model represents a property bag containing all options for creating or updating an indexer.
This model represents a property bag containing all options for creating or updating a skillset.
Allows you to take control over the process of converting text into indexable/searchable tokens.
An object that contains information about the matches that were found, and related metadata.
A complex object that can be used to specify alternative spellings or synonyms to the root entity name.
A skill that looks for text from a custom, user-defined list of words and phrases.
The language codes supported for input text by CustomEntityLookupSkill.
Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching.
Base type for data change detection policies.
Base type for data deletion detection policies.
An empty object that represents the default cognitive service resource for a skillset.
Decomposes compound words found in many Germanic languages.
Defines a function that boosts scores based on distance from a geographic location.
Provides parameter values to a distance scoring function.
A skill that extracts content from a file within the enrichment pipeline.
Generates n-grams of the given size(s) starting from the front or the back of an input token.
Specifies which side of the input an n-gram should be generated from.
Tokenizes the input from an edge into n-grams of the given size(s).
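Edge n-gram generation from the front of a token, as described above, can be sketched as emitting every prefix between a minimum and maximum length; this is what makes prefix ("search-as-you-type") matching cheap at query time. Illustrative only, with a hypothetical helper name; the real filter also supports generating from the back of the token.

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeNGramSketch {
    // Emit prefixes of the token from minGram to maxGram characters,
    // mirroring the "front" side of edge n-gram generation.
    public static List<String> edgeNGrams(String token, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        int limit = Math.min(maxGram, token.length());
        for (int n = minGram; n <= limit; n++) {
            grams.add(token.substring(0, n));
        }
        return grams;
    }
}
```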
Removes elisions.
A string indicating what entity categories to return.
Using the Text Analytics API, extracts linked entities from text.
Text analytics entity recognition.
The language codes supported for input text by EntityRecognitionSkill.
Represents the version of EntityRecognitionSkill.
A single bucket of a facet query result.
Marker annotation that indicates the field or method is to be ignored by converting to SearchField.
Additional parameters to build SearchField.
Defines a mapping between a field in a data source and a target field in an index.
Represents a function that transforms a value from a data source before indexing.
Defines a function that boosts scores based on the value of a date-time field.
Provides parameter values to a freshness scoring function.
Defines a data change detection policy that captures changes based on the value of a high water mark column.
A skill that analyzes image files.
The language codes supported for input by ImageAnalysisSkill.
A string indicating which domain-specific details to return.
Represents an index action that operates on a document.
The operation to perform on a document in an indexing batch.
Contains a batch of document write actions to send to the index.
An IndexBatchException is thrown whenever an Azure Cognitive Search indexing call is only partially successful.
Contains a batch of document write actions to send to the index.
Options for document index operations.
Response containing the status of operations for all documents in the indexing request.
Represents all of the state that defines and dictates the indexer's current execution.
Specifies the environment in which the indexer should execute.
Represents the result of an individual indexer execution.
Represents the status of an individual indexer execution.
Details the status of an individual indexer execution.
Represents the overall indexer status.
Represents the mode the indexer is executing in.
Represents parameters for indexer execution.
A dictionary of indexer-specific configuration properties.
Status of an indexing operation for a single document.
Represents a schedule for indexer execution.
Input field mapping for a skill.
A token filter that only keeps tokens with text contained in a specified list of words.
A skill that uses text analytics for key phrase extraction.
The language codes supported for input text by KeyPhraseExtractionSkill.
Marks terms as keywords.
Emits the entire input as a single token.
A skill that detects the language of input text and reports a single language code for every document submitted on the request.
Removes words that are too long or too short.
Base type for analyzers.
Defines the names of all text analyzers supported by Azure Cognitive Search.
Base type for normalizers.
Defines the names of all text normalizers supported by Azure Cognitive Search.
Base type for tokenizers.
Defines the names of all tokenizers supported by Azure Cognitive Search.
Limits the number of tokens while indexing.
Defines the sequence of characters to use between the lines of text recognized by the OCR skill.
Standard Apache Lucene analyzer, composed of the standard tokenizer, lowercase filter, and stop filter.
Breaks text following the Unicode Text Segmentation rules.
Defines a function that boosts scores based on the magnitude of a numeric field.
Provides parameter values to a magnitude scoring function.
A character filter that applies mappings defined with the mappings option.
A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part.
Divides text using language-specific rules and reduces words to their base forms.
Divides text using language-specific rules.
Lists the languages supported by the Microsoft language stemming tokenizer.
Lists the languages supported by the Microsoft language tokenizer.
Generates n-grams of the given size(s).
Tokenizes the input into n-grams of the given size(s).
A skill that extracts text from image files.
The language codes supported for input by OcrSkill.
Options passed when onActionAdded(Consumer) is called.
Options passed when onActionError(Consumer) is called.
Options passed when onActionSent(Consumer) is called.
Options passed when onActionSucceeded(Consumer) is called.
Output field mapping for a skill.
Tokenizer for path-like hierarchies.
Flexibly separates text into terms via a regular expression pattern.
Uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns.
A character filter that replaces characters in the input string.
A character filter that replaces characters in the input string.
Tokenizer that uses regex pattern matching to construct distinct tokens.
Identifies the type of phonetic encoder to use with a PhoneticTokenFilter.
Create tokens for phonetic matches.
Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it.
A string indicating what maskingMode to use to mask the personal information detected in the input text.
Describes the title, content, and keywords fields to be used for semantic ranking, captions, highlights, and answers.
This parameter is only valid if the query type is 'semantic'.
This parameter is only valid if the query type is 'semantic'.
The language of the query.
Improve search recall by spell-correcting individual search query terms.
Specifies the syntax of the search query.
A single bucket of a range facet query result that reports the number of documents with a field value falling within a particular range.
Defines flags that can be combined to control how regular expressions are used in the pattern analyzer and pattern tokenizer.
Represents a resource's usage and quota.
Base type for functions that can modify document scores during ranking.
Defines the aggregation function used to combine the results of all the scoring functions in a scoring profile.
Defines the function used to interpolate score boosting across a range of documents.
Represents a parameter value to be used in scoring functions (for example, referencePointParameter).
Defines parameters for a search index that influence scoring in search queries.
A value that specifies whether we want to calculate scoring statistics (such as document frequency) globally for more consistent scoring, or locally, for lower latency.
An annotation that directs SearchIndexAsyncClient.buildSearchFields(Class, FieldBuilderOptions) to turn the field or method into a searchable field.
Represents an index alias, which describes a mapping from the alias name to an index.
This class provides a client that contains the operations for querying an index and uploading, merging, or deleting documents in an Azure Cognitive Search service.
Cloud audiences available for Search.
This class provides a client that contains the operations for querying an index and uploading, merging, or deleting documents in an Azure Cognitive Search service.
This class provides a fluent builder API to aid the configuration and instantiation of SearchClients and SearchAsyncClients.
Represents an untyped document returned from a search or document lookup.
Represents a field in an index definition, which describes the name, data type, and search behavior of a field.
Defines the data type of a field in a search index.
This class is used to help construct valid OData filter expressions by automatically replacing, quoting, and escaping string parameters.
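The quoting rule applied to string parameters, as described above, follows the OData literal convention: embedded single quotes are doubled and the value is wrapped in single quotes, which prevents a user-supplied value from breaking out of the filter expression. A minimal sketch of that rule (hypothetical helper name; SearchFilter applies it automatically to string arguments):

```java
public class ODataQuoteSketch {
    // Double embedded single quotes, then wrap the value in single
    // quotes, per the OData string-literal escaping convention.
    public static String quote(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        System.out.println("LastName eq " + quote("O'Brien"));
        // prints: LastName eq 'O''Brien'
    }
}
```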
Represents a search index definition, which describes the fields and search behavior of an index.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting indexes or synonym map and analyzing text in an Azure Cognitive Search service.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting indexes or synonym map and analyzing text in an Azure Cognitive Search service.
This class provides a fluent builder API to aid the configuration and instantiation of SearchIndexClients and SearchIndexAsyncClients.
Represents an indexer.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting data source connections, indexers, or skillsets and running or resetting indexers in an Azure Cognitive Search service.
The SearchIndexerCache model.
This class provides a client that contains the operations for creating, getting, listing, updating, or deleting data source connections, indexers, or skillsets and running or resetting indexers in an Azure Cognitive Search service.
This class provides a fluent builder API to aid the configuration and instantiation of SearchIndexerClients and SearchIndexerAsyncClients.
Represents information about the entity (such as Azure SQL table or CosmosDB collection) that will be indexed.
Abstract base type for data identities.
Clears the identity property of a datasource.
Represents a datasource definition, which can be used to configure an indexer.
Utility class that aids in the creation of SearchIndexerDataSourceConnections.
Defines the type of a datasource.
Specifies the identity for a datasource to use.
Represents an item- or document-level indexing error.
Definition of additional projections of enriched data to Azure Blob storage, Table storage, or Files.
Abstract class to share properties between concrete selectors.
Projection definition for what data to store in Azure Files.
Projection definition for what data to store in Azure Blob.
Container object for various projection selectors.
Abstract class to share properties between concrete selectors.
Description for what data to store in Azure Tables.
The SearchIndexerLimits model.
Base type for skills.
A list of skills.
Represents the current status and execution history of an indexer.
Represents an item-level warning.
This class provides a buffered sender that contains operations for conveniently indexing documents to an Azure Search index.
This class provides a buffered sender that contains operations for conveniently indexing documents to an Azure Search index.
Statistics for a given index.
Specifies whether any or all of the search terms must be matched in order to count the document as a match.
Additional parameters for searchGet operation.
Implementation of ContinuablePagedFlux where the continuation token type is SearchRequest, the element type is SearchResult, and the page type is SearchPagedResponse.
Implementation of ContinuablePagedIterable where the continuation token type is SearchRequest, the element type is SearchResult, and the page type is SearchPagedResponse.
Represents an HTTP response from the search API request that contains a list of items deserialized into a Page.
A customer-managed encryption key in Azure Key Vault.
Contains a document found by a search query, plus associated metadata.
Represents service-level resource counters and quotas.
Represents various service level limits.
Response from a get service statistics request.
The versions of Azure Cognitive Search supported by this client library.
Defines how the Suggest API should apply to a group of fields in the index.
Defines a specific configuration to be used in the context of semantic capabilities.
A field that is used as part of the semantic configuration.
Defines parameters for a search index that influence semantic capabilities.
Text analytics positive-negative sentiment analysis, scored as a floating-point value in a range from 0 to 1.
The language codes supported for input text by SentimentSkill.
Represents the version of SentimentSkill.
A skill for reshaping the outputs.
Creates combinations of tokens as a single token.
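The token combination described above (shingling) can be sketched as joining each run of adjacent tokens into one combined token, which lets phrase-like units match as single terms. A concept sketch with a hypothetical helper name, showing window size 2:

```java
import java.util.ArrayList;
import java.util.List;

public class ShingleSketch {
    // Join each window of `size` adjacent tokens into one combined token.
    public static List<String> shingles(List<String> tokens, int size) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + size <= tokens.size(); i++) {
            out.add(String.join(" ", tokens.subList(i, i + size)));
        }
        return out;
    }
}
```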
Base type for similarity algorithms.
An annotation that directs SearchIndexAsyncClient.buildSearchFields(Class, FieldBuilderOptions) to turn the field or method into a non-searchable field.
A filter that stems words using a Snowball-generated stemmer.
The language to use for a Snowball token filter.
Defines a data deletion detection policy that implements a soft-deletion strategy.
A skill to split a string into chunks of text.
The language codes supported for input text by SplitSkill.
Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database.
Provides the ability to override other stemming filters with custom dictionary-based stemming.
Language specific stemming filter.
The language to use for a stemmer token filter.
Divides text at non-letters and applies the lowercase and stopword token filters.
Identifies a predefined list of language-specific stopwords.
Removes stop words from a token stream.
Parameter group.
Implementation of PagedFluxBase where the element type is SuggestResult and the page type is SuggestPagedResponse.
Implementation of PagedIterableBase where the element type is SuggestResult and the page type is SuggestPagedResponse.
Represents an HTTP response from the suggest API request that contains a list of items deserialized into a Page.
A result containing a document found by a suggestion query, plus associated metadata.
Represents a synonym map definition.
Matches single or multi-word synonyms in a token stream.
Defines a function that boosts scores of documents with string values matching a given list of tags.
Provides parameter values to a tag scoring function.
A value indicating which split mode to perform.
A skill to translate text from one language to another.
The language codes supported for input text by TextTranslationSkill.
Defines weights on index fields for which matches should boost scoring in search queries.
Represents classes of characters on which a token filter can operate.
Base type for token filters.
Defines the names of all token filters supported by Azure Cognitive Search.
Truncates the terms to a specific length.
Tokenizes URLs and emails as one token.
Filters out tokens with same text as the previous token.
A single bucket of a simple or interval facet query result that reports the number of documents with a field falling within a particular interval or having a specific value.
The strings indicating what visual feature types to return.
A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code.
Splits words into subwords and performs optional transformations on subword groups.