Class | Description |
---|---|
AnalyzedTokenInfo | Information about a token returned by an analyzer. |
AnalyzeTextOptions | Specifies some text and analysis components used to break that text into tokens. |
AsciiFoldingTokenFilter | Converts alphabetic, numeric, and symbolic Unicode characters that are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if such equivalents exist. |
BlobIndexerDataToExtract | Defines values for BlobIndexerDataToExtract. |
BlobIndexerImageAction | Defines values for BlobIndexerImageAction. |
BlobIndexerParsingMode | Defines values for BlobIndexerParsingMode. |
BlobIndexerPdfTextRotationAlgorithm | Defines values for BlobIndexerPdfTextRotationAlgorithm. |
BM25SimilarityAlgorithm | Ranking function based on the Okapi BM25 similarity algorithm. |
CharFilter | Base type for character filters. |
CharFilterName | Defines values for CharFilterName. |
CjkBigramTokenFilter | Forms bigrams of CJK terms that are generated from the standard tokenizer. |
ClassicSimilarityAlgorithm | Legacy similarity algorithm that uses the Lucene TFIDFSimilarity implementation of TF-IDF. |
ClassicTokenizer | Grammar-based tokenizer that is suitable for processing most European-language documents. |
CognitiveServicesAccount | Base type for describing any cognitive service resource attached to a skillset. |
CognitiveServicesAccountKey | A cognitive service resource provisioned with a key that is attached to a skillset. |
CommonGramTokenFilter | Constructs bigrams for frequently occurring terms while indexing. |
ConditionalSkill | A skill that enables scenarios that require a Boolean operation to determine the data to assign to an output. |
CorsOptions | Defines options to control Cross-Origin Resource Sharing (CORS) for an index. |
CreateOrUpdateDataSourceConnectionOptions | This model represents a property bag containing all options for creating or updating a data source connection. |
CreateOrUpdateIndexerOptions | This model represents a property bag containing all options for creating or updating an indexer. |
CreateOrUpdateSkillsetOptions | This model represents a property bag containing all options for creating or updating a skillset. |
CustomAnalyzer | Allows you to take control over the process of converting text into indexable/searchable tokens. |
CustomEntity | An object that contains information about the matches that were found, and related metadata. |
CustomEntityAlias | A complex object that can be used to specify alternative spellings or synonyms to the root entity name. |
CustomEntityLookupSkill | A skill that looks for text from a custom, user-defined list of words and phrases. |
CustomEntityLookupSkillLanguage | Defines values for CustomEntityLookupSkillLanguage. |
CustomNormalizer | Allows you to configure normalization for filterable, sortable, and facetable fields, which by default operate with strict matching. |
DataChangeDetectionPolicy | Base type for data change detection policies. |
DataDeletionDetectionPolicy | Base type for data deletion detection policies. |
DefaultCognitiveServicesAccount | An empty object that represents the default cognitive service resource for a skillset. |
DictionaryDecompounderTokenFilter | Decomposes compound words found in many Germanic languages. |
DistanceScoringFunction | Defines a function that boosts scores based on distance from a geographic location. |
DistanceScoringParameters | Provides parameter values to a distance scoring function. |
DocumentExtractionSkill | A skill that extracts content from a file within the enrichment pipeline. |
EdgeNGramTokenFilter | Generates n-grams of the given size(s) starting from the front or the back of an input token. |
EdgeNGramTokenizer | Tokenizes the input from an edge into n-grams of the given size(s). |
ElisionTokenFilter | Removes elisions. |
EntityCategory | Defines values for EntityCategory. |
EntityLinkingSkill | Using the Text Analytics API, extracts linked entities from text. |
EntityRecognitionSkill | Text analytics entity recognition. |
EntityRecognitionSkillLanguage | Defines values for EntityRecognitionSkillLanguage. |
FieldBuilderOptions | Additional parameters to build SearchField. |
FieldMapping | Defines a mapping between a field in a data source and a target field in an index. |
FieldMappingFunction | Represents a function that transforms a value from a data source before indexing. |
FreshnessScoringFunction | Defines a function that boosts scores based on the value of a date-time field. |
FreshnessScoringParameters | Provides parameter values to a freshness scoring function. |
HighWaterMarkChangeDetectionPolicy | Defines a data change detection policy that captures changes based on the value of a high water mark column. |
ImageAnalysisSkill | A skill that analyzes image files. |
ImageAnalysisSkillLanguage | Defines values for ImageAnalysisSkillLanguage. |
ImageDetail | Defines values for ImageDetail. |
IndexDocumentsBatch<T> | Contains a batch of document write actions to send to the index. |
IndexerExecutionEnvironment | Defines values for IndexerExecutionEnvironment. |
IndexerExecutionResult | Represents the result of an individual indexer execution. |
IndexingParameters | Represents parameters for indexer execution. |
IndexingParametersConfiguration | A dictionary of indexer-specific configuration properties. |
IndexingSchedule | Represents a schedule for indexer execution. |
InputFieldMappingEntry | Input field mapping for a skill. |
KeepTokenFilter | A token filter that only keeps tokens with text contained in a specified list of words. |
KeyPhraseExtractionSkill | A skill that uses text analytics for key phrase extraction. |
KeyPhraseExtractionSkillLanguage | Defines values for KeyPhraseExtractionSkillLanguage. |
KeywordMarkerTokenFilter | Marks terms as keywords. |
KeywordTokenizer | Emits the entire input as a single token. |
LanguageDetectionSkill | A skill that detects the language of input text and reports a single language code for every document submitted on the request. |
LengthTokenFilter | Removes words that are too long or too short. |
LexicalAnalyzer | Base type for analyzers. |
LexicalAnalyzerName | Defines values for LexicalAnalyzerName. |
LexicalNormalizer | Base type for normalizers. |
LexicalNormalizerName | Defines values for LexicalNormalizerName. |
LexicalTokenizer | Base type for tokenizers. |
LexicalTokenizerName | Defines values for LexicalTokenizerName. |
LimitTokenFilter | Limits the number of tokens while indexing. |
LineEnding | Defines values for LineEnding. |
LuceneStandardAnalyzer | Standard Apache Lucene analyzer; composed of the standard tokenizer, lowercase filter, and stop filter. |
LuceneStandardTokenizer | Breaks text following the Unicode Text Segmentation rules. |
MagnitudeScoringFunction | Defines a function that boosts scores based on the magnitude of a numeric field. |
MagnitudeScoringParameters | Provides parameter values to a magnitude scoring function. |
MappingCharFilter | A character filter that applies mappings defined with the mappings option. |
MergeSkill | A skill for merging two or more strings into a single unified string, with an optional user-defined delimiter separating each component part. |
MicrosoftLanguageStemmingTokenizer | Divides text using language-specific rules and reduces words to their base forms. |
MicrosoftLanguageTokenizer | Divides text using language-specific rules. |
NGramTokenFilter | Generates n-grams of the given size(s). |
NGramTokenizer | Tokenizes the input into n-grams of the given size(s). |
OcrSkill | A skill that extracts text from image files. |
OcrSkillLanguage | Defines values for OcrSkillLanguage. |
OutputFieldMappingEntry | Output field mapping for a skill. |
PathHierarchyTokenizer | Tokenizer for path-like hierarchies. |
PatternAnalyzer | Flexibly separates text into terms via a regular expression pattern. |
PatternCaptureTokenFilter | Uses Java regexes to emit multiple tokens, one for each capture group in one or more patterns. |
PatternReplaceCharFilter | A character filter that replaces characters in the input string. |
PatternReplaceTokenFilter | A token filter that replaces characters in the input tokens. |
PatternTokenizer | Tokenizer that uses regex pattern matching to construct distinct tokens. |
PhoneticTokenFilter | Creates tokens for phonetic matches. |
PiiDetectionSkill | Using the Text Analytics API, extracts personal information from an input text and gives you the option of masking it. |
PiiDetectionSkillMaskingMode | Defines values for PiiDetectionSkillMaskingMode. |
RegexFlags | Defines values for RegexFlags. |
ResourceCounter | Represents a resource's usage and quota. |
ScoringFunction | Base type for functions that can modify document scores during ranking. |
ScoringProfile | Defines parameters for a search index that influence scoring in search queries. |
SearchField | Represents a field in an index definition, which describes the name, data type, and search behavior of a field. |
SearchFieldDataType | Defines values for SearchFieldDataType. |
SearchIndex | Represents a search index definition, which describes the fields and search behavior of an index. |
SearchIndexer | Represents an indexer. |
SearchIndexerCache | The SearchIndexerCache model. |
SearchIndexerDataContainer | Represents information about the entity (such as an Azure SQL table or a Cosmos DB collection) that will be indexed. |
SearchIndexerDataIdentity | Abstract base type for data identities. |
SearchIndexerDataNoneIdentity | Clears the identity property of a datasource. |
SearchIndexerDataSourceConnection | Represents a datasource definition, which can be used to configure an indexer. |
SearchIndexerDataSourceType | Defines values for SearchIndexerDataSourceType. |
SearchIndexerDataUserAssignedIdentity | Specifies the identity for a datasource to use. |
SearchIndexerError | Represents an item- or document-level indexing error. |
SearchIndexerKnowledgeStore | Definition of additional projections of enriched data to Azure Blob, Table, or File storage. |
SearchIndexerKnowledgeStoreBlobProjectionSelector | Abstract class to share properties between concrete selectors. |
SearchIndexerKnowledgeStoreFileProjectionSelector | Projection definition for what data to store in Azure Files. |
SearchIndexerKnowledgeStoreObjectProjectionSelector | Projection definition for what data to store in Azure Blob. |
SearchIndexerKnowledgeStoreProjection | Container object for various projection selectors. |
SearchIndexerKnowledgeStoreProjectionSelector | Abstract class to share properties between concrete selectors. |
SearchIndexerKnowledgeStoreTableProjectionSelector | Description for what data to store in Azure Tables. |
SearchIndexerLimits | The SearchIndexerLimits model. |
SearchIndexerSkill | Base type for skills. |
SearchIndexerSkillset | A list of skills. |
SearchIndexerStatus | Represents the current status and execution history of an indexer. |
SearchIndexerWarning | Represents an item-level warning. |
SearchIndexStatistics | Statistics for a given index. |
SearchResourceEncryptionKey | A customer-managed encryption key in Azure Key Vault. |
SearchServiceCounters | Represents service-level resource counters and quotas. |
SearchServiceLimits | Represents various service-level limits. |
SearchServiceStatistics | Response from a get service statistics request. |
SearchSuggester | Defines how the Suggest API should apply to a group of fields in the index. |
SentimentSkill | Text analytics positive-negative sentiment analysis, scored as a floating-point value in the range of 0 to 1. |
SentimentSkillLanguage | Defines values for SentimentSkillLanguage. |
ShaperSkill | A skill for reshaping the outputs. |
ShingleTokenFilter | Creates combinations of tokens as a single token. |
SimilarityAlgorithm | Base type for similarity algorithms. |
SnowballTokenFilter | A filter that stems words using a Snowball-generated stemmer. |
SoftDeleteColumnDeletionDetectionPolicy | Defines a data deletion detection policy that implements a soft-deletion strategy. |
SplitSkill | A skill to split a string into chunks of text. |
SplitSkillLanguage | Defines values for SplitSkillLanguage. |
SqlIntegratedChangeTrackingPolicy | Defines a data change detection policy that captures changes using the Integrated Change Tracking feature of Azure SQL Database. |
StemmerOverrideTokenFilter | Provides the ability to override other stemming filters with custom dictionary-based stemming. |
StemmerTokenFilter | Language-specific stemming filter. |
StopAnalyzer | Divides text at non-letters; applies the lowercase and stopword token filters. |
StopwordsTokenFilter | Removes stop words from a token stream. |
SynonymMap | Represents a synonym map definition. |
SynonymTokenFilter | Matches single- or multi-word synonyms in a token stream. |
TagScoringFunction | Defines a function that boosts scores of documents with string values matching a given list of tags. |
TagScoringParameters | Provides parameter values to a tag scoring function. |
TextSplitMode | Defines values for TextSplitMode. |
TextTranslationSkill | A skill to translate text from one language to another. |
TextTranslationSkillLanguage | Defines values for TextTranslationSkillLanguage. |
TextWeights | Defines weights on index fields for which matches should boost scoring in search queries. |
TokenFilter | Base type for token filters. |
TokenFilterName | Defines values for TokenFilterName. |
TruncateTokenFilter | Truncates the terms to a specific length. |
UaxUrlEmailTokenizer | Tokenizes URLs and emails as one token. |
UniqueTokenFilter | Filters out tokens with the same text as the previous token. |
VisualFeature | Defines values for VisualFeature. |
WebApiSkill | A skill that can call a Web API endpoint, allowing you to extend a skillset by having it call your custom code. |
WordDelimiterTokenFilter | Splits words into subwords and performs optional transformations on subword groups. |
Enum | Description |
---|---|
CjkBigramTokenFilterScripts | Defines values for CjkBigramTokenFilterScripts. |
EdgeNGramTokenFilterSide | Defines values for EdgeNGramTokenFilterSide. |
EntityRecognitionSkillVersion | Represents the version of EntityRecognitionSkill. |
IndexerExecutionStatus | Defines values for IndexerExecutionStatus. |
IndexerStatus | Defines values for IndexerStatus. |
MicrosoftStemmingTokenizerLanguage | Defines values for MicrosoftStemmingTokenizerLanguage. |
MicrosoftTokenizerLanguage | Defines values for MicrosoftTokenizerLanguage. |
PhoneticEncoder | Defines values for PhoneticEncoder. |
ScoringFunctionAggregation | Defines values for ScoringFunctionAggregation. |
ScoringFunctionInterpolation | Defines values for ScoringFunctionInterpolation. |
SentimentSkillVersion | Represents the version of SentimentSkill. |
SnowballTokenFilterLanguage | Defines values for SnowballTokenFilterLanguage. |
StemmerTokenFilterLanguage | Defines values for StemmerTokenFilterLanguage. |
StopwordsList | Defines values for StopwordsList. |
TokenCharacterKind | Defines values for TokenCharacterKind. |
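The ranking behind BM25SimilarityAlgorithm can be illustrated with the textbook Okapi BM25 per-term score. This is a hedged sketch of the general formula only; the names (`Bm25Demo`, `termScore`) are invented, the parameter choices `k1 = 1.2` and `b = 0.75` are common defaults rather than guaranteed service values, and the search service computes term and document statistics internally.

```java
// A minimal sketch of the Okapi BM25 per-term score that the
// BM25SimilarityAlgorithm is based on.
public class Bm25Demo {
    // tf: term frequency in the document; docLen/avgDocLen: inputs to length
    // normalization; idf: inverse document frequency of the term.
    // k1 controls term-frequency saturation; b controls length normalization.
    public static double termScore(double tf, double docLen, double avgDocLen,
                                   double idf, double k1, double b) {
        double norm = k1 * (1 - b + b * docLen / avgDocLen);
        return idf * (tf * (k1 + 1)) / (tf + norm);
    }

    public static void main(String[] args) {
        // Repeating a term raises the score with diminishing returns: the
        // score approaches idf * (k1 + 1) asymptotically but never exceeds it.
        System.out.printf("tf=1  -> %.3f%n", termScore(1, 100, 100, 2.0, 1.2, 0.75));
        System.out.printf("tf=10 -> %.3f%n", termScore(10, 100, 100, 2.0, 1.2, 0.75));
    }
}
```

The saturation driven by `k1` is the practical difference from the classic TF-IDF ranking of ClassicSimilarityAlgorithm, where additional occurrences of a term keep increasing the score more steeply.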
Copyright © 2021 Microsoft Corporation. All rights reserved.