azure.ai.textanalytics package

class azure.ai.textanalytics.TextAnalyticsApiVersion[source]

Text Analytics API versions supported by this package

V3_0 = 'v3.0'

V3_1_PREVIEW = 'v3.1-preview.2'

This is the default version.
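
For example, to pin the client to a specific service version rather than the default, pass the enum to the api_version keyword; a minimal sketch using the same environment variables as the examples below:

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient, TextAnalyticsApiVersion

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

# Request the stable v3.0 surface; preview-only features will be unavailable.
text_analytics_client = TextAnalyticsClient(
    endpoint, AzureKeyCredential(key), api_version=TextAnalyticsApiVersion.V3_0
)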

class azure.ai.textanalytics.TextAnalyticsClient(endpoint: str, credential: Union[AzureKeyCredential, TokenCredential], **kwargs: Any)[source]

The Text Analytics API is a suite of text analytics web services built with best-in-class Microsoft machine learning algorithms. The API can be used to analyze unstructured text for tasks such as sentiment analysis, key phrase extraction, and language detection. No training data is needed to use this API - just bring your text data. This API uses advanced natural language processing techniques to deliver best-in-class predictions.

Further documentation can be found at https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview

Parameters
  • endpoint (str) – Supported Cognitive Services or Text Analytics resource endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (AzureKeyCredential or TokenCredential) – Credentials needed for the client to connect to Azure. This can be an instance of AzureKeyCredential if using a Cognitive Services/Text Analytics API key or a token credential from azure.identity.

Keyword Arguments
  • default_country_hint (str) – Sets the default country_hint to use for all operations. Defaults to “US”. If you don’t want to use a country hint, pass the string “none”.

  • default_language (str) – Sets the default language to use for all operations. Defaults to “en”.

  • api_version (str or TextAnalyticsApiVersion) – The API version of the service to use for requests. It defaults to the latest service version. Setting to an older version may result in reduced feature compatibility.

Example:

Creating the TextAnalyticsClient with endpoint and API key.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))

Creating the TextAnalyticsClient with endpoint and token credential from Azure Active Directory.

import os
from azure.ai.textanalytics import TextAnalyticsClient
from azure.identity import DefaultAzureCredential

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
credential = DefaultAzureCredential()

text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)

analyze_sentiment(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[AnalyzeSentimentResult, DocumentError]][source]

Analyze sentiment for a batch of documents.

Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • show_opinion_mining (bool) – Whether to mine the opinions of a sentence and conduct more granular analysis around the aspects of a product or service (also known as aspect-based sentiment analysis). If set to true, the returned SentenceSentiment objects will have property mined_opinions containing the result of this analysis. Only available for API version v3.1-preview and up.

  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, response will contain document level statistics.

New in version v3.1-preview: The show_opinion_mining parameter.

Returns

The combined list of AnalyzeSentimentResult and DocumentError in the order the original documents were passed in.

Return type

list[AnalyzeSentimentResult, DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Analyze sentiment in a batch of documents.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "I had the best day of my life.",
    "This was a waste of my time. The speaker put me to sleep.",
    "No tengo dinero ni nada que dar...",
    "L'hôtel n'était pas très confortable. L'éclairage était trop sombre."
]

result = text_analytics_client.analyze_sentiment(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("Document text: {}".format(documents[idx]))
    print("Overall sentiment: {}".format(doc.sentiment))
close() → None

Close sockets opened by the client. Calling this method is unnecessary when using the client as a context manager.
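
The client can instead be used as a context manager, which closes sockets automatically on exit; a minimal sketch:

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

with TextAnalyticsClient(endpoint, AzureKeyCredential(key)) as text_analytics_client:
    result = text_analytics_client.detect_language(["Hello, world!"])
# Sockets are closed here; no explicit close() call is needed.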

detect_language(documents: Union[List[str], List[DetectLanguageInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[DetectLanguageResult, DocumentError]][source]

Detect language for a batch of documents.

Returns the detected language and a numeric score between zero and one. Scores close to one indicate 100% certainty that the identified language is true. See https://aka.ms/talangs for the list of enabled languages.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[DetectLanguageInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and country_hint on a per-item basis you must use as input a list[DetectLanguageInput] or a list of dict representations of DetectLanguageInput, like {"id": "1", "country_hint": "us", "text": "hello world"}.

Keyword Arguments
  • country_hint (str) – A country hint for the entire batch. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Per-document country hints will take precedence over whole batch hints. Defaults to “US”. If you don’t want to use a country hint, pass the string “none”.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, response will contain document level statistics.

Returns

The combined list of DetectLanguageResult and DocumentError in the order the original documents were passed in.

Return type

list[DetectLanguageResult, DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Detecting language in a batch of documents.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "This document is written in English.",
    "Este es un document escrito en Español.",
    "这是一个用中文写的文件",
    "Dies ist ein Dokument in deutsche Sprache.",
    "Detta är ett dokument skrivet på engelska."
]

result = text_analytics_client.detect_language(documents)

for idx, doc in enumerate(result):
    if not doc.is_error:
        print("Document text: {}".format(documents[idx]))
        print("Language detected: {}".format(doc.primary_language.name))
        print("ISO6391 name: {}".format(doc.primary_language.iso6391_name))
        print("Confidence score: {}\n".format(doc.primary_language.confidence_score))
    if doc.is_error:
        print(doc.id, doc.error)
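
To override the country hint per document, pass DetectLanguageInput objects, or their dict representations, instead of plain strings; a minimal sketch reusing the client above:

from azure.ai.textanalytics import DetectLanguageInput

documents = [
    DetectLanguageInput(id="1", text="This document is written in English.", country_hint="us"),
    {"id": "2", "country_hint": "mx", "text": "Este documento está escrito en español."},
]

result = text_analytics_client.detect_language(documents)
for doc in result:
    if not doc.is_error:
        print(doc.id, doc.primary_language.iso6391_name)
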
extract_key_phrases(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[ExtractKeyPhrasesResult, DocumentError]][source]

Extract key phrases from a batch of documents.

Returns a list of strings denoting the key phrases in the input text. For example, for the input text “The food was delicious and there were wonderful staff”, the API returns the main talking points: “food” and “wonderful staff”.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, response will contain document level statistics.

Returns

The combined list of ExtractKeyPhrasesResult and DocumentError in the order the original documents were passed in.

Return type

list[ExtractKeyPhrasesResult, DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Extract the key phrases in a batch of documents.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle.",
    "I need to take my cat to the veterinarian.",
    "I will travel to South America in the summer.",
]

result = text_analytics_client.extract_key_phrases(documents)
for doc in result:
    if not doc.is_error:
        print(doc.key_phrases)
    if doc.is_error:
        print(doc.id, doc.error)
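
To get document-level statistics alongside the key phrases, pass show_stats=True; a minimal sketch reusing the client and documents above:

result = text_analytics_client.extract_key_phrases(documents, show_stats=True)
for doc in result:
    if not doc.is_error:
        # statistics is only populated because show_stats=True was passed.
        print("{} key phrases, {} characters, {} transaction(s)".format(
            len(doc.key_phrases), doc.statistics.character_count, doc.statistics.transaction_count
        ))
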
recognize_entities(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[RecognizeEntitiesResult, DocumentError]][source]

Recognize entities for a batch of documents.

Identifies and categorizes entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. For the list of supported entity types, check: https://aka.ms/taner

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, response will contain document level statistics.

Returns

The combined list of RecognizeEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizeEntitiesResult, DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Recognize entities in a batch of documents.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "Microsoft was founded by Bill Gates and Paul Allen.",
    "I had a wonderful trip to Seattle last week.",
    "I visited the Space Needle 2 times.",
]

result = text_analytics_client.recognize_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("\nDocument text: {}".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: {}".format(entity.text))
        print("...Category: {}".format(entity.category))
        print("...Confidence Score: {}".format(entity.confidence_score))
        print("...Offset: {}".format(entity.offset))
        print("...Length: {}".format(entity.length))
recognize_linked_entities(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[RecognizeLinkedEntitiesResult, DocumentError]][source]

Recognize linked entities from a well-known knowledge base for a batch of documents.

Identifies and disambiguates the identity of each entity found in text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, response will contain document level statistics.

Returns

The combined list of RecognizeLinkedEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizeLinkedEntitiesResult, DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Recognize linked entities in a batch of documents.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "Microsoft moved its headquarters to Bellevue, Washington in January 1979.",
    "Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella.",
    "Microsoft superó a Apple Inc. como la compañía más valiosa que cotiza en bolsa en el mundo.",
]

result = text_analytics_client.recognize_linked_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("Document text: {}\n".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: {}".format(entity.name))
        print("...URL: {}".format(entity.url))
        print("...Data Source: {}".format(entity.data_source))
        print("...Entity matches:")
        for match in entity.matches:
            print("......Entity match text: {}".format(match.text))
            print("......Confidence Score: {}".format(match.confidence_score))
            print("......Offset: {}".format(match.offset))
            print("......Length: {}".format(match.length))
    print("------------------------------------------")

recognize_pii_entities(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[RecognizePiiEntitiesResult, DocumentError]][source]

Recognize entities containing personal information for a batch of documents.

Returns a list of personal information entities (“SSN”, “Bank Account”, etc.) in the document. For the list of supported entity types, check https://aka.ms/tanerpii

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, response will contain document level statistics.

  • domain_filter (str or PiiEntityDomainType) – Filters the response entities to ones only included in the specified domain. For example, if set to ‘PHI’, only entities in the Protected Health Information domain will be returned. See https://aka.ms/tanerpii for more information.

Returns

The combined list of RecognizePiiEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizePiiEntitiesResult, DocumentError]

Raises

HttpResponseError or TypeError or ValueError or NotImplementedError

Example:

Recognize personally identifiable information entities in a batch of documents.

import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint, credential=AzureKeyCredential(key)
)
documents = [
    "The employee's SSN is 859-98-0987.",
    "Is 998.214.865-68 your Brazilian CPF number?",
    "My phone number is 555-555-5555"
]

result = text_analytics_client.recognize_pii_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("Document text: {}".format(documents[idx]))
    print("Redacted document text: {}".format(doc.redacted_text))
    for entity in doc.entities:
        print("...Entity: {}".format(entity.text))
        print("......Category: {}".format(entity.category))
        print("......Confidence Score: {}\n".format(entity.confidence_score))
class azure.ai.textanalytics.DetectLanguageInput(**kwargs)[source]

The input document to be analyzed for detecting language.

Variables
  • id (str) – Required. Unique, non-empty document identifier.

  • text (str) – Required. The input text to process.

  • country_hint (str) – A country hint to help better detect the language of the text. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Defaults to “US”. Pass in the string “none” to not use a country_hint.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as a parameter:

The callback is called with three arguments: key, attr_desc, and value. Key is the attribute name used in Python. Attr_desc is a dict of metadata; it currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict
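
For instance, a custom key transformer can be passed to as_dict; a minimal, hypothetical sketch (my_key_transformer is not part of the package):

from azure.ai.textanalytics import DetectLanguageInput

def my_key_transformer(key, attr_desc, value):
    # key: the attribute name used in Python.
    # attr_desc: metadata dict containing 'type' (msrest type) and 'key' (RestAPI-encoded key).
    # value: the current value of the attribute on this object.
    return key.upper()

doc = DetectLanguageInput(id="1", text="hello world", country_hint="US")
print(doc.as_dict(key_transformer=my_key_transformer))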

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, considers the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.ai.textanalytics.TextDocumentInput(**kwargs)[source]

The input document to be analyzed by the service.

Variables
  • id (str) – Required. A unique, non-empty document identifier.

  • text (str) – Required. The input text to process.

  • language (str) – This is the 2 letter ISO 639-1 representation of a language. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as a parameter:

The callback is called with three arguments: key, attr_desc, and value. Key is the attribute name used in Python. Attr_desc is a dict of metadata; it currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, considers the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.ai.textanalytics.DetectedLanguage(**kwargs)[source]

DetectedLanguage contains the predicted language found in text, its confidence score, and ISO 639-1 representation.

Variables
  • name (str) – Long name of a detected language (e.g. English, French).

  • iso6391_name (str) – A two letter representation of the detected language according to the ISO 639-1 standard (e.g. en, fr).

  • confidence_score (float) – A confidence score between 0 and 1. Scores close to 1 indicate 100% certainty that the identified language is true.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.RecognizeEntitiesResult(**kwargs)[source]

RecognizeEntitiesResult is a result object which contains the recognized entities from a particular document.

Variables
  • id (str) – Unique, non-empty document identifier that matches the document id that was passed in with the request. If not specified in the request, an id is assigned for the document.

  • entities (list[CategorizedEntity]) – Recognized entities in the document.

  • warnings (list[TextAnalyticsWarning]) – Warnings encountered while processing document. Results will still be returned if there are warnings, but they may not be fully accurate.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a RecognizeEntitiesResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.DetectLanguageResult(**kwargs)[source]

DetectLanguageResult is a result object which contains the detected language of a particular document.

Variables
  • id (str) – Unique, non-empty document identifier that matches the document id that was passed in with the request. If not specified in the request, an id is assigned for the document.

  • primary_language (DetectedLanguage) – The primary language detected in the document.

  • warnings (list[TextAnalyticsWarning]) – Warnings encountered while processing document. Results will still be returned if there are warnings, but they may not be fully accurate.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a DetectLanguageResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.CategorizedEntity(**kwargs)[source]

CategorizedEntity contains information about a particular entity found in text.

Variables
  • text (str) – Entity text as appears in the request.

  • category (str) – Entity category, such as Person/Location/Org/SSN, etc.

  • subcategory (str) – Entity subcategory, such as Age/Year/TimeRange, etc.

  • offset (int) – The entity text offset from the start of the document. Returned in unicode code points. Only returned for API versions v3.1-preview and up.

  • length (int) – The length of the entity text. Returned in unicode code points. Only returned for API versions v3.1-preview and up.

  • confidence_score (float) – Confidence score between 0 and 1 of the extracted entity.

New in version v3.1-preview: The offset and length properties.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.TextAnalyticsError(**kwargs)[source]

TextAnalyticsError contains the error code, message, and other details that explain why the batch or individual document failed to be processed by the service.

Variables
  • code (str) – Error code. Possible values include: ‘invalidRequest’, ‘invalidArgument’, ‘internalServerError’, ‘serviceUnavailable’, ‘invalidParameterValue’, ‘invalidRequestBodyFormat’, ‘emptyRequest’, ‘missingInputRecords’, ‘invalidDocument’, ‘modelVersionIncorrect’, ‘invalidDocumentBatch’, ‘unsupportedLanguageCode’, ‘invalidCountryHint’

  • message (str) – Error message.

  • target (str) – Error target.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.TextAnalyticsWarning(**kwargs)[source]

TextAnalyticsWarning contains the warning code and message that explains why the response has a warning.

Variables
  • code (str) – Warning code. Possible values include: ‘LongWordsInDocument’, ‘DocumentTruncated’.

  • message (str) – Warning message.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.ExtractKeyPhrasesResult(**kwargs)[source]

ExtractKeyPhrasesResult is a result object which contains the key phrases found in a particular document.

Variables
  • id (str) – Unique, non-empty document identifier that matches the document id that was passed in with the request. If not specified in the request, an id is assigned for the document.

  • key_phrases (list[str]) – A list of representative words or phrases. The number of key phrases returned is proportional to the number of words in the input document.

  • warnings (list[TextAnalyticsWarning]) – Warnings encountered while processing document. Results will still be returned if there are warnings, but they may not be fully accurate.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a ExtractKeyPhrasesResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.RecognizeLinkedEntitiesResult(**kwargs)[source]

RecognizeLinkedEntitiesResult is a result object which contains links to a well-known knowledge base, like for example, Wikipedia or Bing.

Variables
  • id (str) – Unique, non-empty document identifier that matches the document id that was passed in with the request. If not specified in the request, an id is assigned for the document.

  • entities (list[LinkedEntity]) – Recognized well-known entities in the document.

  • warnings (list[TextAnalyticsWarning]) – Warnings encountered while processing document. Results will still be returned if there are warnings, but they may not be fully accurate.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a RecognizeLinkedEntitiesResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.AnalyzeSentimentResult(**kwargs)[source]

AnalyzeSentimentResult is a result object which contains the overall predicted sentiment and confidence scores for your document and a per-sentence sentiment prediction with scores.

Variables
  • id (str) – Unique, non-empty document identifier that matches the document id that was passed in with the request. If not specified in the request, an id is assigned for the document.

  • sentiment (str) – Predicted sentiment for document (Negative, Neutral, Positive, or Mixed). Possible values include: ‘positive’, ‘neutral’, ‘negative’, ‘mixed’

  • warnings (list[TextAnalyticsWarning]) – Warnings encountered while processing document. Results will still be returned if there are warnings, but they may not be fully accurate.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request this field will contain information about the document payload.

  • confidence_scores (SentimentConfidenceScores) – Document level sentiment confidence scores between 0 and 1 for each sentiment label.

  • sentences (list[SentenceSentiment]) – Sentence level sentiment analysis.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a AnalyzeSentimentResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.TextDocumentStatistics(**kwargs)[source]

TextDocumentStatistics contains information about the document payload.

Variables
  • character_count (int) – Number of text elements recognized in the document.

  • transaction_count (int) – Number of transactions for the document.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.DocumentError(**kwargs)[source]

DocumentError is an error object which represents an error on the individual document.

Variables
  • id (str) – Unique, non-empty document identifier that matches the document id that was passed in with the request. If not specified in the request, an id is assigned for the document.

  • error (TextAnalyticsError) – The document error.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always True for an instance of a DocumentError.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.LinkedEntity(**kwargs)[source]

LinkedEntity contains a link to the well-known recognized entity in text. The link comes from a data source like Wikipedia or Bing. It additionally includes all of the matches of this entity found in the document.

Variables
  • name (str) – Entity Linking formal name.

  • matches (list[LinkedEntityMatch]) – List of instances this entity appears in the text.

  • language (str) – Language used in the data source.

  • data_source_entity_id (str) – Unique identifier of the recognized entity from the data source.

  • url (str) – URL to the entity’s page from the data source.

  • data_source (str) – Data source used to extract entity linking, such as Wiki/Bing etc.

  • bing_entity_search_api_id (str) – Bing Entity Search unique identifier of the recognized entity. Use in conjunction with the Bing Entity Search SDK to fetch additional relevant information. Only available for API version v3.1-preview and up.

New in version v3.1-preview: The bing_entity_search_api_id property.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.LinkedEntityMatch(**kwargs)[source]

A match for the linked entity found in text. Provides the confidence score of the prediction and where the entity was found in the text.

Variables
  • confidence_score (float) – If a well-known item is recognized, a decimal number denoting the confidence level between 0 and 1 will be returned.

  • text (str) – Entity text as appears in the request.

  • offset (int) – The linked entity match text offset from the start of the document. Returned in unicode code points. Only returned for API versions v3.1-preview and up.

  • length (int) – The length of the linked entity match text. Returned in unicode code points. Only returned for API versions v3.1-preview and up.

New in version v3.1-preview: The offset and length properties.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.TextDocumentBatchStatistics(**kwargs)[source]

TextDocumentBatchStatistics contains information about the request payload. Note: This object is not returned in the response and needs to be retrieved by a response hook.

Variables
  • document_count (int) – Number of documents submitted in the request.

  • valid_document_count (int) – Number of valid documents. This excludes documents that are empty, exceed the size limit, or are in unsupported languages.

  • erroneous_document_count (int) – Number of invalid documents. This includes documents that are empty, exceed the size limit, or are in unsupported languages.

  • transaction_count (long) – Number of transactions for the request.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.SentenceSentiment(**kwargs)[source]

SentenceSentiment contains the predicted sentiment and confidence scores for each individual sentence in the document.

Variables
  • text (str) – The sentence text.

  • sentiment (str) – The predicted Sentiment for the sentence. Possible values include: ‘positive’, ‘neutral’, ‘negative’

  • confidence_scores (SentimentConfidenceScores) – The sentiment confidence score between 0 and 1 for the sentence for all labels.

  • offset (int) – The sentence offset from the start of the document. Returned in unicode code points. Only returned for API versions v3.1-preview and up.

  • length (int) – The length of the sentence. Returned in unicode code points. Only returned for API versions v3.1-preview and up.

  • mined_opinions (list[MinedOpinion]) – The list of opinions mined from this sentence. For example, in “The food is good, but the service is bad”, we would mine the two opinions “food is good” and “service is bad”. Only returned if show_opinion_mining is set to True in the call to analyze_sentiment and the API version is v3.1-preview and up.

New in version v3.1-preview: The offset, length, and mined_opinions properties.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.SentimentConfidenceScores(**kwargs)[source]

The confidence scores (Softmax scores) between 0 and 1. Higher values indicate higher confidence.

Variables
  • positive (float) – Positive score.

  • neutral (float) – Neutral score.

  • negative (float) – Negative score.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.MinedOpinion(**kwargs)[source]

A mined opinion object represents an opinion we’ve extracted from a sentence. It consists of both an aspect that these opinions are about, and the actual opinions themselves.

Variables
  • aspect (AspectSentiment) – The aspect of a product or service that the opinions are about.

  • opinions (list[OpinionSentiment]) – The list of opinions that are related to the aspect.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.AspectSentiment(**kwargs)[source]

AspectSentiment contains the related opinions, predicted sentiment, confidence scores and other information about an aspect of a product. An aspect of a product/service is a key component of that product/service. For example in “The food at Hotel Foo is good”, “food” is an aspect of “Hotel Foo”.

Variables
  • text (str) – The aspect text.

  • sentiment (str) – The predicted Sentiment for the aspect. Possible values include ‘positive’, ‘mixed’, and ‘negative’.

  • confidence_scores (SentimentConfidenceScores) – The sentiment confidence score between 0 and 1 for the aspect for ‘positive’ and ‘negative’ labels. Its score for ‘neutral’ will always be 0.

  • offset (int) – The aspect offset from the start of the document. Returned in unicode code points.

  • length (int) – The length of the aspect. Returned in unicode code points.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.OpinionSentiment(**kwargs)[source]

OpinionSentiment contains the predicted sentiment, confidence scores and other information about an opinion of an aspect. For example, in the sentence “The food is good”, the opinion of the aspect ‘food’ is ‘good’.

Variables
  • text (str) – The opinion text.

  • sentiment (str) – The predicted Sentiment for the opinion. Possible values include ‘positive’, ‘mixed’, and ‘negative’.

  • confidence_scores (SentimentConfidenceScores) – The sentiment confidence score between 0 and 1 for the opinion for ‘positive’ and ‘negative’ labels. Its score for ‘neutral’ will always be 0.

  • offset (int) – The opinion offset from the start of the document. Returned in unicode code points.

  • length (int) – The length of the opinion. Returned in unicode code points.

  • is_negated (bool) – Whether the opinion is negated. For example, in “The food is not good”, the opinion “good” is negated.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.RecognizePiiEntitiesResult(**kwargs)[source]

RecognizePiiEntitiesResult is a result object which contains the recognized Personally Identifiable Information (PII) entities from a particular document.

Variables
  • id (str) – Unique, non-empty document identifier that matches the document id that was passed in with the request. If not specified in the request, an id is assigned for the document.

  • entities (list[PiiEntity]) – Recognized PII entities in the document.

  • redacted_text (str) – Returns the text of the input document with all of the PII information redacted out. Only returned for API versions v3.1-preview and up.

  • warnings (list[TextAnalyticsWarning]) – Warnings encountered while processing document. Results will still be returned if there are warnings, but they may not be fully accurate.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a RecognizePiiEntitiesResult.

New in version v3.1-preview: The redacted_text parameter.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.PiiEntity(**kwargs)[source]

PiiEntity contains information about a Personally Identifiable Information (PII) entity found in text.

Variables
  • text (str) – Entity text as appears in the request.

  • category (str) – Entity category, such as Financial Account Identification/Social Security Number/Phone Number, etc.

  • subcategory (str) – Entity subcategory, such as Credit Card/EU Phone number/ABA Routing Numbers, etc.

  • offset (int) – The PII entity text offset from the start of the document. Returned in unicode code points.

  • length (int) – The length of the PII entity text. Returned in unicode code points.

  • confidence_score (float) – Confidence score between 0 and 1 of the extracted entity.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.PiiEntityDomainType[source]

The different domains of PII entities that users can filter by.

PROTECTED_HEALTH_INFORMATION = 'PHI'