azure.ai.textanalytics package

class azure.ai.textanalytics.TextAnalyticsClient(endpoint, credential, **kwargs)[source]

The Text Analytics API is a suite of text analytics web services built with best-in-class Microsoft machine learning algorithms. The API can be used to analyze unstructured text for tasks such as sentiment analysis, key phrase extraction, and language detection. No training data is needed to use this API - just bring your text data. It uses advanced natural language processing techniques to deliver its predictions.

Further documentation can be found at https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview

Parameters
  • endpoint (str) – Supported Cognitive Services or Text Analytics resource endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (str or TokenCredential) – Credentials needed for the client to connect to Azure. This can be the cognitive services/text analytics subscription key or a token credential from azure.identity.

Keyword Arguments
  • default_country_hint (str) – Sets the default country_hint to use for all operations. Defaults to “US”. If you don’t want to use a country hint, pass the empty string “”.

  • default_language (str) – Sets the default language to use for all operations. Defaults to “en”.

Example:

Creating the TextAnalyticsClient with endpoint and subscription key.
import os
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")

text_analytics_client = TextAnalyticsClient(endpoint, key)
Creating the TextAnalyticsClient with endpoint and token credential from Azure Active Directory.
import os
from azure.ai.textanalytics import TextAnalyticsClient
from azure.identity import DefaultAzureCredential

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
credential = DefaultAzureCredential()

text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)
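The default hints can be customized at construction time. A minimal sketch reusing the endpoint and credential from the examples above (the keyword arguments are the documented default_language and default_country_hint):
text_analytics_client = TextAnalyticsClient(
    endpoint,
    credential=credential,
    default_language="es",    # all operations now default to Spanish
    default_country_hint=""   # disable the country hint for all operations
)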
analyze_sentiment(inputs, language=None, **kwargs)[source]

Analyze sentiment for a batch of documents.

Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • inputs (list[str] or list[TextDocumentInput]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

  • language (str) – The 2 letter ISO 639-1 representation of the language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over the whole batch language.

Keyword Arguments
  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of AnalyzeSentimentResults and DocumentErrors in the order the original documents were passed in.

Return type

list[AnalyzeSentimentResult, DocumentError]

Raises

HttpResponseError

Example:

Analyze sentiment in a batch of documents.
import os
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=key)
documents = [
    "I had the best day of my life.",
    "This was a waste of my time. The speaker put me to sleep.",
    "No tengo dinero ni nada que dar...",
    "L'hôtel n'était pas très confortable. L'éclairage était trop sombre."
]

result = text_analytics_client.analyze_sentiment(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("Document text: {}".format(documents[idx]))
    print("Overall sentiment: {}".format(doc.sentiment))
detect_languages(inputs, country_hint=None, **kwargs)[source]

Detects Language for a batch of documents.

Returns the detected language and a numeric score between zero and one. Scores close to one indicate 100% certainty that the identified language is true. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • inputs (list[str] or list[DetectLanguageInput]) – The set of documents to process as part of this batch. If you wish to specify the ID and country_hint on a per-item basis, pass a list[DetectLanguageInput] or a list of dict representations of DetectLanguageInput, like {"id": "1", "country_hint": "us", "text": "hello world"}.

  • country_hint (str) – A country hint for the entire batch. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Per-document country hints will take precedence over whole batch hints. Defaults to "US". If you don't want to use a country hint, pass the empty string "".

Keyword Arguments
  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of DetectLanguageResults and DocumentErrors in the order the original documents were passed in.

Return type

list[DetectLanguageResult, DocumentError]

Raises

HttpResponseError

Example:

Detecting language in a batch of documents.
import os
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=key)
documents = [
    "This document is written in English.",
    "Este es un document escrito en Español.",
    "这是一个用中文写的文件",
    "Dies ist ein Dokument in englischer Sprache.",
    "Detta är ett dokument skrivet på engelska."
]

result = text_analytics_client.detect_languages(documents)

for idx, doc in enumerate(result):
    if not doc.is_error:
        print("Document text: {}".format(documents[idx]))
        print("Language detected: {}".format(doc.primary_language.name))
        print("ISO6391 name: {}".format(doc.primary_language.iso6391_name))
        print("Confidence score: {}\n".format(doc.primary_language.score))
    if doc.is_error:
        print(doc.id, doc.error)
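Per-document country hints can be supplied the same way. A minimal sketch passing dict representations of DetectLanguageInput (the IDs and texts here are illustrative):
documents = [
    {"id": "1", "country_hint": "US", "text": "This document is written in English."},
    {"id": "2", "country_hint": "ES", "text": "Este es un documento escrito en Español."},
]

result = text_analytics_client.detect_languages(documents)
for doc in result:
    if not doc.is_error:
        print("Document {}: {}".format(doc.id, doc.primary_language.name))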
extract_key_phrases(inputs, language=None, **kwargs)[source]

Extract Key Phrases from a batch of documents.

Returns a list of strings denoting the key phrases in the input text. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • inputs (list[str] or list[TextDocumentInput]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

  • language (str) – The 2 letter ISO 639-1 representation of the language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over the whole batch language.

Keyword Arguments
  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of ExtractKeyPhrasesResults and DocumentErrors in the order the original documents were passed in.

Return type

list[ExtractKeyPhrasesResult, DocumentError]

Raises

HttpResponseError

Example:

Extract the key phrases in a batch of documents.
import os
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=key)
documents = [
    "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle.",
    "I need to take my cat to the veterinarian.",
    "I will travel to South America in the summer.",
]

result = text_analytics_client.extract_key_phrases(documents)
for doc in result:
    if not doc.is_error:
        print(doc.key_phrases)
    if doc.is_error:
        print(doc.id, doc.error)
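Document-level statistics can be requested with the show_stats keyword. A minimal sketch reading the documented TextDocumentStatistics fields from each result:
result = text_analytics_client.extract_key_phrases(documents, show_stats=True)
for doc in result:
    if not doc.is_error:
        print("Characters: {}, transactions: {}".format(
            doc.statistics.character_count,
            doc.statistics.transaction_count
        ))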
recognize_entities(inputs, language=None, **kwargs)[source]

Named Entity Recognition for a batch of documents.

Returns a list of general named entities in a given document. For a list of supported entity types, check https://aka.ms/taner. For a list of enabled languages, check https://aka.ms/talangs.

Parameters
  • inputs (list[str] or list[TextDocumentInput]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

  • language (str) – The 2 letter ISO 639-1 representation of the language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over the whole batch language.

Keyword Arguments
  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of RecognizeEntitiesResults and DocumentErrors in the order the original documents were passed in.

Return type

list[RecognizeEntitiesResult, DocumentError]

Raises

HttpResponseError

Example:

Recognize entities in a batch of documents.
import os
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=key)
documents = [
    "Microsoft was founded by Bill Gates and Paul Allen.",
    "I had a wonderful trip to Seattle last week.",
    "I visited the Space Needle 2 times.",
]

result = text_analytics_client.recognize_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("\nDocument text: {}".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: \t", entity.text, "\tType: \t", entity.type,
              "\tConfidence Score: \t", round(entity.score, 3))
recognize_linked_entities(inputs, language=None, **kwargs)[source]

Recognize linked entities from a well-known knowledge base for a batch of documents.

Returns a list of recognized entities with links to a well-known knowledge base. See https://aka.ms/talangs for supported languages in Text Analytics API.

Parameters
  • inputs (list[str] or list[TextDocumentInput]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

  • language (str) – The 2 letter ISO 639-1 representation of the language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over the whole batch language.

Keyword Arguments
  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of RecognizeLinkedEntitiesResults and DocumentErrors in the order the original documents were passed in.

Return type

list[RecognizeLinkedEntitiesResult, DocumentError]

Raises

HttpResponseError

Example:

Recognize linked entities in a batch of documents.
import os
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=key)
documents = [
    "Microsoft moved its headquarters to Bellevue, Washington in January 1979.",
    "Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella.",
    "Microsoft superó a Apple Inc. como la compañía más valiosa que cotiza en bolsa en el mundo.",
]

result = text_analytics_client.recognize_linked_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("Document text: {}\n".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: {}".format(entity.name))
        print("Url: {}".format(entity.url))
        print("Data Source: {}".format(entity.data_source))
        for match in entity.matches:
            print("Score: {0:.3f}".format(match.score))
            print("Offset: {}".format(match.offset))
            print("Length: {}\n".format(match.length))
    print("------------------------------------------")
recognize_pii_entities(inputs, language=None, **kwargs)[source]

Recognize entities containing personal information for a batch of documents.

Returns a list of personal information entities (“SSN”, “Bank Account”, etc.) in the document. For the list of supported entity types, check https://aka.ms/tanerpii. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • inputs (list[str] or list[TextDocumentInput]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

  • language (str) – The 2 letter ISO 639-1 representation of the language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over the whole batch language.

Keyword Arguments
  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of RecognizePiiEntitiesResults and DocumentErrors in the order the original documents were passed in.

Return type

list[RecognizePiiEntitiesResult, DocumentError]

Raises

HttpResponseError

Example:

Recognize personally identifiable information entities in a batch of documents.
import os
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=key)
documents = [
    "The employee's SSN is 555-55-5555.",
    "Your ABA number - 111000025 - is the first 9 digits in the lower left hand corner of your personal check.",
    "Is 998.214.865-68 your Brazilian CPF number?"
]

result = text_analytics_client.recognize_pii_entities(documents)
docs = [doc for doc in result if not doc.is_error]

for idx, doc in enumerate(docs):
    print("Document text: {}".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: {}".format(entity.text))
        print("Type: {}".format(entity.type))
        print("Confidence Score: {}\n".format(entity.score))
class azure.ai.textanalytics.DetectLanguageInput(**kwargs)[source]

Contains an input document to be analyzed for type of language.

Parameters
  • id (str) – Required. Unique, non-empty document identifier.

  • text (str) – Required. The input text to process.

  • country_hint (str) – A country hint to help better detect the language of the text. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Defaults to “US”. Pass in the empty string “” to not use a country_hint.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>)

Return a dict that can be serialized using json.dump.

Advanced usage might optionally use a callback as a parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently it contains 'type' with the msrest type and 'key' with the RestAPI-encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict
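A minimal sketch of as_dict with and without a key transformer, assuming the model is msrest-based so that full_restapi_key_transformer can be imported from msrest.serialization:
from msrest.serialization import full_restapi_key_transformer
from azure.ai.textanalytics import DetectLanguageInput

doc = DetectLanguageInput(id="1", text="hello world", country_hint="US")

print(doc.as_dict())  # attribute-style keys, e.g. "country_hint"
print(doc.as_dict(key_transformer=full_restapi_key_transformer))  # RestAPI keys, e.g. "countryHint"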

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the following key extractors are considered: rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

Parameters

keep_readonly (bool) – Whether to serialize the readonly attributes.

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.ai.textanalytics.TextDocumentInput(**kwargs)[source]

Contains an input document to be analyzed by the service.

Parameters
  • id (str) – Required. A unique, non-empty document identifier.

  • text (str) – Required. The input text to process.

  • language (str) – This is the 2 letter ISO 639-1 representation of a language. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default.
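A minimal sketch constructing TextDocumentInput objects with the documented parameters and passing them to a batch operation (the client is assumed from the earlier examples):
from azure.ai.textanalytics import TextDocumentInput

documents = [
    TextDocumentInput(id="1", text="Hello world", language="en"),
    TextDocumentInput(id="2", text="Hola mundo", language="es"),
]

result = text_analytics_client.analyze_sentiment(documents)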

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>)

Return a dict that can be serialized using json.dump.

Advanced usage might optionally use a callback as a parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently it contains 'type' with the msrest type and 'key' with the RestAPI-encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the following key extractors are considered: rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

Parameters

keep_readonly (bool) – Whether to serialize the readonly attributes.

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.ai.textanalytics.DetectedLanguage(**kwargs)[source]

DetectedLanguage.

Parameters
  • name (str) – Long name of a detected language (e.g. English, French).

  • iso6391_name (str) – A two letter representation of the detected language according to the ISO 639-1 standard (e.g. en, fr).

  • score (float) – A confidence score between 0 and 1. Scores close to 1 indicate 100% certainty that the identified language is true.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.RecognizeEntitiesResult(**kwargs)[source]

RecognizeEntitiesResult.

Parameters
  • id (str) – Unique, non-empty document identifier.

  • entities (list[NamedEntity]) – Recognized entities in the document.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request, this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a RecognizeEntitiesResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.RecognizePiiEntitiesResult(**kwargs)[source]

RecognizePiiEntitiesResult.

Parameters
  • id (str) – Unique, non-empty document identifier.

  • entities (list[NamedEntity]) – Recognized entities in the document.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request, this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a RecognizePiiEntitiesResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.DetectLanguageResult(**kwargs)[source]

DetectLanguageResult.

Parameters
  • id (str) – Unique, non-empty document identifier.

  • detected_languages (list[DetectedLanguage]) – A list of extracted languages.

  • primary_language (DetectedLanguage) – The primary language detected in the document.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request, this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a DetectLanguageResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.NamedEntity(**kwargs)[source]

NamedEntity.

Parameters
  • text (str) – Entity text as appears in the request.

  • type (str) – Entity type, such as Person/Location/Org/SSN etc.

  • subtype (str) – Entity subtype, such as Age/Year/TimeRange etc.

  • offset (int) – Start position (in Unicode characters) for the entity text.

  • length (int) – Length (in Unicode characters) for the entity text.

  • score (float) – Confidence score between 0 and 1 of the extracted entity.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.TextAnalyticsError(**kwargs)[source]

TextAnalyticsError.

Parameters
  • code (str) – Error code. Possible values include: ‘invalidRequest’, ‘invalidArgument’, ‘internalServerError’, ‘serviceUnavailable’

  • message (str) – Error message.

  • target (str) – Error target.

  • inner_error (InnerError) – Inner error contains more specific information.

  • details (list[TextAnalyticsError]) – Details about specific errors that led to this reported error.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.InnerError(**kwargs)[source]

InnerError.

Parameters
  • code (str) – Error code. Possible values include: ‘invalidParameterValue’, ‘invalidRequestBodyFormat’, ‘emptyRequest’, ‘missingInputRecords’, ‘invalidDocument’, ‘modelVersionIncorrect’, ‘invalidDocumentBatch’, ‘unsupportedLanguageCode’, ‘invalidCountryHint’

  • message (str) – Error message.

  • details (dict[str, str]) – Error details.

  • target (str) – Error target.

  • inner_error (InnerError) – Inner error contains more specific information.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.ExtractKeyPhrasesResult(**kwargs)[source]

ExtractKeyPhrasesResult.

Parameters
  • id (str) – Unique, non-empty document identifier.

  • key_phrases (list[str]) – A list of representative words or phrases. The number of key phrases returned is proportional to the number of words in the input document.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request, this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of an ExtractKeyPhrasesResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.RecognizeLinkedEntitiesResult(**kwargs)[source]

RecognizeLinkedEntitiesResult.

Parameters
  • id (str) – Unique, non-empty document identifier.

  • entities (list[LinkedEntity]) – Recognized well-known entities in the document.

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request, this field will contain information about the document payload.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a RecognizeLinkedEntitiesResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.AnalyzeSentimentResult(**kwargs)[source]

AnalyzeSentimentResult.

Parameters
  • id (str) – Unique, non-empty document identifier.

  • sentiment (str) – Predicted sentiment for document (Negative, Neutral, Positive, or Mixed). Possible values include: ‘positive’, ‘neutral’, ‘negative’, ‘mixed’

  • statistics (TextDocumentStatistics) – If show_stats=true was specified in the request, this field will contain information about the document payload.

  • document_scores (SentimentConfidenceScorePerLabel) – Document level sentiment confidence scores between 0 and 1 for each sentiment class.

  • sentences (list[SentenceSentiment]) – Sentence level sentiment analysis.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of an AnalyzeSentimentResult.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.TextDocumentStatistics(**kwargs)[source]

If show_stats=true was specified in the request, this field will contain information about the document payload.

Parameters
  • character_count (int) – Number of text elements recognized in the document.

  • transaction_count (int) – Number of transactions for the document.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.DocumentError(**kwargs)[source]

DocumentError.

Parameters
  • id (str) – Document Id.

  • error (TextAnalyticsError) – Document Error.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always True for an instance of a DocumentError.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.LinkedEntity(**kwargs)[source]

LinkedEntity.

Parameters
  • name (str) – Entity Linking formal name.

  • matches (list[LinkedEntityMatch]) – List of instances of this entity appearing in the text.

  • language (str) – Language used in the data source.

  • id (str) – Unique identifier of the recognized entity from the data source.

  • url (str) – URL for the entity’s page from the data source.

  • data_source (str) – Data source used for entity linking, such as Wiki/Bing etc.

  • is_error (bool) – Boolean check for error item when iterating over list of results. Always False for an instance of a LinkedEntity.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.LinkedEntityMatch(**kwargs)[source]

LinkedEntityMatch.

Parameters
  • score (float) – If a well-known item is recognized, a decimal number denoting the confidence level between 0 and 1 will be returned.

  • text (str) – Entity text as appears in the request.

  • offset (int) – Start position (in Unicode characters) for the entity match text.

  • length (int) – Length (in Unicode characters) for the entity match text.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.TextDocumentBatchStatistics(**kwargs)[source]

If show_stats=true was specified in the request, this field will contain information about the request payload. Note: This object is not returned in the response and needs to be retrieved by a response hook (see the sketch after this class).

Parameters
  • document_count (int) – Number of documents submitted in the request.

  • valid_document_count (int) – Number of valid documents. This excludes documents that are empty, exceed the size limit, or are in unsupported languages.

  • erroneous_document_count (int) – Number of invalid documents. This includes documents that are empty, exceed the size limit, or are in unsupported languages.

  • transaction_count (long) – Number of transactions for the request.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
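A heavily hedged sketch of retrieving these batch statistics via a response hook, as the note above describes. The response_hook keyword and the statistics attribute on the callback's argument are assumptions that may vary by SDK version; the client and documents are assumed from the batch examples above:
def callback(response):
    # Assumed: the response object passed to the hook surfaces batch-level statistics
    stats = response.statistics
    print("Documents submitted: {}".format(stats.document_count))
    print("Valid documents: {}".format(stats.valid_document_count))

result = text_analytics_client.detect_languages(
    documents,
    show_stats=True,
    response_hook=callback  # assumed keyword; see the note on this class
)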
class azure.ai.textanalytics.SentenceSentiment(**kwargs)[source]

SentenceSentiment.

Parameters
  • sentiment (str) – The predicted Sentiment for the sentence. Possible values include: ‘positive’, ‘neutral’, ‘negative’

  • sentence_scores (SentimentConfidenceScorePerLabel) – The sentiment confidence score between 0 and 1 for the sentence for all classes.

  • offset (int) – The sentence offset from the start of the document.

  • length (int) – The length of the sentence (in Unicode characters).

  • warnings (list[str]) – The warnings generated for the sentence.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.ai.textanalytics.SentimentConfidenceScorePerLabel(**kwargs)[source]

Represents the confidence scores between 0 and 1 across all sentiment classes: positive, neutral, negative.

Parameters
  • positive (float) – Positive score.

  • neutral (float) – Neutral score.

  • negative (float) – Negative score.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
azure.ai.textanalytics.single_detect_language(endpoint, credential, input_text, country_hint='US', **kwargs)[source]

Detect Language for a single document.

Returns the detected language and a numeric score between zero and one. Scores close to one indicate 100% certainty that the identified language is true. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • endpoint (str) – Supported Cognitive Services endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (str or TokenCredential) – Credentials needed for the client to connect to Azure. This can be the cognitive services subscription key or a token credential from azure.identity.

  • input_text (str) – The single string to detect language from.

  • country_hint (str) – The country hint for the text. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Defaults to “US”. If you don’t want to use a country hint, pass the empty string “”.

Keyword Arguments
  • show_stats (bool) – If set to true, the response will contain document-level statistics.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

Returns

An instance of DetectLanguageResult.

Return type

DetectLanguageResult

Raises

HttpResponseError

Example:

Detecting language in a single string.
import os
from azure.ai.textanalytics import single_detect_language

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")

text = "I need to take my cat to the veterinarian."

result = single_detect_language(
    endpoint=endpoint,
    credential=key,
    input_text=text,
    country_hint="US",
    show_stats=True
)

print("Language detected: {}".format(result.primary_language.name))
print("Confidence score: {}\n".format(result.primary_language.score))
print("Document Statistics:")
print("Text character count: {}".format(result.statistics.character_count))
print("Transactions count: {}".format(result.statistics.transaction_count))
azure.ai.textanalytics.single_recognize_entities(endpoint, credential, input_text, language='en', **kwargs)[source]

Named Entity Recognition for a single document.

Returns a list of general named entities in a given document. For a list of supported entity types, check https://aka.ms/taner. For a list of enabled languages, check https://aka.ms/talangs.

Parameters
  • endpoint (str) – Supported Cognitive Services endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (str or TokenCredential) – Credentials needed for the client to connect to Azure. This can be the cognitive services subscription key or a token credential from azure.identity.

  • input_text (str) – The single string to recognize entities from.

  • language (str) – This is the 2 letter ISO 639-1 representation of a language. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default.

Keyword Arguments
  • show_stats (bool) – If set to true, the response will contain document-level statistics.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

Returns

An instance of RecognizeEntitiesResult.

Return type

RecognizeEntitiesResult

Raises

HttpResponseError

Example:

Recognize entities in a single string.
import os
from azure.ai.textanalytics import single_recognize_entities

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")

text = "Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975," \
       " to develop and sell BASIC interpreters for the Altair 8800."

result = single_recognize_entities(
    endpoint=endpoint,
    credential=key,
    input_text=text,
    language="en"
)

for entity in result.entities:
    print("Entity: {}".format(entity.text))
    print("Type: {}".format(entity.type))
    print("Confidence Score: {0:.3f}\n".format(entity.score))
azure.ai.textanalytics.single_recognize_pii_entities(endpoint, credential, input_text, language='en', **kwargs)[source]

Recognize entities containing personal information for a single document.

Returns a list of personal information entities (“SSN”, “Bank Account”, etc.) in the document. For the list of supported entity types, check https://aka.ms/tanerpii. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • endpoint (str) – Supported Cognitive Services endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (str or TokenCredential) – Credentials needed for the client to connect to Azure. This can be the cognitive services subscription key or a token credential from azure.identity.

  • input_text (str) – The single string to recognize entities from.

  • language (str) – This is the 2 letter ISO 639-1 representation of a language. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default.

Keyword Arguments
  • show_stats (bool) – If set to true, the response will contain document-level statistics.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

Returns

An instance of RecognizePiiEntitiesResult.

Return type

RecognizePiiEntitiesResult

Raises

HttpResponseError

Example:

Recognize personally identifiable information entities in a single string.
import os
from azure.ai.textanalytics import single_recognize_pii_entities

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")

text = "The employee's ABA number is 111000025 and his SSN is 555-55-5555."

result = single_recognize_pii_entities(
    endpoint=endpoint,
    credential=key,
    input_text=text,
    language="en"
)

for entity in result.entities:
    print("Entity: {}".format(entity.text))
    print("Type: {}".format(entity.type))
    print("Confidence Score: {}\n".format(entity.score))
azure.ai.textanalytics.single_recognize_linked_entities(endpoint, credential, input_text, language='en', **kwargs)[source]

Recognize linked entities from a well-known knowledge base for a single document.

Returns a list of recognized entities with links to a well-known knowledge base. See https://aka.ms/talangs for supported languages in Text Analytics API.

Parameters
  • endpoint (str) – Supported Cognitive Services endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (str or TokenCredential) – Credentials needed for the client to connect to Azure. This can be the cognitive services subscription key or a token credential from azure.identity.

  • input_text (str) – The single string to recognize entities from.

  • language (str) – This is the 2 letter ISO 639-1 representation of a language. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default.

Keyword Arguments
  • show_stats (bool) – If set to true, the response will contain document-level statistics.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

Returns

An instance of RecognizeLinkedEntitiesResult

Return type

RecognizeLinkedEntitiesResult

Raises

HttpResponseError

Example:

Recognize linked entities in a single string.
import os
from azure.ai.textanalytics import single_recognize_linked_entities

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")

text = "Easter Island, a Chilean territory, is a remote volcanic island in Polynesia. " \
       "Its native name is Rapa Nui."

result = single_recognize_linked_entities(
    endpoint=endpoint,
    credential=key,
    input_text=text,
    language="en"
)

for entity in result.entities:
    print("Entity: {}".format(entity.name))
    print("Url: {}".format(entity.url))
    print("Data Source: {}\n".format(entity.data_source))
    print("Where this entity appears in the text:")
    for idx, match in enumerate(entity.matches):
        print("Match {}: {}".format(idx+1, match.text))
        print("Score: {0:.3f}".format(match.score))
        print("Offset: {}".format(match.offset))
        print("Length: {}\n".format(match.length))
azure.ai.textanalytics.single_extract_key_phrases(endpoint, credential, input_text, language='en', **kwargs)[source]

Extract Key Phrases for a single document.

Returns a list of strings denoting the key phrases in the input text. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • endpoint (str) – Supported Cognitive Services endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (str or TokenCredential) – Credentials needed for the client to connect to Azure. This can be the cognitive services subscription key or a token credential from azure.identity.

  • input_text (str) – The single string to extract key phrases from.

  • language (str) – This is the 2 letter ISO 639-1 representation of a language. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default.

Keyword Arguments
  • show_stats (bool) – If set to true, the response will contain document-level statistics.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

Returns

An instance of ExtractKeyPhrasesResult

Return type

ExtractKeyPhrasesResult

Raises

HttpResponseError

Example:

Extract key phrases in a single string.
import os
from azure.ai.textanalytics import single_extract_key_phrases

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")

text = "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle."

result = single_extract_key_phrases(
    endpoint=endpoint,
    credential=key,
    input_text=text,
    language="en"
)

print("Key phrases found:\n")
for phrase in result.key_phrases:
    print(phrase)
azure.ai.textanalytics.single_analyze_sentiment(endpoint, credential, input_text, language='en', **kwargs)[source]

Analyze sentiment in a single document.

Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it. See https://aka.ms/talangs for the list of enabled languages.

Parameters
  • endpoint (str) – Supported Cognitive Services endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (str or TokenCredential) – Credentials needed for the client to connect to Azure. This can be the cognitive services subscription key or a token credential from azure.identity.

  • input_text (str) – The single string to analyze sentiment from.

  • language (str) – This is the 2 letter ISO 639-1 representation of a language. For example, use “en” for English; “es” for Spanish etc. If not set, uses “en” for English as default.

Keyword Arguments
  • show_stats (bool) – If set to true, the response will contain document-level statistics.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version.

Returns

An instance of AnalyzeSentimentResult

Return type

AnalyzeSentimentResult

Raises

HttpResponseError

Example:

Analyze sentiment in a single string.
import os
from azure.ai.textanalytics import single_analyze_sentiment

endpoint = os.getenv("AZURE_TEXT_ANALYTICS_ENDPOINT")
key = os.getenv("AZURE_TEXT_ANALYTICS_KEY")

text = "I visited the restaurant last week. The portions were very generous. However, I did not like what " \
       "I ordered."

result = single_analyze_sentiment(
    endpoint=endpoint,
    credential=key,
    input_text=text,
    language="en"
)

print("Overall sentiment: {}".format(result.sentiment))
print("Overall scores: positive={0:.3f}; neutral={1:.3f}; negative={2:.3f} \n".format(
    result.document_scores.positive,
    result.document_scores.neutral,
    result.document_scores.negative,
))

for idx, sentence in enumerate(result.sentences):
    print("Sentence {} sentiment: {}".format(idx+1, sentence.sentiment))
    print("Offset: {}".format(sentence.offset))
    print("Length: {}".format(sentence.length))
    print("Sentence score: positive={0:.3f}; neutral={1:.3f}; negative={2:.3f} \n".format(
        sentence.sentence_scores.positive,
        sentence.sentence_scores.neutral,
        sentence.sentence_scores.negative,
    ))