azure.ai.textanalytics.aio package

class azure.ai.textanalytics.aio.TextAnalyticsClient(endpoint: str, credential: Union[AzureKeyCredential, AsyncTokenCredential], **kwargs: Any)

The Text Analytics API is a suite of text analytics web services built with best-in-class Microsoft machine learning algorithms. The API can be used to analyze unstructured text for tasks such as sentiment analysis, key phrase extraction, and language detection. No training data is needed to use this API; just bring your text data. This API uses advanced natural language processing techniques to deliver best-in-class predictions.

Further documentation can be found at https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview

Parameters
  • endpoint (str) – Supported Cognitive Services or Text Analytics resource endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (AzureKeyCredential or AsyncTokenCredential) – Credentials needed for the client to connect to Azure. This can be an instance of AzureKeyCredential if using a Cognitive Services/Text Analytics API key, or a token credential from azure.identity.

Keyword Arguments
  • default_country_hint (str) – Sets the default country_hint to use for all operations. Defaults to “US”. If you don’t want to use a country hint, pass the string “none”.

  • default_language (str) – Sets the default language to use for all operations. Defaults to “en”.

Example:

Creating the TextAnalyticsClient with endpoint and API key.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient
endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
Creating the TextAnalyticsClient with endpoint and token credential from Azure Active Directory.
import os

from azure.ai.textanalytics.aio import TextAnalyticsClient
from azure.identity.aio import DefaultAzureCredential

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
credential = DefaultAzureCredential()

text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)
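
The default_language and default_country_hint keyword arguments documented above can also be set at construction time. A minimal sketch, assuming the endpoint and key variables from the snippets above:

text_analytics_client = TextAnalyticsClient(
    endpoint,
    AzureKeyCredential(key),
    default_language="es",        # applies to every operation unless overridden
    default_country_hint="none",  # disables country hints for the whole client
)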
async analyze_sentiment(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[AnalyzeSentimentResult, DocumentError]]

Analyze sentiment for a batch of documents.

Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The two-letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English or “es” for Spanish. If not set, “en” (English) is used as the default. Per-document language takes precedence over the whole-batch language. See https://aka.ms/talangs for the languages supported by the Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of AnalyzeSentimentResult and DocumentError in the order the original documents were passed in.

Return type

list[Union[AnalyzeSentimentResult, DocumentError]]

Raises

HttpResponseError or TypeError or ValueError

Example:

Analyze sentiment in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "I had the best day of my life.",
    "This was a waste of my time. The speaker put me to sleep.",
    "No tengo dinero ni nada que dar...",
    "L'hôtel n'était pas très confortable. L'éclairage était trop sombre."
]

async with text_analytics_client:
    result = await text_analytics_client.analyze_sentiment(documents)

# result preserves input order, so idx maps each result back to documents.
for idx, doc in enumerate(result):
    if doc.is_error:
        continue
    print("Document text: {}".format(documents[idx]))
    print("Overall sentiment: {}".format(doc.sentiment))
async close() → None

Close sockets opened by the client. Calling this method is unnecessary when using the client as a context manager.
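
When the client is not used as an async context manager, close it explicitly once you are done. A minimal sketch, assuming the endpoint and key setup from the examples above:

text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
try:
    result = await text_analytics_client.analyze_sentiment(["Hello world"])
finally:
    # Release the underlying transport sockets.
    await text_analytics_client.close()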

async detect_language(documents: Union[List[str], List[DetectLanguageInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[DetectLanguageResult, DocumentError]]

Detect language for a batch of documents.

Returns the detected language and a numeric score between zero and one. Scores close to one indicate high confidence that the identified language is correct. See https://aka.ms/talangs for the list of enabled languages.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[DetectLanguageInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and country_hint on a per-item basis, pass a list[DetectLanguageInput] or a list of dict representations of DetectLanguageInput, like {"id": "1", "country_hint": "us", "text": "hello world"}.

Keyword Arguments
  • country_hint (str) – A country hint for the entire batch. Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Per-document country hints take precedence over the whole-batch hint. Defaults to “US”. If you don’t want to use a country hint, pass the string “none”.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of DetectLanguageResult and DocumentError in the order the original documents were passed in.

Return type

list[Union[DetectLanguageResult, DocumentError]]

Raises

HttpResponseError or TypeError or ValueError

Example:

Detect language in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "This document is written in English.",
    "Este es un document escrito en Español.",
    "这是一个用中文写的文件",
    "Dies ist ein Dokument in deutsche Sprache.",
    "Detta är ett dokument skrivet på engelska."
]
async with text_analytics_client:
    result = await text_analytics_client.detect_language(documents)

for idx, doc in enumerate(result):
    if not doc.is_error:
        print("Document text: {}".format(documents[idx]))
        print("Language detected: {}".format(doc.primary_language.name))
        print("ISO6391 name: {}".format(doc.primary_language.iso6391_name))
        print("Confidence score: {}\n".format(doc.primary_language.confidence_score))
    if doc.is_error:
        print(doc.id, doc.error)
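
A per-document country_hint can be supplied through dict representations of DetectLanguageInput, as described under Parameters above. A minimal sketch reusing the client from the example:

documents = [
    {"id": "1", "country_hint": "US", "text": "I will go to the park."},
    {"id": "2", "country_hint": "none", "text": "Este es un documento escrito en Español."},
]

async with text_analytics_client:
    result = await text_analytics_client.detect_language(documents)

for doc in result:
    if not doc.is_error:
        # The per-document hint takes precedence over any batch-wide default.
        print(doc.id, doc.primary_language.iso6391_name)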
async extract_key_phrases(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[ExtractKeyPhrasesResult, DocumentError]]

Extract key phrases from a batch of documents.

Returns a list of strings denoting the key phrases in the input text. For example, for the input text “The food was delicious and there were wonderful staff”, the API returns the main talking points: “food” and “wonderful staff”.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The two-letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English or “es” for Spanish. If not set, “en” (English) is used as the default. Per-document language takes precedence over the whole-batch language. See https://aka.ms/talangs for the languages supported by the Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of ExtractKeyPhrasesResult and DocumentError in the order the original documents were passed in.

Return type

list[Union[ExtractKeyPhrasesResult, DocumentError]]

Raises

HttpResponseError or TypeError or ValueError

Example:

Extract the key phrases in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "Redmond is a city in King County, Washington, United States, located 15 miles east of Seattle.",
    "I need to take my cat to the veterinarian.",
    "I will travel to South America in the summer.",
]

async with text_analytics_client:
    result = await text_analytics_client.extract_key_phrases(documents)

for doc in result:
    if not doc.is_error:
        print(doc.key_phrases)
    if doc.is_error:
        print(doc.id, doc.error)
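
Passing show_stats=True adds per-document statistics to each result. A minimal sketch reusing the documents list above; the statistics field names (character_count, transaction_count) are taken from the TextDocumentStatistics model and should be treated as an assumption if your package version differs:

async with text_analytics_client:
    result = await text_analytics_client.extract_key_phrases(
        documents, show_stats=True
    )

for doc in result:
    if not doc.is_error:
        print(doc.key_phrases)
        # statistics is only populated when show_stats=True was requested;
        # field names assumed from TextDocumentStatistics.
        print("characters: {}, transactions: {}".format(
            doc.statistics.character_count,
            doc.statistics.transaction_count,
        ))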
async recognize_entities(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[RecognizeEntitiesResult, DocumentError]]

Recognize entities for a batch of documents.

Identifies and categorizes entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. For the list of supported entity types, check: https://aka.ms/taner

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The two-letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English or “es” for Spanish. If not set, “en” (English) is used as the default. Per-document language takes precedence over the whole-batch language. See https://aka.ms/talangs for the languages supported by the Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of RecognizeEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[Union[RecognizeEntitiesResult, DocumentError]]

Raises

HttpResponseError or TypeError or ValueError

Example:

Recognize entities in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "Microsoft was founded by Bill Gates and Paul Allen.",
    "I had a wonderful trip to Seattle last week.",
    "I visited the Space Needle 2 times.",
]

async with text_analytics_client:
    result = await text_analytics_client.recognize_entities(documents)

# result preserves input order, so idx maps each result back to documents.
for idx, doc in enumerate(result):
    if doc.is_error:
        continue
    print("\nDocument text: {}".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: \t", entity.text, "\tCategory: \t", entity.category,
              "\tConfidence Score: \t", entity.confidence_score)
async recognize_linked_entities(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[RecognizeLinkedEntitiesResult, DocumentError]]

Recognize linked entities from a well-known knowledge base for a batch of documents.

Identifies and disambiguates the identity of each entity found in text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}.

Keyword Arguments
  • language (str) – The two-letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English or “es” for Spanish. If not set, “en” (English) is used as the default. Per-document language takes precedence over the whole-batch language. See https://aka.ms/talangs for the languages supported by the Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model version is not specified, the API will default to the latest, non-preview version.

  • show_stats (bool) – If set to true, the response will contain document-level statistics.

Returns

The combined list of RecognizeLinkedEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[Union[RecognizeLinkedEntitiesResult, DocumentError]]

Raises

HttpResponseError or TypeError or ValueError

Example:

Recognize linked entities in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    "Microsoft moved its headquarters to Bellevue, Washington in January 1979.",
    "Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella.",
    "Microsoft superó a Apple Inc. como la compañía más valiosa que cotiza en bolsa en el mundo.",
]

async with text_analytics_client:
    result = await text_analytics_client.recognize_linked_entities(documents)

# result preserves input order, so idx maps each result back to documents.
for idx, doc in enumerate(result):
    if doc.is_error:
        continue
    print("Document text: {}\n".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: {}".format(entity.name))
        print("Url: {}".format(entity.url))
        print("Data Source: {}".format(entity.data_source))
        for match in entity.matches:
            print("Confidence Score: {}".format(match.confidence_score))
            print("Entity as appears in request: {}".format(match.text))
    print("------------------------------------------")