azure.ai.textanalytics.aio package

class azure.ai.textanalytics.aio.TextAnalyticsClient(endpoint: str, credential: Union[AzureKeyCredential, AsyncTokenCredential], **kwargs: Any)[source]

The Text Analytics API is a suite of text analytics web services built with best-in-class Microsoft machine learning algorithms. The API can be used to analyze unstructured text for tasks such as sentiment analysis, key phrase extraction, and language detection. No training data is needed: the API uses advanced natural language processing techniques to make predictions directly on your text data.

Further documentation can be found at https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview

Parameters
  • endpoint (str) – Supported Cognitive Services or Text Analytics resource endpoints (protocol and hostname, for example: https://westus2.api.cognitive.microsoft.com).

  • credential (AzureKeyCredential or AsyncTokenCredential) – Credentials needed for the client to connect to Azure. This can be an instance of AzureKeyCredential if using a Cognitive Services/Text Analytics API key, or a token credential from azure.identity.

Keyword Arguments
  • default_country_hint (str) – Sets the default country_hint to use for all operations. Defaults to “US”. If you don’t want to use a country hint, pass the string “none”.

  • default_language (str) – Sets the default language to use for all operations. Defaults to “en”.

  • api_version (str or TextAnalyticsApiVersion) – The API version of the service to use for requests. It defaults to the latest service version. Setting to an older version may result in reduced feature compatibility.

Example:

Creating the TextAnalyticsClient with endpoint and API key.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
Creating the TextAnalyticsClient with endpoint and token credential from Azure Active Directory.
import os

from azure.ai.textanalytics.aio import TextAnalyticsClient
from azure.identity.aio import DefaultAzureCredential

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
credential = DefaultAzureCredential()

text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)
async analyze_sentiment(documents: Union[List[str], List[azure.ai.textanalytics._models.TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[azure.ai.textanalytics._models.AnalyzeSentimentResult, azure.ai.textanalytics._models.DocumentError]][source]

Analyze sentiment for a batch of documents. Turn on opinion mining with show_opinion_mining.

Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, you must pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {“id”: “1”, “language”: “en”, “text”: “hello world”}.

Keyword Arguments
  • show_opinion_mining (bool) – Whether to mine the opinions of a sentence and conduct more granular analysis around the aspects of a product or service (also known as aspect-based sentiment analysis). If set to true, the returned SentenceSentiment objects will have a mined_opinions property containing the result of this analysis. Only available for API version v3.1-preview and up.

  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English, “es” for Spanish, etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

  • show_stats (bool) – If set to true, response will contain document level statistics in the statistics field of the document-level response.

  • string_index_type (str) – Specifies the method used to interpret string offsets. Can be one of ‘UnicodeCodePoint’ (default), ‘Utf16CodePoint’, or ‘TextElements_v8’. For additional information see https://aka.ms/text-analytics-offsets

New in version v3.1-preview: The show_opinion_mining parameter.

Returns

The combined list of AnalyzeSentimentResult and DocumentError in the order the original documents were passed in.

Return type

list[AnalyzeSentimentResult or DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Analyze sentiment in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = [
    """I had the best day of my life. I decided to go sky-diving and it made me appreciate my whole life so much more.
    I developed a deep-connection with my instructor as well, and I feel as if I've made a life-long friend in her.""",
    """This was a waste of my time. All of the views on this drop are extremely boring, all I saw was grass. 0/10 would
    not recommend to any divers, even first timers.""",
    """This was pretty good! The sights were ok, and I had fun with my instructors! Can't complain too much about my experience""",
    """I only have one word for my experience: WOW!!! I can't believe I have had such a wonderful skydiving company right
    in my backyard this whole time! I will definitely be a repeat customer, and I want to take my grandmother skydiving too,
    I know she'll love it!"""
]

async with text_analytics_client:
    result = await text_analytics_client.analyze_sentiment(documents)

docs = [doc for doc in result if not doc.is_error]

print("Let's visualize the sentiment of each of these documents")
for idx, doc in enumerate(docs):
    print("Document text: {}".format(documents[idx]))
    print("Overall sentiment: {}".format(doc.sentiment))
async begin_analyze_batch_actions(documents: Union[List[str], List[azure.ai.textanalytics._models.TextDocumentInput], List[Dict[str, str]]], actions: List[Union[azure.ai.textanalytics._models.RecognizeEntitiesAction, azure.ai.textanalytics._models.RecognizePiiEntitiesAction, azure.ai.textanalytics._models.ExtractKeyPhrasesAction]], **kwargs: Any) → azure.core.polling._async_poller.AsyncLROPoller[azure.core.async_paging.AsyncItemPaged[azure.ai.textanalytics._models.AnalyzeBatchActionsResult]][source]

Start a long-running operation to perform a variety of text analysis actions over a batch of documents.

Parameters
  • documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {“id”: “1”, “language”: “en”, “text”: “hello world”}.

  • actions (list[RecognizeEntitiesAction or RecognizePiiEntitiesAction or ExtractKeyPhrasesAction]) – A heterogeneous list of actions to perform on the input documents. Each action object encapsulates the parameters used for that particular action type. The action results will be returned in the same order in which the actions were specified. Duplicate actions in the list are not supported.

Keyword Arguments
  • display_name (str) – An optional display name to set for the requested analysis.

  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English, “es” for Spanish, etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • show_stats (bool) – If set to true, response will contain document level statistics.

  • polling_interval (int) – Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 30 seconds.

Returns

An instance of an AsyncLROPoller. Call result() on the poller object to return a pageable heterogeneous list of the action results, in the order the actions were sent in this method.

Return type

AsyncLROPoller[AsyncItemPaged[AnalyzeBatchActionsResult]]

Raises

HttpResponseError or TypeError or ValueError or NotImplementedError

Example:

Start a long-running operation to perform a variety of text analysis tasks over a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient
from azure.ai.textanalytics import (
    RecognizeEntitiesAction,
    RecognizePiiEntitiesAction,
    ExtractKeyPhrasesAction,
    AnalyzeBatchActionsType
)

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
)

documents = [
    "We went to Contoso Steakhouse located at midtown NYC last week for a dinner party, and we adore the spot! \
    They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) \
    and he is super nice, coming out of the kitchen and greeted us all. We enjoyed very much dining in the place! \
    The Sirloin steak I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their \
    online menu at www.contososteakhouse.com, call 312-555-0176 or send email to order@contososteakhouse.com! \
    The only complaint I have is the food didn't come fast enough. Overall I highly recommend it!"
]

async with text_analytics_client:
    poller = await text_analytics_client.begin_analyze_batch_actions(
        documents,
        display_name="Sample Text Analysis",
        actions=[
            RecognizeEntitiesAction(),
            RecognizePiiEntitiesAction(),
            ExtractKeyPhrasesAction()
        ]
    )

    result = await poller.result()

    async for action_result in result:
        if action_result.is_error:
            raise ValueError(
                "Action has failed with message: {}".format(
                    action_result.error.message
                )
            )
        if action_result.action_type == AnalyzeBatchActionsType.RECOGNIZE_ENTITIES:
            print("Results of Entities Recognition action:")
            for idx, doc in enumerate(action_result.document_results):
                print("\nDocument text: {}".format(documents[idx]))
                for entity in doc.entities:
                    print("Entity: {}".format(entity.text))
                    print("...Category: {}".format(entity.category))
                    print("...Confidence Score: {}".format(entity.confidence_score))
                    print("...Offset: {}".format(entity.offset))
                print("------------------------------------------")

        if action_result.action_type == AnalyzeBatchActionsType.RECOGNIZE_PII_ENTITIES:
            print("Results of PII Entities Recognition action:")
            for idx, doc in enumerate(action_result.document_results):
                print("Document text: {}".format(documents[idx]))
                for entity in doc.entities:
                    print("Entity: {}".format(entity.text))
                    print("Category: {}".format(entity.category))
                    print("Confidence Score: {}\n".format(entity.confidence_score))
                print("------------------------------------------")

        if action_result.action_type == AnalyzeBatchActionsType.EXTRACT_KEY_PHRASES:
            print("Results of Key Phrase Extraction action:")
            for idx, doc in enumerate(action_result.document_results):
                print("Document text: {}\n".format(documents[idx]))
                print("Key Phrases: {}\n".format(doc.key_phrases))
                print("------------------------------------------")

async begin_analyze_healthcare_entities(documents: Union[List[str], List[TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → AsyncLROPoller[AsyncItemPaged[AnalyzeHealthcareEntitiesResultItem]][source]

Analyze healthcare entities and identify relationships between these entities in a batch of documents.

Entities are associated with references that can be found in existing knowledge bases, such as UMLS, CHV, MSH, etc.

A relation comprises a pair of entities and a directional relationship between them.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, you must pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {“id”: “1”, “language”: “en”, “text”: “hello world”}.

Keyword Arguments
  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

  • show_stats (bool) – If set to true, response will contain document level statistics.

  • string_index_type (str) – Specifies the method used to interpret string offsets. Can be one of ‘UnicodeCodePoint’ (default), ‘Utf16CodePoint’, or ‘TextElements_v8’. For additional information see https://aka.ms/text-analytics-offsets

  • polling_interval (int) – Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds.

  • continuation_token (str) – A continuation token to restart a poller from a saved state.

Returns

An instance of an AsyncLROPoller. Call result() on the poller object to return a pageable of AnalyzeHealthcareEntitiesResultItem.

Return type

AsyncLROPoller[AsyncItemPaged[AnalyzeHealthcareEntitiesResultItem]]

Raises

HttpResponseError or TypeError or ValueError or NotImplementedError

Example:

Analyze healthcare entities in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
)

documents = [
    "Subject is taking 100mg of ibuprofen twice daily"
]

async with text_analytics_client:
    poller = await text_analytics_client.begin_analyze_healthcare_entities(documents)
    result = await poller.result()
    docs = [doc async for doc in result if not doc.is_error]

print("Results of Healthcare Entities Analysis:")
for idx, doc in enumerate(docs):
    print("Document text: {}\n".format(documents[idx]))
    for entity in doc.entities:
        print("Entity: {}".format(entity.text))
        print("...Category: {}".format(entity.category))
        print("...Subcategory: {}".format(entity.subcategory))
        print("...Offset: {}".format(entity.offset))
        print("...Confidence score: {}".format(entity.confidence_score))
        if entity.data_sources is not None:
            print("...Data Sources:")
            for data_source in entity.data_sources:
                print("......Entity ID: {}".format(data_source.entity_id))
                print("......Name: {}".format(data_source.name))
        if len(entity.related_entities) > 0:
            print("...Related Entities:")
            for related_entity, relation_type in entity.related_entities.items():
                print("......Entity Text: {}".format(related_entity.text))
                print("......Relation Type: {}".format(relation_type))
    print("------------------------------------------")

async close() → None

Close sockets opened by the client. Calling this method is unnecessary when using the client as a context manager.
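
A minimal sketch of closing the client explicitly when it is not used as an async context manager; the environment variables match the examples above.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
try:
    result = await text_analytics_client.detect_language(["Hello world"])
finally:
    # not needed when the client is used via "async with"
    await text_analytics_client.close()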

async detect_language(documents: Union[List[str], List[azure.ai.textanalytics._models.DetectLanguageInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[azure.ai.textanalytics._models.DetectLanguageResult, azure.ai.textanalytics._models.DocumentError]][source]

Detect language for a batch of documents.

Returns the detected language and a numeric score between zero and one. Scores close to one indicate high confidence that the identified language is correct. See https://aka.ms/talangs for the list of enabled languages.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[DetectLanguageInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and country_hint on a per-item basis, you must pass a list[DetectLanguageInput] or a list of dict representations of DetectLanguageInput, like {“id”: “1”, “country_hint”: “us”, “text”: “hello world”}.

Keyword Arguments
  • country_hint (str) – Country of origin hint for the entire batch. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Per-document country hints will take precedence over whole batch hints. Defaults to “US”. If you don’t want to use a country hint, pass the string “none”.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

  • show_stats (bool) – If set to true, response will contain document level statistics in the statistics field of the document-level response.

Returns

The combined list of DetectLanguageResult and DocumentError in the order the original documents were passed in.

Return type

list[DetectLanguageResult or DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Detecting language in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    """
    The concierge Paulette was extremely helpful. Sadly when we arrived the elevator was broken, but with Paulette's help we barely noticed this inconvenience.
    She arranged for our baggage to be brought up to our room with no extra charge and gave us a free meal to refurbish all of the calories we lost from
    walking up the stairs :). Can't say enough good things about my experience!
    """,
    """
    最近由于工作压力太大,我们决定去富酒店度假。那儿的温泉实在太舒服了,我跟我丈夫都完全恢复了工作前的青春精神!加油!
    """
]
async with text_analytics_client:
    result = await text_analytics_client.detect_language(documents)

reviewed_docs = [doc for doc in result if not doc.is_error]

print("Let's see what language each review is in!")

for idx, doc in enumerate(reviewed_docs):
    print("Review #{} is in '{}', which has ISO639-1 name '{}'\n".format(
        idx, doc.primary_language.name, doc.primary_language.iso6391_name
    ))
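
A minimal sketch of per-document country hints using dict inputs, as described in the documents parameter above; passing "none" disables the hint for that document. The client from the example above is reused.
documents = [
    {"id": "1", "country_hint": "ES", "text": "Este es un documento escrito en Español."},
    {"id": "2", "country_hint": "none", "text": "Tumhara naam kya hai?"},
]

async with text_analytics_client:
    result = await text_analytics_client.detect_language(documents)

for doc in result:
    if not doc.is_error:
        print("Document {} is in {}".format(doc.id, doc.primary_language.name))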
async extract_key_phrases(documents: Union[List[str], List[azure.ai.textanalytics._models.TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[azure.ai.textanalytics._models.ExtractKeyPhrasesResult, azure.ai.textanalytics._models.DocumentError]][source]

Extract key phrases from a batch of documents.

Returns a list of strings denoting the key phrases in the input text. For example, for the input text “The food was delicious and there were wonderful staff”, the API returns the main talking points: “food” and “wonderful staff”.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, you must pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {“id”: “1”, “language”: “en”, “text”: “hello world”}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English, “es” for Spanish, etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

  • show_stats (bool) – If set to true, response will contain document level statistics in the statistics field of the document-level response.

Returns

The combined list of ExtractKeyPhrasesResult and DocumentError in the order the original documents were passed in.

Return type

list[ExtractKeyPhrasesResult or DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Extract the key phrases in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
articles = [
    """
    Washington, D.C. Autumn in DC is a uniquely beautiful season. The leaves fall from the trees
    in a city chock-full of forests, leaving yellow leaves on the ground and a clearer view of the
    blue sky above...
    """,
    """
    Redmond, WA. In the past few days, Microsoft has decided to further postpone the start date of
    its United States workers, due to the pandemic that rages with no end in sight...
    """,
    """
    Redmond, WA. Employees at Microsoft can be excited about the new coffee shop that will open on campus
    once workers no longer have to work remotely...
    """
]

async with text_analytics_client:
    result = await text_analytics_client.extract_key_phrases(articles)

for idx, doc in enumerate(result):
    if not doc.is_error:
        print("Key phrases in article #{}: {}".format(
            idx + 1,
            ", ".join(doc.key_phrases)
        ))
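
A minimal sketch of specifying per-document IDs and languages with TextDocumentInput, matching the documents parameter described above; the client from the example above is reused.
from azure.ai.textanalytics import TextDocumentInput

documents = [
    TextDocumentInput(id="1", language="en", text="The food was delicious."),
    TextDocumentInput(id="2", language="es", text="La comida estaba deliciosa."),
]

async with text_analytics_client:
    result = await text_analytics_client.extract_key_phrases(documents)

for doc in result:
    if not doc.is_error:
        print("Key phrases in document {}: {}".format(doc.id, ", ".join(doc.key_phrases)))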
async recognize_entities(documents: Union[List[str], List[azure.ai.textanalytics._models.TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[azure.ai.textanalytics._models.RecognizeEntitiesResult, azure.ai.textanalytics._models.DocumentError]][source]

Recognize entities for a batch of documents.

Identifies and categorizes entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. For the list of supported entity types, check: https://aka.ms/taner

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, you must pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {“id”: “1”, “language”: “en”, “text”: “hello world”}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English, “es” for Spanish, etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

  • show_stats (bool) – If set to true, response will contain document level statistics in the statistics field of the document-level response.

  • string_index_type (str) – Specifies the method used to interpret string offsets. Can be one of ‘UnicodeCodePoint’ (default), ‘Utf16CodePoint’, or ‘TextElements_v8’. For additional information see https://aka.ms/text-analytics-offsets

Returns

The combined list of RecognizeEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizeEntitiesResult or DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Recognize entities in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
reviews = [
    """I work for Foo Company, and we hired Contoso for our annual founding ceremony. The food
    was amazing and we all can't say enough good words about the quality and the level of service.""",
    """We at the Foo Company re-hired Contoso after all of our past successes with the company.
    Though the food was still great, I feel there has been a quality drop since their last time
    catering for us. Is anyone else running into the same problem?""",
    """Bar Company is over the moon about the service we received from Contoso, the best sliders ever!!!!"""
]

async with text_analytics_client:
    result = await text_analytics_client.recognize_entities(reviews)

result = [review for review in result if not review.is_error]

for review in result:
    for entity in review.entities:
        print("Entity '{}' has category '{}'".format(entity.text, entity.category))
async recognize_linked_entities(documents: Union[List[str], List[azure.ai.textanalytics._models.TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[azure.ai.textanalytics._models.RecognizeLinkedEntitiesResult, azure.ai.textanalytics._models.DocumentError]][source]

Recognize linked entities from a well-known knowledge base for a batch of documents.

Identifies and disambiguates the identity of each entity found in text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, you must pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {“id”: “1”, “language”: “en”, “text”: “hello world”}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English, “es” for Spanish, etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

  • show_stats (bool) – If set to true, response will contain document level statistics in the statistics field of the document-level response.

  • string_index_type (str) – Specifies the method used to interpret string offsets. Can be one of ‘UnicodeCodePoint’ (default), ‘Utf16CodePoint’, or ‘TextElements_v8’. For additional information see https://aka.ms/text-analytics-offsets

Returns

The combined list of RecognizeLinkedEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizeLinkedEntitiesResult or DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Recognize linked entities in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    """
    Microsoft was founded by Bill Gates with some friends he met at Harvard. One of his friends,
    Steve Ballmer, eventually became CEO after Bill Gates as well. Steve Ballmer eventually stepped
    down as CEO of Microsoft, and was succeeded by Satya Nadella.
    Microsoft originally moved its headquarters to Bellevue, Washington in January 1979, but is now
    headquartered in Redmond.
    """
]

async with text_analytics_client:
    result = await text_analytics_client.recognize_linked_entities(documents)

docs = [doc for doc in result if not doc.is_error]

print(
    "Let's map each entity to it's Wikipedia article. I also want to see how many times each "
    "entity is mentioned in a document\n\n"
)
entity_to_url = {}
for doc in docs:
    for entity in doc.entities:
        print("Entity '{}' has been mentioned '{}' time(s)".format(
            entity.name, len(entity.matches)
        ))
        if entity.data_source == "Wikipedia":
            entity_to_url[entity.name] = entity.url
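
Continuing from the example above, a minimal sketch of inspecting the individual matches for each linked entity; the text and confidence_score properties are assumed from the LinkedEntityMatch model.
for doc in docs:
    for entity in doc.entities:
        print("Entity '{}' links to {}".format(entity.name, entity.url))
        for match in entity.matches:
            print("...matched text '{}' with confidence score {}".format(
                match.text, match.confidence_score
            ))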
async recognize_pii_entities(documents: Union[List[str], List[azure.ai.textanalytics._models.TextDocumentInput], List[Dict[str, str]]], **kwargs: Any) → List[Union[azure.ai.textanalytics._models.RecognizePiiEntitiesResult, azure.ai.textanalytics._models.DocumentError]][source]

Recognize entities containing personal information for a batch of documents.

Returns a list of personal information entities (“SSN”, “Bank Account”, etc.) in the document. For the list of supported entity types, check https://aka.ms/tanerpii

See https://docs.microsoft.com/azure/cognitive-services/text-analytics/overview#data-limits for document length limits, maximum batch size, and supported text encoding.

Parameters

documents (list[str] or list[TextDocumentInput] or list[dict[str, str]]) – The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis, you must pass a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {“id”: “1”, “language”: “en”, “text”: “hello world”}.

Keyword Arguments
  • language (str) – The 2 letter ISO 639-1 representation of language for the entire batch. For example, use “en” for English, “es” for Spanish, etc. If not set, uses “en” for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Text Analytics API.

  • model_version (str) – This value indicates which model will be used for scoring, e.g. “latest”, “2019-10-01”. If a model-version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning

  • show_stats (bool) – If set to true, response will contain document level statistics in the statistics field of the document-level response.

  • domain_filter (str or PiiEntityDomainType) – Filters the response entities to those in the specified domain only. For example, if set to ‘phi’, only entities in the Protected Health Information domain will be returned. See https://aka.ms/tanerpii for more information.

  • string_index_type (str) – Specifies the method used to interpret string offsets. Can be one of ‘UnicodeCodePoint’ (default), ‘Utf16CodePoint’, or ‘TextElements_v8’. For additional information see https://aka.ms/text-analytics-offsets

Returns

The combined list of RecognizePiiEntitiesResult and DocumentError in the order the original documents were passed in.

Return type

list[RecognizePiiEntitiesResult or DocumentError]

Raises

HttpResponseError or TypeError or ValueError

Example:

Recognize personally identifiable information entities in a batch of documents.
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics.aio import TextAnalyticsClient

endpoint = os.environ["AZURE_TEXT_ANALYTICS_ENDPOINT"]
key = os.environ["AZURE_TEXT_ANALYTICS_KEY"]

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint, credential=AzureKeyCredential(key)
)
documents = [
    """Parker Doe has repaid all of their loans as of 2020-04-25.
    Their SSN is 859-98-0987. To contact them, use their phone number
    555-555-5555. They are originally from Brazil and have Brazilian CPF number 998.214.865-68"""
]

async with text_analytics_client:
    result = await text_analytics_client.recognize_pii_entities(documents)

docs = [doc for doc in result if not doc.is_error]

print(
    "Let's compare the original document with the documents after redaction. "
    "I also want to comb through all of the entities that got redacted"
)
for idx, doc in enumerate(docs):
    print("Document text: {}".format(documents[idx]))
    print("Redacted document text: {}".format(doc.redacted_text))
    for entity in doc.entities:
        print("...Entity '{}' with category '{}' got redacted".format(
            entity.text, entity.category
        ))
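
A minimal sketch of restricting results to the Protected Health Information domain with the domain_filter keyword documented above, reusing the client and documents from the example; the PROTECTED_HEALTH_INFORMATION enum member is an assumption about PiiEntityDomainType in this SDK version.
from azure.ai.textanalytics import PiiEntityDomainType

async with text_analytics_client:
    result = await text_analytics_client.recognize_pii_entities(
        documents, domain_filter=PiiEntityDomainType.PROTECTED_HEALTH_INFORMATION
    )

for doc in result:
    if not doc.is_error:
        for entity in doc.entities:
            print("PHI entity '{}' with category '{}'".format(
                entity.text, entity.category
            ))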