azure.ai.language.conversations package
- class azure.ai.language.conversations.ConversationAnalysisClient(endpoint: str, credential: Union[azure.core.credentials.AzureKeyCredential, azure.core.credentials.TokenCredential], **kwargs: Any)

The language service conversations API is a suite of natural language processing (NLP) skills that can be used to analyze structured conversations (textual or spoken). Further documentation can be found at https://docs.microsoft.com/azure/cognitive-services/language-service/overview. A minimal client construction sketch follows the keyword arguments below.
- Parameters
endpoint (str) – Supported Cognitive Services endpoint (e.g., https://<resource-name>.cognitiveservices.azure.com). Required.
credential (AzureKeyCredential or TokenCredential) – Credential needed for the client to connect to Azure. This can be an instance of AzureKeyCredential if using a Language API key, or a token credential from azure.identity.
- Keyword Arguments
api_version (str) – Api Version. Available values are “2023-04-01” and “2022-05-01”. Default value is “2023-04-01”. Note that overriding this default value may result in unsupported behavior.
polling_interval (int) – Default waiting time between two polls for LRO operations if no Retry-After header is present.
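As a minimal, hedged construction sketch: the environment variable names below are illustrative, not part of this API.

import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Illustrative environment variable names; substitute your own resource values.
endpoint = os.environ["AZURE_CONVERSATIONS_ENDPOINT"]
key = os.environ["AZURE_CONVERSATIONS_KEY"]

# Construct the client with an API key credential; a token credential from
# azure.identity (e.g. DefaultAzureCredential) can be passed instead.
client = ConversationAnalysisClient(endpoint, AzureKeyCredential(key))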
- analyze_conversation(task: Union[collections.abc.MutableMapping[str, Any], IO], **kwargs: Any) → collections.abc.MutableMapping[str, Any]

Analyzes the input conversation utterance.
See https://learn.microsoft.com/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation for more information.
- Parameters
task (JSON or IO) – A single conversational task to execute. Is either a JSON type or an IO type. Required.
- Keyword Arguments
content_type (str) – Body Parameter content-type. Known values are: ‘application/json’. Default value is None.
- Returns
JSON object
- Return type
JSON
- Raises
HttpResponseError
Example
# The input is polymorphic. The following are possible polymorphic inputs based off
# discriminator "kind":

# JSON input template for discriminator value "Conversation":
analyze_conversation_task = {
    "analysisInput": {
        "conversationItem": {
            "id": "str",  # The ID of a conversation item. Required.
            "participantId": "str",  # The participant ID of a conversation item. Required.
            "language": "str",  # Optional. The override language of a conversation item in BCP 47 language representation.
            "modality": "str",  # Optional. Enumeration of supported conversational modalities. Known values are: "transcript" and "text".
            "role": "str"  # Optional. Role of the participant. Known values are: "agent", "customer", and "generic".
        }
    },
    "kind": "Conversation",
    "parameters": {
        "deploymentName": "str",  # The name of the deployment to use. Required.
        "projectName": "str",  # The name of the project to use. Required.
        "directTarget": "str",  # Optional. The name of a target project to forward the request to.
        "isLoggingEnabled": bool,  # Optional. If true, the service will keep the query for further review.
        "stringIndexType": "TextElements_v8",  # Optional. Default value is "TextElements_v8". Specifies the method used to interpret string offsets. Set to "UnicodeCodePoint" for Python strings. Known values are: "TextElements_v8", "UnicodeCodePoint", and "Utf16CodeUnit".
        "targetProjectParameters": {
            "str": analysis_parameters
        },
        "verbose": bool  # Optional. If true, the service will return more detailed information in the response.
    }
}

# JSON input template you can fill out and use as your body input.
task = analyze_conversation_task

# The response is polymorphic. The following are possible polymorphic responses based off
# discriminator "kind":

# JSON input template for discriminator value "ConversationResult":
analyze_conversation_task_result = {
    "kind": "ConversationResult",
    "result": {
        "prediction": base_prediction,
        "query": "str",  # The conversation utterance given by the caller. Required.
        "detectedLanguage": "str"  # Optional. The system detected language for the query in BCP 47 language representation.
    }
}

# JSON input template for discriminator value "Conversation":
base_prediction = {
    "entities": [
        {
            "category": "str",  # The entity category. Required.
            "confidenceScore": 0.0,  # The entity confidence score. Required.
            "length": 0,  # The length of the text. Required.
            "offset": 0,  # The starting index of this entity in the query. Required.
            "text": "str",  # The predicted entity text. Required.
            "extraInformation": [
                base_extra_information
            ],
            "resolutions": [
                base_resolution
            ]
        }
    ],
    "intents": [
        {
            "category": "str",  # A predicted class. Required.
            "confidenceScore": 0.0  # The confidence score of the class from 0.0 to 1.0. Required.
        }
    ],
    "projectKind": "Conversation",
    "topIntent": "str"  # Optional. The intent with the highest score.
}

# JSON input template for discriminator value "Orchestration":
base_prediction = {
    "intents": {
        "str": target_intent_result
    },
    "projectKind": "Orchestration",
    "topIntent": "str"  # Optional. The intent with the highest score.
}

# response body for status code(s): 200
response == analyze_conversation_task_result
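For orientation, here is a hedged usage sketch built from the "Conversation" template above. The project name, deployment name, and utterance are illustrative placeholders, not values from this reference.

# Illustrative sketch: "MyProject" and "production" are placeholder project/deployment names.
result = client.analyze_conversation(
    task={
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {
                "id": "1",
                "participantId": "user1",
                "modality": "text",
                "language": "en",
                "text": "Book a table for two tonight",  # placeholder utterance to analyze
            }
        },
        "parameters": {
            "projectName": "MyProject",
            "deploymentName": "production",
            "stringIndexType": "UnicodeCodePoint",  # recommended for Python string offsets
        },
    }
)

prediction = result["result"]["prediction"]
print(prediction["topIntent"])  # highest-scoring intent returned by the project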
- begin_conversation_analysis(task: Union[collections.abc.MutableMapping[str, Any], IO], **kwargs: Any) → azure.core.polling._poller.LROPoller[collections.abc.MutableMapping[str, Any]]

Submit analysis job for conversations.
Submit a collection of conversations for analysis. Specify one or more unique tasks to be executed.
See https://learn.microsoft.com/rest/api/language/2023-04-01/analyze-conversation/submit-job for more information.
- Parameters
task (JSON or IO) – Collection of conversations to analyze and one or more tasks to execute. Is either a JSON type or an IO type. Required.
- Keyword Arguments
content_type (str) – Body Parameter content-type. Known values are: ‘application/json’. Default value is None.
continuation_token (str) – A continuation token to restart a poller from a saved state.
polling (bool or PollingMethod) – By default, your polling method will be LROBasePolling. Pass in False for this operation to not poll, or pass in your own initialized polling object for a personal polling strategy.
polling_interval (int) – Default waiting time between two polls for LRO operations if no Retry-After header is present.
- Returns
An instance of LROPoller that returns a JSON object
- Return type
LROPoller[JSON]
- Raises
HttpResponseError
Example
# JSON input template you can fill out and use as your body input.
task = {
    "analysisInput": {
        "conversations": [
            conversation
        ]
    },
    "tasks": [
        analyze_conversation_lro_task
    ],
    "displayName": "str"  # Optional. Display name for the analysis job.
}

# response body for status code(s): 200
response == {
    "createdDateTime": "2020-02-20 00:00:00",  # Required.
    "jobId": "str",  # Required.
    "lastUpdatedDateTime": "2020-02-20 00:00:00",  # Required.
    "status": "str",  # The status of the task at the mentioned last update time. Required. Known values are: "notStarted", "running", "succeeded", "failed", "cancelled", "cancelling", and "partiallyCompleted".
    "tasks": {
        "completed": 0,  # Count of tasks that finished successfully. Required.
        "failed": 0,  # Count of tasks that failed. Required.
        "inProgress": 0,  # Count of tasks that are currently in progress. Required.
        "total": 0,  # Total count of tasks submitted as part of the job. Required.
        "items": [
            analyze_conversation_job_result
        ]
    },
    "displayName": "str",  # Optional.
    "errors": [
        {
            "code": "str",  # One of a server-defined set of error codes. Required. Known values are: "InvalidRequest", "InvalidArgument", "Unauthorized", "Forbidden", "NotFound", "ProjectNotFound", "OperationNotFound", "AzureCognitiveSearchNotFound", "AzureCognitiveSearchIndexNotFound", "TooManyRequests", "AzureCognitiveSearchThrottling", "AzureCognitiveSearchIndexLimitReached", "InternalServerError", "ServiceUnavailable", "Timeout", "QuotaExceeded", "Conflict", and "Warning".
            "message": "str",  # A human-readable representation of the error. Required.
            "details": [
                ...
            ],
            "innererror": {
                "code": "str",  # One of a server-defined set of error codes. Required. Known values are: "InvalidRequest", "InvalidParameterValue", "KnowledgeBaseNotFound", "AzureCognitiveSearchNotFound", "AzureCognitiveSearchThrottling", "ExtractionFailure", "InvalidRequestBodyFormat", "EmptyRequest", "MissingInputDocuments", "InvalidDocument", "ModelVersionIncorrect", "InvalidDocumentBatch", "UnsupportedLanguageCode", and "InvalidCountryHint".
                "message": "str",  # Error message. Required.
                "details": {
                    "str": "str"  # Optional. Error details.
                },
                "innererror": ...,
                "target": "str"  # Optional. Error target.
            },
            "target": "str"  # Optional. The target of the error.
        }
    ],
    "expirationDateTime": "2020-02-20 00:00:00",  # Optional.
    "nextLink": "str",  # Optional.
    "statistics": {
        "conversationsCount": 0,  # Number of conversations submitted in the request. Required.
        "documentsCount": 0,  # Number of documents submitted in the request. Required.
        "erroneousConversationsCount": 0,  # Number of invalid conversations. This includes conversations that are empty, over the size limit, or in unsupported languages. Required.
        "erroneousDocumentsCount": 0,  # Number of invalid documents. This includes documents that are empty, over the size limit, or in unsupported languages. Required.
        "transactionsCount": 0,  # Number of transactions for the request. Required.
        "validConversationsCount": 0,  # Number of valid conversations. This excludes conversations that are empty, over the size limit, or in unsupported languages. Required.
        "validDocumentsCount": 0  # Number of valid documents. This excludes documents that are empty, over the size limit, or in unsupported languages. Required.
    }
}
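As a hedged sketch of the long-running flow, the call below submits a single text conversation with one summarization task and waits for the job to finish. The conversation content, task name, and summary aspects are illustrative placeholders, and other task kinds (e.g. PII detection) follow the same pattern.

# Illustrative sketch: conversation content and task configuration are placeholders.
poller = client.begin_conversation_analysis(
    task={
        "displayName": "Summarize a support call",  # optional job display name
        "analysisInput": {
            "conversations": [
                {
                    "id": "conversation1",
                    "language": "en",
                    "modality": "text",
                    "conversationItems": [
                        {"id": "1", "participantId": "Agent", "role": "agent",
                         "text": "Hello, how can I help you?"},
                        {"id": "2", "participantId": "Customer", "role": "customer",
                         "text": "My internet keeps dropping every few minutes."},
                    ],
                }
            ]
        },
        "tasks": [
            {
                "taskName": "Issue task",
                "kind": "ConversationalSummarizationTask",
                "parameters": {"summaryAspects": ["issue", "resolution"]},
            }
        ],
    }
)

job_result = poller.result()  # blocks until the job reaches a terminal state
for item in job_result["tasks"]["items"]:
    print(item["kind"], item["status"])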
- send_request(request: azure.core.rest._rest_py3.HttpRequest, **kwargs: Any) → azure.core.rest._rest_py3.HttpResponse

Runs the network request through the client’s chained policies.

>>> from azure.core.rest import HttpRequest
>>> request = HttpRequest("GET", "https://www.example.org/")
<HttpRequest [GET], url: 'https://www.example.org/'>
>>> response = client.send_request(request)
<HttpResponse: 200 OK>
For more information on this code flow, see https://aka.ms/azsdk/dpcodegen/python/send_request
- Parameters
request (HttpRequest) – The network request you want to make. Required.
- Keyword Arguments
stream (bool) – Whether the response payload will be streamed. Defaults to False.
- Returns
The response of your network call. Does not do error handling on your response.
- Return type
HttpResponse