Class TextAnalyticsClient

java.lang.Object
com.azure.ai.textanalytics.TextAnalyticsClient

public final class TextAnalyticsClient extends Object
This class provides a synchronous client that contains all the operations that apply to Azure Text Analytics. Operations allowed by the client are language detection, entity recognition, linked entity recognition, key phrase extraction, and sentiment analysis of a document or a list of documents.

Instantiating a synchronous Text Analytics Client
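
The construction below is a minimal sketch; the {key} and {endpoint} placeholders are illustrative and must be replaced with the values for your Text Analytics resource.

 TextAnalyticsClient textAnalyticsClient = new TextAnalyticsClientBuilder()
     .credential(new AzureKeyCredential("{key}"))
     .endpoint("{endpoint}")
     .buildClient();

Once constructed, the client can run several analysis actions over a batch of documents in a single long-running operation, as in the following sample: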

 List<String> documents = Arrays.asList(
     "Elon Musk is the CEO of SpaceX and Tesla.",
     "My SSN is 859-98-0987"
 );

 SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
     textAnalyticsClient.beginAnalyzeActions(
         documents,
         new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
             .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
             .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()),
         "en",
         new AnalyzeActionsOptions().setIncludeStatistics(false));
 syncPoller.waitForCompletion();
 AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
 result.forEach(analyzeActionsResult -> {
     System.out.println("Entities recognition action results:");
     analyzeActionsResult.getRecognizeEntitiesResults().forEach(
         actionResult -> {
             if (!actionResult.isError()) {
                 actionResult.getDocumentsResults().forEach(
                     entitiesResult -> entitiesResult.getEntities().forEach(
                         entity -> System.out.printf(
                             "Recognized entity: %s, entity category: %s, entity subcategory: %s,"
                                 + " confidence score: %f.%n",
                             entity.getText(), entity.getCategory(), entity.getSubcategory(),
                             entity.getConfidenceScore())));
             }
         });
     System.out.println("Key phrases extraction action results:");
     analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
         actionResult -> {
             if (!actionResult.isError()) {
                 actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
                     System.out.println("Extracted phrases:");
                     extractKeyPhraseResult.getKeyPhrases()
                         .forEach(keyPhrase -> System.out.printf("\t%s.%n", keyPhrase));
                 });
             }
         });
 });
 

See TextAnalyticsClientBuilder for additional ways to construct the client.

See Also:
  • TextAnalyticsClientBuilder

Method Details

    • getDefaultCountryHint

      public String getDefaultCountryHint()
      Gets the default country hint code.
      Returns:
      The default country hint code
    • getDefaultLanguage

      public String getDefaultLanguage()
      Gets the default language that was set when the client was built.
      Returns:
      The default language
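
      These getters simply return the values configured on the builder; as a usage sketch (not an original sample), both defaults can be read back from a constructed client:

       // Sketch: read back the defaults configured via TextAnalyticsClientBuilder.
       String defaultCountryHint = textAnalyticsClient.getDefaultCountryHint();
       String defaultLanguage = textAnalyticsClient.getDefaultLanguage();
       System.out.printf("Default country hint: %s, default language: %s.%n",
           defaultCountryHint, defaultLanguage);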
    • detectLanguage

      public DetectedLanguage detectLanguage(String document)
      Returns the detected language and a confidence score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct. This method uses the default country hint that is set in TextAnalyticsClientBuilder.defaultCountryHint(String). If none is specified, the service uses 'US' as the country hint.

      Code Sample

      Detects the language of a single document.

       DetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage("Bonjour tout le monde");
       System.out.printf("Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
           detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());
       
      Parameters:
      document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      The detected language of the document.
      Throws:
      NullPointerException - if document is null.
    • detectLanguage

      public DetectedLanguage detectLanguage(String document, String countryHint)
      Returns the detected language and a confidence score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct.

      Code Sample

      Detects the language of a document with a provided country hint.

       DetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage(
           "This text is in English", "US");
       System.out.printf("Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
           detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());
       
      Parameters:
      document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To remove this behavior, reset this parameter by setting the value to the empty string countryHint = "" or to "none", as shown in the sketch below.
      Returns:
      The detected language of the document.
      Throws:
      NullPointerException - if document is null.
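
      A hedged sketch of the "none" escape described in the countryHint parameter (the sample text is illustrative):

       // Sketch: pass "none" (or "") so the service applies no country hint at all.
       DetectedLanguage detected = textAnalyticsClient.detectLanguage("Tout va bien", "none");
       System.out.printf("Detected language: %s%n", detected.getName());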
    • detectLanguageBatch

      public DetectLanguageResultCollection detectLanguageBatch(Iterable<String> documents, String countryHint, TextAnalyticsRequestOptions options)
      Detects the language for a batch of documents with the provided country hint and request options.

      Code Sample

      Detects the language in a list of documents with a provided country hint and request options.

       List<String> documents = Arrays.asList(
           "This is written in English",
           "Este es un documento  escrito en Español."
       );
      
       DetectLanguageResultCollection resultCollection =
           textAnalyticsClient.detectLanguageBatch(documents, "US", null);
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       // Batch result of languages
       resultCollection.forEach(detectLanguageResult -> {
           System.out.printf("Document ID: %s%n", detectLanguageResult.getId());
           DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
           System.out.printf("Primary language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
               detectedLanguage.getName(), detectedLanguage.getIso6391Name(),
               detectedLanguage.getConfidenceScore());
       });
       
      Parameters:
      documents - The list of documents to detect languages for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To remove this behavior, reset this parameter by setting the value to the empty string countryHint = "" or to "none".
      options - The options to configure the scoring model for documents and show statistics.
      Returns:
      A DetectLanguageResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
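
      The sample above passes null options; the following is a hedged sketch of explicit request options (the "latest" model version string is illustrative):

       // Sketch: request statistics and pin the scoring model version.
       TextAnalyticsRequestOptions requestOptions = new TextAnalyticsRequestOptions()
           .setModelVersion("latest")
           .setIncludeStatistics(true)
           .setServiceLogsDisabled(true); // only valid on service API v3.1 and newer
       DetectLanguageResultCollection results =
           textAnalyticsClient.detectLanguageBatch(documents, "US", requestOptions);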
    • detectLanguageBatchWithResponse

      public com.azure.core.http.rest.Response<DetectLanguageResultCollection> detectLanguageBatchWithResponse(Iterable<DetectLanguageInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
      Detects the language for a batch of documents with the provided request options.

      Code Sample

      Detects the language, with an HTTP response, in a list of documents with the provided request options.

       List<DetectLanguageInput> detectLanguageInputs = Arrays.asList(
           new DetectLanguageInput("1", "This is written in English.", "US"),
           new DetectLanguageInput("2", "Este es un documento  escrito en Español.", "es")
       );
      
       Response<DetectLanguageResultCollection> response =
           textAnalyticsClient.detectLanguageBatchWithResponse(detectLanguageInputs,
               new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);
      
       // Response's status code
       System.out.printf("Status code of request response: %d%n", response.getStatusCode());
       DetectLanguageResultCollection detectedLanguageResultCollection = response.getValue();
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = detectedLanguageResultCollection.getStatistics();
       System.out.printf(
           "Documents statistics: document count = %d, erroneous document count = %d, transaction count = %d,"
               + " valid document count = %d.%n",
           batchStatistics.getDocumentCount(), batchStatistics.getInvalidDocumentCount(),
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       // Batch result of languages
       detectedLanguageResultCollection.forEach(detectLanguageResult -> {
           System.out.printf("Document ID: %s%n", detectLanguageResult.getId());
           DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
           System.out.printf("Primary language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
               detectedLanguage.getName(), detectedLanguage.getIso6391Name(),
               detectedLanguage.getConfidenceScore());
       });
       
      Parameters:
      documents - The list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The options to configure the scoring model for documents and show statistics.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A Response that contains a DetectLanguageResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
    • recognizeEntities

      public CategorizedEntityCollection recognizeEntities(String document)
      Returns a list of general categorized entities in the provided document. For a list of supported entity types, see the service documentation. This method uses the default language that can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

      Recognizes the entities in a document.

       final CategorizedEntityCollection recognizeEntitiesResult =
           textAnalyticsClient.recognizeEntities("Satya Nadella is the CEO of Microsoft");
       for (CategorizedEntity entity : recognizeEntitiesResult) {
           System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
               entity.getText(), entity.getCategory(), entity.getConfidenceScore());
       }
       
      Parameters:
      document - The document to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      A CategorizedEntityCollection containing a list of recognized categorized entities and warnings.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
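
      A hedged sketch of handling the TextAnalyticsException above (the empty document is used here only to provoke a service error):

       try {
           textAnalyticsClient.recognizeEntities("");
       } catch (TextAnalyticsException exception) {
           // The exception surfaces the service's error code and message.
           System.out.printf("Error code: %s, message: %s%n",
               exception.getErrorCode(), exception.getMessage());
       }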
    • recognizeEntities

      public CategorizedEntityCollection recognizeEntities(String document, String language)
      Returns a list of general categorized entities in the provided document with the provided language code. For lists of supported entity types and enabled languages, see the service documentation.

      Code Sample

      Recognizes the entities in a document with a provided language code.

       final CategorizedEntityCollection recognizeEntitiesResult =
           textAnalyticsClient.recognizeEntities("Satya Nadella is the CEO of Microsoft", "en");
      
       for (CategorizedEntity entity : recognizeEntitiesResult) {
           System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
               entity.getText(), entity.getCategory(), entity.getConfidenceScore());
       }
       
      Parameters:
      document - The document to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" (English) as the default.
      Returns:
      A CategorizedEntityCollection containing a list of recognized categorized entities and warnings.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
    • recognizeEntitiesBatch

      public RecognizeEntitiesResultCollection recognizeEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
      Returns a list of general categorized entities for the provided list of documents with provided language code and request options.

      Code Sample

      Recognizes the entities in a list of documents with a provided language code and request options.

       List<String> documents = Arrays.asList(
           "I had a wonderful trip to Seattle last week.",
           "I work at Microsoft.");
      
       RecognizeEntitiesResultCollection resultCollection =
           textAnalyticsClient.recognizeEntitiesBatch(documents, "en", null);
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf(
           "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       resultCollection.forEach(recognizeEntitiesResult ->
           recognizeEntitiesResult.getEntities().forEach(entity ->
               System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
                   entity.getText(), entity.getCategory(), entity.getConfidenceScore())));
       
      Parameters:
      documents - A list of documents to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" (English) as the default.
      options - The options to configure the scoring model for documents and show statistics.
      Returns:
      A RecognizeEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
    • recognizeEntitiesBatchWithResponse

      public com.azure.core.http.rest.Response<RecognizeEntitiesResultCollection> recognizeEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
      Returns a list of general categorized entities for the provided list of documents with the provided request options.

      Code Sample

      Recognizes the entities, with an HTTP response, in a list of documents with the provided request options.

       List<TextDocumentInput> textDocumentInputs = Arrays.asList(
           new TextDocumentInput("0", "I had a wonderful trip to Seattle last week.").setLanguage("en"),
           new TextDocumentInput("1", "I work at Microsoft.").setLanguage("en")
       );
      
       Response<RecognizeEntitiesResultCollection> response =
           textAnalyticsClient.recognizeEntitiesBatchWithResponse(textDocumentInputs,
               new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);
      
       // Response's status code
       System.out.printf("Status code of request response: %d%n", response.getStatusCode());
       RecognizeEntitiesResultCollection recognizeEntitiesResultCollection = response.getValue();
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = recognizeEntitiesResultCollection.getStatistics();
       System.out.printf(
           "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       recognizeEntitiesResultCollection.forEach(recognizeEntitiesResult ->
           recognizeEntitiesResult.getEntities().forEach(entity ->
               System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
                   entity.getText(), entity.getCategory(), entity.getConfidenceScore())));
       
      Parameters:
      documents - A list of documents to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The options to configure the scoring model for documents and show statistics.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A Response that contains a RecognizeEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
    • recognizePiiEntities

      public PiiEntityCollection recognizePiiEntities(String document)
      Returns a list of Personally Identifiable Information (PII) entities in the provided document. For lists of supported entity types and enabled languages, see the service documentation. This method uses the default language that is set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

      Recognizes the PII entity details in a document.

       PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities("My SSN is 859-98-0987");
       System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
       for (PiiEntity entity : piiEntityCollection) {
           System.out.printf(
               "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
                   + " entity subcategory: %s, confidence score: %f.%n",
               entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore());
       }
       
      Parameters:
      document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      A recognized PII entities collection.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
      UnsupportedOperationException - if recognizePiiEntities is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntities is only available for API version v3.1 and newer.
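
      Because PII recognition requires service API version v3.1 or newer, the version can be pinned when the client is built; a minimal sketch with illustrative placeholders:

       TextAnalyticsClient piiClient = new TextAnalyticsClientBuilder()
           .credential(new AzureKeyCredential("{key}"))
           .endpoint("{endpoint}")
           .serviceVersion(TextAnalyticsServiceVersion.V3_1)
           .buildClient();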
    • recognizePiiEntities

      public PiiEntityCollection recognizePiiEntities(String document, String language)
      Returns a list of Personally Identifiable Information (PII) entities in the provided document with the provided language code. For lists of supported entity types and enabled languages, see the service documentation.

      Code Sample

      Recognizes the PII entity details in a document with a provided language code.

       PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities(
           "My SSN is 859-98-0987", "en");
       System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
       piiEntityCollection.forEach(entity -> System.out.printf(
               "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
                   + " entity subcategory: %s, confidence score: %f.%n",
               entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
       
      Parameters:
      document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" (English) as the default.
      Returns:
      The recognized PII entities collection.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
      UnsupportedOperationException - if recognizePiiEntities is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntities is only available for API version v3.1 and newer.
    • recognizePiiEntities

      public PiiEntityCollection recognizePiiEntities(String document, String language, RecognizePiiEntitiesOptions options)
      Returns a list of Personally Identifiable Information (PII) entities in the provided document with the provided language code. For lists of supported entity types and enabled languages, see the service documentation.

      Code Sample

      Recognizes the PII entity details in a document with a provided language code and RecognizePiiEntitiesOptions.

       PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities(
           "My SSN is 859-98-0987", "en",
           new RecognizePiiEntitiesOptions().setDomainFilter(PiiEntityDomain.PROTECTED_HEALTH_INFORMATION));
       System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
       piiEntityCollection.forEach(entity -> System.out.printf(
           "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
               + " entity subcategory: %s, confidence score: %f.%n",
           entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
       
      Parameters:
      document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" (English) as the default.
      options - The additional configurable options that may be passed when recognizing PII entities.
      Returns:
      The recognized PII entities collection.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
      UnsupportedOperationException - if recognizePiiEntities is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntities is only available for API version v3.1 and newer.
    • recognizePiiEntitiesBatch

      public RecognizePiiEntitiesResultCollection recognizePiiEntitiesBatch(Iterable<String> documents, String language, RecognizePiiEntitiesOptions options)
      Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents with the provided language code and request options.

      Code Sample

      Recognizes the PII entity details in a list of documents with a provided language code and request options.

       List<String> documents = Arrays.asList(
           "My SSN is 859-98-0987",
           "Visa card 4111 1111 1111 1111"
       );
      
       RecognizePiiEntitiesResultCollection resultCollection = textAnalyticsClient.recognizePiiEntitiesBatch(
           documents, "en", new RecognizePiiEntitiesOptions().setIncludeStatistics(true));
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       resultCollection.forEach(recognizePiiEntitiesResult -> {
           PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
           System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
           piiEntityCollection.forEach(entity -> System.out.printf(
               "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
                   + " entity subcategory: %s, confidence score: %f.%n",
               entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
       });
       
      Parameters:
      documents - A list of documents to recognize PII entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" (English) as the default.
      options - The additional configurable options that may be passed when recognizing PII entities.
      Returns:
      A RecognizePiiEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if recognizePiiEntitiesBatch is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntitiesBatch is only available for API version v3.1 and newer.
    • recognizePiiEntitiesBatchWithResponse

      public com.azure.core.http.rest.Response<RecognizePiiEntitiesResultCollection> recognizePiiEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, RecognizePiiEntitiesOptions options, com.azure.core.util.Context context)
      Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents with the provided request options.

      Code Sample

      Recognizes the PII entity details, with an HTTP response, in a list of documents with the provided request options.

       List<TextDocumentInput> textDocumentInputs = Arrays.asList(
           new TextDocumentInput("0", "My SSN is 859-98-0987"),
           new TextDocumentInput("1", "Visa card 4111 1111 1111 1111")
       );
      
       Response<RecognizePiiEntitiesResultCollection> response =
           textAnalyticsClient.recognizePiiEntitiesBatchWithResponse(textDocumentInputs,
               new RecognizePiiEntitiesOptions().setIncludeStatistics(true), Context.NONE);
      
       RecognizePiiEntitiesResultCollection resultCollection = response.getValue();
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       resultCollection.forEach(recognizePiiEntitiesResult -> {
           PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
           System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
           piiEntityCollection.forEach(entity -> System.out.printf(
               "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
                   + " entity subcategory: %s, confidence score: %f.%n",
               entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
       });
       
      Parameters:
      documents - A list of documents to recognize PII entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The additional configurable options that may be passed when recognizing PII entities.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A Response that contains a RecognizePiiEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if recognizePiiEntitiesBatchWithResponse is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntitiesBatchWithResponse is only available for API version v3.1 and newer.
    • recognizeLinkedEntities

      public LinkedEntityCollection recognizeLinkedEntities(String document)
      Returns a list of recognized entities with links to a well-known knowledge base for the provided document. See the service documentation for supported languages in the Text Analytics API. This method uses the default language that can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

      Recognizes the linked entities in a document.

       final String document = "Old Faithful is a geyser at Yellowstone Park.";
       System.out.println("Linked Entities:");
       textAnalyticsClient.recognizeLinkedEntities(document).forEach(linkedEntity -> {
           System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
               linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
               linkedEntity.getDataSource());
           linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
               "Matched entity: %s, confidence score: %f.%n",
               entityMatch.getText(), entityMatch.getConfidenceScore()));
       });
       
      Parameters:
      document - The document to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      A LinkedEntityCollection containing a list of recognized linked entities.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
    • recognizeLinkedEntities

      public LinkedEntityCollection recognizeLinkedEntities(String document, String language)
      Returns a list of recognized entities with links to a well-known knowledge base for the provided document with the provided language code. See the service documentation for supported languages in the Text Analytics API.

      Code Sample

      Recognizes the linked entities in a document with a provided language code.

       String document = "Old Faithful is a geyser at Yellowstone Park.";
       textAnalyticsClient.recognizeLinkedEntities(document, "en").forEach(linkedEntity -> {
           System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
               linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
               linkedEntity.getDataSource());
           linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
               "Matched entity: %s, confidence score: %f.%n",
               entityMatch.getText(), entityMatch.getConfidenceScore()));
       });
       
      Parameters:
      document - The document to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the document. If not set, uses "en" (English) as the default.
      Returns:
      A LinkedEntityCollection containing a list of recognized linked entities.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
    • recognizeLinkedEntitiesBatch

      public RecognizeLinkedEntitiesResultCollection recognizeLinkedEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
      Returns a list of recognized entities with links to a well-known knowledge base for the list of documents with the provided language code and request options. See the service documentation for supported languages in the Text Analytics API.

      Code Sample

      Recognizes the linked entities in a list of documents with a provided language code and request options.

       List<String> documents = Arrays.asList(
           "Old Faithful is a geyser at Yellowstone Park.",
           "Mount Shasta has lenticular clouds."
       );
      
       RecognizeLinkedEntitiesResultCollection resultCollection =
           textAnalyticsClient.recognizeLinkedEntitiesBatch(documents, "en", null);
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       resultCollection.forEach(recognizeLinkedEntitiesResult ->
           recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> {
               System.out.println("Linked Entities:");
               System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
                   linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
                   linkedEntity.getDataSource());
               linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
                   "Matched entity: %s, confidence score: %f.%n",
                   entityMatch.getText(), entityMatch.getConfidenceScore()));
           }));
       
      Parameters:
      documents - A list of documents to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" (English) as the default.
      options - The options to configure the scoring model for documents and show statistics.
      Returns:
      A RecognizeLinkedEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
    • recognizeLinkedEntitiesBatchWithResponse

      public com.azure.core.http.rest.Response<RecognizeLinkedEntitiesResultCollection> recognizeLinkedEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
      Returns a list of recognized entities with links to a well-known knowledge base for the list of documents with the provided request options. See the service documentation for supported languages in the Text Analytics API.

      Code Sample

      Recognizes the linked entities, with an HTTP response, in a list of TextDocumentInput with the provided request options.

       List<TextDocumentInput> textDocumentInputs = Arrays.asList(
           new TextDocumentInput("1", "Old Faithful is a geyser at Yellowstone Park.").setLanguage("en"),
           new TextDocumentInput("2", "Mount Shasta has lenticular clouds.").setLanguage("en")
       );
      
       Response<RecognizeLinkedEntitiesResultCollection> response =
           textAnalyticsClient.recognizeLinkedEntitiesBatchWithResponse(textDocumentInputs,
               new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);
      
       // Response's status code
       System.out.printf("Status code of request response: %d%n", response.getStatusCode());
       RecognizeLinkedEntitiesResultCollection resultCollection = response.getValue();
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf(
           "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       resultCollection.forEach(recognizeLinkedEntitiesResult ->
           recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> {
               System.out.println("Linked Entities:");
               System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
                   linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
                   linkedEntity.getDataSource());
               linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
                   "Matched entity: %s, confidence score: %.2f.%n",
                   entityMatch.getText(), entityMatch.getConfidenceScore()));
           }));
       
      Parameters:
      documents - A list of documents to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The options to configure the scoring model for documents and show statistics.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A Response that contains a RecognizeLinkedEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
    • extractKeyPhrases

      public KeyPhrasesCollection extractKeyPhrases(String document)
      Returns a list of strings denoting the key phrases in the document. This method uses the default language that can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

      Extracts the key phrases of a document.

       System.out.println("Extracted phrases:");
       for (String keyPhrase : textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.")) {
           System.out.printf("%s.%n", keyPhrase);
       }
       
      Parameters:
      document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      A KeyPhrasesCollection containing a list of extracted key phrases.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
    • extractKeyPhrases

      public KeyPhrasesCollection extractKeyPhrases(String document, String language)
      Returns a list of strings denoting the key phrases in the document. See the service documentation for the list of enabled languages.

      Code Sample

      Extracts key phrases in a document with a provided language representation.

       System.out.println("Extracted phrases:");
       textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.", "en")
           .forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
       
      Parameters:
      document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the document. If not set, uses "en" (English) as the default.
      Returns:
      A KeyPhrasesCollection containing a list of extracted key phrases.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
    • extractKeyPhrasesBatch

      public ExtractKeyPhrasesResultCollection extractKeyPhrasesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
      Returns a list of strings denoting the key phrases in the documents with the provided language code and request options. See the service documentation for the list of enabled languages.

      Code Sample

      Extracts key phrases in a list of documents with a provided language code and request options.

       List<String> documents = Arrays.asList(
           "My cat might need to see a veterinarian.",
           "The pitot tube is used to measure airspeed."
       );
      
       // Extracting batch key phrases
       ExtractKeyPhrasesResultCollection resultCollection =
           textAnalyticsClient.extractKeyPhrasesBatch(documents, "en", null);
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf(
           "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       // Extracted key phrase for each of documents from a batch of documents
       resultCollection.forEach(extractKeyPhraseResult -> {
           System.out.printf("Document ID: %s%n", extractKeyPhraseResult.getId());
           // Valid document
           System.out.println("Extracted phrases:");
           extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" (English) as the default.
      options - The options to configure the scoring model for documents and show statistics.
      Returns:
      An ExtractKeyPhrasesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
    • extractKeyPhrasesBatchWithResponse

      public com.azure.core.http.rest.Response<ExtractKeyPhrasesResultCollection> extractKeyPhrasesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
      Returns a list of strings denoting the key phrases in a batch of documents with the provided request options. See the service documentation for the list of enabled languages.

      Code Sample

      Extracts key phrases, with an HTTP response, in a list of TextDocumentInput with the provided request options.

       List<TextDocumentInput> textDocumentInputs = Arrays.asList(
           new TextDocumentInput("1", "My cat might need to see a veterinarian.").setLanguage("en"),
           new TextDocumentInput("2", "The pitot tube is used to measure airspeed.").setLanguage("en")
       );
      
       // Extracting batch key phrases
       Response<ExtractKeyPhrasesResultCollection> response =
           textAnalyticsClient.extractKeyPhrasesBatchWithResponse(textDocumentInputs,
               new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);
      
      
       // Response's status code
       System.out.printf("Status code of request response: %d%n", response.getStatusCode());
       ExtractKeyPhrasesResultCollection resultCollection = response.getValue();
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf(
           "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       // Extracted key phrase for each of documents from a batch of documents
       resultCollection.forEach(extractKeyPhraseResult -> {
           System.out.printf("Document ID: %s%n", extractKeyPhraseResult.getId());
           // Valid document
           System.out.println("Extracted phrases:");
           extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase ->
               System.out.printf("%s.%n", keyPhrase));
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The options to configure the scoring model for documents and show statistics.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A Response that contains an ExtractKeyPhrasesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
    • analyzeSentiment

      public DocumentSentiment analyzeSentiment(String document)
      Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. This method uses the default language that can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

      Analyzes the sentiment of a document.

       final DocumentSentiment documentSentiment =
           textAnalyticsClient.analyzeSentiment("The hotel was dark and unclean.");
      
       System.out.printf(
           "Recognized sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
           documentSentiment.getSentiment(),
           documentSentiment.getConfidenceScores().getPositive(),
           documentSentiment.getConfidenceScores().getNeutral(),
           documentSentiment.getConfidenceScores().getNegative());
      
       for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
           System.out.printf(
               "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
               sentenceSentiment.getSentiment(),
               sentenceSentiment.getConfidenceScores().getPositive(),
               sentenceSentiment.getConfidenceScores().getNeutral(),
               sentenceSentiment.getConfidenceScores().getNegative());
       }
       
      Parameters:
      document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      The analyzed sentiment of the document.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
    • analyzeSentiment

      public DocumentSentiment analyzeSentiment(String document, String language)
      Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it.

      Code Sample

      Analyzes the sentiment in a document with a provided language representation.

       final DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment(
           "The hotel was dark and unclean.", "en");
      
       System.out.printf(
           "Recognized sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
           documentSentiment.getSentiment(),
           documentSentiment.getConfidenceScores().getPositive(),
           documentSentiment.getConfidenceScores().getNeutral(),
           documentSentiment.getConfidenceScores().getNegative());
      
       for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
           System.out.printf(
               "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
               sentenceSentiment.getSentiment(),
               sentenceSentiment.getConfidenceScores().getPositive(),
               sentenceSentiment.getConfidenceScores().getNeutral(),
               sentenceSentiment.getConfidenceScores().getNegative());
       }
       
      Parameters:
      document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the document. If not set, uses "en" (English) as the default.
      Returns:
      The analyzed sentiment of the document.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
    • analyzeSentiment

      public DocumentSentiment analyzeSentiment(String document, String language, AnalyzeSentimentOptions options)
      Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If includeOpinionMining of AnalyzeSentimentOptions is set to true, the output also includes opinion mining results: the opinions of each sentence are mined for a more granular analysis around the aspects of the text (also known as aspect-based sentiment analysis).

      Code Sample

      Analyzes the sentiment and mines the opinions for each sentence in a document with a provided language representation and AnalyzeSentimentOptions.

       final DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment(
           "The hotel was dark and unclean.", "en",
           new AnalyzeSentimentOptions().setIncludeOpinionMining(true));
       for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
           System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
           sentenceSentiment.getOpinions().forEach(opinion -> {
               TargetSentiment targetSentiment = opinion.getTarget();
               System.out.printf("\tTarget sentiment: %s, target text: %s%n", targetSentiment.getSentiment(),
                   targetSentiment.getText());
               for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
                   System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
                       assessmentSentiment.getSentiment(), assessmentSentiment.getText(), assessmentSentiment.isNegated());
               }
           });
       }
       
      Parameters:
      document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the document. If not set, uses "en" (English) as the default.
      options - The additional configurable options that may be passed when analyzing sentiments.
      Returns:
      The analyzed sentiment of the document.
      Throws:
      NullPointerException - if document is null.
      TextAnalyticsException - if the response returned with an error.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() or AnalyzeSentimentOptions.isIncludeOpinionMining() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs and includeOpinionMining are only available for API version v3.1 and newer.
    • analyzeSentimentBatch

      @Deprecated public AnalyzeSentimentResultCollection analyzeSentimentBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
      Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it.

      Code Sample

      Analyzes the sentiment in a list of documents with a provided language representation and request options.

       List<String> documents = Arrays.asList(
           "The hotel was dark and unclean. The restaurant had amazing gnocchi.",
           "The restaurant had amazing gnocchi. The hotel was dark and unclean."
       );
      
       // Analyzing batch sentiments
       AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch(
           documents, "en", new TextAnalyticsRequestOptions().setIncludeStatistics(true));
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
       // Analyzed sentiment for each of documents from a batch of documents
       resultCollection.forEach(analyzeSentimentResult -> {
           System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
           // Valid document
           DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
           System.out.printf(
               "Recognized document sentiment: %s, positive score: %.2f, neutral score: %.2f,"
                   + " negative score: %.2f.%n",
               documentSentiment.getSentiment(),
               documentSentiment.getConfidenceScores().getPositive(),
               documentSentiment.getConfidenceScores().getNeutral(),
               documentSentiment.getConfidenceScores().getNegative());
           documentSentiment.getSentences().forEach(sentenceSentiment -> System.out.printf(
               "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f,"
                   + " negative score: %.2f.%n",
               sentenceSentiment.getSentiment(),
               sentenceSentiment.getConfidenceScores().getPositive(),
               sentenceSentiment.getConfidenceScores().getNeutral(),
               sentenceSentiment.getConfidenceScores().getNegative()));
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" (English) as the default.
      options - The options to configure the scoring model for documents and show statistics.
      Returns:
      An AnalyzeSentimentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
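
      As this overload is deprecated, the following is a hedged sketch of the equivalent call through the AnalyzeSentimentOptions overload documented below:

       AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch(
           documents, "en", new AnalyzeSentimentOptions().setIncludeStatistics(true));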
    • analyzeSentimentBatch

      public AnalyzeSentimentResultCollection analyzeSentimentBatch(Iterable<String> documents, String language, AnalyzeSentimentOptions options)
      Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If includeOpinionMining of AnalyzeSentimentOptions is set to true, the output also includes opinion mining results: the opinions of each sentence are mined for a more granular analysis around the aspects of the text (also known as aspect-based sentiment analysis).

      Code Sample

      Analyzes the sentiment and mines the opinions for each sentence in a list of documents with a provided language representation and AnalyzeSentimentOptions.

       List<String> documents = Arrays.asList(
           "The hotel was dark and unclean. The restaurant had amazing gnocchi.",
           "The restaurant had amazing gnocchi. The hotel was dark and unclean."
       );
      
       // Analyzing batch sentiments
       AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch(
           documents, "en", new AnalyzeSentimentOptions().setIncludeOpinionMining(true));
      
       // Analyzed sentiment for each of documents from a batch of documents
       resultCollection.forEach(analyzeSentimentResult -> {
           System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
           DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
           documentSentiment.getSentences().forEach(sentenceSentiment -> {
               System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
               sentenceSentiment.getOpinions().forEach(opinion -> {
                   TargetSentiment targetSentiment = opinion.getTarget();
                   System.out.printf("\tTarget sentiment: %s, target text: %s%n", targetSentiment.getSentiment(),
                       targetSentiment.getText());
                   for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
                       System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
                           assessmentSentiment.getSentiment(), assessmentSentiment.getText(), assessmentSentiment.isNegated());
                   }
               });
           });
       });
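
       The document-level label can also be branched on directly; TextSentiment is an expandable enum whose values include POSITIVE, NEUTRAL, NEGATIVE, and MIXED (a short sketch):

        resultCollection.forEach(analyzeSentimentResult -> {
            TextSentiment sentiment = analyzeSentimentResult.getDocumentSentiment().getSentiment();
            if (TextSentiment.NEGATIVE.equals(sentiment) || TextSentiment.MIXED.equals(sentiment)) {
                // Route unhappy or ambivalent feedback for follow-up.
                System.out.printf("Document %s needs review.%n", analyzeSentimentResult.getId());
            }
        });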
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
       language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as default.
      options - The additional configurable options that may be passed when analyzing sentiments.
      Returns:
       An AnalyzeSentimentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() or AnalyzeSentimentOptions.isIncludeOpinionMining() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs and includeOpinionMining are only available for API version v3.1 and newer.
    • analyzeSentimentBatchWithResponse

      @Deprecated public com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
      Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it.

      Code Sample

       Analyze sentiment in a list of documents with provided request options.

       List<TextDocumentInput> textDocumentInputs = Arrays.asList(
           new TextDocumentInput("1", "The hotel was dark and unclean. The restaurant had amazing gnocchi.")
               .setLanguage("en"),
           new TextDocumentInput("2", "The restaurant had amazing gnocchi. The hotel was dark and unclean.")
               .setLanguage("en")
       );
      
       // Analyzing batch sentiments
       Response<AnalyzeSentimentResultCollection> response =
           textAnalyticsClient.analyzeSentimentBatchWithResponse(textDocumentInputs,
               new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);
      
       // Response's status code
       System.out.printf("Status code of request response: %d%n", response.getStatusCode());
       AnalyzeSentimentResultCollection resultCollection = response.getValue();
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
        // Analyzed sentiment for each document in the batch
       resultCollection.forEach(analyzeSentimentResult -> {
           System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
           // Valid document
           DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
           System.out.printf(
               "Recognized document sentiment: %s, positive score: %.2f, neutral score: %.2f, "
                   + "negative score: %.2f.%n",
               documentSentiment.getSentiment(),
               documentSentiment.getConfidenceScores().getPositive(),
               documentSentiment.getConfidenceScores().getNeutral(),
               documentSentiment.getConfidenceScores().getNegative());
           documentSentiment.getSentences().forEach(sentenceSentiment -> {
               System.out.printf(
                   "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f,"
                       + " negative score: %.2f.%n",
                   sentenceSentiment.getSentiment(),
                   sentenceSentiment.getConfidenceScores().getPositive(),
                   sentenceSentiment.getConfidenceScores().getNeutral(),
                   sentenceSentiment.getConfidenceScores().getNegative());
           });
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The options to configure the scoring model for documents and show statistics.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
       A Response that contains an AnalyzeSentimentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
    • analyzeSentimentBatchWithResponse

      public com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, AnalyzeSentimentOptions options, com.azure.core.util.Context context)
       Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. If includeOpinionMining of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results: the opinions in each sentence are mined and a more granular analysis is conducted around the targets (aspects) in the text, also known as aspect-based sentiment analysis.

      Code Sample

       Analyze sentiment and mine the opinions for each sentence in a list of documents with the provided AnalyzeSentimentOptions.

       List<TextDocumentInput> textDocumentInputs = Arrays.asList(
           new TextDocumentInput("1", "The hotel was dark and unclean. The restaurant had amazing gnocchi.")
               .setLanguage("en"),
           new TextDocumentInput("2", "The restaurant had amazing gnocchi. The hotel was dark and unclean.")
               .setLanguage("en")
       );
      
       AnalyzeSentimentOptions options = new AnalyzeSentimentOptions().setIncludeOpinionMining(true)
           .setIncludeStatistics(true);
      
       // Analyzing batch sentiments
       Response<AnalyzeSentimentResultCollection> response =
           textAnalyticsClient.analyzeSentimentBatchWithResponse(textDocumentInputs, options, Context.NONE);
      
       // Response's status code
       System.out.printf("Status code of request response: %d%n", response.getStatusCode());
       AnalyzeSentimentResultCollection resultCollection = response.getValue();
      
       // Batch statistics
       TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
       System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
           batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
      
        // Analyzed sentiment for each document in the batch
       resultCollection.forEach(analyzeSentimentResult -> {
           System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
           DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
           documentSentiment.getSentences().forEach(sentenceSentiment -> {
               System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
               sentenceSentiment.getOpinions().forEach(opinion -> {
                   TargetSentiment targetSentiment = opinion.getTarget();
                   System.out.printf("\tTarget sentiment: %s, target text: %s%n", targetSentiment.getSentiment(),
                       targetSentiment.getText());
                   for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
                       System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
                           assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
                           assessmentSentiment.isNegated());
                   }
               });
           });
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The additional configurable options that may be passed when analyzing sentiments.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
       A Response that contains an AnalyzeSentimentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() or AnalyzeSentimentOptions.isIncludeOpinionMining() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs and includeOpinionMining are only available for API version v3.1 and newer.
    • dynamicClassifyBatch

      public DynamicClassifyDocumentResultCollection dynamicClassifyBatch(Iterable<String> documents, Iterable<String> categories, String language, DynamicClassifyOptions options)
       Perform dynamic classification on a batch of documents. The input documents are classified on the fly into one or multiple categories, with one or more categories assigned per document. This type of classification doesn't require model training. See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.

      Code Sample

       Dynamic classification of each document in a list of documents with the provided DynamicClassifyOptions.

       List<String> documents = new ArrayList<>();
       documents.add("The WHO is issuing a warning about Monkey Pox.");
       documents.add("Mo Salah plays in Liverpool FC in England.");
       DynamicClassifyOptions options = new DynamicClassifyOptions();
      
       // Analyzing dynamic classification
       DynamicClassifyDocumentResultCollection resultCollection = textAnalyticsClient.dynamicClassifyBatch(
           documents, Arrays.asList("Health", "Politics", "Music", "Sport"), "en", options);
      
       // Result of dynamic classification
       resultCollection.forEach(documentResult -> {
           System.out.println("Document ID: " + documentResult.getId());
           for (ClassificationCategory classification : documentResult.getClassifications()) {
               System.out.printf("\tCategory: %s, confidence score: %f.%n",
                   classification.getCategory(), classification.getConfidenceScore());
           }
       });
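
       By default the service may assign multiple categories per document. If only the single best category is wanted, that is configured on the options object (a sketch, assuming the preview ClassificationType values SINGLE and MULTI on DynamicClassifyOptions):

        // Assumption: ClassificationType.SINGLE is part of the 2022-10-01-preview options surface.
        DynamicClassifyOptions singleCategoryOptions = new DynamicClassifyOptions()
            .setClassificationType(ClassificationType.SINGLE);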
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
       categories - A list of categories to which the input is classified. This parameter cannot be empty and must contain at least two categories.
       language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as default.
      options - The additional configurable options that may be passed when analyzing dynamic classification.
      Returns:
      A DynamicClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
       UnsupportedOperationException - if dynamicClassifyBatch is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. This method is only available for API version 2022-10-01-preview and newer.
      TextAnalyticsException - If analyze operation fails.
    • dynamicClassifyBatchWithResponse

      public com.azure.core.http.rest.Response<DynamicClassifyDocumentResultCollection> dynamicClassifyBatchWithResponse(Iterable<TextDocumentInput> documents, Iterable<String> categories, DynamicClassifyOptions options, com.azure.core.util.Context context)
       Perform dynamic classification on a batch of documents. The input documents are classified on the fly into one or multiple categories, with one or more categories assigned per document. This type of classification doesn't require model training. See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.

      Code Sample

       Dynamic classification of each document in a list of documents with the provided DynamicClassifyOptions.

        List<TextDocumentInput> textDocumentInputs = Arrays.asList(
            new TextDocumentInput("1", "The WHO is issuing a warning about Monkey Pox.")
                .setLanguage("en"),
            new TextDocumentInput("2", "Mo Salah plays in Liverpool FC in England.")
                .setLanguage("en")
        );
        DynamicClassifyOptions options = new DynamicClassifyOptions();
       
        // Analyzing dynamic classification
        Response<DynamicClassifyDocumentResultCollection> response =
            textAnalyticsClient.dynamicClassifyBatchWithResponse(textDocumentInputs,
                Arrays.asList("Health", "Politics", "Music", "Sport"), options, Context.NONE);
       
        // Response's status code
        System.out.printf("Status code of request response: %d%n", response.getStatusCode());
        DynamicClassifyDocumentResultCollection resultCollection = response.getValue();
       
        // Result of dynamic classification
        resultCollection.forEach(documentResult -> {
            System.out.println("Document ID: " + documentResult.getId());
            for (ClassificationCategory classification : documentResult.getClassifications()) {
                System.out.printf("\tCategory: %s, confidence score: %f.%n",
                    classification.getCategory(), classification.getConfidenceScore());
            }
        });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
       categories - A list of categories to which the input is classified. This parameter cannot be empty and must contain at least two categories.
      options - The additional configurable options that may be passed when analyzing dynamic classification.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A Response that contains a DynamicClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
       UnsupportedOperationException - if dynamicClassifyBatchWithResponse is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. This method is only available for API version 2022-10-01-preview and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginAnalyzeHealthcareEntities

      public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<String> documents)
      Analyze healthcare entities, entity data sources, and entity relations in a list of documents. This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, service will use 'en' as the language.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add("The patient is a 54-year-old gentleman with a history of progressive angina over "
               + "the past several months.");
       }
      
       SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>
           syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents);
      
       syncPoller.waitForCompletion();
       AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();
      
       result.forEach(analyzeHealthcareEntitiesResultCollection -> {
           analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
               System.out.println("document id = " + healthcareEntitiesResult.getId());
               System.out.println("Document entities: ");
               AtomicInteger ct = new AtomicInteger();
               healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
                   System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
                       ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
                       healthcareEntity.getConfidenceScore());
      
                   IterableStream<EntityDataSource> healthcareEntityDataSources =
                       healthcareEntity.getDataSources();
                   if (healthcareEntityDataSources != null) {
                       healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
                           "\t\tEntity ID in data source: %s, data source: %s.%n",
                           healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
                   }
               });
               // Healthcare entity relation groups
               healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
                   System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
                   entityRelation.getRoles().forEach(role -> {
                       final HealthcareEntity entity = role.getEntity();
                       System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
                           entity.getText(), entity.getCategory(), role.getName());
                   });
                   System.out.printf("\tRelation confidence score: %f.%n",
                       entityRelation.getConfidenceScore());
               });
           });
       });
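
       waitForCompletion() blocks until the operation reaches a terminal state. Where an upper bound on the wait is preferable, SyncPoller also offers a timed variant (a minimal sketch; assumes a java.time.Duration import):

        // Wait at most five minutes, then inspect whichever status was reached.
        PollResponse<AnalyzeHealthcareEntitiesOperationDetail> pollResponse =
            syncPoller.waitForCompletion(Duration.ofMinutes(5));
        System.out.printf("Operation status after waiting: %s.%n", pollResponse.getStatus());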
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      A SyncPoller that polls the analyze healthcare operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AnalyzeHealthcareEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAnalyzeHealthcareEntities is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeHealthcareEntities is only available for API version v3.1 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginAnalyzeHealthcareEntities

      public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<String> documents, String language, AnalyzeHealthcareEntitiesOptions options)
       Analyze healthcare entities, entity data sources, and entity relations in a list of documents with provided request options. See this for supported languages in the Language service API.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add("The patient is a 54-year-old gentleman with a history of progressive angina over "
               + "the past several months.");
       }
      
       // Request options: show statistics and model version
       AnalyzeHealthcareEntitiesOptions options = new AnalyzeHealthcareEntitiesOptions()
           .setIncludeStatistics(true);
      
       SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>
           syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents, "en", options);
      
       syncPoller.waitForCompletion();
       AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();
      
       result.forEach(analyzeHealthcareEntitiesResultCollection -> {
           // Model version
           System.out.printf("Results of Azure Text Analytics \"Analyze Healthcare\" Model, version: %s%n",
               analyzeHealthcareEntitiesResultCollection.getModelVersion());
      
           TextDocumentBatchStatistics healthcareTaskStatistics =
               analyzeHealthcareEntitiesResultCollection.getStatistics();
           // Batch statistics
           System.out.printf("Documents statistics: document count = %d, erroneous document count = %d,"
                   + " transaction count = %d, valid document count = %d.%n",
               healthcareTaskStatistics.getDocumentCount(), healthcareTaskStatistics.getInvalidDocumentCount(),
               healthcareTaskStatistics.getTransactionCount(), healthcareTaskStatistics.getValidDocumentCount());
      
           analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
               System.out.println("document id = " + healthcareEntitiesResult.getId());
               System.out.println("Document entities: ");
               AtomicInteger ct = new AtomicInteger();
               healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
                   System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
                       ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
                       healthcareEntity.getConfidenceScore());
      
                   IterableStream<EntityDataSource> healthcareEntityDataSources =
                       healthcareEntity.getDataSources();
                   if (healthcareEntityDataSources != null) {
                       healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
                           "\t\tEntity ID in data source: %s, data source: %s.%n",
                           healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
                   }
               });
               // Healthcare entity relation groups
               healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
                   System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
                   entityRelation.getRoles().forEach(role -> {
                       final HealthcareEntity entity = role.getEntity();
                       System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
                           entity.getText(), entity.getCategory(), role.getName());
                   });
                   System.out.printf("\tRelation confidence score: %f.%n", entityRelation.getConfidenceScore());
               });
           });
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as default.
      options - The additional configurable options that may be passed when analyzing healthcare entities.
      Returns:
      A SyncPoller that polls the analyze healthcare operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AnalyzeHealthcareEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAnalyzeHealthcareEntities is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeHealthcareEntities is only available for API version v3.1 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginAnalyzeHealthcareEntities

      public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<TextDocumentInput> documents, AnalyzeHealthcareEntitiesOptions options, com.azure.core.util.Context context)
       Analyze healthcare entities, entity data sources, and entity relations in a list of documents, with provided request options to show statistics. See this for supported languages in the Language service API.

      Code Sample

       List<TextDocumentInput> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(new TextDocumentInput(Integer.toString(i),
               "The patient is a 54-year-old gentleman with a history of progressive angina over "
                   + "the past several months."));
       }
      
       // Request options: show statistics and model version
       AnalyzeHealthcareEntitiesOptions options = new AnalyzeHealthcareEntitiesOptions()
           .setIncludeStatistics(true);
      
       SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>
           syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents, options, Context.NONE);
      
       syncPoller.waitForCompletion();
       AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();
      
       // Task operation statistics
       final AnalyzeHealthcareEntitiesOperationDetail operationResult = syncPoller.poll().getValue();
       System.out.printf("Operation created time: %s, expiration time: %s.%n",
           operationResult.getCreatedAt(), operationResult.getExpiresAt());
      
       result.forEach(analyzeHealthcareEntitiesResultCollection -> {
           // Model version
           System.out.printf("Results of Azure Text Analytics \"Analyze Healthcare\" Model, version: %s%n",
               analyzeHealthcareEntitiesResultCollection.getModelVersion());
      
           TextDocumentBatchStatistics healthcareTaskStatistics =
               analyzeHealthcareEntitiesResultCollection.getStatistics();
           // Batch statistics
           System.out.printf("Documents statistics: document count = %d, erroneous document count = %d,"
                   + " transaction count = %d, valid document count = %d.%n",
               healthcareTaskStatistics.getDocumentCount(), healthcareTaskStatistics.getInvalidDocumentCount(),
               healthcareTaskStatistics.getTransactionCount(), healthcareTaskStatistics.getValidDocumentCount());
      
           analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
               System.out.println("document id = " + healthcareEntitiesResult.getId());
               System.out.println("Document entities: ");
               AtomicInteger ct = new AtomicInteger();
               healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
                   System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
                       ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
                       healthcareEntity.getConfidenceScore());
      
                   IterableStream<EntityDataSource> healthcareEntityDataSources =
                       healthcareEntity.getDataSources();
                   if (healthcareEntityDataSources != null) {
                       healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
                           "\t\tEntity ID in data source: %s, data source: %s.%n",
                           healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
                   }
               });
               // Healthcare entity relation groups
               healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
                   System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
                   entityRelation.getRoles().forEach(role -> {
                       final HealthcareEntity entity = role.getEntity();
                       System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
                           entity.getText(), entity.getCategory(), role.getName());
                   });
                   System.out.printf("\tRelation confidence score: %f.%n", entityRelation.getConfidenceScore());
               });
           });
       });
       
      Parameters:
      documents - A list of documents to be analyzed.
      options - The additional configurable options that may be passed when analyzing healthcare entities.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A SyncPoller that polls the analyze healthcare operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AnalyzeHealthcareEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAnalyzeHealthcareEntities is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeHealthcareEntities is only available for API version v3.1 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginRecognizeCustomEntities

      public com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<String> documents, String projectName, String deploymentName)
       Returns a list of custom entities for the provided list of documents.

      This method is supported since service API version V2022_05_01.

      This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, service will use 'en' as the language.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
                   + "in oil and natural gas development on federal lands over the past six years has stretched the"
                   + " staff of the BLM to a point that it has been unable to meet its environmental protection "
                   + "responsibilities."); }
       SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =
           textAnalyticsClient.beginRecognizeCustomEntities(documents, "{project_name}", "{deployment_name}");
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (RecognizeEntitiesResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (CategorizedEntity entity : documentResult.getEntities()) {
                   System.out.printf(
                       "\tText: %s, category: %s, confidence score: %f.%n",
                       entity.getText(), entity.getCategory(), entity.getConfidenceScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      Returns:
      A SyncPoller that polls the recognize custom entities operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of RecognizeCustomEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginRecognizeCustomEntities is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginRecognizeCustomEntities is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginRecognizeCustomEntities

      public com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<String> documents, String projectName, String deploymentName, String language, RecognizeCustomEntitiesOptions options)
       Returns a list of custom entities for the provided list of documents with provided request options.

      This method is supported since service API version V2022_05_01.

       See this for supported languages in the Language service API.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
                   + "in oil and natural gas development on federal lands over the past six years has stretched the"
                   + " staff of the BLM to a point that it has been unable to meet its environmental protection "
                   + "responsibilities."); }
       RecognizeCustomEntitiesOptions options = new RecognizeCustomEntitiesOptions().setIncludeStatistics(true);
       SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =
           textAnalyticsClient.beginRecognizeCustomEntities(documents, "{project_name}",
               "{deployment_name}", "en", options);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (RecognizeEntitiesResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (CategorizedEntity entity : documentResult.getEntities()) {
                   System.out.printf(
                       "\tText: %s, category: %s, confidence score: %f.%n",
                       entity.getText(), entity.getCategory(), entity.getConfidenceScore());
               }
           }
       });
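
       Because setIncludeStatistics(true) was requested above, each page of results also carries batch-level statistics (a sketch, assuming the result collection's getStatistics() accessor used by the sentiment samples on this page):

        syncPoller.getFinalResult().forEach(documentsResults -> {
            TextDocumentBatchStatistics batchStatistics = documentsResults.getStatistics();
            System.out.printf("Batch statistics, transaction count: %s, valid document count: %s.%n",
                batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());
        });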
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as default.
      options - The additional configurable options that may be passed when recognizing custom entities.
      Returns:
      A SyncPoller that polls the recognize custom entities operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of RecognizeCustomEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginRecognizeCustomEntities is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginRecognizeCustomEntities is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginRecognizeCustomEntities

      public com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, RecognizeCustomEntitiesOptions options, com.azure.core.util.Context context)
       Returns a list of custom entities for the provided list of documents with provided request options.

       This method is supported since service API version V2022_05_01.

       See this for supported languages in the Language service API.

      Code Sample

       List<TextDocumentInput> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(new TextDocumentInput(Integer.toString(i),
               "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
                   + "in oil and natural gas development on federal lands over the past six years has stretched the"
                   + " staff of the BLM to a point that it has been unable to meet its environmental protection "
                   + "responsibilities."));
        }
        RecognizeCustomEntitiesOptions options = new RecognizeCustomEntitiesOptions().setIncludeStatistics(true);
        SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =
            textAnalyticsClient.beginRecognizeCustomEntities(documents, "{project_name}",
                "{deployment_name}", options, Context.NONE);
        syncPoller.waitForCompletion();
        syncPoller.getFinalResult().forEach(documentsResults -> {
            System.out.printf("Project name: %s, deployment name: %s.%n",
                documentsResults.getProjectName(), documentsResults.getDeploymentName());
            for (RecognizeEntitiesResult documentResult : documentsResults) {
                System.out.println("Document ID: " + documentResult.getId());
                for (CategorizedEntity entity : documentResult.getEntities()) {
                    System.out.printf(
                        "\tText: %s, category: %s, confidence score: %f.%n",
                        entity.getText(), entity.getCategory(), entity.getConfidenceScore());
                }
            }
        });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
       deploymentName - The name of the deployment being consumed.
      options - The additional configurable options that may be passed when recognizing custom entities.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A SyncPoller that polls the recognize custom entities operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of RecognizeCustomEntitiesResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginRecognizeCustomEntities is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginRecognizeCustomEntities is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginSingleLabelClassify

      public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<String> documents, String projectName, String deploymentName)
       Returns a list of single-label classification results for the provided list of documents.

      This method is supported since service API version V2022_05_01.

      This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, service will use 'en' as the language.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
                   + "in oil and natural gas development on federal lands over the past six years has stretched the"
                   + " staff of the BLM to a point that it has been unable to meet its environmental protection "
                   + "responsibilities."
           );
       }
       // See the service documentation for regional support and how to train a model to classify your documents,
       // see https://aka.ms/azsdk/textanalytics/customfunctionalities
       SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
           textAnalyticsClient.beginSingleLabelClassify(documents, "{project_name}", "{deployment_name}");
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (ClassifyDocumentResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (ClassificationCategory classification : documentResult.getClassifications()) {
                   System.out.printf("\tCategory: %s, confidence score: %f.%n",
                       classification.getCategory(), classification.getConfidenceScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      Returns:
      A SyncPoller that polls the single-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginSingleLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginSingleLabelClassify is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginSingleLabelClassify

      public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<String> documents, String projectName, String deploymentName, String language, SingleLabelClassifyOptions options)
       Returns a list of single-label classification results for the provided list of documents with provided request options.

      This method is supported since service API version V2022_05_01.

       See this for supported languages in the Language service API.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
                   + "in oil and natural gas development on federal lands over the past six years has stretched the"
                   + " staff of the BLM to a point that it has been unable to meet its environmental protection "
                   + "responsibilities."
           );
       }
       SingleLabelClassifyOptions options = new SingleLabelClassifyOptions().setIncludeStatistics(true);
       // See the service documentation for regional support and how to train a model to classify your documents,
       // see https://aka.ms/azsdk/textanalytics/customfunctionalities
       SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
           textAnalyticsClient.beginSingleLabelClassify(documents, "{project_name}", "{deployment_name}",
               "en", options);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (ClassifyDocumentResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (ClassificationCategory classification : documentResult.getClassifications()) {
                   System.out.printf("\tCategory: %s, confidence score: %f.%n",
                       classification.getCategory(), classification.getConfidenceScore());
               }
           }
       });
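
       Documents can also classify successfully while carrying warnings, for example when over-long text is truncated. A sketch of surfacing them, assuming a getWarnings() accessor on ClassifyDocumentResult:

        syncPoller.getFinalResult().forEach(documentsResults -> {
            for (ClassifyDocumentResult documentResult : documentsResults) {
                documentResult.getWarnings().forEach(warning ->
                    System.out.printf("\tWarning code: %s, message: %s.%n",
                        warning.getWarningCode(), warning.getMessage()));
            }
        });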
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as default.
      options - The additional configurable options that may be passed when analyzing single-label classification.
      Returns:
      A SyncPoller that polls the single-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginSingleLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginSingleLabelClassify is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginSingleLabelClassify

      public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, SingleLabelClassifyOptions options, com.azure.core.util.Context context)
       Returns a list of single-label classification results for the provided list of documents with provided request options.

      This method is supported since service API version V2022_05_01.

      Code Sample

       List<TextDocumentInput> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(new TextDocumentInput(Integer.toString(i),
               "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
                   + "in oil and natural gas development on federal lands over the past six years has stretched the"
                   + " staff of the BLM to a point that it has been unable to meet its environmental protection "
                   + "responsibilities."));
       }
       SingleLabelClassifyOptions options = new SingleLabelClassifyOptions().setIncludeStatistics(true);
       // See the service documentation for regional support and how to train a model to classify your documents,
       // see https://aka.ms/azsdk/textanalytics/customfunctionalities
       SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
           textAnalyticsClient.beginSingleLabelClassify(documents, "{project_name}", "{deployment_name}",
               options, Context.NONE);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (ClassifyDocumentResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (ClassificationCategory classification : documentResult.getClassifications()) {
                   System.out.printf("\tCategory: %s, confidence score: %f.%n",
                       classification.getCategory(), classification.getConfidenceScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      options - The additional configurable options that may be passed when analyzing single-label classification.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A SyncPoller that polls the single-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginSingleLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginSingleLabelClassify is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginMultiLabelClassify

      public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<String> documents, String projectName, String deploymentName)
       Returns a list of multi-label classification results for the provided list of documents.

      This method is supported since service API version V2022_05_01.

      This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, service will use 'en' as the language.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "I need a reservation for an indoor restaurant in China. Please don't stop the music."
                   + " Play music and add it to my playlist");
       }
       SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
           textAnalyticsClient.beginMultiLabelClassify(documents, "{project_name}", "{deployment_name}");
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (ClassifyDocumentResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (ClassificationCategory classification : documentResult.getClassifications()) {
                   System.out.printf("\tCategory: %s, confidence score: %f.%n",
                       classification.getCategory(), classification.getConfidenceScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      Returns:
      A SyncPoller that polls the multi-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginMultiLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginMultiLabelClassify is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginMultiLabelClassify

      public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<String> documents, String projectName, String deploymentName, String language, MultiLabelClassifyOptions options)
       Returns a list of multi-label classification results for the provided list of documents with provided request options.

      This method is supported since service API version V2022_05_01.

       See this for supported languages in the Language service API.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "I need a reservation for an indoor restaurant in China. Please don't stop the music."
                   + " Play music and add it to my playlist");
       }
       MultiLabelClassifyOptions options = new MultiLabelClassifyOptions().setIncludeStatistics(true);
       SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
           textAnalyticsClient.beginMultiLabelClassify(documents, "{project_name}", "{deployment_name}", "en", options);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (ClassifyDocumentResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (ClassificationCategory classification : documentResult.getClassifications()) {
                   System.out.printf("\tCategory: %s, confidence score: %f.%n",
                       classification.getCategory(), classification.getConfidenceScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as default.
      options - The additional configurable options that may be passed when analyzing multi-label classification.
      Returns:
      A SyncPoller that polls the multi-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginMultiLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginMultiLabelClassify is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
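       Note: when MultiLabelClassifyOptions.setIncludeStatistics(true) is set, as in the sample above, each ClassifyDocumentResultCollection also carries batch statistics. A minimal sketch of reading them, assuming the same syncPoller:

        syncPoller.getFinalResult().forEach(documentsResults -> {
            // Batch-level statistics are only populated when includeStatistics is true.
            TextDocumentBatchStatistics statistics = documentsResults.getStatistics();
            System.out.printf("Documents: %d, valid: %d, invalid: %d, transactions: %d.%n",
                statistics.getDocumentCount(), statistics.getValidDocumentCount(),
                statistics.getInvalidDocumentCount(), statistics.getTransactionCount());
        });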
    • beginMultiLabelClassify

      public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, MultiLabelClassifyOptions options, com.azure.core.util.Context context)
       Returns multi-label classifications for the provided list of documents, with the provided request options.

       This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.

      Code Sample

       List<TextDocumentInput> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(new TextDocumentInput(Integer.toString(i),
               "I need a reservation for an indoor restaurant in China. Please don't stop the music."
                   + " Play music and add it to my playlist"));
       }
       MultiLabelClassifyOptions options = new MultiLabelClassifyOptions().setIncludeStatistics(true);
       SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
           textAnalyticsClient.beginMultiLabelClassify(documents, "{project_name}", "{deployment_name}",
               options, Context.NONE);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(documentsResults -> {
           System.out.printf("Project name: %s, deployment name: %s.%n",
               documentsResults.getProjectName(), documentsResults.getDeploymentName());
           for (ClassifyDocumentResult documentResult : documentsResults) {
               System.out.println("Document ID: " + documentResult.getId());
               for (ClassificationCategory classification : documentResult.getClassifications()) {
                   System.out.printf("\tCategory: %s, confidence score: %f.%n",
                       classification.getCategory(), classification.getConfidenceScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      projectName - The name of the project which owns the model being consumed.
      deploymentName - The name of the deployment being consumed.
      options - The additional configurable options that may be passed when analyzing multi-label classification.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A SyncPoller that polls the multi-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginMultiLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginMultiLabelClassify is only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
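       Note: instead of blocking in waitForCompletion(), the returned SyncPoller can be driven manually and, where the service supports it, the operation can be cancelled. A minimal sketch, assuming a syncPoller from any of the overloads above:

        PollResponse<ClassifyDocumentOperationDetail> pollResponse = syncPoller.poll();
        if (pollResponse.getStatus() == LongRunningOperationStatus.IN_PROGRESS) {
            // Abandon the operation; cancelOperation() throws if cancellation is unsupported.
            syncPoller.cancelOperation();
        }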
    • beginAbstractSummary

      public com.azure.core.util.polling.SyncPoller<AbstractSummaryOperationDetail,AbstractSummaryPagedIterable> beginAbstractSummary(Iterable<String> documents)
       Returns abstractive summaries for the provided list of documents.

      This method is supported since service API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW.

       This method uses the default language that can be set with TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
                   + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
                   + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
                   + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
                   + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
                   + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
                   + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
                   + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
                   + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
                   + " pretrained models that can jointly learn representations to support a broad range of downstream"
                   + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
                   + " performance on benchmarks in conversational speech recognition, machine translation, "
                   + "conversational question answering, machine reading comprehension, and image captioning. These"
                   + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
                   + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
                   + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
                   + "foundational component of this aspiration, if grounded with external knowledge sources in "
                   + "the downstream AI tasks.");
       }
       SyncPoller<AbstractSummaryOperationDetail, AbstractSummaryPagedIterable> syncPoller =
           textAnalyticsClient.beginAbstractSummary(documents);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(resultCollection -> {
           for (AbstractSummaryResult documentResult : resultCollection) {
               System.out.println("\tAbstract summary sentences:");
               for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
                   System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
                   for (SummaryContext summaryContext : summarySentence.getContexts()) {
                       System.out.printf("\t\t offset: %d, length: %d%n",
                           summaryContext.getOffset(), summaryContext.getLength());
                   }
               }
           }
       });
       
      Parameters:
       documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      A SyncPoller that polls the abstractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AbstractSummaryResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAbstractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Those actions are only available for API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW and newer.
      TextAnalyticsException - If analyze operation fails.
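       Note: waitForCompletion() blocks until the operation finishes; azure-core's SyncPoller also offers an overload with a timeout. A minimal sketch, reusing the documents list from the sample above (assumes java.time.Duration is imported; the five-minute value is illustrative):

        SyncPoller<AbstractSummaryOperationDetail, AbstractSummaryPagedIterable> syncPoller =
            textAnalyticsClient.beginAbstractSummary(documents);
        // Wait at most five minutes; the last poll response is returned either way.
        PollResponse<AbstractSummaryOperationDetail> lastResponse =
            syncPoller.waitForCompletion(Duration.ofMinutes(5));
        if (lastResponse.getStatus() == LongRunningOperationStatus.SUCCESSFULLY_COMPLETED) {
            syncPoller.getFinalResult().forEach(resultCollection -> {
                // Consume results as in the sample above.
            });
        }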
    • beginAbstractSummary

      public com.azure.core.util.polling.SyncPoller<AbstractSummaryOperationDetail,AbstractSummaryPagedIterable> beginAbstractSummary(Iterable<String> documents, String language, AbstractSummaryOptions options)
       Returns abstractive summaries for the provided list of documents, with the provided request options.

       This method is supported since service API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW.

       See the list of supported languages in the Language service API.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
                   + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
                   + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
                   + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
                   + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
                   + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
                   + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
                   + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
                   + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
                   + " pretrained models that can jointly learn representations to support a broad range of downstream"
                   + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
                   + " performance on benchmarks in conversational speech recognition, machine translation, "
                   + "conversational question answering, machine reading comprehension, and image captioning. These"
                   + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
                   + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
                   + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
                   + "foundational component of this aspiration, if grounded with external knowledge sources in "
                   + "the downstream AI tasks.");
       }
       SyncPoller<AbstractSummaryOperationDetail, AbstractSummaryPagedIterable> syncPoller =
           textAnalyticsClient.beginAbstractSummary(documents, "en",
               new AbstractSummaryOptions().setDisplayName("{tasks_display_name}").setSentenceCount(3));
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(resultCollection -> {
           for (AbstractSummaryResult documentResult : resultCollection) {
               System.out.println("\tAbstract summary sentences:");
               for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
                   System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
                   for (SummaryContext summaryContext : summarySentence.getContexts()) {
                       System.out.printf("\t\t offset: %d, length: %d%n",
                           summaryContext.getOffset(), summaryContext.getLength());
                   }
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
       language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, "en" (English) is used as the default.
      options - The additional configurable options that may be passed when analyzing abstractive summarization.
      Returns:
      A SyncPoller that polls the abstractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AbstractSummaryResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAbstractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Those actions are only available for API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginAbstractSummary

      public com.azure.core.util.polling.SyncPoller<AbstractSummaryOperationDetail,AbstractSummaryPagedIterable> beginAbstractSummary(Iterable<TextDocumentInput> documents, AbstractSummaryOptions options, com.azure.core.util.Context context)
       Returns abstractive summaries for the provided list of documents, with the provided request options.

      This method is supported since service API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW.

      Code Sample

       List<TextDocumentInput> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(new TextDocumentInput(Integer.toString(i),
               "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
                   + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
                   + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
                   + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
                   + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
                   + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
                   + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
                   + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
                   + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
                   + " pretrained models that can jointly learn representations to support a broad range of downstream"
                   + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
                   + " performance on benchmarks in conversational speech recognition, machine translation, "
                   + "conversational question answering, machine reading comprehension, and image captioning. These"
                   + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
                   + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
                   + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
                   + "foundational component of this aspiration, if grounded with external knowledge sources in "
                   + "the downstream AI tasks."));
       }
       SyncPoller<AbstractSummaryOperationDetail, AbstractSummaryPagedIterable> syncPoller =
           textAnalyticsClient.beginAbstractSummary(documents,
               new AbstractSummaryOptions().setDisplayName("{tasks_display_name}").setSentenceCount(3),
               Context.NONE);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(resultCollection -> {
           for (AbstractSummaryResult documentResult : resultCollection) {
               System.out.println("\tAbstract summary sentences:");
               for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
                   System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
                   for (SummaryContext summaryContext : summarySentence.getContexts()) {
                       System.out.printf("\t\t offset: %d, length: %d%n",
                           summaryContext.getOffset(), summaryContext.getLength());
                   }
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The additional configurable options that may be passed when analyzing abstractive summarization.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A SyncPoller that polls the abstractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AbstractSummaryResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAbstractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Those actions are only available for API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW and newer.
      TextAnalyticsException - If analyze operation fails.
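       Note: the sample above passes Context.NONE; a caller can instead attach arbitrary key/value data that flows through the HTTP pipeline, for example for custom pipeline policies or tracing. A minimal sketch (the key and value shown are illustrative):

        Context context = new Context("trace-id", "my-correlation-id");
        SyncPoller<AbstractSummaryOperationDetail, AbstractSummaryPagedIterable> syncPoller =
            textAnalyticsClient.beginAbstractSummary(documents,
                new AbstractSummaryOptions().setSentenceCount(3), context);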
    • beginExtractSummary

      public com.azure.core.util.polling.SyncPoller<ExtractSummaryOperationDetail,ExtractSummaryPagedIterable> beginExtractSummary(Iterable<String> documents)
       Returns extractive summaries for the provided list of documents.

      This method is supported since service API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW.

       This method uses the default language that can be set with TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
                   + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
                   + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
                   + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
                   + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
                   + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
                   + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
                   + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
                   + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
                   + " pretrained models that can jointly learn representations to support a broad range of downstream"
                   + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
                   + " performance on benchmarks in conversational speech recognition, machine translation, "
                   + "conversational question answering, machine reading comprehension, and image captioning. These"
                   + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
                   + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
                   + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
                   + "foundational component of this aspiration, if grounded with external knowledge sources in "
                   + "the downstream AI tasks.");
       }
       SyncPoller<ExtractSummaryOperationDetail, ExtractSummaryPagedIterable> syncPoller =
           textAnalyticsClient.beginExtractSummary(documents);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(resultCollection -> {
           for (ExtractSummaryResult documentResult : resultCollection) {
               System.out.println("\tExtracted summary sentences:");
               for (SummarySentence summarySentence : documentResult.getSentences()) {
                   System.out.printf(
                       "\t\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
                       summarySentence.getText(), summarySentence.getLength(),
                       summarySentence.getOffset(), summarySentence.getRankScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      Returns:
      A SyncPoller that polls the extractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ExtractSummaryResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginExtractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Those actions are only available for API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW and newer.
      TextAnalyticsException - If analyze operation fails.
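       Note: this overload relies on the client-wide default language. A minimal sketch of configuring it at build time (the endpoint and key are placeholders):

        TextAnalyticsClient textAnalyticsClient = new TextAnalyticsClientBuilder()
            .endpoint("{endpoint}")
            .credential(new AzureKeyCredential("{key}"))
            .defaultLanguage("en") // used whenever a per-call language is not provided
            .buildClient();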
    • beginExtractSummary

      public com.azure.core.util.polling.SyncPoller<ExtractSummaryOperationDetail,ExtractSummaryPagedIterable> beginExtractSummary(Iterable<String> documents, String language, ExtractSummaryOptions options)
       Returns extractive summaries for the provided list of documents, with the provided request options.

      This method is supported since service API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW.

       See the list of supported languages in the Language service API.

      Code Sample

       List<String> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(
               "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
                   + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
                   + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
                   + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
                   + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
                   + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
                   + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
                   + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
                   + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
                   + " pretrained models that can jointly learn representations to support a broad range of downstream"
                   + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
                   + " performance on benchmarks in conversational speech recognition, machine translation, "
                   + "conversational question answering, machine reading comprehension, and image captioning. These"
                   + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
                   + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
                   + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
                   + "foundational component of this aspiration, if grounded with external knowledge sources in "
                   + "the downstream AI tasks.");
       }
       SyncPoller<ExtractSummaryOperationDetail, ExtractSummaryPagedIterable> syncPoller =
           textAnalyticsClient.beginExtractSummary(documents,
               "en",
               new ExtractSummaryOptions().setMaxSentenceCount(4).setOrderBy(SummarySentencesOrder.RANK));
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(resultCollection -> {
           for (ExtractSummaryResult documentResult : resultCollection) {
               System.out.println("\tExtracted summary sentences:");
               for (SummarySentence summarySentence : documentResult.getSentences()) {
                   System.out.printf(
                       "\t\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
                       summarySentence.getText(), summarySentence.getLength(),
                       summarySentence.getOffset(), summarySentence.getRankScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
       language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, "en" (English) is used as the default.
      options - The additional configurable options that may be passed when analyzing extractive summarization.
      Returns:
      A SyncPoller that polls the extractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ExtractSummaryResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginExtractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Those actions are only available for API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW and newer.
      TextAnalyticsException - If analyze operation fails.
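       Note: the polling frequency of the returned SyncPoller can be tuned before waiting. A minimal sketch, assuming the syncPoller from the sample above (the ten-second interval is illustrative and assumes java.time.Duration is imported):

        // Poll the service every ten seconds instead of at the default interval.
        syncPoller.setPollInterval(Duration.ofSeconds(10));
        syncPoller.waitForCompletion();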
    • beginExtractSummary

      public com.azure.core.util.polling.SyncPoller<ExtractSummaryOperationDetail,ExtractSummaryPagedIterable> beginExtractSummary(Iterable<TextDocumentInput> documents, ExtractSummaryOptions options, com.azure.core.util.Context context)
       Returns extractive summaries for the provided list of documents, with the provided request options.

      This method is supported since service API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW.

      Code Sample

       List<TextDocumentInput> documents = new ArrayList<>();
       for (int i = 0; i < 3; i++) {
           documents.add(new TextDocumentInput(Integer.toString(i),
               "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
                   + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
                   + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
                   + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
                   + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
                   + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
                   + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
                   + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
                   + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
                   + " pretrained models that can jointly learn representations to support a broad range of downstream"
                   + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
                   + " performance on benchmarks in conversational speech recognition, machine translation, "
                   + "conversational question answering, machine reading comprehension, and image captioning. These"
                   + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
                   + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
                   + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
                   + "foundational component of this aspiration, if grounded with external knowledge sources in "
                   + "the downstream AI tasks."));
       }
       SyncPoller<ExtractSummaryOperationDetail, ExtractSummaryPagedIterable> syncPoller =
           textAnalyticsClient.beginExtractSummary(documents,
               new ExtractSummaryOptions().setMaxSentenceCount(4).setOrderBy(SummarySentencesOrder.RANK),
               Context.NONE);
       syncPoller.waitForCompletion();
       syncPoller.getFinalResult().forEach(resultCollection -> {
           for (ExtractSummaryResult documentResult : resultCollection) {
               System.out.println("\tExtracted summary sentences:");
               for (SummarySentence summarySentence : documentResult.getSentences()) {
                   System.out.printf(
                       "\t\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
                       summarySentence.getText(), summarySentence.getLength(),
                       summarySentence.getOffset(), summarySentence.getRankScore());
               }
           }
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
      options - The additional configurable options that may be passed when analyzing extractive summarization.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
      A SyncPoller that polls the extractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ExtractSummaryResultCollection.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginExtractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Those actions are only available for API version TextAnalyticsServiceVersion.V2022_10_01_PREVIEW and newer.
      TextAnalyticsException - If analyze operation fails.
    • beginAnalyzeActions

      public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions)
       Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, on a list of documents. This method uses the default language that can be set with TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

      Code Sample

       List<String> documents = Arrays.asList(
           "Elon Musk is the CEO of SpaceX and Tesla.",
           "My SSN is 859-98-0987"
       );
      
       SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
           textAnalyticsClient.beginAnalyzeActions(
               documents,
               new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
                   .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
                   .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()));
       syncPoller.waitForCompletion();
       AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
       result.forEach(analyzeActionsResult -> {
           System.out.println("Entities recognition action results:");
           analyzeActionsResult.getRecognizeEntitiesResults().forEach(
               actionResult -> {
                   if (!actionResult.isError()) {
                       actionResult.getDocumentsResults().forEach(
                           entitiesResult -> entitiesResult.getEntities().forEach(
                               entity -> System.out.printf(
                                   "Recognized entity: %s, entity category: %s, entity subcategory: %s,"
                                       + " confidence score: %f.%n",
                                   entity.getText(), entity.getCategory(), entity.getSubcategory(),
                                   entity.getConfidenceScore())));
                   }
               });
           System.out.println("Key phrases extraction action results:");
           analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
               actionResult -> {
                   if (!actionResult.isError()) {
                       actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
                           System.out.println("Extracted phrases:");
                           extractKeyPhraseResult.getKeyPhrases()
                               .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
                       });
                   }
               });
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
       actions - The set of actions to be executed. An action is one task of execution, such as a single 'Key Phrases Extraction' task run on the given document inputs.
      Returns:
       A SyncPoller that polls the analyze-actions operation until it has completed, has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAnalyzeActions is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeActions is only available for API version v3.1 and newer.
       UnsupportedOperationException - if AnalyzeHealthcareEntitiesAction, RecognizeCustomEntitiesAction, SingleLabelClassifyAction, or MultiLabelClassifyAction is requested with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. Those actions are only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
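       Note: the sample above skips failed actions silently; the error branch can surface diagnostics instead. A minimal sketch of inspecting an action-level failure, assuming the result iterable from getFinalResult() above (getError() returns a TextAnalyticsError):

        result.forEach(analyzeActionsResult ->
            analyzeActionsResult.getRecognizeEntitiesResults().forEach(actionResult -> {
                if (actionResult.isError()) {
                    System.out.printf("Action failed with code %s: %s%n",
                        actionResult.getError().getErrorCode(),
                        actionResult.getError().getMessage());
                }
            }));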
    • beginAnalyzeActions

      public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions, String language, AnalyzeActionsOptions options)
       Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, on a list of documents with the provided request options. See the list of supported languages in the Language service API.

      Code Sample

       List<String> documents = Arrays.asList(
           "Elon Musk is the CEO of SpaceX and Tesla.",
           "My SSN is 859-98-0987"
       );
      
       SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
           textAnalyticsClient.beginAnalyzeActions(
               documents,
               new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
                   .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
                   .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()),
               "en",
               new AnalyzeActionsOptions().setIncludeStatistics(false));
       syncPoller.waitForCompletion();
       AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
       result.forEach(analyzeActionsResult -> {
           System.out.println("Entities recognition action results:");
           analyzeActionsResult.getRecognizeEntitiesResults().forEach(
               actionResult -> {
                   if (!actionResult.isError()) {
                       actionResult.getDocumentsResults().forEach(
                           entitiesResult -> entitiesResult.getEntities().forEach(
                               entity -> System.out.printf(
                                   "Recognized entity: %s, entity category: %s, entity subcategory: %s,"
                                       + " confidence score: %f.%n",
                                   entity.getText(), entity.getCategory(), entity.getSubcategory(),
                                   entity.getConfidenceScore())));
                   }
               });
           System.out.println("Key phrases extraction action results:");
           analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
               actionResult -> {
                   if (!actionResult.isError()) {
                       actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
                           System.out.println("Extracted phrases:");
                           extractKeyPhraseResult.getKeyPhrases()
                               .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
                       });
                   }
               });
       });
       
      Parameters:
      documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
       actions - The set of actions to be executed. An action is one task of execution, such as a single 'Key Phrases Extraction' task run on the given document inputs.
       language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, "en" (English) is used as the default.
      options - The additional configurable options that may be passed when analyzing a collection of actions.
      Returns:
       A SyncPoller that polls the analyze-actions operation until it has completed, has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable.
      Throws:
      NullPointerException - if documents is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAnalyzeActions is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeActions is only available for API version v3.1 and newer.
       UnsupportedOperationException - if AnalyzeHealthcareEntitiesAction, RecognizeCustomEntitiesAction, SingleLabelClassifyAction, or MultiLabelClassifyAction is requested with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. Those actions are only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
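       Note: TextAnalyticsActions can carry several action types in a single call. A minimal sketch that adds PII entity recognition alongside the actions shown above; its results would then appear under AnalyzeActionsResult.getRecognizePiiEntitiesResults():

        TextAnalyticsActions actions = new TextAnalyticsActions()
            .setDisplayName("{tasks_display_name}")
            .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
            .setRecognizePiiEntitiesActions(new RecognizePiiEntitiesAction())
            .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction());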
    • beginAnalyzeActions

      public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<TextDocumentInput> documents, TextAnalyticsActions actions, AnalyzeActionsOptions options, com.azure.core.util.Context context)
       Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, on a list of documents with the provided request options. See the list of supported languages in the Language service API.

      Code Sample

       List<TextDocumentInput> documents = Arrays.asList(
           new TextDocumentInput("0", "Elon Musk is the CEO of SpaceX and Tesla.").setLanguage("en"),
           new TextDocumentInput("1", "My SSN is 859-98-0987").setLanguage("en")
       );
      
       SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
           textAnalyticsClient.beginAnalyzeActions(
               documents,
               new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
                  .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
                  .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()),
               new AnalyzeActionsOptions().setIncludeStatistics(false),
               Context.NONE);
       syncPoller.waitForCompletion();
       AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
       result.forEach(analyzeActionsResult -> {
           System.out.println("Entities recognition action results:");
           analyzeActionsResult.getRecognizeEntitiesResults().forEach(
               actionResult -> {
                   if (!actionResult.isError()) {
                       actionResult.getDocumentsResults().forEach(
                           entitiesResult -> entitiesResult.getEntities().forEach(
                               entity -> System.out.printf(
                                   "Recognized entity: %s, entity category: %s, entity subcategory: %s,"
                                       + " confidence score: %f.%n",
                                   entity.getText(), entity.getCategory(), entity.getSubcategory(),
                                   entity.getConfidenceScore())));
                   }
               });
           System.out.println("Key phrases extraction action results:");
           analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
               actionResult -> {
                   if (!actionResult.isError()) {
                       actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
                           System.out.println("Extracted phrases:");
                           extractKeyPhraseResult.getKeyPhrases()
                               .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
                       });
                   }
               });
       });
       
      Parameters:
      documents - A list of documents to be analyzed.
       actions - The set of actions to be executed. An action is one task of execution, such as a single 'Key Phrases Extraction' task run on the given document inputs.
      options - The additional configurable options that may be passed when analyzing a collection of actions.
      context - Additional context that is passed through the Http pipeline during the service call.
      Returns:
       A SyncPoller that polls the analyze-actions operation until it has completed, has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable.
      Throws:
      NullPointerException - if documents or actions is null.
      IllegalArgumentException - if documents is empty.
      UnsupportedOperationException - if beginAnalyzeActions is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeActions is only available for API version v3.1 and newer.
       UnsupportedOperationException - if AnalyzeHealthcareEntitiesAction, RecognizeCustomEntitiesAction, SingleLabelClassifyAction, or MultiLabelClassifyAction is requested with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. Those actions are only available for API version 2022-05-01 and newer.
      TextAnalyticsException - If analyze operation fails.
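       Note: an AnalyzeActionsResultPagedIterable can also be consumed page by page rather than element by element, which may help when results span several service pages. A minimal sketch, assuming the result obtained from getFinalResult() above:

        result.iterableByPage().forEach(page -> {
            System.out.println("--- new page of action results ---");
            page.getElements().forEach(analyzeActionsResult -> {
                // Process each AnalyzeActionsResult exactly as in the samples above.
                System.out.println("Received an AnalyzeActionsResult in this page.");
            });
        });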