public final class TextAnalyticsClient extends Object

Instantiating a synchronous Text Analytics Client:

```java
TextAnalyticsClient textAnalyticsClient = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .buildClient();
```

See TextAnalyticsClientBuilder for additional ways to construct the client.
| Modifier and Type | Method and Description |
|---|---|
| DocumentSentiment | analyzeSentiment(String document): Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. |
| DocumentSentiment | analyzeSentiment(String document, String language): Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. |
| DocumentSentiment | analyzeSentiment(String document, String language, AnalyzeSentimentOptions options): Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each sentence within it. |
| AnalyzeSentimentResultCollection | analyzeSentimentBatch(Iterable<String> documents, String language, AnalyzeSentimentOptions options): Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for each document and each sentence within it. |
| AnalyzeSentimentResultCollection | analyzeSentimentBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Deprecated. |
| com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> | analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, AnalyzeSentimentOptions options, com.azure.core.util.Context context): Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for each document and each sentence within it. |
| com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> | analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Deprecated. |
| com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> | beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions, String language, AnalyzeActionsOptions options): Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents with the provided request options. |
| com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> | beginAnalyzeActions(Iterable<TextDocumentInput> documents, TextAnalyticsActions actions, AnalyzeActionsOptions options, com.azure.core.util.Context context): Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents with the provided request options. |
| com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> | beginAnalyzeHealthcareEntities(Iterable<String> documents, String language, AnalyzeHealthcareEntitiesOptions options): Analyzes healthcare entities, entity data sources, and entity relations in a list of documents with the provided request options. |
| com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> | beginAnalyzeHealthcareEntities(Iterable<TextDocumentInput> documents, AnalyzeHealthcareEntitiesOptions options, com.azure.core.util.Context context): Analyzes healthcare entities, entity data sources, and entity relations in a list of documents with the provided request options. |
| DetectedLanguage | detectLanguage(String document): Returns the detected language and a confidence score between zero and one. |
| DetectedLanguage | detectLanguage(String document, String countryHint): Returns the detected language and a confidence score between zero and one. |
| DetectLanguageResultCollection | detectLanguageBatch(Iterable<String> documents, String countryHint, TextAnalyticsRequestOptions options): Detects the language for a batch of documents with the provided country hint and request options. |
| com.azure.core.http.rest.Response<DetectLanguageResultCollection> | detectLanguageBatchWithResponse(Iterable<DetectLanguageInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Detects the language for a batch of documents with the provided request options. |
| KeyPhrasesCollection | extractKeyPhrases(String document): Returns a list of strings denoting the key phrases in the document. |
| KeyPhrasesCollection | extractKeyPhrases(String document, String language): Returns a list of strings denoting the key phrases in the document. |
| ExtractKeyPhrasesResultCollection | extractKeyPhrasesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Returns a list of strings denoting the key phrases in the documents, with the provided language code and request options. |
| com.azure.core.http.rest.Response<ExtractKeyPhrasesResultCollection> | extractKeyPhrasesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Returns a list of strings denoting the key phrases in a batch of documents, with request options. |
| String | getDefaultCountryHint(): Gets the default country hint code. |
| String | getDefaultLanguage(): Gets the default language set when the client was built. |
| CategorizedEntityCollection | recognizeEntities(String document): Returns a list of general categorized entities in the provided document. |
| CategorizedEntityCollection | recognizeEntities(String document, String language): Returns a list of general categorized entities in the provided document, with the provided language code. |
| RecognizeEntitiesResultCollection | recognizeEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Returns a list of general categorized entities for the provided list of documents, with the provided language code and request options. |
| com.azure.core.http.rest.Response<RecognizeEntitiesResultCollection> | recognizeEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Returns a list of general categorized entities for the provided list of documents, with the provided request options. |
| LinkedEntityCollection | recognizeLinkedEntities(String document): Returns a list of recognized entities with links to a well-known knowledge base for the provided document. |
| LinkedEntityCollection | recognizeLinkedEntities(String document, String language): Returns a list of recognized entities with links to a well-known knowledge base for the provided document, with the provided language code. |
| RecognizeLinkedEntitiesResultCollection | recognizeLinkedEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Returns a list of recognized entities with links to a well-known knowledge base for the list of documents, with the provided language code and request options. |
| com.azure.core.http.rest.Response<RecognizeLinkedEntitiesResultCollection> | recognizeLinkedEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Returns a list of recognized entities with links to a well-known knowledge base for the list of documents, with request options. |
| PiiEntityCollection | recognizePiiEntities(String document): Returns a list of Personally Identifiable Information (PII) entities in the provided document. |
| PiiEntityCollection | recognizePiiEntities(String document, String language): Returns a list of Personally Identifiable Information (PII) entities in the provided document, with the provided language code. |
| PiiEntityCollection | recognizePiiEntities(String document, String language, RecognizePiiEntitiesOptions options): Returns a list of Personally Identifiable Information (PII) entities in the provided document, with the provided language code and options. |
| RecognizePiiEntitiesResultCollection | recognizePiiEntitiesBatch(Iterable<String> documents, String language, RecognizePiiEntitiesOptions options): Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents, with the provided language code and request options. |
| com.azure.core.http.rest.Response<RecognizePiiEntitiesResultCollection> | recognizePiiEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, RecognizePiiEntitiesOptions options, com.azure.core.util.Context context): Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents, with the provided request options. |
public String getDefaultCountryHint()

Gets the default country hint code.

public String getDefaultLanguage()

Gets the default language set when the client was built.
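The defaults returned by these getters interact with per-call overrides: a per-call countryHint takes precedence over the client-level default, and an empty string or "none" clears the hint entirely (see detectLanguage below). The resolution logic can be sketched as follows; CountryHintResolver is a hypothetical helper for illustration only, not part of the SDK:

```java
// Hypothetical helper (NOT part of the Azure SDK) sketching how a per-call
// country hint, the client-level default, and the "" / "none" reset value
// might be reconciled, per the detectLanguage documentation below.
public class CountryHintResolver {
    /**
     * Returns the hint that would effectively apply: an explicit per-call
     * value wins over the client default; "" or "none" clears the hint;
     * when nothing is specified at all, the service falls back to "US".
     */
    public static String resolveCountryHint(String explicitHint, String clientDefault) {
        String hint = (explicitHint != null) ? explicitHint : clientDefault;
        if (hint == null) {
            return "US"; // service-side fallback when no hint is specified
        }
        if (hint.isEmpty() || "none".equalsIgnoreCase(hint)) {
            return ""; // caller explicitly opted out of hinting
        }
        return hint;
    }

    public static void main(String[] args) {
        System.out.println(resolveCountryHint(null, null));   // falls back to US
        System.out.println(resolveCountryHint("FR", "US"));   // explicit hint wins
        System.out.println(resolveCountryHint("none", "US")); // hint cleared
    }
}
```

This is only a model of the documented precedence; the actual negotiation happens inside the client and service.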
public DetectedLanguage detectLanguage(String document)

Returns the detected language and a confidence score between zero and one. Uses the default country hint set via TextAnalyticsClientBuilder.defaultCountryHint(String); if none is specified, the service will use 'US' as the country hint.

Code Sample

Detects the language of a single document.

```java
DetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage("Bonjour tout le monde");
System.out.printf("Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
    detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());
```

Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.

Returns:
The detected language of the document.

Throws:
NullPointerException - if document is null.

public DetectedLanguage detectLanguage(String document, String countryHint)
Returns the detected language and a confidence score between zero and one, using the provided country hint.

Code Sample

Detects the language of a document with a provided country hint.

```java
DetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage(
    "This text is in English", "US");
System.out.printf("Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
    detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());
```

Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To remove this behavior, reset this parameter by setting it to the empty string countryHint = "" or "none".

Returns:
The detected language of the document.

Throws:
NullPointerException - if document is null.

public DetectLanguageResultCollection detectLanguageBatch(Iterable<String> documents, String countryHint, TextAnalyticsRequestOptions options)
Detects the language for a batch of documents with the provided country hint and request options.

Code Sample

Detects the language in a list of documents with a provided country hint and request options.

```java
List<String> documents = Arrays.asList(
    "This is written in English",
    "Este es un documento escrito en Español."
);

DetectLanguageResultCollection resultCollection =
    textAnalyticsClient.detectLanguageBatch(documents, "US", null);

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Batch result of languages
resultCollection.forEach(detectLanguageResult -> {
    System.out.printf("Document ID: %s%n", detectLanguageResult.getId());
    DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
    System.out.printf("Primary language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
        detectedLanguage.getName(), detectedLanguage.getIso6391Name(),
        detectedLanguage.getConfidenceScore());
});
```

Parameters:
documents - The list of documents to detect languages for. For text length limits, maximum batch size, and supported text encoding, see data limits.
countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To remove this behavior, reset this parameter by setting it to the empty string countryHint = "" or "none".
options - The options to configure the scoring model for documents and show statistics.

Returns:
A DetectLanguageResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.

public com.azure.core.http.rest.Response<DetectLanguageResultCollection> detectLanguageBatchWithResponse(Iterable<DetectLanguageInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
Detects the language for a batch of documents with the provided request options.

Code Sample

Detects the languages, with an HTTP response, in a list of documents with the provided request options.

```java
List<DetectLanguageInput> detectLanguageInputs = Arrays.asList(
    new DetectLanguageInput("1", "This is written in English.", "US"),
    new DetectLanguageInput("2", "Este es un documento escrito en Español.", "es")
);

Response<DetectLanguageResultCollection> response =
    textAnalyticsClient.detectLanguageBatchWithResponse(detectLanguageInputs,
        new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
DetectLanguageResultCollection detectedLanguageResultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = detectedLanguageResultCollection.getStatistics();
System.out.printf(
    "Documents statistics: document count = %s, erroneous document count = %s, transaction count = %s,"
        + " valid document count = %s.%n",
    batchStatistics.getDocumentCount(), batchStatistics.getInvalidDocumentCount(),
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Batch result of languages
detectedLanguageResultCollection.forEach(detectLanguageResult -> {
    System.out.printf("Document ID: %s%n", detectLanguageResult.getId());
    DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
    System.out.printf("Primary language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
        detectedLanguage.getName(), detectedLanguage.getIso6391Name(),
        detectedLanguage.getConfidenceScore());
});
```

Parameters:
documents - The list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The options to configure the scoring model for documents and show statistics.
context - Additional context that is passed through the HTTP pipeline during the service call.

Returns:
A Response that contains a DetectLanguageResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.

public CategorizedEntityCollection recognizeEntities(String document)
Returns a list of general categorized entities in the provided document. Uses the default language set via TextAnalyticsClientBuilder.defaultLanguage(String); if none is specified, the service will use 'en' as the language.

Code Sample

Recognizes the entities of a document.

```java
final CategorizedEntityCollection recognizeEntitiesResult =
    textAnalyticsClient.recognizeEntities("Satya Nadella is the CEO of Microsoft");
for (CategorizedEntity entity : recognizeEntitiesResult) {
    System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
```

Parameters:
document - The document to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.

Returns:
A CategorizedEntityCollection that contains a list of recognized categorized entities and warnings.

Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.

public CategorizedEntityCollection recognizeEntities(String document, String language)
Returns a list of general categorized entities in the provided document, with the provided language code.

Code Sample

Recognizes the entities in a document with a provided language code.

```java
final CategorizedEntityCollection recognizeEntitiesResult =
    textAnalyticsClient.recognizeEntities("Satya Nadella is the CEO of Microsoft", "en");
for (CategorizedEntity entity : recognizeEntitiesResult) {
    System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
```

Parameters:
document - The document to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as default.

Returns:
A CategorizedEntityCollection that contains a list of recognized categorized entities and warnings.

Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.

public RecognizeEntitiesResultCollection recognizeEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
Returns a list of general categorized entities for the provided list of documents, with the provided language code and request options.

Code Sample

Recognizes the entities in a list of documents with a provided language code and request options.

```java
List<String> documents = Arrays.asList(
    "I had a wonderful trip to Seattle last week.",
    "I work at Microsoft.");

RecognizeEntitiesResultCollection resultCollection =
    textAnalyticsClient.recognizeEntitiesBatch(documents, "en", null);

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf(
    "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizeEntitiesResult ->
    recognizeEntitiesResult.getEntities().forEach(entity ->
        System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
            entity.getText(), entity.getCategory(), entity.getConfidenceScore())));
```

Parameters:
documents - A list of documents to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as default.
options - The options to configure the scoring model for documents and show statistics.

Returns:
A RecognizeEntitiesResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.

public com.azure.core.http.rest.Response<RecognizeEntitiesResultCollection> recognizeEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
Returns a list of general categorized entities for the provided list of documents, with the provided request options.

Code Sample

Recognizes the entities, with an HTTP response, in a list of documents with the provided request options.

```java
List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("0", "I had a wonderful trip to Seattle last week.").setLanguage("en"),
    new TextDocumentInput("1", "I work at Microsoft.").setLanguage("en")
);

Response<RecognizeEntitiesResultCollection> response =
    textAnalyticsClient.recognizeEntitiesBatchWithResponse(textDocumentInputs,
        new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
RecognizeEntitiesResultCollection recognizeEntitiesResultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = recognizeEntitiesResultCollection.getStatistics();
System.out.printf(
    "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

recognizeEntitiesResultCollection.forEach(recognizeEntitiesResult ->
    recognizeEntitiesResult.getEntities().forEach(entity ->
        System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
            entity.getText(), entity.getCategory(), entity.getConfidenceScore())));
```

Parameters:
documents - A list of documents to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The options to configure the scoring model for documents and show statistics.
context - Additional context that is passed through the HTTP pipeline during the service call.

Returns:
A Response that contains a RecognizeEntitiesResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.

public PiiEntityCollection recognizePiiEntities(String document)
Returns a list of Personally Identifiable Information (PII) entities in the provided document. Uses the default language set via TextAnalyticsClientBuilder.defaultLanguage(String); if none is specified, the service will use 'en' as the language.

Code Sample

Recognizes the PII entity details in a document.

```java
PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities("My SSN is 859-98-0987");
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
for (PiiEntity entity : piiEntityCollection) {
    System.out.printf(
        "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
            + " entity subcategory: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore());
}
```

Parameters:
document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.

Returns:
The recognized PII entities collection.

Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.

public PiiEntityCollection recognizePiiEntities(String document, String language)
Returns a list of Personally Identifiable Information (PII) entities in the provided document, with the provided language code.

Code Sample

Recognizes the PII entity details in a document with a provided language code.

```java
PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities(
    "My SSN is 859-98-0987", "en");
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
    "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
        + " entity subcategory: %s, confidence score: %f.%n",
    entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
```

Parameters:
document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as default.

Returns:
The recognized PII entities collection.

Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.

public PiiEntityCollection recognizePiiEntities(String document, String language, RecognizePiiEntitiesOptions options)
Returns a list of Personally Identifiable Information (PII) entities in the provided document, with the provided language code and options.

Code Sample

Recognizes the PII entity details in a document with a provided language code and RecognizePiiEntitiesOptions.

```java
PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities(
    "My SSN is 859-98-0987", "en",
    new RecognizePiiEntitiesOptions().setDomainFilter(PiiEntityDomain.PROTECTED_HEALTH_INFORMATION));
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
    "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
        + " entity subcategory: %s, confidence score: %f.%n",
    entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
```

Parameters:
document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as default.
options - The additional configurable options that may be passed when recognizing PII entities.

Returns:
The recognized PII entities collection.

Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.

public RecognizePiiEntitiesResultCollection recognizePiiEntitiesBatch(Iterable<String> documents, String language, RecognizePiiEntitiesOptions options)
Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents, with the provided language code and request options.

Code Sample

Recognizes the PII entity details in a list of documents with a provided language code and request options.

```java
List<String> documents = Arrays.asList(
    "My SSN is 859-98-0987",
    "Visa card 4111 1111 1111 1111"
);

RecognizePiiEntitiesResultCollection resultCollection = textAnalyticsClient.recognizePiiEntitiesBatch(
    documents, "en", new RecognizePiiEntitiesOptions().setIncludeStatistics(true));

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizePiiEntitiesResult -> {
    PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
    System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
    piiEntityCollection.forEach(entity -> System.out.printf(
        "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
            + " entity subcategory: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
```

Parameters:
documents - A list of documents to recognize PII entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as default.
options - The additional configurable options that may be passed when recognizing PII entities.

Returns:
A RecognizePiiEntitiesResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.

public com.azure.core.http.rest.Response<RecognizePiiEntitiesResultCollection> recognizePiiEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, RecognizePiiEntitiesOptions options, com.azure.core.util.Context context)
Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents, with the provided request options.

Code Sample

Recognizes the PII entity details, with an HTTP response, in a list of documents with the provided request options.

```java
List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("0", "My SSN is 859-98-0987"),
    new TextDocumentInput("1", "Visa card 4111 1111 1111 1111")
);

Response<RecognizePiiEntitiesResultCollection> response =
    textAnalyticsClient.recognizePiiEntitiesBatchWithResponse(textDocumentInputs,
        new RecognizePiiEntitiesOptions().setIncludeStatistics(true), Context.NONE);

RecognizePiiEntitiesResultCollection resultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizePiiEntitiesResult -> {
    PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
    System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
    piiEntityCollection.forEach(entity -> System.out.printf(
        "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
            + " entity subcategory: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});
```

Parameters:
documents - A list of documents to recognize PII entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The additional configurable options that may be passed when recognizing PII entities.
context - Additional context that is passed through the HTTP pipeline during the service call.

Returns:
A Response that contains a RecognizePiiEntitiesResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.

public LinkedEntityCollection recognizeLinkedEntities(String document)
Returns a list of recognized entities with links to a well-known knowledge base for the provided document. Uses the default language set via TextAnalyticsClientBuilder.defaultLanguage(String); if none is specified, the service will use 'en' as the language.

Code Sample

Recognizes the linked entities of a document.

```java
final String document = "Old Faithful is a geyser at Yellowstone Park.";
System.out.println("Linked Entities:");
textAnalyticsClient.recognizeLinkedEntities(document).forEach(linkedEntity -> {
    System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
        linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
        linkedEntity.getDataSource());
    linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
        "Matched entity: %s, confidence score: %f.%n",
        entityMatch.getText(), entityMatch.getConfidenceScore()));
});
```

Parameters:
document - The document to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.

Returns:
A LinkedEntityCollection that contains a list of recognized linked entities.

Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.

public LinkedEntityCollection recognizeLinkedEntities(String document, String language)
Returns a list of recognized entities with links to a well-known knowledge base for the provided document, with the provided language code.

Code Sample

Recognizes the linked entities in a document with a provided language code.

```java
String document = "Old Faithful is a geyser at Yellowstone Park.";
textAnalyticsClient.recognizeLinkedEntities(document, "en").forEach(linkedEntity -> {
    System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
        linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
        linkedEntity.getDataSource());
    linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
        "Matched entity: %s, confidence score: %f.%n",
        entityMatch.getText(), entityMatch.getConfidenceScore()));
});
```

Parameters:
document - The document to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language for the document. If not set, uses "en" for English as default.

Returns:
A LinkedEntityCollection that contains a list of recognized linked entities.

Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.

public RecognizeLinkedEntitiesResultCollection recognizeLinkedEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
Returns a list of recognized entities with links to a well-known knowledge base for the list of documents, with the provided language code and request options.

Code Sample

Recognizes the linked entities in a list of documents with a provided language code and request options.

```java
List<String> documents = Arrays.asList(
    "Old Faithful is a geyser at Yellowstone Park.",
    "Mount Shasta has lenticular clouds."
);

RecognizeLinkedEntitiesResultCollection resultCollection =
    textAnalyticsClient.recognizeLinkedEntitiesBatch(documents, "en", null);

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizeLinkedEntitiesResult ->
    recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> {
        System.out.println("Linked Entities:");
        System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
            linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
            linkedEntity.getDataSource());
        linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
            "Matched entity: %s, confidence score: %f.%n",
            entityMatch.getText(), entityMatch.getConfidenceScore()));
    }));
```

Parameters:
documents - A list of documents to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as default.
options - The options to configure the scoring model for documents and show statistics.

Returns:
A RecognizeLinkedEntitiesResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.

public com.azure.core.http.rest.Response<RecognizeLinkedEntitiesResultCollection> recognizeLinkedEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
document
and request options.
See this for supported languages in Text Analytics API.
Code Sample
Recognizes the linked entities, with an HTTP response, in a list of TextDocumentInput
with request options.
List<TextDocumentInput> textDocumentInputs = Arrays.asList( new TextDocumentInput("1", "Old Faithful is a geyser at Yellowstone Park.").setLanguage("en"), new TextDocumentInput("2", "Mount Shasta has lenticular clouds.").setLanguage("en") ); Response<RecognizeLinkedEntitiesResultCollection> response = textAnalyticsClient.recognizeLinkedEntitiesBatchWithResponse(textDocumentInputs, new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE); // Response's status code System.out.printf("Status code of request response: %d%n", response.getStatusCode()); RecognizeLinkedEntitiesResultCollection resultCollection = response.getValue(); // Batch statistics TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics(); System.out.printf( "A batch of documents statistics, transaction count: %s, valid document count: %s.%n", batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount()); resultCollection.forEach(recognizeLinkedEntitiesResult -> recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> { System.out.println("Linked Entities:"); System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n", linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(), linkedEntity.getDataSource()); linkedEntity.getMatches().forEach(entityMatch -> System.out.printf( "Matched entity: %s, confidence score: %.2f.%n", entityMatch.getText(), entityMatch.getConfidenceScore())); }));
documents
- A list of documents
to recognize linked entities for.
For text length limits, maximum batch size, and supported text encoding, see
data limits.options
- The options
to configure the scoring model for documents
and show statistics.context
- Additional context that is passed through the HTTP pipeline during the service call.Response
that contains a RecognizeLinkedEntitiesResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.public KeyPhrasesCollection extractKeyPhrases(String document)
TextAnalyticsClientBuilder.defaultLanguage(String)
. If none is specified, the service will use 'en' as
the language.
Code Sample
Extracts key phrases of documents
System.out.println("Extracted phrases:"); for (String keyPhrase : textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.")) { System.out.printf("%s.%n", keyPhrase); }
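Per the throws clause below, a failed request surfaces as a TextAnalyticsException. A minimal, hypothetical sketch of defensive usage (assuming a textAnalyticsClient built as in the class-level sample; the catch block is illustrative, not part of the official sample set):

```java
// Hedged sketch: catch the service error instead of letting it propagate.
try {
    textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.")
        .forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
} catch (TextAnalyticsException ex) {
    // The exception message carries the service-side error detail.
    System.err.printf("Key phrase extraction failed: %s%n", ex.getMessage());
}
```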
document
- The document to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.KeyPhrasesCollection
contains a list of extracted key phrases.NullPointerException
- if document
is null.TextAnalyticsException
- if the response returned with an error
.public KeyPhrasesCollection extractKeyPhrases(String document, String language)
Code Sample
Extracts key phrases in a document with a provided language representation.
System.out.println("Extracted phrases:"); textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.", "en") .forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
document
- The document to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.language
- The 2-letter ISO 639-1 representation of the language for the document. If not set, uses "en" for
English as the default.KeyPhrasesCollection
contains a list of extracted key phrases.NullPointerException
- if document
is null.TextAnalyticsException
- if the response returned with an error
.public ExtractKeyPhrasesResultCollection extractKeyPhrasesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
Code Sample
Extracts key phrases in a list of documents with a provided language code and request options.
List<String> documents = Arrays.asList( "My cat might need to see a veterinarian.", "The pitot tube is used to measure airspeed." ); // Extracting batch key phrases ExtractKeyPhrasesResultCollection resultCollection = textAnalyticsClient.extractKeyPhrasesBatch(documents, "en", null); // Batch statistics TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics(); System.out.printf( "A batch of documents statistics, transaction count: %s, valid document count: %s.%n", batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount()); // Extracted key phrase for each of documents from a batch of documents resultCollection.forEach(extractKeyPhraseResult -> { System.out.printf("Document ID: %s%n", extractKeyPhraseResult.getId()); // Valid document System.out.println("Extracted phrases:"); extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase)); });
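A document that fails inside a batch does not throw; its ExtractKeyPhraseResult reports the error instead (isError() and getError() come from the shared TextAnalyticsResult base type). A hedged sketch, reusing the client above; the empty second document is deliberately invalid:

```java
ExtractKeyPhrasesResultCollection resultCollection = textAnalyticsClient.extractKeyPhrasesBatch(
    Arrays.asList("My cat might need to see a veterinarian.", ""), "en", null);
resultCollection.forEach(extractKeyPhraseResult -> {
    System.out.printf("Document ID: %s%n", extractKeyPhraseResult.getId());
    if (extractKeyPhraseResult.isError()) {
        // Reading getKeyPhrases() on an errored result would throw; report the error instead.
        System.out.printf("\tError: %s%n", extractKeyPhraseResult.getError().getMessage());
    } else {
        extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase -> System.out.printf("\t%s.%n", keyPhrase));
    }
});
```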
documents
- A list of documents to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.language
- The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for
English as the default.options
- The options
to configure the scoring model for documents
and show statistics.ExtractKeyPhrasesResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.public com.azure.core.http.rest.Response<ExtractKeyPhrasesResultCollection> extractKeyPhrasesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
Extracts key phrases in a batch of TextDocumentInput
with provided request options.
See this for the list of enabled languages.
Code Sample
Extracts key phrases, with an HTTP response, in a list of TextDocumentInput
with request options.
List<TextDocumentInput> textDocumentInputs = Arrays.asList( new TextDocumentInput("1", "My cat might need to see a veterinarian.").setLanguage("en"), new TextDocumentInput("2", "The pitot tube is used to measure airspeed.").setLanguage("en") ); // Extracting batch key phrases Response<ExtractKeyPhrasesResultCollection> response = textAnalyticsClient.extractKeyPhrasesBatchWithResponse(textDocumentInputs, new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE); // Response's status code System.out.printf("Status code of request response: %d%n", response.getStatusCode()); ExtractKeyPhrasesResultCollection resultCollection = response.getValue(); // Batch statistics TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics(); System.out.printf( "A batch of documents statistics, transaction count: %s, valid document count: %s.%n", batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount()); // Extracted key phrase for each of documents from a batch of documents resultCollection.forEach(extractKeyPhraseResult -> { System.out.printf("Document ID: %s%n", extractKeyPhraseResult.getId()); // Valid document System.out.println("Extracted phrases:"); extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase)); });
documents
- A list of documents
to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.options
- The options
to configure the scoring model for documents
and show statistics.context
- Additional context that is passed through the HTTP pipeline during the service call.Response
that contains an ExtractKeyPhrasesResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.public DocumentSentiment analyzeSentiment(String document)
TextAnalyticsClientBuilder.defaultLanguage(String)
. If none is specified, the service will use 'en' as
the language.
Code Sample
Analyze the sentiments of documents
final DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment("The hotel was dark and unclean."); System.out.printf( "Recognized sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n", documentSentiment.getSentiment(), documentSentiment.getConfidenceScores().getPositive(), documentSentiment.getConfidenceScores().getNeutral(), documentSentiment.getConfidenceScores().getNegative()); for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) { System.out.printf( "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n", sentenceSentiment.getSentiment(), sentenceSentiment.getConfidenceScores().getPositive(), sentenceSentiment.getConfidenceScores().getNeutral(), sentenceSentiment.getConfidenceScores().getNegative()); }
document
- The document to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.analyzed document sentiment
of the document.NullPointerException
- if document
is null.TextAnalyticsException
- if the response returned with an error
.public DocumentSentiment analyzeSentiment(String document, String language)
Code Sample
Analyze the sentiments in a document with a provided language representation.
final DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment( "The hotel was dark and unclean.", "en"); System.out.printf( "Recognized sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n", documentSentiment.getSentiment(), documentSentiment.getConfidenceScores().getPositive(), documentSentiment.getConfidenceScores().getNeutral(), documentSentiment.getConfidenceScores().getNegative()); for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) { System.out.printf( "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n", sentenceSentiment.getSentiment(), sentenceSentiment.getConfidenceScores().getPositive(), sentenceSentiment.getConfidenceScores().getNeutral(), sentenceSentiment.getConfidenceScores().getNegative()); }
document
- The document to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.language
- The 2-letter ISO 639-1 representation of the language for the document. If not set, uses "en" for
English as the default.analyzed document sentiment
of the document.NullPointerException
- if document
is null.TextAnalyticsException
- if the response returned with an error
.public DocumentSentiment analyzeSentiment(String document, String language, AnalyzeSentimentOptions options)
With includeOpinionMining
of
AnalyzeSentimentOptions
set to true, the output will include the opinion mining results. It mines the
opinions of a sentence and conducts more granular analysis around the aspects in the text
(also known as aspect-based sentiment analysis).
Code Sample
Analyze the sentiment and mine the opinions for each sentence in a document with a provided language
representation and AnalyzeSentimentOptions
options.
final DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment( "The hotel was dark and unclean.", "en", new AnalyzeSentimentOptions().setIncludeOpinionMining(true)); for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) { System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment()); sentenceSentiment.getOpinions().forEach(opinion -> { TargetSentiment targetSentiment = opinion.getTarget(); System.out.printf("\tTarget sentiment: %s, target text: %s%n", targetSentiment.getSentiment(), targetSentiment.getText()); for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) { System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n", assessmentSentiment.getSentiment(), assessmentSentiment.getText(), assessmentSentiment.isNegated()); } }); }
document
- The document to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.language
- The 2-letter ISO 639-1 representation of the language for the document. If not set, uses "en" for
English as the default.options
- The additional configurable options
that may be passed when
analyzing sentiments.analyzed document sentiment
of the document.NullPointerException
- if document
is null.TextAnalyticsException
- if the response returned with an error
.@Deprecated public AnalyzeSentimentResultCollection analyzeSentimentBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
Use TextAnalyticsClient.analyzeSentimentBatch(Iterable, String, AnalyzeSentimentOptions)
instead.Code Sample
Analyze the sentiments in a list of documents with a provided language representation and request options.
List<String> documents = Arrays.asList( "The hotel was dark and unclean. The restaurant had amazing gnocchi.", "The restaurant had amazing gnocchi. The hotel was dark and unclean." ); // Analyzing batch sentiments AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch( documents, "en", new TextAnalyticsRequestOptions().setIncludeStatistics(true)); // Batch statistics TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics(); System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n", batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount()); // Analyzed sentiment for each of documents from a batch of documents resultCollection.forEach(analyzeSentimentResult -> { System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId()); // Valid document DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment(); System.out.printf( "Recognized document sentiment: %s, positive score: %.2f, neutral score: %.2f," + " negative score: %.2f.%n", documentSentiment.getSentiment(), documentSentiment.getConfidenceScores().getPositive(), documentSentiment.getConfidenceScores().getNeutral(), documentSentiment.getConfidenceScores().getNegative()); documentSentiment.getSentences().forEach(sentenceSentiment -> System.out.printf( "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f," + " negative score: %.2f.%n", sentenceSentiment.getSentiment(), sentenceSentiment.getConfidenceScores().getPositive(), sentenceSentiment.getConfidenceScores().getNeutral(), sentenceSentiment.getConfidenceScores().getNegative())); });
documents
- A list of documents to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.language
- The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for
English as the default.options
- The options
to configure the scoring model for documents
and show statistics.AnalyzeSentimentResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.public AnalyzeSentimentResultCollection analyzeSentimentBatch(Iterable<String> documents, String language, AnalyzeSentimentOptions options)
With includeOpinionMining
of
AnalyzeSentimentOptions
set to true, the output will include the opinion mining results. It mines the
opinions of a sentence and conducts more granular analysis around the aspects in the text
(also known as aspect-based sentiment analysis).
Code Sample
Analyze the sentiments and mine the opinions for each sentence in a list of documents with a provided language
representation and AnalyzeSentimentOptions
options.
List<String> documents = Arrays.asList( "The hotel was dark and unclean. The restaurant had amazing gnocchi.", "The restaurant had amazing gnocchi. The hotel was dark and unclean." ); // Analyzing batch sentiments AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch( documents, "en", new AnalyzeSentimentOptions().setIncludeOpinionMining(true)); // Analyzed sentiment for each of documents from a batch of documents resultCollection.forEach(analyzeSentimentResult -> { System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId()); DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment(); documentSentiment.getSentences().forEach(sentenceSentiment -> { System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment()); sentenceSentiment.getOpinions().forEach(opinion -> { TargetSentiment targetSentiment = opinion.getTarget(); System.out.printf("\tTarget sentiment: %s, target text: %s%n", targetSentiment.getSentiment(), targetSentiment.getText()); for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) { System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n", assessmentSentiment.getSentiment(), assessmentSentiment.getText(), assessmentSentiment.isNegated()); } }); }); });
documents
- A list of documents to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.language
- The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for
English as the default.options
- The additional configurable options
that may be passed when
analyzing sentiments.AnalyzeSentimentResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.@Deprecated public com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
Use TextAnalyticsClient.analyzeSentimentBatchWithResponse(Iterable, AnalyzeSentimentOptions, Context)
instead.Code Sample
Analyze sentiment in a list of TextDocumentInput
with provided request options.
List<TextDocumentInput> textDocumentInputs = Arrays.asList( new TextDocumentInput("1", "The hotel was dark and unclean. The restaurant had amazing gnocchi.") .setLanguage("en"), new TextDocumentInput("2", "The restaurant had amazing gnocchi. The hotel was dark and unclean.") .setLanguage("en") ); // Analyzing batch sentiments Response<AnalyzeSentimentResultCollection> response = textAnalyticsClient.analyzeSentimentBatchWithResponse(textDocumentInputs, new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE); // Response's status code System.out.printf("Status code of request response: %d%n", response.getStatusCode()); AnalyzeSentimentResultCollection resultCollection = response.getValue(); // Batch statistics TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics(); System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n", batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount()); // Analyzed sentiment for each of documents from a batch of documents resultCollection.forEach(analyzeSentimentResult -> { System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId()); // Valid document DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment(); System.out.printf( "Recognized document sentiment: %s, positive score: %.2f, neutral score: %.2f, " + "negative score: %.2f.%n", documentSentiment.getSentiment(), documentSentiment.getConfidenceScores().getPositive(), documentSentiment.getConfidenceScores().getNeutral(), documentSentiment.getConfidenceScores().getNegative()); documentSentiment.getSentences().forEach(sentenceSentiment -> { System.out.printf( "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f," + " negative score: %.2f.%n", sentenceSentiment.getSentiment(), sentenceSentiment.getConfidenceScores().getPositive(), sentenceSentiment.getConfidenceScores().getNeutral(), sentenceSentiment.getConfidenceScores().getNegative()); }); });
documents
- A list of documents
to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.options
- The options
to configure the scoring model for documents
and show statistics.context
- Additional context that is passed through the HTTP pipeline during the service call.Response
that contains an AnalyzeSentimentResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.public com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, AnalyzeSentimentOptions options, com.azure.core.util.Context context)
With includeOpinionMining
of
AnalyzeSentimentOptions
set to true, the output will include the opinion mining results. It mines the
opinions of a sentence and conducts more granular analysis around the aspects in the text
(also known as aspect-based sentiment analysis).
Code Sample
Analyze sentiment and mine the opinions for each sentence in a list of
TextDocumentInput
with provided AnalyzeSentimentOptions
options.
List<TextDocumentInput> textDocumentInputs = Arrays.asList( new TextDocumentInput("1", "The hotel was dark and unclean. The restaurant had amazing gnocchi.") .setLanguage("en"), new TextDocumentInput("2", "The restaurant had amazing gnocchi. The hotel was dark and unclean.") .setLanguage("en") ); AnalyzeSentimentOptions options = new AnalyzeSentimentOptions().setIncludeOpinionMining(true) .setIncludeStatistics(true); // Analyzing batch sentiments Response<AnalyzeSentimentResultCollection> response = textAnalyticsClient.analyzeSentimentBatchWithResponse(textDocumentInputs, options, Context.NONE); // Response's status code System.out.printf("Status code of request response: %d%n", response.getStatusCode()); AnalyzeSentimentResultCollection resultCollection = response.getValue(); // Batch statistics TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics(); System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n", batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount()); // Analyzed sentiment for each of documents from a batch of documents resultCollection.forEach(analyzeSentimentResult -> { System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId()); DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment(); documentSentiment.getSentences().forEach(sentenceSentiment -> { System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment()); sentenceSentiment.getOpinions().forEach(opinion -> { TargetSentiment targetSentiment = opinion.getTarget(); System.out.printf("\tTarget sentiment: %s, target text: %s%n", targetSentiment.getSentiment(), targetSentiment.getText()); for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) { System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n", assessmentSentiment.getSentiment(), assessmentSentiment.getText(), assessmentSentiment.isNegated()); } }); }); });
documents
- A list of documents
to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.options
- The additional configurable options
that may be passed when
analyzing sentiments.context
- Additional context that is passed through the HTTP pipeline during the service call.Response
that contains an AnalyzeSentimentResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<String> documents, String language, AnalyzeHealthcareEntitiesOptions options)
Analyze healthcare entities, entity data sources, and entity relations in a list of documents
with provided request options.
Note: In order to use this functionality, a request to access the public preview is required.
Azure Active Directory (AAD) is not currently supported. For more information see
this.
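Code Sample
This string-based overload has no sample in this section; the sketch below adapts the TextDocumentInput sample shown for the next overload to plain strings (assumptions: a pre-built textAnalyticsClient and the gated preview access noted above).

```java
List<String> documents = Arrays.asList(
    "The patient is a 54-year-old gentleman with a history of progressive angina"
        + " over the past several months.");
SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable> syncPoller =
    textAnalyticsClient.beginAnalyzeHealthcareEntities(documents, "en",
        new AnalyzeHealthcareEntitiesOptions().setIncludeStatistics(true));
syncPoller.waitForCompletion();
AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();
result.forEach(resultCollection -> resultCollection.forEach(healthcareEntitiesResult -> {
    System.out.println("Document ID: " + healthcareEntitiesResult.getId());
    // Recognized healthcare entities for each document
    healthcareEntitiesResult.getEntities().forEach(healthcareEntity ->
        System.out.printf("\tText: %s, category: %s, confidence score: %f.%n",
            healthcareEntity.getText(), healthcareEntity.getCategory(),
            healthcareEntity.getConfidenceScore()));
}));
```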
See this for supported languages in the Text Analytics API.documents
- A list of documents to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.language
- The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for
English as the default.options
- The additional configurable options
that may be passed
when analyzing healthcare entities.SyncPoller
that polls the analyze healthcare operation until it has completed, has failed,
or has been cancelled. The completed operation returns a PagedIterable
of
AnalyzeHealthcareEntitiesResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.TextAnalyticsException
- If analyze operation fails.public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<TextDocumentInput> documents, AnalyzeHealthcareEntitiesOptions options, com.azure.core.util.Context context)
Analyze healthcare entities, entity data sources, and entity relations in a list of TextDocumentInput
with provided request options.
Note: In order to use this functionality, a request to access the public preview is required.
Azure Active Directory (AAD) is not currently supported. For more information see
this.
See this for supported languages in the Text Analytics API.
Code Sample
Analyze healthcare entities, entity data sources, and entity relations in a list of
TextDocumentInput
with provided request options to
show statistics.
List<TextDocumentInput> documents = new ArrayList<>(); for (int i = 0; i < 3; i++) { documents.add(new TextDocumentInput(Integer.toString(i), "The patient is a 54-year-old gentleman with a history of progressive angina over " + "the past several months.")); } // Request options: show statistics and model version AnalyzeHealthcareEntitiesOptions options = new AnalyzeHealthcareEntitiesOptions() .setIncludeStatistics(true); SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable> syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents, options, Context.NONE); syncPoller.waitForCompletion(); AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult(); // Task operation statistics final AnalyzeHealthcareEntitiesOperationDetail operationResult = syncPoller.poll().getValue(); System.out.printf("Operation created time: %s, expiration time: %s.%n", operationResult.getCreatedAt(), operationResult.getExpiresAt()); result.forEach(analyzeHealthcareEntitiesResultCollection -> { // Model version System.out.printf("Results of Azure Text Analytics \"Analyze Healthcare\" Model, version: %s%n", analyzeHealthcareEntitiesResultCollection.getModelVersion()); TextDocumentBatchStatistics healthcareTaskStatistics = analyzeHealthcareEntitiesResultCollection.getStatistics(); // Batch statistics System.out.printf("Documents statistics: document count = %s, erroneous document count = %s," + " transaction count = %s, valid document count = %s.%n", healthcareTaskStatistics.getDocumentCount(), healthcareTaskStatistics.getInvalidDocumentCount(), healthcareTaskStatistics.getTransactionCount(), healthcareTaskStatistics.getValidDocumentCount()); analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> { System.out.println("document id = " + healthcareEntitiesResult.getId()); System.out.println("Document entities: "); AtomicInteger ct = new AtomicInteger(); healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> { System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n", ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(), healthcareEntity.getConfidenceScore()); IterableStream<EntityDataSource> healthcareEntityDataSources = healthcareEntity.getDataSources(); if (healthcareEntityDataSources != null) { healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf( "\t\tEntity ID in data source: %s, data source: %s.%n", healthcareEntityLink.getEntityId(), healthcareEntityLink.getName())); } }); // Healthcare entity relation groups healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> { System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType()); entityRelation.getRoles().forEach(role -> { final HealthcareEntity entity = role.getEntity(); System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n", entity.getText(), entity.getCategory(), role.getName()); }); }); }); });
documents
- A list of documents
to be analyzed.options
- The additional configurable options
that may be passed
when analyzing healthcare entities.context
- Additional context that is passed through the HTTP pipeline during the service call.SyncPoller
that polls the analyze healthcare operation until it has completed, has failed,
or has been cancelled. The completed operation returns a PagedIterable
of
AnalyzeHealthcareEntitiesResultCollection
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.TextAnalyticsException
- If analyze operation fails.public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions, String language, AnalyzeActionsOptions options)
Execute actions, such as entities recognition and key phrases extraction, on a list of documents
with provided request options.
See this for supported languages in the Text Analytics API.
Code Sample
List<String> documents = Arrays.asList( "Elon Musk is the CEO of SpaceX and Tesla.", "My SSN is 859-98-0987" ); SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller = textAnalyticsClient.beginAnalyzeActions( documents, new TextAnalyticsActions().setDisplayName("{tasks_display_name}") .setRecognizeEntitiesActions(new RecognizeEntitiesAction()) .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()), "en", new AnalyzeActionsOptions().setIncludeStatistics(false)); syncPoller.waitForCompletion(); AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult(); result.forEach(analyzeActionsResult -> { System.out.println("Entities recognition action results:"); analyzeActionsResult.getRecognizeEntitiesResults().forEach( actionResult -> { if (!actionResult.isError()) { actionResult.getDocumentsResults().forEach( entitiesResult -> entitiesResult.getEntities().forEach( entity -> System.out.printf( "Recognized entity: %s, entity category: %s, entity subcategory: %s," + " confidence score: %f.%n", entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()))); } }); System.out.println("Key phrases extraction action results:"); analyzeActionsResult.getExtractKeyPhrasesResults().forEach( actionResult -> { if (!actionResult.isError()) { actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> { System.out.println("Extracted phrases:"); extractKeyPhraseResult.getKeyPhrases() .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases)); }); } }); });
documents
- A list of documents to be analyzed.
For text length limits, maximum batch size, and supported text encoding, see
data limits.actions
- The actions
that contains all actions to be executed.
An action is one task of execution, such as a single task of 'Key Phrases Extraction' on the given document
inputs.language
- The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for
English as the default.options
- The additional configurable options
that may be passed when
analyzing a collection of actions.SyncPoller
that polls the analyze-actions operation until it has completed,
has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.TextAnalyticsException
- If analyze operation fails.public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<TextDocumentInput> documents, TextAnalyticsActions actions, AnalyzeActionsOptions options, com.azure.core.util.Context context)
Execute actions, such as entities recognition and key phrases extraction, on a list of TextDocumentInput
with provided request options.
See this for supported languages in the Text Analytics API.
Code Sample
List<TextDocumentInput> documents = Arrays.asList( new TextDocumentInput("0", "Elon Musk is the CEO of SpaceX and Tesla.").setLanguage("en"), new TextDocumentInput("1", "My SSN is 859-98-0987").setLanguage("en") ); SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller = textAnalyticsClient.beginAnalyzeActions( documents, new TextAnalyticsActions().setDisplayName("{tasks_display_name}") .setRecognizeEntitiesActions(new RecognizeEntitiesAction()) .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()), new AnalyzeActionsOptions().setIncludeStatistics(false), Context.NONE); syncPoller.waitForCompletion(); AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult(); result.forEach(analyzeActionsResult -> { System.out.println("Entities recognition action results:"); analyzeActionsResult.getRecognizeEntitiesResults().forEach( actionResult -> { if (!actionResult.isError()) { actionResult.getDocumentsResults().forEach( entitiesResult -> entitiesResult.getEntities().forEach( entity -> System.out.printf( "Recognized entity: %s, entity category: %s, entity subcategory: %s," + " confidence score: %f.%n", entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()))); } }); System.out.println("Key phrases extraction action results:"); analyzeActionsResult.getExtractKeyPhrasesResults().forEach( actionResult -> { if (!actionResult.isError()) { actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> { System.out.println("Extracted phrases:"); extractKeyPhraseResult.getKeyPhrases() .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases)); }); } }); });
documents
- A list of documents
to be analyzed.actions
- The actions
that contains all actions to be executed.
An action is one task of execution, such as a single task of 'Key Phrases Extraction' on the given document
inputs.options
- The additional configurable options
that may be passed when
analyzing a collection of actions.context
- Additional context that is passed through the HTTP pipeline during the service call.SyncPoller
that polls the analyze-actions operation until it has completed,
has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable
.NullPointerException
- if documents
is null.IllegalArgumentException
- if documents
is empty.TextAnalyticsException
- If analyze operation fails.Copyright © 2021 Microsoft Corporation. All rights reserved.