Package com.azure.ai.formrecognizer.documentanalysis

Azure Form Recognizer is a cloud-based service provided by Microsoft Azure that utilizes machine learning to extract information from various types of documents. Form Recognizer applies machine-learning-based optical character recognition (OCR) and document understanding technologies to classify documents, extract text, tables, structure, and key-value pairs from documents. You can also label and train custom models to automate data extraction from structured, semi-structured, and unstructured documents.

The service uses advanced OCR technology to extract text and key-value pairs from documents, enabling organizations to automate data entry tasks that would otherwise require manual effort. It can recognize and extract information like dates, addresses, invoice numbers, line items, and other relevant data points from documents.

The Azure Form Recognizer client library allows Java developers to interact with the Azure Form Recognizer service. It provides a set of classes and methods that abstract the underlying RESTful API of Azure Form Recognizer, making it easier to integrate the service into Java applications.

The Azure Form Recognizer client library provides the following capabilities:

  1. Document Analysis: It allows you to submit documents for analysis to detect and extract information like text, key-value pairs, tables, language, and fields. You can analyze both structured and unstructured documents.
  2. Model Management: It enables you to manage the models in your account by building, listing, and deleting them, and to check how many custom models your account holds against its limit.
  3. Analysis Results: It provides methods to retrieve and interpret analysis results, including extracted text and field values, confidence scores, and document layout information.
  4. Polling and Callbacks: It includes mechanisms for polling the service to check the status of an analysis operation, or registering callbacks to receive notifications when the analysis is complete (see the asynchronous sketch after this list).
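
For instance, the asynchronous client models an analysis call as a long-running operation exposed through a PollerFlux. A minimal sketch of the callback-style pattern, assuming an already configured DocumentAnalysisAsyncClient and a placeholder document URL:

 documentAnalysisAsyncClient.beginAnalyzeDocumentFromUrl("prebuilt-read", "{documentUrl}")
     .last() // wait for the final poll response of the long-running operation
     .flatMap(AsyncPollResponse::getFinalResult) // retrieve the AnalyzeResult once polling completes
     .subscribe(analyzeResult ->
         System.out.printf("Analyzed %d page(s).%n", analyzeResult.getPages().size()));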

Getting Started

The Azure Form Recognizer library provides analysis clients like DocumentAnalysisAsyncClient and DocumentAnalysisClient to connect to the Form Recognizer Azure Cognitive Service to analyze information from documents and extract it into structured data. It also provides administration clients like DocumentModelAdministrationClient and DocumentModelAdministrationAsyncClient to build and manage models from custom documents.

Note: This client only supports DocumentAnalysisServiceVersion.V2022_08_31 and newer. To use an older service version, use FormRecognizerClient and FormTrainingClient.

Service clients are the point of interaction for developers to use Azure Form Recognizer. DocumentAnalysisClient is the synchronous service client and DocumentAnalysisAsyncClient is the asynchronous service client. The examples shown in this document use a credential object named DefaultAzureCredential for authentication, which is appropriate for most scenarios, including local development and production environments. Additionally, we recommend using managed identity for authentication in production environments. You can find more information on different ways of authenticating and their corresponding credential types in the Azure Identity documentation.

Sample: Construct a DocumentAnalysisClient with DefaultAzureCredential

The following code sample demonstrates the creation of a DocumentAnalysisClient, using the DefaultAzureCredentialBuilder to configure it.

 DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
     .endpoint("{endpoint}")
     .credential(new DefaultAzureCredentialBuilder().build())
     .buildClient();
 

Alternatively, see the code sample below to create a client using an AzureKeyCredential.

 DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
     .credential(new AzureKeyCredential("{key}"))
     .endpoint("{endpoint}")
     .buildClient();
 

Let's take a look at the analysis client scenarios and their respective usage below.



Analyzing documents with prebuilt models

Refer to the list of supported Form Recognizer models and their associated output to help you choose the best model to address your document scenario needs.

You can use a prebuilt document analysis or domain-specific model, or build a custom model tailored to your specific business needs and use cases.

Sample: Analyze with the prebuilt read model from url source

The following code sample demonstrates how to analyze textual elements, such as words, lines, styles, and text language information from documents using the prebuilt read model.

 String documentUrl = "documentUrl";

 SyncPoller<OperationResult, AnalyzeResult> analyzeResultPoller =
     documentAnalysisClient.beginAnalyzeDocumentFromUrl("prebuilt-read", documentUrl);
 AnalyzeResult analyzeResult = analyzeResultPoller.getFinalResult();

 System.out.println("Detected Languages: ");
 for (DocumentLanguage language : analyzeResult.getLanguages()) {
     System.out.printf("Found language with locale %s and confidence %.2f",
         language.getLocale(),
         language.getConfidence());
 }

 System.out.println("Detected Styles: ");
 for (DocumentStyle style: analyzeResult.getStyles()) {
     if (style.isHandwritten()) {
         System.out.printf("Found handwritten content %s with confidence %.2f",
             style.getSpans().stream().map(span -> analyzeResult.getContent()
                 .substring(span.getOffset(), span.getLength())),
             style.getConfidence());
     }
 }

 // pages
 analyzeResult.getPages().forEach(documentPage -> {
     System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
         documentPage.getWidth(),
         documentPage.getHeight(),
         documentPage.getUnit());

     // lines
     documentPage.getLines().forEach(documentLine ->
         System.out.printf("Line '%s' is within a bounding polygon %s.%n",
             documentLine.getContent(),
             documentLine.getBoundingPolygon().stream().map(point -> String.format("[%.2f, %.2f]", point.getX(),
                 point.getY())).collect(Collectors.joining(", "))));
 });
 

You can also analyze a local file with prebuilt models using the DocumentAnalysisClient.

Sample: Analyze local file with the prebuilt read model

The following code sample demonstrates how to analyze a local file with the "prebuilt-read" analysis model.

 File document = new File("{local/file_path/fileName.jpg}");
 SyncPoller<OperationResult, AnalyzeResult> analyzeResultPoller =
     documentAnalysisClient.beginAnalyzeDocument("prebuilt-read",
         BinaryData.fromFile(document.toPath(),
             (int) document.length()));
 AnalyzeResult analyzeResult = analyzeResultPoller.getFinalResult();

 System.out.println("Detected Languages: ");
 for (DocumentLanguage language : analyzeResult.getLanguages()) {
     System.out.printf("Found language with locale %s and confidence %.2f",
         language.getLocale(),
         language.getConfidence());
 }

 System.out.println("Detected Styles: ");
 for (DocumentStyle style: analyzeResult.getStyles()) {
     if (style.isHandwritten()) {
         System.out.printf("Found handwritten content %s with confidence %.2f",
             style.getSpans().stream().map(span -> analyzeResult.getContent()
                 .substring(span.getOffset(), span.getLength())),
             style.getConfidence());
     }
 }

 // pages
 analyzeResult.getPages().forEach(documentPage -> {
     System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
         documentPage.getWidth(),
         documentPage.getHeight(),
         documentPage.getUnit());

     // lines
     documentPage.getLines().forEach(documentLine ->
         System.out.printf("Line '%s' is within a bounding polygon %s.%n",
             documentLine.getContent(),
             documentLine.getBoundingPolygon().stream().map(point -> String.format("[%.2f, %.2f]", point.getX(),
                 point.getY())).collect(Collectors.joining(", "))));
 });
 

For more information on which supported model to use, refer to the models usage documentation.
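
As an example of a domain-specific prebuilt model, the sketch below reads a typed field from an invoice using the "prebuilt-invoice" model. It assumes a configured documentAnalysisClient, a placeholder invoice URL, and that the analyzed document exposes an "InvoiceId" field:

 AnalyzeResult invoiceResult = documentAnalysisClient
     .beginAnalyzeDocumentFromUrl("prebuilt-invoice", "{invoiceUrl}")
     .getFinalResult();
 for (AnalyzedDocument invoice : invoiceResult.getDocuments()) {
     DocumentField invoiceIdField = invoice.getFields().get("InvoiceId");
     if (invoiceIdField != null && DocumentFieldType.STRING == invoiceIdField.getType()) {
         System.out.printf("Invoice ID: %s, confidence: %.2f%n",
             invoiceIdField.getValueAsString(),
             invoiceIdField.getConfidence());
     }
 }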


Analyzing documents with custom models

Custom models are trained with your own data, so they're tailored to your documents. For more information on how to build your own custom model, see build a model.
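
The sample below also uses a DocumentModelAdministrationClient, referenced as documentModelAdminClient; constructing one mirrors the analysis client shown earlier:

 DocumentModelAdministrationClient documentModelAdminClient = new DocumentModelAdministrationClientBuilder()
     .endpoint("{endpoint}")
     .credential(new DefaultAzureCredentialBuilder().build())
     .buildClient();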

Sample: Analyze documents using custom trained model

This sample demonstrates how to analyze text, field values, selection marks, and table data from custom documents.

 String blobContainerUrl = "{SAS_URL_of_your_container_in_blob_storage}";
 // The shared access signature (SAS) Url of your Azure Blob Storage container with your custom documents.
 String prefix = "{blob_name_prefix}";
 // Build custom document analysis model
 SyncPoller<OperationResult, DocumentModelDetails> buildOperationPoller =
     documentModelAdminClient.beginBuildDocumentModel(blobContainerUrl,
         DocumentModelBuildMode.TEMPLATE,
         prefix,
         new BuildDocumentModelOptions().setModelId("my-custom-built-model").setDescription("model desc"),
         Context.NONE);

 DocumentModelDetails customBuildModel = buildOperationPoller.getFinalResult();

 // analyze using custom-built model
 String modelId = customBuildModel.getModelId();
 String documentUrl = "documentUrl";
 SyncPoller<OperationResult, AnalyzeResult> analyzeDocumentPoller =
     documentAnalysisClient.beginAnalyzeDocumentFromUrl(modelId, documentUrl);

 AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();

 for (int i = 0; i < analyzeResult.getDocuments().size(); i++) {
     final AnalyzedDocument analyzedDocument = analyzeResult.getDocuments().get(i);
     System.out.printf("----------- Analyzing custom document %d -----------%n", i);
     System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n",
         analyzedDocument.getDocType(), analyzedDocument.getConfidence());
     analyzedDocument.getFields().forEach((key, documentField) -> {
         System.out.printf("Document Field content: %s%n", documentField.getContent());
         System.out.printf("Document Field confidence: %.2f%n", documentField.getConfidence());
         System.out.printf("Document Field Type: %s%n", documentField.getType());
         System.out.printf("Document Field found within bounding region: %s%n",
             documentField.getBoundingRegions().toString());
     });
 }

 analyzeResult.getPages().forEach(documentPage -> {
     System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
         documentPage.getWidth(),
         documentPage.getHeight(),
         documentPage.getUnit());

     // lines
     documentPage.getLines().forEach(documentLine ->
         System.out.printf("Line '%s' is within a bounding box %s.%n",
             documentLine.getContent(),
             documentLine.getBoundingPolygon().toString()));

     // words
     documentPage.getWords().forEach(documentWord ->
         System.out.printf("Word '%s' has a confidence score of %.2f.%n",
             documentWord.getContent(),
             documentWord.getConfidence()));
 });

 // tables
 List<DocumentTable> tables = analyzeResult.getTables();
 for (int i = 0; i < tables.size(); i++) {
     DocumentTable documentTable = tables.get(i);
     System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
         documentTable.getColumnCount());
     documentTable.getCells().forEach(documentTableCell -> {
         System.out.printf("Cell '%s', has row index %d and column index %d.%n",
             documentTableCell.getContent(),
             documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
     });
     System.out.println();
 }
 
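Beyond building models, the DocumentModelAdministrationClient covers the rest of the model management surface described earlier: listing models, checking the account's custom model quota, and deleting models. A minimal sketch, reusing the documentModelAdminClient and the model built above (method names per the 4.x administration API):

 // Check how many custom models the account currently holds, and its limit.
 ResourceDetails resourceDetails = documentModelAdminClient.getResourceDetails();
 System.out.printf("Custom models: %d of %d allowed.%n",
     resourceDetails.getCustomDocumentModelCount(),
     resourceDetails.getCustomDocumentModelLimit());

 // List the models stored in the account.
 documentModelAdminClient.listDocumentModels().forEach(modelSummary ->
     System.out.printf("Model ID: %s, created on: %s%n",
         modelSummary.getModelId(),
         modelSummary.getCreatedOn()));

 // Delete the model built in the sample above.
 documentModelAdminClient.deleteDocumentModel("my-custom-built-model");
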
See Also: