Class EncryptedBlobAsyncClient


  • public class EncryptedBlobAsyncClient
    extends BlobAsyncClient
    This class provides a client-side encryption client that contains generic blob operations for Azure Storage Blobs. Operations allowed by the client are uploading, downloading and copying a blob, retrieving and setting metadata, retrieving and setting HTTP headers, and deleting and un-deleting a blob. The upload and download operations allow for client-side encryption and decryption of the data. Note: setting metadata in particular is unsafe and should only be done with caution.

    Please refer to the Azure Docs for Client-Side Encryption for more information.

    This client is instantiated through EncryptedBlobClientBuilder; see the construction sketch below.

    For operations on a specific blob type (i.e., append, block, or page) use getAppendBlobAsyncClient, getBlockBlobAsyncClient, or getPageBlobAsyncClient to construct a client that allows blob-specific operations. Note that these types do not support client-side encryption, though decryption is possible if the associated block/page/append blob contains encrypted data.

    Please refer to the Azure Docs for more information.
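
    A minimal construction sketch, assuming the key and keyWrapAlgorithm variables are an AsyncKeyEncryptionKey and key-wrap algorithm supplied by the caller (for example, resolved from Azure Key Vault), and that azure-identity is available for DefaultAzureCredentialBuilder; the endpoint, container, and blob names below are placeholders:

     EncryptedBlobAsyncClient client = new EncryptedBlobClientBuilder()
         .key(key, keyWrapAlgorithm) // client-side encryption key and wrap algorithm supplied by the caller
         .endpoint("https://<account-name>.blob.core.windows.net")
         .containerName("<container-name>")
         .blobName("<blob-name>")
         .credential(new DefaultAzureCredentialBuilder().build()) // any supported credential type may be used
         .buildEncryptedBlobAsyncClient();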

    • Method Detail

      • upload

        public Mono<BlockBlobItem> upload​(Flux<ByteBuffer> data,
                                          ParallelTransferOptions parallelTransferOptions)
        Creates a new block blob. By default, this method will not overwrite an existing blob.

        Updating an existing block blob overwrites any existing blob metadata. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and BlockBlobAsyncClient.commitBlockList(List) on a regular blob client. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

        The data passed need not support multiple subscriptions (i.e., be replayable), as is required by other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method supports uploading any arbitrary data source, including network streams. This behavior is possible because this method performs some internal buffering as configured by the block size and maximum concurrency in ParallelTransferOptions, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

        Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean we will have to stage fewer blocks and therefore perform fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

        Code Samples

         ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
             .setBlockSizeLong(blockSize)
             .setMaxConcurrency(maxConcurrency);
         client.upload(data, parallelTransferOptions).subscribe(response ->
             System.out.printf("Uploaded BlockBlob MD5 is %s%n",
                 Base64.getEncoder().encodeToString(response.getContentMd5())));
         
        Overrides:
        upload in class BlobAsyncClient
        Parameters:
        data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
        parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
        Returns:
        A reactive response containing the information of the uploaded block blob.
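
        The sample above assumes a data source and transfer values defined elsewhere; a minimal sketch of how they might be prepared (the content, block size, and concurrency shown are illustrative placeholders, not recommendations):

         Flux<ByteBuffer> data = Flux.just(ByteBuffer.wrap("Hello, Azure".getBytes(StandardCharsets.UTF_8)));
         long blockSize = 4L * 1024 * 1024; // 4 MiB blocks, illustrative only
         int maxConcurrency = 2;            // illustrative degree of parallelism
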
      • upload

        public Mono<BlockBlobItem> upload​(Flux<ByteBuffer> data,
                                          ParallelTransferOptions parallelTransferOptions,
                                          boolean overwrite)
        Creates a new block blob, or updates the content of an existing block blob.

        Updating an existing block blob overwrites any existing blob metadata. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and BlockBlobAsyncClient.commitBlockList(List) on a regular blob client. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

        The data passed need not support multiple subscriptions (i.e., be replayable), as is required by other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method supports uploading any arbitrary data source, including network streams. This behavior is possible because this method performs some internal buffering as configured by the block size and maximum concurrency in ParallelTransferOptions, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

        Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean we will have to stage fewer blocks and therefore perform fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

        Code Samples

         ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
             .setBlockSizeLong(blockSize)
             .setMaxConcurrency(maxConcurrency);
         boolean overwrite = false; // Default behavior
         client.upload(data, parallelTransferOptions, overwrite).subscribe(response ->
             System.out.printf("Uploaded BlockBlob MD5 is %s%n",
                 Base64.getEncoder().encodeToString(response.getContentMd5())));
         
        Overrides:
        upload in class BlobAsyncClient
        Parameters:
        data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
        parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
        overwrite - Whether to overwrite if the blob exists.
        Returns:
        A reactive response containing the information of the uploaded block blob.
      • uploadWithResponse

        public Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadWithResponse​(Flux<ByteBuffer> data,
                                                                                         ParallelTransferOptions parallelTransferOptions,
                                                                                         BlobHttpHeaders headers,
                                                                                         Map<String,​String> metadata,
                                                                                         AccessTier tier,
                                                                                         BlobRequestConditions requestConditions)
        Creates a new block blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing blob metadata. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and BlockBlobAsyncClient.commitBlockList(List), which this method uses internally. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

        The data passed need not support multiple subscriptions (i.e., be replayable), as is required by other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method supports uploading any arbitrary data source, including network streams. This behavior is possible because this method performs some internal buffering as configured by the block size and maximum concurrency in ParallelTransferOptions, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

        Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean we will have to stage fewer blocks and therefore perform fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

        Code Samples

         BlobHttpHeaders headers = new BlobHttpHeaders()
             .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
             .setContentLanguage("en-US")
             .setContentType("binary");
        
         Map<String, String> metadata = new HashMap<>(Collections.singletonMap("metadata", "value"));
         BlobRequestConditions requestConditions = new BlobRequestConditions()
             .setLeaseId(leaseId)
             .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
         ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
             .setBlockSizeLong(blockSize)
             .setMaxConcurrency(maxConcurrency);
        
         client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, AccessTier.HOT, requestConditions)
             .subscribe(response -> System.out.printf("Uploaded BlockBlob MD5 is %s%n",
                 Base64.getEncoder().encodeToString(response.getValue().getContentMd5())));
         
        Overrides:
        uploadWithResponse in class BlobAsyncClient
        Parameters:
        data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
        parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
        headers - BlobHttpHeaders
        metadata - Metadata to associate with the blob. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
        tier - AccessTier for the destination blob.
        requestConditions - BlobRequestConditions
        Returns:
        A reactive response containing the information of the uploaded block blob.
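
        Because this variant returns the full Response wrapper, the HTTP status can be inspected alongside the uploaded blob's properties; a short sketch reusing the variables from the sample above:

         client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, AccessTier.HOT, requestConditions)
             .subscribe(response -> System.out.printf("Upload returned status %d with ETag %s%n",
                 response.getStatusCode(), response.getValue().getETag()));
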
      • uploadWithResponse

        public Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadWithResponse​(BlobParallelUploadOptions options)
        Creates a new block blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing blob metadata. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and BlockBlobAsyncClient.commitBlockList(List), which this method uses internally. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.

        The data passed need not support multiple subscriptions (i.e., be replayable), as is required by other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method supports uploading any arbitrary data source, including network streams. This behavior is possible because this method performs some internal buffering as configured by the block size and maximum concurrency in ParallelTransferOptions, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.

        Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean we will have to stage fewer blocks and therefore perform fewer I/O operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.

        Code Samples

         BlobHttpHeaders headers = new BlobHttpHeaders()
             .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
             .setContentLanguage("en-US")
             .setContentType("binary");
        
         Map<String, String> metadata = new HashMap<>(Collections.singletonMap("metadata", "value"));
         Map<String, String> tags = new HashMap<>(Collections.singletonMap("tag", "value"));
         BlobRequestConditions requestConditions = new BlobRequestConditions()
             .setLeaseId(leaseId)
             .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
         ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
             .setBlockSizeLong(blockSize)
             .setMaxConcurrency(maxConcurrency);
        
         client.uploadWithResponse(new BlobParallelUploadOptions(data)
             .setParallelTransferOptions(parallelTransferOptions).setHeaders(headers).setMetadata(metadata)
             .setTags(tags).setTier(AccessTier.HOT).setRequestConditions(requestConditions))
             .subscribe(response -> System.out.printf("Uploaded BlockBlob MD5 is %s%n",
                 Base64.getEncoder().encodeToString(response.getValue().getContentMd5())));
         
        Note: the data wrapped by the options does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
        Overrides:
        uploadWithResponse in class BlobAsyncClient
        Parameters:
        options - BlobParallelUploadOptions
        Returns:
        A reactive response containing the information of the uploaded block blob.
      • uploadFromFile

        public Mono<Void> uploadFromFile​(String filePath)
        Creates a new block blob with the content of the specified file. By default, this method will not overwrite an existing blob.

        Code Samples

         client.uploadFromFile(filePath)
             .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
             .subscribe(completion -> System.out.println("Upload from file succeeded"));
         
        Overrides:
        uploadFromFile in class BlobAsyncClient
        Parameters:
        filePath - Path of the file to upload
        Returns:
        An empty response
      • uploadFromFile

        public Mono<Void> uploadFromFile​(String filePath,
                                         boolean overwrite)
        Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

        Code Samples

         boolean overwrite = false; // Default behavior
         client.uploadFromFile(filePath, overwrite)
             .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
             .subscribe(completion -> System.out.println("Upload from file succeeded"));
         
        Overrides:
        uploadFromFile in class BlobAsyncClient
        Parameters:
        filePath - Path of the file to upload
        overwrite - Whether to overwrite if the blob exists.
        Returns:
        An empty response
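
        When overwrite is false and the blob already exists, the failure is delivered as an error signal on the returned Mono; a sketch of recovering from that case, assuming the conflict surfaces as a BlobStorageException:

         client.uploadFromFile(filePath, false)
             .onErrorResume(BlobStorageException.class, error -> {
                 System.err.printf("Blob already exists, skipping upload: %s%n", error.getErrorCode());
                 return Mono.empty(); // treat the existing blob as acceptable and continue
             })
             .subscribe(unused -> { },
                 error -> System.err.println("Upload from file failed: " + error),
                 () -> System.out.println("Upload from file completed"));
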
      • uploadFromFile

        public Mono<Void> uploadFromFile​(String filePath,
                                         ParallelTransferOptions parallelTransferOptions,
                                         BlobHttpHeaders headers,
                                         Map<String,​String> metadata,
                                         AccessTier tier,
                                         BlobRequestConditions requestConditions)
        Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

        Code Samples

         BlobHttpHeaders headers = new BlobHttpHeaders()
             .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
             .setContentLanguage("en-US")
             .setContentType("binary");
        
         Map<String, String> metadata = new HashMap<>(Collections.singletonMap("metadata", "value"));
         BlobRequestConditions requestConditions = new BlobRequestConditions()
             .setLeaseId(leaseId)
             .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
        
         ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
             .setBlockSizeLong(blockSize);
        
         client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, AccessTier.HOT, requestConditions)
             .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
             .subscribe(completion -> System.out.println("Upload from file succeeded"));
         
        Overrides:
        uploadFromFile in class BlobAsyncClient
        Parameters:
        filePath - Path of the file to upload
        parallelTransferOptions - ParallelTransferOptions to use to upload from file.
        headers - BlobHttpHeaders
        metadata - Metadata to associate with the blob. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
        tier - AccessTier for the destination blob.
        requestConditions - BlobRequestConditions
        Returns:
        An empty response
        Throws:
        IllegalArgumentException - If blockSize is less than 0 or greater than 4000 MB
        UncheckedIOException - If an I/O error occurs
      • uploadFromFileWithResponse

        public Mono<com.azure.core.http.rest.Response<BlockBlobItem>> uploadFromFileWithResponse​(BlobUploadFromFileOptions options)
        Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.

        Code Samples

         BlobHttpHeaders headers = new BlobHttpHeaders()
             .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
             .setContentLanguage("en-US")
             .setContentType("binary");
        
         Map<String, String> metadata = new HashMap<>(Collections.singletonMap("metadata", "value"));
         Map<String, String> tags = new HashMap<>(Collections.singletonMap("tag", "value"));
         BlobRequestConditions requestConditions = new BlobRequestConditions()
             .setLeaseId(leaseId)
             .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
        
         ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions()
             .setBlockSizeLong(blockSize);
        
         client.uploadFromFileWithResponse(new BlobUploadFromFileOptions(filePath)
             .setParallelTransferOptions(parallelTransferOptions).setHeaders(headers).setMetadata(metadata).setTags(tags)
             .setTier(AccessTier.HOT).setRequestConditions(requestConditions))
             .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
             .subscribe(completion -> System.out.println("Upload from file succeeded"));
         
        Overrides:
        uploadFromFileWithResponse in class BlobAsyncClient
        Parameters:
        options - BlobUploadFromFileOptions
        Returns:
        A reactive response containing the information of the uploaded block blob.
        Throws:
        IllegalArgumentException - If blockSize is less than 0 or greater than 4000 MB
        UncheckedIOException - If an I/O error occurs