public class BlobAsyncClient extends BlobAsyncClientBase
This client is instantiated through BlobClientBuilder or retrieved via getBlobAsyncClient. For operations on a specific blob type (i.e., append, block, or page), use getAppendBlobAsyncClient, getBlockBlobAsyncClient, or getPageBlobAsyncClient to construct a client that allows blob-specific operations.
Please refer to the Azure Docs for more information.
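As a hedged illustration of the flow described above (build the client first, then derive a blob-type-specific client), something like the following should work with the azure-storage-blob package on the classpath. The endpoint, container, blob, and SAS token values are placeholders, and `ClientSetup` is an illustrative class name, not part of the SDK:

```java
import com.azure.storage.blob.BlobAsyncClient;
import com.azure.storage.blob.BlobClientBuilder;
import com.azure.storage.blob.specialized.BlockBlobAsyncClient;

public class ClientSetup {
    public static void main(String[] args) {
        // Placeholders: supply your own account endpoint and SAS token.
        BlobAsyncClient client = new BlobClientBuilder()
            .endpoint("https://<account>.blob.core.windows.net")
            .containerName("my-container")
            .blobName("my-blob")
            .sasToken("<sas-token>")
            .buildAsyncClient();

        // Derive a client for block-blob-specific operations (e.g. stageBlock).
        BlockBlobAsyncClient blockClient = client.getBlockBlobAsyncClient();
    }
}
```

No network call is made until an operation is subscribed to, so constructing the clients themselves is cheap.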
Modifier and Type | Field and Description
---|---
static int | BLOB_DEFAULT_HTBB_UPLOAD_BLOCK_SIZE: If a blob is known to be greater than 100MB, using a larger block size will trigger some server-side optimizations.
static int | BLOB_DEFAULT_NUMBER_OF_BUFFERS
static int | BLOB_DEFAULT_UPLOAD_BLOCK_SIZE
accountName, azureBlobStorage, blobName, containerName, serviceVersion
Modifier | Constructor and Description
---|---
protected | BlobAsyncClient(HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey): Package-private constructor for use by BlobClientBuilder.
Modifier and Type | Method and Description
---|---
AppendBlobAsyncClient | getAppendBlobAsyncClient(): Creates a new AppendBlobAsyncClient associated to this blob.
BlockBlobAsyncClient | getBlockBlobAsyncClient(): Creates a new BlockBlobAsyncClient associated to this blob.
PageBlobAsyncClient | getPageBlobAsyncClient(): Creates a new PageBlobAsyncClient associated to this blob.
BlobAsyncClient | getSnapshotClient(String snapshot): Creates a new BlobAsyncClient linked to the snapshot of this blob resource.
Mono<BlockBlobItem> | upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions): Creates a new block blob.
Mono<BlockBlobItem> | upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite): Creates a new block blob, or updates the content of an existing block blob.
protected AsynchronousFileChannel | uploadFileResourceSupplier(String filePath): Resource supplier for uploadFromFile.
Mono<Void> | uploadFromFile(String filePath): Creates a new block blob with the content of the specified file.
Mono<Void> | uploadFromFile(String filePath, boolean overwrite): Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.
Mono<Void> | uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions accessConditions): Creates a new block blob, or updates the content of an existing block blob, with the content of the specified file.
Mono<Response<BlockBlobItem>> | uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions accessConditions): Creates a new block blob, or updates the content of an existing block blob.
abortCopyFromUrl, abortCopyFromUrlWithResponse, beginCopy, beginCopy, copyFromUrl, copyFromUrlWithResponse, createSnapshot, createSnapshotWithResponse, delete, deleteWithResponse, download, downloadToFile, downloadToFileWithResponse, downloadWithResponse, exists, existsWithResponse, getAccountInfo, getAccountInfoWithResponse, getAccountName, getBlobName, getBlobUrl, getContainerName, getCustomerProvidedKey, getHttpPipeline, getProperties, getPropertiesWithResponse, getServiceVersion, getSnapshotId, isSnapshot, setAccessTier, setAccessTierWithResponse, setHttpHeaders, setHttpHeadersWithResponse, setMetadata, setMetadataWithResponse, undelete, undeleteWithResponse
public static final int BLOB_DEFAULT_UPLOAD_BLOCK_SIZE
public static final int BLOB_DEFAULT_NUMBER_OF_BUFFERS
public static final int BLOB_DEFAULT_HTBB_UPLOAD_BLOCK_SIZE
protected BlobAsyncClient(HttpPipeline pipeline, String url, BlobServiceVersion serviceVersion, String accountName, String containerName, String blobName, String snapshot, CpkInfo customerProvidedKey)
Package-private constructor for use by BlobClientBuilder.
Parameters:
pipeline - The pipeline used to send and receive service requests.
url - The endpoint where to send service requests.
serviceVersion - The version of the service to receive requests.
accountName - The storage account name.
containerName - The container name.
blobName - The blob name.
snapshot - The snapshot identifier for the blob, pass null to interact with the blob directly.
customerProvidedKey - Customer provided key used during encryption of the blob's data on the server, pass null to allow the service to use its own encryption.

public BlobAsyncClient getSnapshotClient(String snapshot)
Creates a new BlobAsyncClient linked to the snapshot of this blob resource.
Overrides: getSnapshotClient in class BlobAsyncClientBase
Parameters:
snapshot - the identifier for a specific snapshot of this blob
Returns:
A BlobAsyncClient used to interact with the specific snapshot.

public AppendBlobAsyncClient getAppendBlobAsyncClient()
Creates a new AppendBlobAsyncClient associated to this blob.
Returns:
An AppendBlobAsyncClient associated to this blob.

public BlockBlobAsyncClient getBlockBlobAsyncClient()
Creates a new BlockBlobAsyncClient associated to this blob.
Returns:
A BlockBlobAsyncClient associated to this blob.

public PageBlobAsyncClient getPageBlobAsyncClient()
Creates a new PageBlobAsyncClient associated to this blob.
Returns:
A PageBlobAsyncClient associated to this blob.

public Mono<BlockBlobItem> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions)
Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and BlockBlobAsyncClient.commitBlockList(List). For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.
The data passed need not support multiple subscriptions/be replayable as is required in other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method should support uploading any arbitrary data source, including network streams. This behavior is possible because this method will perform some internal buffering as configured by the blockSize and numBuffers parameters, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.
Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean we will have to stage fewer blocks and therefore require fewer IO operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.
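The block-count arithmetic behind that trade-off can be sketched independently of the SDK. `BlockMath` and the sizes below are illustrative, not SDK API: for the same payload, larger blocks mean fewer staged blocks, and therefore fewer Put Block calls.

```java
// Illustrative helper (not part of the SDK): how block size affects the number
// of blocks staged for a buffered upload of a given payload size.
public class BlockMath {
    // Ceiling division: the number of blocks needed to cover dataSize bytes.
    static long blocksNeeded(long dataSize, long blockSize) {
        return (dataSize + blockSize - 1) / blockSize;
    }

    public static void main(String[] args) {
        long payload = 256L * 1024 * 1024; // a 256 MB upload
        // 4 MB blocks -> 64 staged blocks; 8 MB blocks -> 32.
        System.out.println(blocksNeeded(payload, 4L * 1024 * 1024));
        System.out.println(blocksNeeded(payload, 8L * 1024 * 1024));
    }
}
```

Fewer, larger blocks reduce per-request overhead but increase memory held per buffer, which is why the defaults are only a starting point.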
Code Samples
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions(blockSize, numBuffers, null);
client.upload(data, parallelTransferOptions).subscribe(response ->
    System.out.printf("Uploaded BlockBlob MD5 is %s%n",
        Base64.getEncoder().encodeToString(response.getContentMd5())));
Parameters:
data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.

public Mono<BlockBlobItem> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite)
Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with this method; the content of the existing blob is overwritten with the new content. To perform a partial update of a block blob, use stageBlock and BlockBlobAsyncClient.commitBlockList(List). For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.
The data passed need not support multiple subscriptions/be replayable as is required in other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method should support uploading any arbitrary data source, including network streams. This behavior is possible because this method will perform some internal buffering as configured by the blockSize and numBuffers parameters, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.
Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean we will have to stage fewer blocks and therefore require fewer IO operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.
Code Samples
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions(blockSize, numBuffers, null);
boolean overwrite = false; // Default behavior
client.upload(data, parallelTransferOptions, overwrite).subscribe(response ->
    System.out.printf("Uploaded BlockBlob MD5 is %s%n",
        Base64.getEncoder().encodeToString(response.getContentMd5())));
Parameters:
data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
overwrite - Whether or not to overwrite, should the blob already exist.

public Mono<Response<BlockBlobItem>> uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions accessConditions)
To perform a partial update of a block blob, use stageBlock and BlockBlobAsyncClient.commitBlockList(List), which this method uses internally. For more information, see the Azure Docs for Put Block and the Azure Docs for Put Block List.
The data passed need not support multiple subscriptions/be replayable as is required in other upload methods when retries are enabled, and the length of the data need not be known in advance. Therefore, this method should support uploading any arbitrary data source, including network streams. This behavior is possible because this method will perform some internal buffering as configured by the blockSize and numBuffers parameters, so while this method may offer additional convenience, it will not be as performant as other options, which should be preferred when possible.
Typically, the greater the number of buffers used, the greater the possible parallelism when transferring the data. Larger buffers mean we will have to stage fewer blocks and therefore require fewer IO operations. The trade-offs between these values are context-dependent, so some experimentation may be required to optimize inputs for a given scenario.
Code Samples
BlobHttpHeaders headers = new BlobHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
BlobRequestConditions accessConditions = new BlobRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions(blockSize, numBuffers, null);
client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, AccessTier.HOT, accessConditions)
    .subscribe(response -> System.out.printf("Uploaded BlockBlob MD5 is %s%n",
        Base64.getEncoder().encodeToString(response.getValue().getContentMd5())));
Using Progress Reporting
BlobHttpHeaders headers = new BlobHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
BlobRequestConditions accessConditions = new BlobRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions(blockSize, numBuffers,
    bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, AccessTier.HOT, accessConditions)
    .subscribe(response -> System.out.printf("Uploaded BlockBlob MD5 is %s%n",
        Base64.getEncoder().encodeToString(response.getValue().getContentMd5())));
Parameters:
data - The data to write to the blob. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
headers - BlobHttpHeaders
metadata - Metadata to associate with the blob.
tier - AccessTier for the destination blob.
accessConditions - BlobRequestConditions
public Mono<Void> uploadFromFile(String filePath)
Code Samples
client.uploadFromFile(filePath)
    .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
filePath - Path to the upload file
Throws:
UncheckedIOException - If an I/O error occurs

public Mono<Void> uploadFromFile(String filePath, boolean overwrite)
Code Samples
boolean overwrite = false; // Default behavior
client.uploadFromFile(filePath, overwrite)
    .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
filePath - Path to the upload file
overwrite - Whether or not to overwrite, should the blob already exist.
Throws:
UncheckedIOException - If an I/O error occurs

public Mono<Void> uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions accessConditions)
Code Samples
BlobHttpHeaders headers = new BlobHttpHeaders()
    .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
    .setContentLanguage("en-US")
    .setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
BlobRequestConditions accessConditions = new BlobRequestConditions()
    .setLeaseId(leaseId)
    .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
client.uploadFromFile(filePath,
        new ParallelTransferOptions(BlobAsyncClient.BLOB_MAX_UPLOAD_BLOCK_SIZE, null, null),
        headers, metadata, AccessTier.HOT, accessConditions)
    .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
filePath - Path to the upload file
parallelTransferOptions - ParallelTransferOptions to use to upload from file. Number of parallel transfers parameter is ignored.
headers - BlobHttpHeaders
metadata - Metadata to associate with the blob.
tier - AccessTier for the destination blob.
accessConditions - BlobRequestConditions
Throws:
UncheckedIOException - If an I/O error occurs

protected AsynchronousFileChannel uploadFileResourceSupplier(String filePath)
Parameters:
filePath - The path for the file
Returns:
AsynchronousFileChannel
Throws:
UncheckedIOException - If an I/O error occurs

Copyright © 2019 Microsoft Corporation. All rights reserved.