azure.storage.blob package

exception azure.storage.blob.PartialBatchErrorException(message, response, parts)[source]

There is a partial failure in batch operations.

Parameters
  • message (str) – The message of the exception.

  • response – Server response to be deserialized.

  • parts (list) – A list of the parts in multipart response.

raise_with_traceback()
with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

args
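
Example:

A minimal sketch (not taken from the library samples) of handling a partial batch failure raised by delete_blobs; the container client and blob names are assumed to already exist.
from azure.storage.blob import PartialBatchErrorException

try:
    container_client.delete_blobs("my_blob1", "my_blob2")
except PartialBatchErrorException as error:
    print(error.message)
    # Each part is expected to be a sub-response of the multipart batch request
    for part in error.parts:
        print(part.status_code)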
class azure.storage.blob.BlobServiceClient(account_url: str, credential: Optional[Any] = None, **kwargs: Any)[source]

A client to interact with the Blob Service at the account level.

This client provides operations to retrieve and configure the account properties as well as list, create and delete containers within the account. For operations relating to a specific container or blob, clients for those entities can also be retrieved using the get_client functions.

Parameters
  • account_url (str) – The URL to the blob storage account. Any other entities included in the URL path (e.g. container or blob) will be discarded. This URL can be optionally authenticated with a SAS token.

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. If the URL already has a SAS token, specifying an explicit credential will take priority.

Keyword Arguments
  • api_version (str) –

    The Storage API version to use for requests. Default value is ‘2019-07-07’. Setting to an older version may result in reduced feature compatibility.

    New in version 12.2.0.

  • secondary_hostname (str) – The hostname of the secondary endpoint.

  • max_block_size (int) – The maximum chunk size for uploading a block blob in chunks. Defaults to 4*1024*1024, or 4MB.

  • max_single_put_size (int) – If the blob size is less than or equal to max_single_put_size, the blob will be uploaded with a single HTTP PUT request. If the blob size is larger than max_single_put_size, the blob will be uploaded in chunks. Defaults to 64*1024*1024, or 64MB.

  • min_large_block_upload_threshold (int) – The minimum chunk size required to use the memory efficient algorithm when uploading a block blob. Defaults to 4*1024*1024+1.

  • use_byte_buffer (bool) – Use a byte buffer for block blob uploads. Defaults to False.

  • max_page_size (int) – The maximum chunk size for uploading a page blob. Defaults to 4*1024*1024, or 4MB.

  • max_single_get_size (int) – The maximum size for a blob to be downloaded in a single call; anything beyond this size will be downloaded in chunks (potentially in parallel). Defaults to 32*1024*1024, or 32MB.

  • max_chunk_get_size (int) – The maximum chunk size used for downloading a blob. Defaults to 4*1024*1024, or 4MB.

Example:

Creating the BlobServiceClient with account url and credential.
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient(account_url=self.url, credential=self.shared_access_key)
Creating the BlobServiceClient with Azure Identity credentials.
# Get a token credential for authentication
from azure.identity import ClientSecretCredential
token_credential = ClientSecretCredential(
    self.active_directory_tenant_id,
    self.active_directory_application_id,
    self.active_directory_application_secret
)

# Instantiate a BlobServiceClient using a token credential
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient(account_url=self.oauth_url, credential=token_credential)
close()

This method closes the sockets opened by the client. It need not be called when the client is used as a context manager.

create_container(name: str, metadata: Optional[Dict[str, str]] = None, public_access: Optional[Union[PublicAccess, str]] = None, **kwargs) → ContainerClient[source]

Creates a new container under the specified account.

If the container with the same name already exists, a ResourceExistsError will be raised. This method returns a client with which to interact with the newly created container.

Parameters
  • name (str) – The name of the container to create.

  • metadata (dict(str, str)) – A dict with name-value pairs to associate with the container as metadata. Example: {‘Category’:’test’}

  • public_access (str or PublicAccess) – Possible values include: ‘container’, ‘blob’.

Keyword Arguments
  • container_encryption_scope (dict or ContainerEncryptionScope) –

    Specifies the default encryption scope to set on the container and use for all future writes.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Return type

ContainerClient

Example:

Creating a container in the blob service.
try:
    new_container = blob_service_client.create_container("containerfromblobservice")
    properties = new_container.get_container_properties()
except ResourceExistsError:
    print("Container already exists.")
delete_container(container: Union[ContainerProperties, str], lease: Optional[Union[BlobLeaseClient, str]] = None, **kwargs) → None[source]

Marks the specified container for deletion.

The container and any blobs contained within it are later deleted during garbage collection. If the container is not found, a ResourceNotFoundError will be raised.

Parameters
  • container (str or ContainerProperties) – The container to delete. This can either be the name of the container, or an instance of ContainerProperties.

  • lease – If specified, delete_container only succeeds if the container’s lease is active and matches this ID. Required if the container has an active lease.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Deleting a container in the blob service.
# Delete container if it exists
try:
    blob_service_client.delete_container("containerfromblobservice")
except ResourceNotFoundError:
    print("Container already deleted.")
find_blobs_by_tags(filter_expression: str, **kwargs: Any) → ItemPaged[FilteredBlob][source]

The Filter Blobs operation enables callers to list blobs across all containers whose tags match a given search expression. Filter blobs searches across all containers within a storage account but can be scoped within the expression to a single container.

Parameters

filter_expression (str) – The expression to find blobs whose tags match the specified condition, e.g. "yourtagname"='firsttag' and "yourtagname2"='secondtag'. To scope the search to a single container, e.g. @container='containerName' and "Name"='C'.

Keyword Arguments
  • results_per_page (int) – The max result per page when paginating.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) response of FilteredBlob.

Return type

ItemPaged[FilteredBlob]
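
Example:

A minimal usage sketch; the tag name and value are illustrative and assume blobs have been tagged accordingly.
filter_expression = "\"tagname\"='firsttag'"
filtered_blobs = blob_service_client.find_blobs_by_tags(filter_expression)
for filtered_blob in filtered_blobs:
    print(filtered_blob.container_name, filtered_blob.name)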

classmethod from_connection_string(conn_str: str, credential: Optional[Any] = None, **kwargs: Any) → azure.storage.blob._blob_service_client.BlobServiceClient[source]

Create BlobServiceClient from a Connection String.

Parameters
  • conn_str (str) – A connection string to an Azure Storage account.

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token, or the connection string already has shared access key values. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. Credentials provided here will take precedence over those in the connection string.

Returns

A Blob service client.

Return type

BlobServiceClient

Example:

Creating the BlobServiceClient from a connection string.
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient.from_connection_string(self.connection_string)
get_account_information(**kwargs: Any) → Dict[str, str][source]

Gets information related to the storage account.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include ‘sku_name’ and ‘account_kind’.

Returns

A dict of account information (SKU and account type).

Return type

dict(str, str)

Example:

Getting account information for the blob service.
account_info = blob_service_client.get_account_information()
print('Using Storage SKU: {}'.format(account_info['sku_name']))
get_blob_client(container: Union[ContainerProperties, str], blob: Union[BlobProperties, str], snapshot: Optional[Union[Dict[str, Any], str]] = None) → BlobClient[source]

Get a client to interact with the specified blob.

The blob need not already exist.

Parameters
  • container (str or ContainerProperties) – The container that the blob is in. This can either be the name of the container, or an instance of ContainerProperties.

  • blob (str or BlobProperties) – The blob with which to interact. This can either be the name of the blob, or an instance of BlobProperties.

  • snapshot (str or dict(str, Any)) – The optional blob snapshot on which to operate. This can either be the ID of the snapshot, or a dictionary output returned by create_snapshot().

Returns

A BlobClient.

Return type

BlobClient

Example:

Getting the blob client to interact with a specific blob.
blob_client = blob_service_client.get_blob_client(container="containertest", blob="my_blob")
try:
    stream = blob_client.download_blob()
except ResourceNotFoundError:
    print("No blob found.")
get_container_client(container: Union[ContainerProperties, str]) → ContainerClient[source]

Get a client to interact with the specified container.

The container need not already exist.

Parameters

container (str or ContainerProperties) – The container. This can either be the name of the container, or an instance of ContainerProperties.

Returns

A ContainerClient.

Return type

ContainerClient

Example:

Getting the container client to interact with a specific container.
# Get a client to interact with a specific container - though it may not yet exist
container_client = blob_service_client.get_container_client("containertest")
try:
    for blob in container_client.list_blobs():
        print("Found blob: ", blob.name)
except ResourceNotFoundError:
    print("Container not found.")
get_service_properties(**kwargs: Any) → Dict[str, Any][source]

Gets the properties of a storage account’s Blob service, including Azure Storage Analytics.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

An object containing blob service properties such as analytics logging, hour/minute metrics, cors rules, etc.

Return type

Dict[str, Any]

Example:

Getting service properties for the blob service.
properties = blob_service_client.get_service_properties()
get_service_stats(**kwargs: Any) → Dict[str, Any][source]

Retrieves statistics related to replication for the Blob service.

It is only available when read-access geo-redundant replication is enabled for the storage account.

With geo-redundant replication, Azure Storage maintains your data durably in two locations. In both locations, Azure Storage constantly maintains multiple healthy replicas of your data. The location where you read, create, update, or delete data is the primary storage account location. The primary location exists in the region you choose at the time you create an account via the Azure portal, for example, North Central US. The location to which your data is replicated is the secondary location. The secondary location is automatically determined based on the location of the primary; it is in a second data center that resides in the same region as the primary location. Read-only access is available from the secondary location if read-access geo-redundant replication is enabled for your storage account.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

The blob service stats.

Return type

Dict[str, Any]

Example:

Getting service stats for the blob service.
stats = blob_service_client.get_service_stats()
get_user_delegation_key(key_start_time: datetime, key_expiry_time: datetime, **kwargs: Any) → UserDelegationKey[source]

Obtain a user delegation key for the purpose of signing SAS tokens. A token credential must be present on the service object for this request to succeed.

Parameters
  • key_start_time (datetime) – A DateTime value. Indicates when the key becomes valid.

  • key_expiry_time (datetime) – A DateTime value. Indicates when the key stops being valid.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

The user delegation key.

Return type

UserDelegationKey
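
Example:

A minimal sketch of requesting a user delegation key and using it to sign a container SAS; it assumes blob_service_client was created with an azure.identity token credential, and the account and container names are illustrative.
from datetime import datetime, timedelta
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

key_start_time = datetime.utcnow()
key_expiry_time = key_start_time + timedelta(hours=1)
user_delegation_key = blob_service_client.get_user_delegation_key(key_start_time, key_expiry_time)

# The key can be used in place of an account key when generating a SAS
sas_token = generate_container_sas(
    account_name="myaccount",
    container_name="containertest",
    user_delegation_key=user_delegation_key,
    permission=ContainerSasPermissions(read=True),
    expiry=key_expiry_time
)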

list_containers(name_starts_with: Optional[str] = None, include_metadata: Optional[bool] = False, **kwargs) → ItemPaged[ContainerProperties][source]

Returns a generator to list the containers under the specified account.

The generator will lazily follow the continuation tokens returned by the service and stop when all containers have been returned.

Parameters
  • name_starts_with (str) – Filters the results to return only containers whose names begin with the specified prefix.

  • include_metadata (bool) – Specifies that container metadata be returned in the response. The default value is False.

Keyword Arguments
  • results_per_page (int) – The maximum number of container names to retrieve per API call. If the request does not specify a value, the server will return up to 5,000 items.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) of ContainerProperties.

Return type

ItemPaged[ContainerProperties]

Example:

Listing the containers in the blob service.
# List all containers
all_containers = blob_service_client.list_containers(include_metadata=True)
for container in all_containers:
    print(container['name'], container['metadata'])

# Filter results with name prefix
test_containers = blob_service_client.list_containers(name_starts_with='test-')
for container in test_containers:
    blob_service_client.delete_container(container)
set_service_properties(analytics_logging: Optional[BlobAnalyticsLogging] = None, hour_metrics: Optional[Metrics] = None, minute_metrics: Optional[Metrics] = None, cors: Optional[List[CorsRule]] = None, target_version: Optional[str] = None, delete_retention_policy: Optional[RetentionPolicy] = None, static_website: Optional[StaticWebsite] = None, **kwargs) → None[source]

Sets the properties of a storage account’s Blob service, including Azure Storage Analytics.

If an element (e.g. analytics_logging) is left as None, the existing settings on the service for that functionality are preserved.

Parameters
  • analytics_logging (BlobAnalyticsLogging) – Groups the Azure Analytics Logging settings.

  • hour_metrics (Metrics) – The hour metrics settings provide a summary of request statistics grouped by API in hourly aggregates for blobs.

  • minute_metrics (Metrics) – The minute metrics settings provide request statistics for each minute for blobs.

  • cors (list[CorsRule]) – You can include up to five CorsRule elements in the list. If an empty list is specified, all CORS rules will be deleted, and CORS will be disabled for the service.

  • target_version (str) – Indicates the default version to use for requests if an incoming request’s version is not specified.

  • delete_retention_policy (RetentionPolicy) – The delete retention policy specifies whether to retain deleted blobs. It also specifies the number of days and versions of blob to keep.

  • static_website (StaticWebsite) – Specifies whether the static website feature is enabled, and if yes, indicates the index document and 404 error document to use.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Setting service properties for the blob service.
# Create service properties
from azure.storage.blob import BlobAnalyticsLogging, Metrics, CorsRule, RetentionPolicy

# Create logging settings
logging = BlobAnalyticsLogging(read=True, write=True, delete=True, retention_policy=RetentionPolicy(enabled=True, days=5))

# Create metrics for requests statistics
hour_metrics = Metrics(enabled=True, include_apis=True, retention_policy=RetentionPolicy(enabled=True, days=5))
minute_metrics = Metrics(enabled=True, include_apis=True,
                         retention_policy=RetentionPolicy(enabled=True, days=5))

# Create CORS rules
cors_rule = CorsRule(['www.xyz.com'], ['GET'])
cors = [cors_rule]

# Set the service properties
blob_service_client.set_service_properties(logging, hour_metrics, minute_metrics, cors)
property api_version

The version of the Storage API used for requests.

Type

str

property location_mode

The location mode that the client is currently using.

By default this will be “primary”. Options include “primary” and “secondary”.

Type

str

property primary_endpoint

The full primary endpoint URL.

Type

str

property primary_hostname

The hostname of the primary endpoint.

Type

str

property secondary_endpoint

The full secondary endpoint URL if configured.

If not available a ValueError will be raised. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str

Raises

ValueError

property secondary_hostname

The hostname of the secondary endpoint.

If not available this will be None. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str or None

property url

The full endpoint URL to this entity, including SAS token if used.

This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode().

class azure.storage.blob.ContainerClient(account_url: str, container_name: str, credential: Optional[Any] = None, **kwargs: Any)[source]

A client to interact with a specific container, although that container may not yet exist.

For operations relating to a specific blob within this container, a blob client can be retrieved using the get_blob_client() function.

Parameters
  • account_url (str) – The URI to the storage account. In order to create a client given the full URI to the container, use the from_container_url() classmethod.

  • container_name (str) – The name of the container for the blob.

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. If the URL already has a SAS token, specifying an explicit credential will take priority.

Keyword Arguments
  • api_version (str) –

    The Storage API version to use for requests. Default value is ‘2019-07-07’. Setting to an older version may result in reduced feature compatibility.

    New in version 12.2.0.

  • secondary_hostname (str) – The hostname of the secondary endpoint.

  • max_block_size (int) – The maximum chunk size for uploading a block blob in chunks. Defaults to 4*1024*1024, or 4MB.

  • max_single_put_size (int) – If the blob size is less than or equal to max_single_put_size, the blob will be uploaded with a single HTTP PUT request. If the blob size is larger than max_single_put_size, the blob will be uploaded in chunks. Defaults to 64*1024*1024, or 64MB.

  • min_large_block_upload_threshold (int) – The minimum chunk size required to use the memory efficient algorithm when uploading a block blob. Defaults to 4*1024*1024+1.

  • use_byte_buffer (bool) – Use a byte buffer for block blob uploads. Defaults to False.

  • max_page_size (int) – The maximum chunk size for uploading a page blob. Defaults to 4*1024*1024, or 4MB.

  • max_single_get_size (int) – The maximum size for a blob to be downloaded in a single call; anything beyond this size will be downloaded in chunks (potentially in parallel). Defaults to 32*1024*1024, or 32MB.

  • max_chunk_get_size (int) – The maximum chunk size used for downloading a blob. Defaults to 4*1024*1024, or 4MB.

Example:

Get a ContainerClient from an existing BlobServiceClient.
# Instantiate a BlobServiceClient using a connection string
from azure.storage.blob import BlobServiceClient
blob_service_client = BlobServiceClient.from_connection_string(self.connection_string)

# Instantiate a ContainerClient
container_client = blob_service_client.get_container_client("mynewcontainer")
Creating the container client directly.
from azure.storage.blob import ContainerClient

sas_url = "https://account.blob.core.windows.net/mycontainer?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D"
container = ContainerClient.from_container_url(sas_url)
acquire_lease(lease_duration: int = -1, lease_id: Optional[str] = None, **kwargs) → azure.storage.blob._lease.BlobLeaseClient[source]

Requests a new lease. If the container does not have an active lease, the Blob service creates a lease on the container and returns a new lease ID.

Parameters
  • lease_duration (int) – Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. Default is -1 (infinite lease).

  • lease_id (str) – Proposed lease ID, in a GUID string format. The Blob service returns 400 (Invalid request) if the proposed lease ID is not in the correct format.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

A BlobLeaseClient object, that can be run in a context manager.

Return type

BlobLeaseClient

Example:

Acquiring a lease on the container.
# Acquire a lease on the container
lease = container_client.acquire_lease()

# Delete container by passing in the lease
container_client.delete_container(lease=lease)
close()

This method closes the sockets opened by the client. It need not be called when the client is used as a context manager.

create_container(metadata: Optional[Dict[str, str]] = None, public_access: Optional[Union[PublicAccess, str]] = None, **kwargs: Any) → None[source]

Creates a new container under the specified account. If the container with the same name already exists, the operation fails.

Parameters
  • metadata (dict[str, str]) – A dict with name-value pairs to associate with the container as metadata. Example: {‘Category’:’test’}

  • public_access (PublicAccess) – Possible values include: ‘container’, ‘blob’.

Keyword Arguments
  • container_encryption_scope (dict or ContainerEncryptionScope) –

    Specifies the default encryption scope to set on the container and use for all future writes.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Creating a container to store blobs.
container_client.create_container()
delete_blob(blob: Union[str, azure.storage.blob._models.BlobProperties], delete_snapshots: Optional[str] = None, **kwargs) → None[source]

Marks the specified blob or snapshot for deletion.

The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots. You can delete both at the same time with the delete_blob operation.

If a delete retention policy is enabled for the service, this operation soft deletes the blob or snapshot and retains it for the specified number of days. After that period, the blob’s data is removed from the service during garbage collection. A soft-deleted blob or snapshot is accessible through list_blobs() by specifying the include=[“deleted”] option, and can be restored using undelete().

Parameters
  • blob (str or BlobProperties) – The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.

  • delete_snapshots (str) –

    Required if the blob has associated snapshots. Values include:
    • “only”: Deletes only the blob’s snapshots.

    • ”include”: Deletes the blob along with all snapshots.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "tagname"='my tag'.

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Return type

None
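
Example:

A minimal sketch of deleting a blob along with its snapshots; the blob name is illustrative.
from azure.core.exceptions import ResourceNotFoundError

try:
    container_client.delete_blob("my_blob", delete_snapshots="include")
except ResourceNotFoundError:
    print("Blob not found.")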

delete_blobs(*blobs, **kwargs) → Iterator[HttpResponse][source]

Marks the specified blobs or snapshots for deletion.

The blobs are later deleted during garbage collection. Note that in order to delete blobs, you must delete all of their snapshots. You can delete both at the same time with the delete_blobs operation.

If a delete retention policy is enabled for the service, this operation soft deletes the blobs or snapshots and retains them for the specified number of days. After that period, the blobs’ data is removed from the service during garbage collection. Soft-deleted blobs or snapshots are accessible through list_blobs() by specifying the include=[“deleted”] option, and can be restored using undelete().

Parameters

blobs (list[str], list[dict], or list[BlobProperties]) –

The blobs to delete. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.

Note

When a blob is passed as a dict, the following keys are supported:

  • ‘name’ (str): the blob name

  • ‘snapshot’ (str): the snapshot you want to delete

  • ‘delete_snapshots’ (‘include’ or ‘only’): whether to delete snapshots when deleting the blob

  • ‘if_modified_since’, ‘if_unmodified_since’ (datetime): modified-time conditions

  • ‘etag’ (str): the ETag value

  • ‘match_condition’ (MatchConditions): the match condition to use upon the etag

  • ‘if_tags_match_condition’ (str): the tags match condition

  • ‘lease_id’ (str or LeaseClient): the lease

  • ‘timeout’ (int): the timeout for the subrequest

Keyword Arguments
  • delete_snapshots (str) –

    Required if a blob has associated snapshots. Values include:
    • “only”: Deletes only the blobs’ snapshots.

    • ”include”: Deletes the blob along with all snapshots.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "tagname"='my tag'.

    New in version 12.4.0.
  • raise_on_any_failure (bool) – This is a boolean param which defaults to True. When this is set, an exception is raised even if there is a single operation failure.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

An iterator of responses, one for each blob in order

Return type

Iterator[HttpResponse]

Example:

Deleting multiple blobs.
# Delete multiple blobs in the container by name
container_client.delete_blobs("my_blob1", "my_blob2")

# Delete multiple blobs by properties iterator
my_blobs = container_client.list_blobs(name_starts_with="my_blob")
container_client.delete_blobs(*my_blobs)
delete_container(**kwargs: Any) → None[source]

Marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

Keyword Arguments
  • lease (BlobLeaseClient or str) – If specified, delete_container only succeeds if the container’s lease is active and matches this ID. Required if the container has an active lease.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Delete a container.
container_client.delete_container()
download_blob(blob: Union[str, BlobProperties], offset: Optional[int] = None, length: Optional[int] = None, **kwargs: Any) → StorageStreamDownloader[source]

Downloads a blob to the StorageStreamDownloader. Use readall() to read all of the content, or readinto() to download the blob into a stream.

Parameters
  • blob (str or BlobProperties) – The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.

  • offset (int) – Start of byte range to use for downloading a section of the blob. Must be set if length is provided.

  • length (int) – Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

Keyword Arguments
  • validate_content (bool) – If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. If specified, download_blob only succeeds if the blob’s lease is active and matches this ID. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "tagname"='my tag'.

    New in version 12.4.0.
  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • max_concurrency (int) – The number of parallel connections with which to download.

  • encoding (str) – Encoding to decode the downloaded bytes. Default is None, i.e. no decoding.

  • timeout (int) – The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

Returns

A streaming object (StorageStreamDownloader)

Return type

StorageStreamDownloader
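
Example:

A minimal sketch of downloading a blob from the container to a local file; the blob name and file path are illustrative.
download_stream = container_client.download_blob("my_blob")
with open("downloaded_blob.dat", "wb") as local_file:
    # readinto() streams the content directly into the open file handle
    download_stream.readinto(local_file)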

classmethod from_connection_string(conn_str: str, container_name: str, credential: Optional[Any] = None, **kwargs: Any) → azure.storage.blob._container_client.ContainerClient[source]

Create ContainerClient from a Connection String.

Parameters
  • conn_str (str) – A connection string to an Azure Storage account.

  • container_name (str) – The container name for the blob.

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token, or the connection string already has shared access key values. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. Credentials provided here will take precedence over those in the connection string.

Returns

A container client.

Return type

ContainerClient

Example:

Creating the ContainerClient from a connection string.
from azure.storage.blob import ContainerClient
container_client = ContainerClient.from_connection_string(
    self.connection_string, container_name="mycontainer")
classmethod from_container_url(container_url: str, credential: Optional[Any] = None, **kwargs: Any) → azure.storage.blob._container_client.ContainerClient[source]

Create ContainerClient from a container url.

Parameters
  • container_url (str) – The full endpoint URL to the Container, including SAS token if used. This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode.

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token, or the connection string already has shared access key values. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. Credentials provided here will take precedence over those in the connection string.

Returns

A container client.

Return type

ContainerClient
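
Example:

A minimal sketch of creating the ContainerClient from a container URL; the URL and SAS token are placeholders.
from azure.storage.blob import ContainerClient

container_url = "https://account.blob.core.windows.net/mycontainer?<SAS token>"
container_client = ContainerClient.from_container_url(container_url)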

get_account_information(**kwargs: Any) → Dict[str, str][source]

Gets information related to the storage account.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include ‘sku_name’ and ‘account_kind’.

Returns

A dict of account information (SKU and account type).

Return type

dict(str, str)
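
Example:

A minimal sketch of reading the account SKU and kind through the container client.
account_info = container_client.get_account_information()
print(account_info['sku_name'], account_info['account_kind'])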

get_blob_client(blob: Union[str, azure.storage.blob._models.BlobProperties], snapshot: Optional[str] = None) → azure.storage.blob._blob_client.BlobClient[source]

Get a client to interact with the specified blob.

The blob need not already exist.

Parameters
  • blob (str or BlobProperties) – The blob with which to interact.

  • snapshot (str) – The optional blob snapshot on which to operate. This can be the snapshot ID string or the response returned from create_snapshot().

Returns

A BlobClient.

Return type

BlobClient

Example:

Get the blob client.
# Get the BlobClient from the ContainerClient to interact with a specific blob
blob_client = container_client.get_blob_client("mynewblob")
get_container_access_policy(**kwargs: Any) → Dict[str, Any][source]

Gets the permissions for the specified container. The permissions indicate whether container data may be accessed publicly.

Keyword Arguments
  • lease (BlobLeaseClient or str) – If specified, get_container_access_policy only succeeds if the container’s lease is active and matches this ID.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Access policy information in a dict.

Return type

dict[str, Any]

Example:

Getting the access policy on the container.
policy = container_client.get_container_access_policy()
get_container_properties(**kwargs: Any) → azure.storage.blob._models.ContainerProperties[source]

Returns all user-defined metadata and system properties for the specified container. The data returned does not include the container’s list of blobs.

Keyword Arguments
  • lease (BlobLeaseClient or str) – If specified, get_container_properties only succeeds if the container’s lease is active and matches this ID.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Properties for the specified container within a container object.

Return type

ContainerProperties

Example:

Getting properties on the container.
properties = container_client.get_container_properties()
list_blobs(name_starts_with: Optional[str] = None, include: Union[str, List[str], None] = None, **kwargs: Any) → azure.core.paging.ItemPaged[azure.storage.blob._models.BlobProperties][source]

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service.

Parameters
  • name_starts_with (str) – Filters the results to return only blobs whose names begin with the specified prefix.

  • include (list[str] or str) – Specifies one or more additional datasets to include in the response. Options include: ‘snapshots’, ‘metadata’, ‘uncommittedblobs’, ‘copy’, ‘deleted’, ‘tags’.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) response of BlobProperties.

Return type

ItemPaged[BlobProperties]

Example:

List the blobs in the container.
blobs_list = container_client.list_blobs()
for blob in blobs_list:
    print(blob.name + '\n')
set_container_access_policy(signed_identifiers: Dict[str, AccessPolicy], public_access: Optional[Union[str, PublicAccess]] = None, **kwargs) → Dict[str, Union[str, datetime]][source]

Sets the permissions for the specified container or stored access policies that may be used with Shared Access Signatures. The permissions indicate whether blobs in a container may be accessed publicly.

Parameters
  • signed_identifiers (dict[str, AccessPolicy]) – A dictionary of access policies to associate with the container. The dictionary may contain up to 5 elements. An empty dictionary will clear the access policies set on the service.

  • public_access (PublicAccess) – Possible values include: ‘container’, ‘blob’.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the container has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A datetime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified date/time.

  • if_unmodified_since (datetime) – A datetime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Container-updated property dict (Etag and last modified).

Return type

dict[str, str or datetime]

Example:

Setting access policy on the container.
# Create access policy
from azure.storage.blob import AccessPolicy, ContainerSasPermissions
access_policy = AccessPolicy(permission=ContainerSasPermissions(read=True),
                             expiry=datetime.utcnow() + timedelta(hours=1),
                             start=datetime.utcnow() - timedelta(minutes=1))

identifiers = {'test': access_policy}

# Set the access policy on the container
container_client.set_container_access_policy(signed_identifiers=identifiers)
set_container_metadata(metadata: Optional[Dict[str, str]] = None, **kwargs) → Dict[str, Union[str, datetime]][source]

Sets one or more user-defined name-value pairs for the specified container. Each call to this operation replaces all existing metadata attached to the container. To remove all metadata from the container, call this operation with no metadata dict.

Parameters

metadata (dict[str, str]) – A dict containing name-value pairs to associate with the container as metadata. Example: {‘category’:’test’}

Keyword Arguments
  • lease (BlobLeaseClient or str) – If specified, set_container_metadata only succeeds if the container’s lease is active and matches this ID.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Container-updated property dict (Etag and last modified).

Return type

dict[str, str or datetime]

Example:

Setting metadata on the container.
# Create key, value pairs for metadata
metadata = {'type': 'test'}

# Set metadata on the container
container_client.set_container_metadata(metadata=metadata)
set_premium_page_blob_tier_blobs(premium_page_blob_tier: Optional[Union[str, PremiumPageBlobTier]], *blobs: List[Union[str, BlobProperties, dict]], **kwargs) → Iterator[HttpResponse][source]

Sets the page blob tiers on all blobs. This API is only supported for page blobs on premium accounts.

Parameters
  • premium_page_blob_tier (PremiumPageBlobTier) –

    A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

    Note

    To set a different tier on different blobs, set this positional parameter to None; the blob tier set on each BlobProperties instance will then be used.

  • blobs (list[str], list[dict], or list[BlobProperties]) –

    The blobs with which to interact. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.

    Note

    When a blob is passed as a dict, the following keys are supported:

      • ‘name’ (str): the blob name

      • ‘blob_tier’ (PremiumPageBlobTier): the premium blob tier

      • ‘lease_id’ (str or LeaseClient): the lease

      • ‘timeout’ (int): the timeout for the subrequest

Keyword Arguments
  • timeout (int) – The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

  • raise_on_any_failure (bool) – This is a boolean param which defaults to True. When this is set, an exception is raised even if there is a single operation failure.

Returns

An iterator of responses, one for each blob in order

Return type

Iterator[HttpResponse]
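
Example:

A minimal sketch; the blob names are illustrative, and the call assumes page blobs on a premium storage account.
from azure.storage.blob import PremiumPageBlobTier

container_client.set_premium_page_blob_tier_blobs(
    PremiumPageBlobTier.P10,
    "my_page_blob1",
    "my_page_blob2"
)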

set_standard_blob_tier_blobs(standard_blob_tier: Optional[Union[str, StandardBlobTier]], *blobs: List[Union[str, BlobProperties, dict]], **kwargs) → Iterator[HttpResponse][source]

This operation sets the tier on block blobs.

A block blob’s tier determines Hot/Cool/Archive storage type. This operation does not update the blob’s ETag.

Parameters
  • standard_blob_tier (str or StandardBlobTier) –

    Indicates the tier to be set on all blobs. Options include ‘Hot’, ‘Cool’, ‘Archive’. The hot tier is optimized for storing data that is accessed frequently. The cool storage tier is optimized for storing data that is infrequently accessed and stored for at least a month. The archive tier is optimized for storing data that is rarely accessed and stored for at least six months with flexible latency requirements.

    Note

    To set a different tier on different blobs, set this positional parameter to None; the blob tier set on each BlobProperties instance will then be used.

  • blobs (list[str], list[dict], or list[BlobProperties]) –

    The blobs with which to interact. This can be a single blob, or multiple values can be supplied, where each value is either the name of the blob (str) or BlobProperties.

    Note

    When a blob is passed as a dict, the following keys are supported:

      • ‘name’ (str): the blob name

      • ‘blob_tier’ (StandardBlobTier): the standard blob tier

      • ‘rehydrate_priority’ (RehydratePriority): the rehydrate priority

      • ‘lease_id’ (str or LeaseClient): the lease

      • ‘snapshot’ (str): the snapshot

      • ‘version_id’ (str): the version id

      • ‘if_tags_match_condition’ (str): the tags match condition

      • ‘timeout’ (int): the timeout for the subrequest

Keyword Arguments
  • rehydrate_priority (RehydratePriority) – Indicates the priority with which to rehydrate an archived blob.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "tagname"='my tag'.

    New in version 12.4.0.
  • timeout (int) – The timeout parameter is expressed in seconds.

  • raise_on_any_failure (bool) – This is a boolean param which defaults to True. When this is set, an exception is raised even if there is a single operation failure.

Returns

An iterator of responses, one for each blob in order

Return type

Iterator[HttpResponse]
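
Example:

A minimal sketch of moving several block blobs to the cool tier; the blob names are illustrative.
from azure.storage.blob import StandardBlobTier

container_client.set_standard_blob_tier_blobs(
    StandardBlobTier.Cool,
    "my_blob1",
    "my_blob2"
)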

upload_blob(name: Union[str, azure.storage.blob._models.BlobProperties], data: Union[Iterable[AnyStr], IO[AnyStr]], blob_type: str = <BlobType.BlockBlob: 'BlockBlob'>, length: Optional[int] = None, metadata: Optional[Dict[str, str]] = None, **kwargs) → azure.storage.blob._blob_client.BlobClient[source]

Creates a new blob from a data source with automatic chunking.

Parameters
  • name (str or BlobProperties) – The blob with which to interact. If specified, this value will override a blob value specified in the blob URL.

  • data – The blob data to upload.

  • blob_type (BlobType) – The type of the blob. This can be either BlockBlob, PageBlob or AppendBlob. The default value is BlockBlob.

  • length (int) – Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

  • metadata (dict(str, str)) – Name-value pairs associated with the blob as metadata.

Keyword Arguments
  • overwrite (bool) – Whether the blob to be uploaded should overwrite the current data. If True, upload_blob will overwrite the existing data. If set to False, the operation will fail with ResourceExistsError. The exception to the above is with append blob types: if set to False and the data already exists, an error will not be raised and the data will be appended to the existing blob. If overwrite=True, the existing append blob will be deleted and a new one created. Defaults to False.

  • content_settings (ContentSettings) – ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

  • validate_content (bool) – If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used, because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

  • lease (BlobLeaseClient or str) – Required if the container has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blobs with a matching value, e.g. "tagname"='my tag'.

    New in version 12.4.0.
  • timeout (int) – The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

  • premium_page_blob_tier (PremiumPageBlobTier) – A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

  • standard_blob_tier (StandardBlobTier) – A standard blob tier value to set the blob to. For this version of the library, this is only applicable to block blobs on standard storage accounts.

  • maxsize_condition (int) – Optional conditional header. The max length in bytes permitted for the append blob. If the Append Block operation would cause the blob to exceed that limit or if the blob size is already greater than the value specified in this header, the request will fail with MaxBlobSizeConditionNotMet error (HTTP status code 412 - Precondition Failed).

  • max_concurrency (int) – Maximum number of parallel connections to use when the blob size exceeds 64MB.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • encoding (str) – Defaults to UTF-8.

Returns

A BlobClient to interact with the newly uploaded blob.

Return type

BlobClient

Example:

Upload blob to the container.
with open(SOURCE_FILE, "rb") as data:
    blob_client = container_client.upload_blob(name="myblob", data=data)

properties = blob_client.get_blob_properties()
walk_blobs(name_starts_with: Optional[str] = None, include: Optional[Any] = None, delimiter: str = '/', **kwargs: Optional[Any]) → azure.core.paging.ItemPaged[azure.storage.blob._models.BlobProperties][source]

Returns a generator to list the blobs under the specified container. The generator will lazily follow the continuation tokens returned by the service. This operation will list blobs in accordance with a hierarchy, as delimited by the specified delimiter character.

Parameters
  • name_starts_with (str) – Filters the results to return only blobs whose names begin with the specified prefix.

  • include (list[str]) – Specifies one or more additional datasets to include in the response. Options include: ‘snapshots’, ‘metadata’, ‘uncommittedblobs’, ‘copy’, ‘deleted’.

  • delimiter (str) – When the request includes this parameter, the operation returns a BlobPrefix element in the response body that acts as a placeholder for all blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Returns

An iterable (auto-paging) response of BlobProperties.

Return type

ItemPaged[BlobProperties]
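
Example:

Illustrative sketch (not part of the original reference); assumes container_client is an existing ContainerClient for the target container.
# Walk the container one virtual 'folder' level at a time
for item in container_client.walk_blobs(delimiter='/'):
    # item is a BlobProperties for a blob, or a BlobPrefix for a virtual folder
    print(item.name)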

property api_version

The version of the Storage API used for requests.

Type

str

property location_mode

The location mode that the client is currently using.

By default this will be “primary”. Options include “primary” and “secondary”.

Type

str

property primary_endpoint

The full primary endpoint URL.

Type

str

property primary_hostname

The hostname of the primary endpoint.

Type

str

property secondary_endpoint

The full secondary endpoint URL if configured.

If not available a ValueError will be raised. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str

Raises

ValueError

property secondary_hostname

The hostname of the secondary endpoint.

If not available this will be None. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str or None

property url

The full endpoint URL to this entity, including SAS token if used.

This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode().

class azure.storage.blob.BlobClient(account_url: str, container_name: str, blob_name: str, snapshot: Union[str, Dict[str, Any], None] = None, credential: Optional[Any] = None, **kwargs: Any)[source]

A client to interact with a specific blob, although that blob may not yet exist.

Parameters
  • account_url (str) – The URI to the storage account. In order to create a client given the full URI to the blob, use the from_blob_url() classmethod.

  • container_name (str) – The container name for the blob.

  • blob_name (str) – The name of the blob with which to interact. If specified, this value will override a blob value specified in the blob URL.

  • snapshot (str) – The optional blob snapshot on which to operate. This can be the snapshot ID string or the response returned from create_snapshot().

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. If the URL already has a SAS token, specifying an explicit credential will take priority.

Keyword Arguments
  • api_version (str) –

    The Storage API version to use for requests. Default value is ‘2019-07-07’. Setting to an older version may result in reduced feature compatibility.

    New in version 12.2.0.

  • secondary_hostname (str) – The hostname of the secondary endpoint.

  • max_block_size (int) – The maximum chunk size for uploading a block blob in chunks. Defaults to 4*1024*1024, or 4MB.

  • max_single_put_size (int) – If the blob size is less than or equal max_single_put_size, then the blob will be uploaded with only one http PUT request. If the blob size is larger than max_single_put_size, the blob will be uploaded in chunks. Defaults to 64*1024*1024, or 64MB.

  • min_large_block_upload_threshold (int) – The minimum chunk size required to use the memory efficient algorithm when uploading a block blob. Defaults to 4*1024*1024+1.

  • use_byte_buffer (bool) – Use a byte buffer for block blob uploads. Defaults to False.

  • max_page_size (int) – The maximum chunk size for uploading a page blob. Defaults to 4*1024*1024, or 4MB.

  • max_single_get_size (int) – The maximum size for a blob to be downloaded in a single call, the exceeded part will be downloaded in chunks (could be parallel). Defaults to 32*1024*1024, or 32MB.

  • max_chunk_get_size (int) – The maximum chunk size used for downloading a blob. Defaults to 4*1024*1024, or 4MB.

Example:

Creating the BlobClient from a URL to a public blob (no auth needed).
from azure.storage.blob import BlobClient
blob_client = BlobClient.from_blob_url(blob_url="https://account.blob.core.windows.net/container/blob-name")
Creating the BlobClient from a SAS URL to a blob.
from azure.storage.blob import BlobClient

sas_url = "https://account.blob.core.windows.net/container/blob-name?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D"
blob_client = BlobClient.from_blob_url(sas_url)
abort_copy(copy_id: Union[str, Dict[str, Any], azure.storage.blob._models.BlobProperties], **kwargs: Any) → None[source]

Abort an ongoing copy operation.

This will leave a destination blob with zero length and full metadata. This will raise an error if the copy operation has already ended.

Parameters

copy_id (str or BlobProperties) – The copy operation to abort. This can be either an ID string, or an instance of BlobProperties.

Return type

None

Example:

Abort copying a blob from URL.
# Passing in copy id to abort copy operation
copied_blob.abort_copy(copy_id)

# check copy status
props = copied_blob.get_blob_properties()
print(props.copy.status)
acquire_lease(lease_duration: int = -1, lease_id: Optional[str] = None, **kwargs: Any) → azure.storage.blob._lease.BlobLeaseClient[source]

Requests a new lease.

If the blob does not have an active lease, the Blob Service creates a lease on the blob and returns a new lease.

Parameters
  • lease_duration (int) – Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. Default is -1 (infinite lease).

  • lease_id (str) – Proposed lease ID, in a GUID string format. The Blob Service returns 400 (Invalid request) if the proposed lease ID is not in the correct format.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

A BlobLeaseClient object.

Return type

BlobLeaseClient

Example:

Acquiring a lease on a blob.
# Acquire a lease on the blob
lease = blob_client.acquire_lease()

# Delete blob by passing in the lease
blob_client.delete_blob(lease=lease)
append_block(data: Union[AnyStr, Iterable[AnyStr], IO[AnyStr]], length: Optional[int] = None, **kwargs) → Dict[str, Union[str, datetime, int]][source]

Commits a new block of data to the end of the existing append blob.

Parameters
  • data (bytes or str or Iterable) – Content of the block. This can be bytes, text, an iterable or a file-like object.

  • length (int) – Size of the block in bytes.

Keyword Arguments
  • validate_content (bool) – If true, calculates an MD5 hash of the block content. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob.

  • maxsize_condition (int) – Optional conditional header. The max length in bytes permitted for the append blob. If the Append Block operation would cause the blob to exceed that limit or if the blob size is already greater than the value specified in this header, the request will fail with MaxBlobSizeConditionNotMet error (HTTP status code 412 - Precondition Failed).

  • appendpos_condition (int) – Optional conditional header, used only for the Append Block operation. A number indicating the byte offset to compare. Append Block will succeed only if the append position is equal to this number. If it is not, the request will fail with the AppendPositionConditionNotMet error (HTTP status code 412 - Precondition Failed).

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • encoding (str) – Defaults to UTF-8.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag, last modified, append offset, committed block count).

Return type

dict(str, Any)
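
Example:

Illustrative sketch (not part of the original reference); assumes blob_client refers to an append blob that does not yet exist.
# Create the append blob, then commit two blocks of data to its end
blob_client.create_append_blob()
blob_client.append_block(b"first chunk of data\n")
result = blob_client.append_block(b"second chunk of data\n")
print(result['etag'])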

append_block_from_url(copy_source_url: str, source_offset: Optional[int] = None, source_length: Optional[int] = None, **kwargs) → Dict[str, Union[str, datetime, int]][source]

Creates a new block to be committed as part of a blob, where the contents are read from a source url.

Parameters
  • copy_source_url (str) – The URL of the source data. It can point to any Azure Blob or File, that is either public or has a shared access signature attached.

  • source_offset (int) – This indicates the start of the range of bytes (inclusive) that has to be taken from the copy source.

  • source_length (int) – This indicates the end of the range of bytes that has to be taken from the copy source.

Keyword Arguments
  • source_content_md5 (bytearray) – If given, the service will calculate the MD5 hash of the block content and compare against this value.

  • maxsize_condition (int) – Optional conditional header. The max length in bytes permitted for the append blob. If the Append Block operation would cause the blob to exceed that limit or if the blob size is already greater than the value specified in this header, the request will fail with MaxBlobSizeConditionNotMet error (HTTP status code 412 - Precondition Failed).

  • appendpos_condition (int) – Optional conditional header, used only for the Append Block operation. A number indicating the byte offset to compare. Append Block will succeed only if the append position is equal to this number. If it is not, the request will fail with the AppendPositionConditionNotMet error (HTTP status code 412 - Precondition Failed).

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – The destination ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The destination match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • source_if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the source resource has been modified since the specified time.

  • source_if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the source resource has not been modified since the specified date/time.

  • source_etag (str) – The source ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • source_match_condition (MatchConditions) – The source match condition to use upon the etag.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.
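
Example:

Illustrative sketch (not part of the original reference); source_blob_sas_url is an assumed SAS URL to a readable source blob, and blob_client refers to an existing append blob.
# Append the first 512 bytes of the source blob to the destination append blob
blob_client.append_block_from_url(
    copy_source_url=source_blob_sas_url,
    source_offset=0,
    source_length=512)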

clear_page(offset: int, length: int, **kwargs: Any) → Dict[str, Union[str, datetime]][source]

Clears a range of pages.

Parameters
  • offset (int) – Start of byte range to use for writing to a section of the blob. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

  • length (int) – Number of bytes to use for writing to a section of the blob. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_sequence_number_lte (int) – If the blob’s sequence number is less than or equal to the specified value, the request proceeds; otherwise it fails.

  • if_sequence_number_lt (int) – If the blob’s sequence number is less than the specified value, the request proceeds; otherwise it fails.

  • if_sequence_number_eq (int) – If the blob’s sequence number is equal to the specified value, the request proceeds; otherwise it fails.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)
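
Example:

Illustrative sketch (not part of the original reference); assumes blob_client refers to an existing page blob of at least 1024 bytes.
# Clear the first 512-byte page (offset and length must be 512-byte aligned)
blob_client.clear_page(offset=0, length=512)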

close()

This method closes the sockets opened by the client. It need not be called when the client is used as a context manager.

commit_block_list(block_list: List[BlobBlock], content_settings: Optional[ContentSettings] = None, metadata: Optional[Dict[str, str]] = None, **kwargs) → Dict[str, Union[str, datetime]][source]

The Commit Block List operation writes a blob by specifying the list of block IDs that make up the blob.

Parameters
  • block_list (list) – List of BlobBlock objects.

  • content_settings (ContentSettings) – ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

  • metadata (dict[str, str]) – Name-value pairs associated with the blob as metadata.

Keyword Arguments
  • tags (dict(str, str)) –

    Name-value pairs associated with the blob as tag. Tags are case-sensitive. The tag set may contain at most 10 tags. Tag keys must be between 1 and 128 characters, and tag values must be between 0 and 256 characters. Valid tag key and value characters include: lowercase and uppercase letters, digits (0-9), space (` `), plus (+), minus (-), period (.), solidus (/), colon (:), equals (=), underscore (_)

    New in version 12.4.0.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • validate_content (bool) – If true, calculates an MD5 hash of the page content. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on a destination blob with a matching value.

    New in version 12.4.0.

  • standard_blob_tier (StandardBlobTier) – A standard blob tier value to set the blob to. For this version of the library, this is only applicable to block blobs on standard storage accounts.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)
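
Example:

Illustrative sketch (not part of the original reference); assumes blob_client is a BlobClient for a block blob. The block IDs used here are arbitrary but must all be of equal length.
from azure.storage.blob import BlobBlock

# Stage two blocks, then commit them in order to form the blob's content
blob_client.stage_block(block_id="block-1", data=b"hello ")
blob_client.stage_block(block_id="block-2", data=b"world")
blob_client.commit_block_list([BlobBlock(block_id="block-1"), BlobBlock(block_id="block-2")])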

create_append_blob(content_settings: Optional[ContentSettings] = None, metadata: Optional[Dict[str, str]] = None, **kwargs: Any) → Dict[str, Union[str, datetime]][source]

Creates a new Append Blob.

Parameters
  • content_settings (ContentSettings) – ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

  • metadata (dict(str, str)) – Name-value pairs associated with the blob as metadata.

Keyword Arguments
  • tags (dict(str, str)) –

    Name-value pairs associated with the blob as tag. Tags are case-sensitive. The tag set may contain at most 10 tags. Tag keys must be between 1 and 128 characters, and tag values must be between 0 and 256 characters. Valid tag key and value characters include: lowercase and uppercase letters, digits (0-9), space (` `), plus (+), minus (-), period (.), solidus (/), colon (:), equals (=), underscore (_)

    New in version 12.4.0.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict[str, Any]
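
Example:

Illustrative sketch (not part of the original reference); assumes blob_client is a BlobClient for a blob that does not yet exist.
# Create an empty append blob and add a first block of data to it
blob_client.create_append_blob()
blob_client.append_block(b"log line 1\n")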

create_page_blob(size: int, content_settings: Optional[ContentSettings] = None, metadata: Optional[Dict[str, str]] = None, premium_page_blob_tier: Optional[Union[str, PremiumPageBlobTier]] = None, **kwargs) → Dict[str, Union[str, datetime]][source]

Creates a new Page Blob of the specified size.

Parameters
  • size (int) – This specifies the maximum size for the page blob, up to 1 TB. The page blob size must be aligned to a 512-byte boundary.

  • content_settings (ContentSettings) – ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

  • metadata (dict(str, str)) – Name-value pairs associated with the blob as metadata.

  • premium_page_blob_tier (PremiumPageBlobTier) – A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

Keyword Arguments
  • tags (dict(str, str)) –

    Name-value pairs associated with the blob as tag. Tags are case-sensitive. The tag set may contain at most 10 tags. Tag keys must be between 1 and 128 characters, and tag values must be between 0 and 256 characters. Valid tag key and value characters include: lowercase and uppercase letters, digits (0-9), space (` `), plus (+), minus (-), period (.), solidus (/), colon (:), equals (=), underscore (_)

    New in version 12.4.0.

  • sequence_number (int) – Only for Page blobs. The sequence number is a user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict[str, Any]
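
Example:

Illustrative sketch (not part of the original reference); assumes blob_client is a BlobClient for a page blob that does not yet exist.
# Create an empty 1 MiB page blob (the size must be a multiple of 512 bytes)
blob_client.create_page_blob(size=1024 * 1024)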

create_snapshot(metadata: Optional[Dict[str, str]] = None, **kwargs: Any) → Dict[str, Union[str, datetime]][source]

Creates a snapshot of the blob.

A snapshot is a read-only version of a blob that’s taken at a point in time. It can be read, copied, or deleted, but not modified. Snapshots provide a way to back up a blob as it appears at a moment in time.

A snapshot of a blob has the same name as the base blob from which the snapshot is taken, with a DateTime value appended to indicate the time at which the snapshot was taken.

Parameters

metadata (dict(str, str)) – Name-value pairs associated with the blob as metadata.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on a destination blob with a matching value.

    New in version 12.4.0.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Snapshot ID, Etag, and last modified).

Return type

dict[str, Any]

Example:

Create a snapshot of the blob.
# Create a read-only snapshot of the blob at this point in time
snapshot_blob = blob_client.create_snapshot()

# Get the snapshot ID
print(snapshot_blob.get('snapshot'))
delete_blob(delete_snapshots: bool = False, **kwargs: Any) → None[source]

Marks the specified blob for deletion.

The blob is later deleted during garbage collection. Note that in order to delete a blob, you must delete all of its snapshots. You can delete both at the same time with the delete_blob() operation.

If a delete retention policy is enabled for the service, then this operation soft deletes the blob and retains it for a specified number of days. After that period, the blob’s data is removed from the service during garbage collection. A soft-deleted blob is accessible through list_blobs() by specifying the include=[‘deleted’] option, and can be restored using the undelete() operation.

Parameters

delete_snapshots (str) –

Required if the blob has associated snapshots. Values include:
  • ”only”: Deletes only the blob’s snapshots.

  • ”include”: Deletes the blob along with all snapshots.

Keyword Arguments
  • version_id (str) –

    The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to delete.

    New in version 12.4.0.

    This keyword argument was introduced in API version ‘2019-12-12’.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. If specified, delete_blob only succeeds if the blob’s lease is active and matches this ID. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Delete a blob.
blob_client.delete_blob()
download_blob(offset: Optional[int] = None, length: Optional[int] = None, **kwargs: Any) → azure.storage.blob._download.StorageStreamDownloader[source]

Downloads a blob to a StorageStreamDownloader. Use the readall() method to read all of the content, or readinto() to download the blob into a stream.

Parameters
  • offset (int) – Start of byte range to use for downloading a section of the blob. Must be set if length is provided.

  • length (int) – Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

Keyword Arguments
  • version_id (str) –

    The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to download.

    New in version 12.4.0.

    This keyword argument was introduced in API version ‘2019-12-12’.

  • validate_content (bool) – If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. If specified, download_blob only succeeds if the blob’s lease is active and matches this ID. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • max_concurrency (int) – The number of parallel connections with which to download.

  • encoding (str) – Encoding to decode the downloaded bytes. Default is None, i.e. no decoding.

  • timeout (int) – The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

Returns

A streaming object (StorageStreamDownloader)

Return type

StorageStreamDownloader

Example:

Download a blob.
with open(DEST_FILE, "wb") as my_blob:
    download_stream = blob_client.download_blob()
    my_blob.write(download_stream.readall())
exists(**kwargs: Any) → bool[source]

Returns True if a blob exists with the defined parameters, and returns False otherwise.

Keyword Arguments
  • version_id (str) – The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to check if it exists.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

True if the blob exists, False otherwise.

Return type

bool
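
Example:

Illustrative sketch (not part of the original reference); assumes blob_client is an existing BlobClient.
# Only fetch properties if the blob actually exists
if blob_client.exists():
    properties = blob_client.get_blob_properties()
    print(properties.size)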

classmethod from_blob_url(blob_url: str, credential: Optional[Any] = None, snapshot: Union[str, Dict[str, Any], None] = None, **kwargs: Any) → azure.storage.blob._blob_client.BlobClient[source]

Create a BlobClient from a blob URL. This does not support customized blob URLs with ‘/’ in the blob name.

Parameters
  • blob_url (str) – The full endpoint URL to the Blob, including SAS token and snapshot if used. This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode.

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token, or the connection string already has shared access key values. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. Credentials provided here will take precedence over those in the connection string.

  • snapshot (str) – The optional blob snapshot on which to operate. This can be the snapshot ID string or the response returned from create_snapshot(). If specified, this will override the snapshot in the url.

Returns

A Blob client.

Return type

BlobClient

classmethod from_connection_string(conn_str: str, container_name: str, blob_name: str, snapshot: Optional[str] = None, credential: Optional[Any] = None, **kwargs: Any) → azure.storage.blob._blob_client.BlobClient[source]

Create BlobClient from a Connection String.

Parameters
  • conn_str (str) – A connection string to an Azure Storage account.

  • container_name (str) – The container name for the blob.

  • blob_name (str) – The name of the blob with which to interact.

  • snapshot (str) – The optional blob snapshot on which to operate. This can be the snapshot ID string or the response returned from create_snapshot().

  • credential – The credentials with which to authenticate. This is optional if the account URL already has a SAS token, or the connection string already has shared access key values. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. Credentials provided here will take precedence over those in the connection string.

Returns

A Blob client.

Return type

BlobClient

Example:

Creating the BlobClient from a connection string.
from azure.storage.blob import BlobClient
blob_client = BlobClient.from_connection_string(
    self.connection_string, container_name="mycontainer", blob_name="blobname.txt")
get_account_information(**kwargs: Any) → Dict[str, str][source]

Gets information related to the storage account in which the blob resides.

The information can also be retrieved if the user has a SAS to a container or blob. The keys in the returned dictionary include ‘sku_name’ and ‘account_kind’.

Returns

A dict of account information (SKU and account type).

Return type

dict(str, str)
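
Example:

Illustrative sketch (not part of the original reference); assumes blob_client is an existing BlobClient.
# Inspect the SKU and kind of the storage account hosting the blob
info = blob_client.get_account_information()
print(info['sku_name'], info['account_kind'])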

get_blob_properties(**kwargs: Any) → azure.storage.blob._models.BlobProperties[source]

Returns all user-defined metadata, standard HTTP properties, and system properties for the blob. It does not return the content of the blob.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • version_id (str) –

    The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to get properties.

    New in version 12.4.0.

    This keyword argument was introduced in API version ‘2019-12-12’.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

BlobProperties

Return type

BlobProperties

Example:

Getting the properties for a blob.
properties = blob_client.get_blob_properties()
get_blob_tags(**kwargs: Any) → Dict[str, str][source]

The Get Tags operation enables users to get tags on a blob, a specific blob version, or a snapshot.

New in version 12.4.0: This operation was introduced in API version ‘2019-12-12’.

Keyword Arguments

  • version_id (str) – The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob whose tags are to be retrieved.

  • if_tags_match_condition (str) – Specify a SQL where clause on blob tags to operate only on a destination blob with a matching value.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Key value pairs of blob tags.

Return type

Dict[str, str]
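
Example:

Illustrative sketch (not part of the original reference); assumes blob_client refers to an existing blob and that the account uses API version ‘2019-12-12’ or later.
# Retrieve the blob's tags as a plain dictionary
tags = blob_client.get_blob_tags()
for key, value in tags.items():
    print(key, value)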

get_block_list(block_list_type: Optional[str] = 'committed', **kwargs: Any) → Tuple[List[azure.storage.blob._models.BlobBlock], List[azure.storage.blob._models.BlobBlock]][source]

The Get Block List operation retrieves the list of blocks that have been uploaded as part of a block blob.

Parameters

block_list_type (str) – Specifies whether to return the list of committed blocks, the list of uncommitted blocks, or both lists together. Possible values include: ‘committed’, ‘uncommitted’, ‘all’

Keyword Arguments

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on a destination blob with a matching value.

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

A tuple of two lists - committed and uncommitted blocks

Return type

tuple(list(BlobBlock), list(BlobBlock))
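
Example:

Illustrative sketch (not part of the original reference); assumes blob_client refers to an existing block blob.
# List both committed and uncommitted blocks
committed, uncommitted = blob_client.get_block_list(block_list_type="all")
for block in committed:
    print(block.id, block.size)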

get_page_range_diff_for_managed_disk(previous_snapshot_url: str, offset: Optional[int] = None, length: Optional[int] = None, **kwargs) → Tuple[List[Dict[str, int]], List[Dict[str, int]]][source]

Returns the list of valid page ranges for a managed disk or snapshot.

Note

This operation is only available for managed disk accounts.

New in version 12.2.0: This operation was introduced in API version ‘2019-07-07’.

Parameters
  • previous_snapshot_url (str) – Specifies the URL of a previous snapshot of the managed disk. The response will only contain pages that were changed between the target blob and its previous snapshot.

  • offset (int) – Start of byte range to use for getting valid page ranges. If no length is given, all bytes after the offset will be searched. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

  • length (int) – Number of bytes to use for getting valid page ranges. If length is given, offset must be provided. This range will return valid page ranges from the offset start up to the specified length. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

A tuple of two lists of page ranges as dictionaries with ‘start’ and ‘end’ keys. The first element is the list of filled page ranges, and the second element is the list of cleared page ranges.

Return type

tuple(list(dict(str, int)), list(dict(str, int)))
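
Example:

Illustrative sketch (not part of the original reference); assumes blob_client refers to a managed disk and previous_snapshot_url is the URL of one of its earlier snapshots.
# Compare the current disk against the previous snapshot
filled, cleared = blob_client.get_page_range_diff_for_managed_disk(previous_snapshot_url)
for page_range in filled:
    print(page_range['start'], page_range['end'])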

get_page_ranges(offset: Optional[int] = None, length: Optional[int] = None, previous_snapshot_diff: Union[str, Dict[str, Any], None] = None, **kwargs) → Tuple[List[Dict[str, int]], List[Dict[str, int]]][source]

Returns the list of valid page ranges for a Page Blob or snapshot of a page blob.

Parameters
  • offset (int) – Start of byte range to use for getting valid page ranges. If no length is given, all bytes after the offset will be searched. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

  • length (int) – Number of bytes to use for getting valid page ranges. If length is given, offset must be provided. This range will return valid page ranges from the offset start up to the specified length. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

  • previous_snapshot_diff (str) – The snapshot diff parameter that contains an opaque DateTime value that specifies a previous blob snapshot to be compared against a more recent snapshot or the current blob.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

A tuple of two lists of page ranges as dictionaries with ‘start’ and ‘end’ keys. The first element is the list of filled page ranges, and the second element is the list of cleared page ranges.

Return type

tuple(list(dict(str, int)), list(dict(str, int)))
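
Example:

Illustrative sketch (not part of the original reference); assumes blob_client refers to an existing page blob.
# List the valid (filled) and cleared page ranges of the page blob
filled, cleared = blob_client.get_page_ranges()
for page_range in filled:
    print(page_range['start'], page_range['end'])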

query_blob(query_expression: str, **kwargs: Any) → azure.storage.blob._quick_query_helper.BlobQueryReader[source]

Enables users to select/project on blob or blob snapshot data by providing simple query expressions. This operation returns a BlobQueryReader; users need to use readall() or readinto() to get the query data.

Parameters

query_expression (str) – Required. A query statement.

Keyword Arguments
  • on_error (Callable[Exception]) – A function to be called on any processing errors returned by the service.

  • blob_format (DelimitedTextDialect or DelimitedJsonDialect) – Optional. Defines the serialization of the data currently stored in the blob. The default is to treat the blob data as CSV data formatted in the default dialect. This can be overridden with a custom DelimitedTextDialect, or alternatively a DelimitedJsonDialect.

  • output_format (DelimitedTextDialect or DelimitedJsonDialect) – Optional. Defines the output serialization for the data stream. By default the data will be returned as it is represented in the blob. By providing an output format, the blob data will be reformatted according to that profile. This value can be a DelimitedTextDialect or a DelimitedJsonDialect.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

A streaming object (BlobQueryReader)

Return type

BlobQueryReader

Example:

Select/project on blob or blob snapshot data by providing simple query expressions.
errors = []
def on_error(error):
    errors.append(error)

# upload the csv file
blob_client = blob_service_client.get_blob_client(container_name, "csvfile")
with open("./sample-blobs/quick_query.csv", "rb") as stream:
    blob_client.upload_blob(stream, overwrite=True)

# select the second column of the csv file
query_expression = "SELECT _2 from BlobStorage"
input_format = DelimitedTextDialect(delimiter=',', quotechar='"', lineterminator='\n', escapechar="", has_header=False)
output_format = DelimitedJsonDialect(delimiter='\n')
reader = blob_client.query_blob(query_expression, on_error=on_error, blob_format=input_format, output_format=output_format)
content = reader.readall()
resize_blob(size: int, **kwargs: Any) → Dict[str, Union[str, datetime]][source]

Resizes a page blob to the specified size.

If the specified value is less than the current size of the blob, then all pages above the specified value are cleared.

Parameters

size (int) – Size used to resize blob. Maximum size for a page blob is up to 1 TB. The page blob size must be aligned to a 512-byte boundary.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • premium_page_blob_tier (PremiumPageBlobTier) – A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)
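
Example:

A minimal sketch (not from the library's shipped samples), assuming blob_client is a BlobClient for an existing page blob.
# Grow the page blob to 1 MiB; the new size must be a multiple of 512 bytes
props = blob_client.resize_blob(1024 * 1024)
print(props)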

seal_append_blob(**kwargs) → Dict[str, Union[str, datetime, int]][source]

The Seal operation seals the Append Blob to make it read-only.

New in version 12.4.0.

Keyword Arguments
  • appendpos_condition (int) – Optional conditional header, used only for the Append Block operation. A number indicating the byte offset to compare. Append Block will succeed only if the append position is equal to this number. If it is not, the request will fail with the AppendPositionConditionNotMet error (HTTP status code 412 - Precondition Failed).

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag, last modified, append offset, committed block count).

Return type

dict(str, Any)
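
Example:

A minimal sketch (not from the library's shipped samples), assuming blob_client is a BlobClient for an existing append blob and API version 2019-12-12 or later.
# Seal the append blob; further append operations will fail once sealed
blob_client.seal_append_blob()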

set_blob_metadata(metadata: Optional[Dict[str, str]] = None, **kwargs: Any) → Dict[str, Union[str, datetime]][source]

Sets user-defined metadata for the blob as one or more name-value pairs.

Parameters

metadata (dict(str, str)) – Dict containing name and value pairs. Each call to this operation replaces all existing metadata attached to the blob. To remove all metadata from the blob, call this operation with no metadata headers.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified)
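
Example:

A minimal sketch (not from the library's shipped samples); the metadata keys and values shown are arbitrary placeholders.
# Replace all existing metadata on the blob with these pairs
blob_client.set_blob_metadata(metadata={"category": "reports", "owner": "data-team"})

# Remove all metadata by calling the operation with no metadata
blob_client.set_blob_metadata()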

set_blob_tags(tags: Optional[Dict[str, str]] = None, **kwargs: Any) → Dict[str, Any][source]

The Set Tags operation enables users to set tags on a blob or specific blob version, but not snapshot.

Each call to this operation replaces all existing tags attached to the blob. To remove all tags from the blob, call this operation with no tags set.

New in version 12.4.0: This operation was introduced in API version ‘2019-12-12’.

Parameters

tags (dict(str, str)) – Name-value pairs associated with the blob as tag. Tags are case-sensitive. The tag set may contain at most 10 tags. Tag keys must be between 1 and 128 characters, and tag values must be between 0 and 256 characters. Valid tag key and value characters include: lowercase and uppercase letters, digits (0-9), space (` `), plus (+), minus (-), period (.), solidus (/), colon (:), equals (=), underscore (_)

Keyword Arguments
  • version_id (str) – The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob to add tags to.

  • validate_content (bool) – If true, calculates an MD5 hash of the tags content. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob.

  • if_tags_match_condition (str) – Specify a SQL where clause on blob tags to operate only on destination blob with a matching value.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified)

Return type

Dict[str, Any]
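
Example:

A minimal sketch (not from the library's shipped samples); the tag names and values are arbitrary placeholders.
# Replace all existing tags on the blob
blob_client.set_blob_tags({"project": "archive", "status": "processed"})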

set_http_headers(content_settings: Optional[ContentSettings] = None, **kwargs: Any) → None[source]

Sets system properties on the blob.

If one property is set for the content_settings, all properties will be overridden.

Parameters

content_settings (ContentSettings) – ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified)

Return type

Dict[str, Any]
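
Example:

A minimal sketch (not from the library's shipped samples) that sets the content type and cache control headers; the values are placeholders.
from azure.storage.blob import ContentSettings

blob_client.set_http_headers(
    content_settings=ContentSettings(
        content_type="application/json",
        cache_control="max-age=3600",
    )
)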

set_premium_page_blob_tier(premium_page_blob_tier: Union[str, PremiumPageBlobTier], **kwargs: Any) → None[source]

Sets the page blob tiers on the blob. This API is only supported for page blobs on premium accounts.

Parameters

premium_page_blob_tier (PremiumPageBlobTier) – A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

Return type

None
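
Example:

A minimal sketch (not from the library's shipped samples), assuming blob_client refers to a page blob in a premium storage account.
from azure.storage.blob import PremiumPageBlobTier

blob_client.set_premium_page_blob_tier(PremiumPageBlobTier.P10)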

set_sequence_number(sequence_number_action: Union[str, SequenceNumberAction], sequence_number: Optional[str] = None, **kwargs: Any) → Dict[str, Union[str, datetime]][source]

Sets the blob sequence number.

Parameters
  • sequence_number_action (str) – This property indicates how the service should modify the blob’s sequence number. See SequenceNumberAction for more information.

  • sequence_number (str) – This property sets the blob’s sequence number. The sequence number is a user-controlled property that you can use to track requests and manage concurrency issues.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)
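
Example:

A minimal sketch (not from the library's shipped samples), assuming blob_client refers to an existing page blob.
from azure.storage.blob import SequenceNumberAction

# Set the page blob's sequence number to an explicit value
blob_client.set_sequence_number(SequenceNumberAction.Update, "7")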

set_standard_blob_tier(standard_blob_tier: Union[str, StandardBlobTier], **kwargs: Any) → None[source]

This operation sets the tier on a block blob.

A block blob’s tier determines Hot/Cool/Archive storage type. This operation does not update the blob’s ETag.

Parameters

standard_blob_tier (str or StandardBlobTier) – Indicates the tier to be set on the blob. Options include ‘Hot’, ‘Cool’, ‘Archive’. The hot tier is optimized for storing data that is accessed frequently. The cool storage tier is optimized for storing data that is infrequently accessed and stored for at least a month. The archive tier is optimized for storing data that is rarely accessed and stored for at least six months with flexible latency requirements.

Keyword Arguments
  • rehydrate_priority (RehydratePriority) – Indicates the priority with which to rehydrate an archived blob

  • version_id (str) –

    The version id parameter is an opaque DateTime value that, when present, specifies the version of the blob on which to set the tier.

    New in version 12.4.0.

    This keyword argument was introduced in API version ‘2019-12-12’.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

Return type

None
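
Example:

A minimal sketch (not from the library's shipped samples), assuming blob_client refers to a block blob on a standard storage account.
from azure.storage.blob import StandardBlobTier

blob_client.set_standard_blob_tier(StandardBlobTier.Cool)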

stage_block(block_id: str, data: Union[Iterable[AnyStr], IO[AnyStr]], length: Optional[int] = None, **kwargs) → Dict[str, Any][source]

Creates a new block to be committed as part of a blob.

Parameters
  • block_id (str) – A valid Base64 string value that identifies the block. Prior to encoding, the string must be less than or equal to 64 bytes in size. For a given blob, the length of the value specified for the block_id parameter must be the same size for each block.

  • data – The blob data.

  • length (int) – Size of the block.

Keyword Arguments
  • validate_content (bool) – If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • encoding (str) – Defaults to UTF-8.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob property dict.

Return type

dict[str, Any]
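
Example:

A minimal sketch (not from the library's shipped samples) that stages a single block and then commits it with commit_block_list; the block ID shown is an arbitrary placeholder.
import base64
from azure.storage.blob import BlobBlock

block_id = base64.b64encode(b"block-0001").decode()
blob_client.stage_block(block_id, b"data for the first block")
blob_client.commit_block_list([BlobBlock(block_id=block_id)])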

stage_block_from_url(block_id: str, source_url: str, source_offset: Optional[int] = None, source_length: Optional[int] = None, source_content_md5: Union[bytes, bytearray, None] = None, **kwargs) → Dict[str, Any][source]

Creates a new block to be committed as part of a blob where the contents are read from a URL.

Parameters
  • block_id (str) – A valid Base64 string value that identifies the block. Prior to encoding, the string must be less than or equal to 64 bytes in size. For a given blob, the length of the value specified for the block_id parameter must be the same size for each block.

  • source_url (str) – The URL.

  • source_offset (int) – Start of byte range to use for the block. Must be set if source length is provided.

  • source_length (int) – The size of the block in bytes.

  • source_content_md5 (bytearray) – Specify the md5 calculated for the range of bytes that must be read from the copy source.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob property dict.

Return type

dict[str, Any]
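
Example:

A minimal sketch (not from the library's shipped samples); source_url is a hypothetical blob URL that must be public or include a SAS token, and the block ID is an arbitrary placeholder.
import base64

source_url = "https://sourceaccount.blob.core.windows.net/sourcecontainer/sourceblob?<sas-token>"
block_id = base64.b64encode(b"block-0001").decode()
blob_client.stage_block_from_url(block_id, source_url, source_offset=0, source_length=4 * 1024 * 1024)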

start_copy_from_url(source_url: str, metadata: Optional[Dict[str, str]] = None, incremental_copy: bool = False, **kwargs: Any) → Dict[str, Union[str, datetime]][source]

Copies a blob asynchronously.

This operation returns a copy operation object that can be used to wait on the completion of the operation, as well as check status or abort the copy operation. The Blob service copies blobs on a best-effort basis.

The source blob for a copy operation may be a block blob, an append blob, or a page blob. If the destination blob already exists, it must be of the same blob type as the source blob. Any existing destination blob will be overwritten. The destination blob cannot be modified while a copy operation is in progress.

When copying from a page blob, the Blob service creates a destination page blob of the source blob’s length, initially containing all zeroes. Then the source page ranges are enumerated, and non-empty ranges are copied.

For a block blob or an append blob, the Blob service creates a committed blob of zero length before returning from this operation. When copying from a block blob, all committed blocks and their block IDs are copied. Uncommitted blocks are not copied. At the end of the copy operation, the destination blob will have the same committed block count as the source.

When copying from an append blob, all committed blocks are copied. At the end of the copy operation, the destination blob will have the same committed block count as the source.

For all blob types, you can call status() on the returned polling object to check the status of the copy operation, or wait() to block until the operation is complete. The final blob will be committed when the copy completes.

Parameters
  • source_url (str) –

    A URL of up to 2 KB in length that specifies a file or blob. The value should be URL-encoded as it would appear in a request URI. If the source is in another account, the source must either be public or must be authenticated via a shared access signature. If the source is public, no authentication is required. Examples: https://myaccount.blob.core.windows.net/mycontainer/myblob

    https://myaccount.blob.core.windows.net/mycontainer/myblob?snapshot=<DateTime>

    https://otheraccount.blob.core.windows.net/mycontainer/myblob?sastoken

  • metadata (dict(str, str)) – Name-value pairs associated with the blob as metadata. If no name-value pairs are specified, the operation will copy the metadata from the source blob or file to the destination blob. If one or more name-value pairs are specified, the destination blob is created with the specified metadata, and metadata is not copied from the source blob or file.

  • incremental_copy (bool) – Copies the snapshot of the source page blob to a destination page blob. The snapshot is copied such that only the differential changes between the previously copied snapshot and the snapshot being copied are transferred to the destination. The copied snapshots are complete copies of the original snapshot and can be read or copied from as usual. Defaults to False.

Keyword Arguments
  • tags (dict(str, str)) –

    Name-value pairs associated with the blob as tag. Tags are case-sensitive. The tag set may contain at most 10 tags. Tag keys must be between 1 and 128 characters, and tag values must be between 0 and 256 characters. Valid tag key and value characters include: lowercase and uppercase letters, digits (0-9), space (` `), plus (+), minus (-), period (.), solidus (/), colon (:), equals (=), underscore (_)

    New in version 12.4.0.

  • source_if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this conditional header to copy the blob only if the source blob has been modified since the specified date/time.

  • source_if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this conditional header to copy the blob only if the source blob has not been modified since the specified date/time.

  • source_etag (str) – The source ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • source_match_condition (MatchConditions) – The source match condition to use upon the etag.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this conditional header to copy the blob only if the destination blob has been modified since the specified date/time. If the destination blob has not been modified, the Blob service returns status code 412 (Precondition Failed).

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this conditional header to copy the blob only if the destination blob has not been modified since the specified date/time. If the destination blob has been modified, the Blob service returns status code 412 (Precondition Failed).

  • etag (str) – The destination ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The destination match condition to use upon the etag.

  • destination_lease (BlobLeaseClient or str) – The lease ID specified for this header must match the lease ID of the destination blob. If the request does not include the lease ID or it is not valid, the operation fails with status code 412 (Precondition Failed).

  • source_lease (BlobLeaseClient or str) – Specify this to perform the Copy Blob operation only if the lease ID given matches the active lease ID of the source blob.

  • timeout (int) – The timeout parameter is expressed in seconds.

  • premium_page_blob_tier (PremiumPageBlobTier) – A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

  • standard_blob_tier (StandardBlobTier) – A standard blob tier value to set the blob to. For this version of the library, this is only applicable to block blobs on standard storage accounts.

  • rehydrate_priority (RehydratePriority) – Indicates the priority with which to rehydrate an archived blob

  • seal_destination_blob (bool) –

    Seal the destination append blob. This operation is only for append blob.

    New in version 12.4.0.

  • requires_sync (bool) – Enforces that the service will not return a response until the copy is complete.

Returns

A dictionary of copy properties (etag, last_modified, copy_id, copy_status).

Return type

dict[str, str or datetime]

Example:

Copy a blob from a URL.
# Get the blob client with the source blob
source_blob = "http://www.gutenberg.org/files/59466/59466-0.txt"
copied_blob = blob_service_client.get_blob_client("copyblobcontainer", '59466-0.txt')

# start copy and check copy status
copy = copied_blob.start_copy_from_url(source_blob)
props = copied_blob.get_blob_properties()
print(props.copy.status)
undelete_blob(**kwargs: Any) → None[source]

Restores soft-deleted blobs or snapshots.

Operation will only be successful if used within the specified number of days set in the delete retention policy.

Keyword Arguments

timeout (int) – The timeout parameter is expressed in seconds.

Return type

None

Example:

Undeleting a blob.
# Undelete the blob before the retention policy expires
blob_client.undelete_blob()
upload_blob(data: Union[Iterable[AnyStr], IO[AnyStr]], blob_type: str = <BlobType.BlockBlob: 'BlockBlob'>, length: Optional[int] = None, metadata: Optional[Dict[str, str]] = None, **kwargs) → Any[source]

Creates a new blob from a data source with automatic chunking.

Parameters
  • data – The blob data to upload.

  • blob_type (BlobType) – The type of the blob. This can be either BlockBlob, PageBlob or AppendBlob. The default value is BlockBlob.

  • length (int) – Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

  • metadata (dict(str, str)) – Name-value pairs associated with the blob as metadata.

Keyword Arguments
  • tags (dict(str, str)) –

    Name-value pairs associated with the blob as tag. Tags are case-sensitive. The tag set may contain at most 10 tags. Tag keys must be between 1 and 128 characters, and tag values must be between 0 and 256 characters. Valid tag key and value characters include: lowercase and uppercase letters, digits (0-9), space (` `), plus (+), minus (-), period (.), solidus (/), colon (:), equals (=), underscore (_)

    New in version 12.4.0.

  • overwrite (bool) – Whether the blob to be uploaded should overwrite the current data. If True, upload_blob will overwrite the existing data. If set to False, the operation will fail with ResourceExistsError. The exception to the above is with Append blob types: if set to False and the data already exists, an error will not be raised and the data will be appended to the existing blob. If set overwrite=True, then the existing append blob will be deleted, and a new one created. Defaults to False.

  • content_settings (ContentSettings) – ContentSettings object used to set blob properties. Used to set content type, encoding, language, disposition, md5, and cache control.

  • validate_content (bool) – If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. If specified, upload_blob only succeeds if the blob’s lease is active and matches this ID. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • premium_page_blob_tier (PremiumPageBlobTier) – A page blob tier value to set the blob to. The tier correlates to the size of the blob and number of allowed IOPS. This is only applicable to page blobs on premium storage accounts.

  • standard_blob_tier (StandardBlobTier) – A standard blob tier value to set the blob to. For this version of the library, this is only applicable to block blobs on standard storage accounts.

  • maxsize_condition (int) – Optional conditional header. The max length in bytes permitted for the append blob. If the Append Block operation would cause the blob to exceed that limit or if the blob size is already greater than the value specified in this header, the request will fail with MaxBlobSizeConditionNotMet error (HTTP status code 412 - Precondition Failed).

  • max_concurrency (int) – Maximum number of parallel connections to use when the blob size exceeds 64MB.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • encoding (str) – Defaults to UTF-8.

  • timeout (int) – The timeout parameter is expressed in seconds. This method may make multiple calls to the Azure service and the timeout will apply to each call individually.

Returns

Blob-updated property dict (Etag and last modified)

Return type

dict[str, Any]

Example:

Upload a blob to the container.
# Upload content to block blob
with open(SOURCE_FILE, "rb") as data:
    blob_client.upload_blob(data, blob_type="BlockBlob")
upload_page(page: bytes, offset: int, length: int, **kwargs) → Dict[str, Union[str, datetime]][source]

The Upload Pages operation writes a range of pages to a page blob.

Parameters
  • page (bytes) – Content of the page.

  • offset (int) – Start of byte range to use for writing to a section of the blob. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

  • length (int) – Number of bytes to use for writing to a section of the blob. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

Keyword Arguments
  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • validate_content (bool) – If true, calculates an MD5 hash of the page content. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https, as https (the default), will already validate. Note that this MD5 hash is not stored with the blob.

  • if_sequence_number_lte (int) – If the blob’s sequence number is less than or equal to the specified value, the request proceeds; otherwise it fails.

  • if_sequence_number_lt (int) – If the blob’s sequence number is less than the specified value, the request proceeds; otherwise it fails.

  • if_sequence_number_eq (int) – If the blob’s sequence number is equal to the specified value, the request proceeds; otherwise it fails.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • encoding (str) – Defaults to UTF-8.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Blob-updated property dict (Etag and last modified).

Return type

dict(str, Any)
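
Example:

A minimal sketch (not from the library's shipped samples), assuming blob_client refers to an existing page blob of at least 512 bytes.
# Write a single 512-byte page at the start of the blob
page = b"x" * 512
blob_client.upload_page(page, offset=0, length=512)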

upload_pages_from_url(source_url: str, offset: int, length: int, source_offset: int, **kwargs) → Dict[str, Any][source]

The Upload Pages operation writes a range of pages to a page blob where the contents are read from a URL.

Parameters
  • source_url (str) – The URL of the source data. It can point to any Azure Blob or File that is either public or has a shared access signature attached.

  • offset (int) – Start of byte range to use for writing to a section of the blob. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

  • length (int) – Number of bytes to use for writing to a section of the blob. Pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the length must be a modulus of 512.

  • source_offset (int) – This indicates the start of the range of bytes (inclusive) that has to be taken from the copy source. The service will read the same number of bytes as the destination range (length-offset).

Keyword Arguments
  • source_content_md5 (bytes) – If given, the service will calculate the MD5 hash of the block content and compare against this value.

  • source_if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the source resource has been modified since the specified time.

  • source_if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the source resource has not been modified since the specified date/time.

  • source_etag (str) – The source ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • source_match_condition (MatchConditions) – The source match condition to use upon the etag.

  • lease (BlobLeaseClient or str) – Required if the blob has an active lease. Value can be a BlobLeaseClient object or the lease ID as a string.

  • if_sequence_number_lte (int) – If the blob’s sequence number is less than or equal to the specified value, the request proceeds; otherwise it fails.

  • if_sequence_number_lt (int) – If the blob’s sequence number is less than the specified value, the request proceeds; otherwise it fails.

  • if_sequence_number_eq (int) – If the blob’s sequence number is equal to the specified value, the request proceeds; otherwise it fails.

  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – The destination ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The destination match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • cpk (CustomerProvidedEncryptionKey) – Encrypts the data on the service-side with the given key. Use of customer-provided keys must be done over HTTPS. As the encryption key itself is provided in the request, a secure connection must be established to transfer the key.

  • encryption_scope (str) –

    A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

    New in version 12.2.0.

  • timeout (int) – The timeout parameter is expressed in seconds.
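
Example:

A minimal sketch (not from the library's shipped samples); source_url is a hypothetical page blob URL that must be public or include a SAS token, and both blobs must be at least 512 bytes.
source_url = "https://sourceaccount.blob.core.windows.net/sourcecontainer/sourcepageblob?<sas-token>"

# Copy the first 512-byte page from the source into this page blob
blob_client.upload_pages_from_url(source_url, offset=0, length=512, source_offset=0)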

property api_version

The version of the Storage API used for requests.

Type

str

property location_mode

The location mode that the client is currently using.

By default this will be “primary”. Options include “primary” and “secondary”.

Type

str

property primary_endpoint

The full primary endpoint URL.

Type

str

property primary_hostname

The hostname of the primary endpoint.

Type

str

property secondary_endpoint

The full secondary endpoint URL if configured.

If not available a ValueError will be raised. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str

Raises

ValueError

property secondary_hostname

The hostname of the secondary endpoint.

If not available this will be None. To explicitly specify a secondary hostname, use the optional secondary_hostname keyword argument on instantiation.

Type

str or None

property url

The full endpoint URL to this entity, including SAS token if used.

This could be either the primary endpoint, or the secondary endpoint depending on the current location_mode().

class azure.storage.blob.BlobType[source]

An enumeration.

AppendBlob = 'AppendBlob'
BlockBlob = 'BlockBlob'
PageBlob = 'PageBlob'
class azure.storage.blob.BlobLeaseClient(client: Union[BlobClient, ContainerClient], lease_id: Optional[str] = None)[source]

Creates a new BlobLeaseClient.

This client provides lease operations on a BlobClient or ContainerClient.

Variables
  • id (str) – The ID of the lease currently being maintained. This will be None if no lease has yet been acquired.

  • etag (str) – The ETag of the lease currently being maintained. This will be None if no lease has yet been acquired or modified.

  • last_modified (datetime) – The last modified timestamp of the lease currently being maintained. This will be None if no lease has yet been acquired or modified.

Parameters
  • client (BlobClient or ContainerClient) – The client of the blob or container to lease.

  • lease_id (str) – A string representing the lease ID of an existing lease. This value does not need to be specified in order to acquire a new lease, or break one.

acquire(lease_duration: int = -1, **kwargs: Any) → None[source]

Requests a new lease.

If the container or blob does not have an active lease, the Blob service creates a lease on it and returns a new lease ID.

Parameters

lease_duration (int) – Specifies the duration of the lease, in seconds, or negative one (-1) for a lease that never expires. A non-infinite lease can be between 15 and 60 seconds. A lease duration cannot be changed using renew or change. Default is -1 (infinite lease).

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Return type

None
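
Example:

A minimal sketch (not from the library's shipped samples), assuming blob_client is a BlobClient for an existing blob.
from azure.storage.blob import BlobLeaseClient

lease = BlobLeaseClient(blob_client)
lease.acquire(lease_duration=15)  # 15-second lease; -1 would be infinite
print(lease.id)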

break_lease(lease_break_period: Optional[int] = None, **kwargs: Any) → int[source]

Break the lease, if the container or blob has an active lease.

Once a lease is broken, it cannot be renewed. Any authorized request can break the lease; the request is not required to specify a matching lease ID. When a lease is broken, the lease break period is allowed to elapse, during which time no lease operation except break and release can be performed on the container or blob. When a lease is successfully broken, the response indicates the interval in seconds until a new lease can be acquired.

Parameters

lease_break_period (int) – This is the proposed duration of seconds that the lease should continue before it is broken, between 0 and 60 seconds. This break period is only used if it is shorter than the time remaining on the lease. If longer, the time remaining on the lease is used. A new lease will not be available before the break period has expired, but the lease may be held for longer than the break period. If this header does not appear with a break operation, a fixed-duration lease breaks after the remaining lease period elapses, and an infinite lease breaks immediately.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

Approximate time remaining in the lease period, in seconds.

Return type

int
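
Example:

A minimal sketch (not from the library's shipped samples), assuming lease is a BlobLeaseClient holding an active lease (see the acquire example above).
# Break the lease immediately and report when a new lease can be acquired
remaining = lease.break_lease(lease_break_period=0)
print(remaining)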

change(proposed_lease_id: str, **kwargs: Any) → None[source]

Change the lease ID of an active lease.

Parameters

proposed_lease_id (str) – Proposed lease ID, in a GUID string format. The Blob service returns 400 (Invalid request) if the proposed lease ID is not in the correct format.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

None

release(**kwargs: Any) → None[source]

Release the lease.

The lease may be released if the client lease id specified matches that associated with the container or blob. Releasing the lease allows another client to immediately acquire the lease for the container or blob as soon as the release is complete.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

None

renew(**kwargs: Any) → None[source]

Renews the lease.

The lease can be renewed if the lease ID specified in the lease client matches that associated with the container or blob. Note that the lease may be renewed even if it has expired as long as the container or blob has not been leased again since the expiration of that lease. When you renew a lease, the lease duration clock resets.

Keyword Arguments
  • if_modified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has been modified since the specified time.

  • if_unmodified_since (datetime) – A DateTime value. Azure expects the date value passed in to be UTC. If timezone is included, any non-UTC datetimes will be converted to UTC. If a date is passed in without timezone info, it is assumed to be UTC. Specify this header to perform the operation only if the resource has not been modified since the specified date/time.

  • etag (str) – An ETag value, or the wildcard character (*). Used to check if the resource has changed, and act according to the condition specified by the match_condition parameter.

  • match_condition (MatchConditions) – The match condition to use upon the etag.

  • if_tags_match_condition (str) –

    Specify a SQL where clause on blob tags to operate only on blob with a matching value. eg. "\"tagname\"='my tag'"

    New in version 12.4.0.

  • timeout (int) – The timeout parameter is expressed in seconds.

Returns

None

class azure.storage.blob.StorageErrorCode[source]

An enumeration.

account_already_exists = 'AccountAlreadyExists'
account_being_created = 'AccountBeingCreated'
account_is_disabled = 'AccountIsDisabled'
append_position_condition_not_met = 'AppendPositionConditionNotMet'
authentication_failed = 'AuthenticationFailed'
authorization_failure = 'AuthorizationFailure'
blob_already_exists = 'BlobAlreadyExists'
blob_archived = 'BlobArchived'
blob_being_rehydrated = 'BlobBeingRehydrated'
blob_not_archived = 'BlobNotArchived'
blob_not_found = 'BlobNotFound'
blob_overwritten = 'BlobOverwritten'
blob_tier_inadequate_for_content_length = 'BlobTierInadequateForContentLength'
block_count_exceeds_limit = 'BlockCountExceedsLimit'
block_list_too_long = 'BlockListTooLong'
cannot_change_to_lower_tier = 'CannotChangeToLowerTier'
cannot_delete_file_or_directory = 'CannotDeleteFileOrDirectory'
cannot_verify_copy_source = 'CannotVerifyCopySource'
client_cache_flush_delay = 'ClientCacheFlushDelay'
condition_headers_not_supported = 'ConditionHeadersNotSupported'
condition_not_met = 'ConditionNotMet'
container_already_exists = 'ContainerAlreadyExists'
container_being_deleted = 'ContainerBeingDeleted'
container_disabled = 'ContainerDisabled'
container_not_found = 'ContainerNotFound'
container_quota_downgrade_not_allowed = 'ContainerQuotaDowngradeNotAllowed'
content_length_larger_than_tier_limit = 'ContentLengthLargerThanTierLimit'
content_length_must_be_zero = 'ContentLengthMustBeZero'
copy_across_accounts_not_supported = 'CopyAcrossAccountsNotSupported'
copy_id_mismatch = 'CopyIdMismatch'
delete_pending = 'DeletePending'
destination_path_is_being_deleted = 'DestinationPathIsBeingDeleted'
directory_not_empty = 'DirectoryNotEmpty'
empty_metadata_key = 'EmptyMetadataKey'
feature_version_mismatch = 'FeatureVersionMismatch'
file_lock_conflict = 'FileLockConflict'
file_system_already_exists = 'FilesystemAlreadyExists'
file_system_being_deleted = 'FilesystemBeingDeleted'
file_system_not_found = 'FilesystemNotFound'
incremental_copy_blob_mismatch = 'IncrementalCopyBlobMismatch'
incremental_copy_of_eralier_version_snapshot_not_allowed = 'IncrementalCopyOfEralierVersionSnapshotNotAllowed'
incremental_copy_source_must_be_snapshot = 'IncrementalCopySourceMustBeSnapshot'
infinite_lease_duration_required = 'InfiniteLeaseDurationRequired'
insufficient_account_permissions = 'InsufficientAccountPermissions'
internal_error = 'InternalError'
invalid_authentication_info = 'InvalidAuthenticationInfo'
invalid_blob_or_block = 'InvalidBlobOrBlock'
invalid_blob_tier = 'InvalidBlobTier'
invalid_blob_type = 'InvalidBlobType'
invalid_block_id = 'InvalidBlockId'
invalid_block_list = 'InvalidBlockList'
invalid_destination_path = 'InvalidDestinationPath'
invalid_file_or_directory_path_name = 'InvalidFileOrDirectoryPathName'
invalid_flush_position = 'InvalidFlushPosition'
invalid_header_value = 'InvalidHeaderValue'
invalid_http_verb = 'InvalidHttpVerb'
invalid_input = 'InvalidInput'
invalid_marker = 'InvalidMarker'
invalid_md5 = 'InvalidMd5'
invalid_metadata = 'InvalidMetadata'
invalid_operation = 'InvalidOperation'
invalid_page_range = 'InvalidPageRange'
invalid_property_name = 'InvalidPropertyName'
invalid_query_parameter_value = 'InvalidQueryParameterValue'
invalid_range = 'InvalidRange'
invalid_rename_source_path = 'InvalidRenameSourcePath'
invalid_resource_name = 'InvalidResourceName'
invalid_source_blob_type = 'InvalidSourceBlobType'
invalid_source_blob_url = 'InvalidSourceBlobUrl'
invalid_source_or_destination_resource_type = 'InvalidSourceOrDestinationResourceType'
invalid_source_uri = 'InvalidSourceUri'
invalid_uri = 'InvalidUri'
invalid_version_for_page_blob_operation = 'InvalidVersionForPageBlobOperation'
invalid_xml_document = 'InvalidXmlDocument'
invalid_xml_node_value = 'InvalidXmlNodeValue'
lease_already_broken = 'LeaseAlreadyBroken'
lease_already_present = 'LeaseAlreadyPresent'
lease_id_mismatch_with_blob_operation = 'LeaseIdMismatchWithBlobOperation'
lease_id_mismatch_with_container_operation = 'LeaseIdMismatchWithContainerOperation'
lease_id_mismatch_with_lease_operation = 'LeaseIdMismatchWithLeaseOperation'
lease_id_missing = 'LeaseIdMissing'
lease_is_already_broken = 'LeaseIsAlreadyBroken'
lease_is_breaking_and_cannot_be_acquired = 'LeaseIsBreakingAndCannotBeAcquired'
lease_is_breaking_and_cannot_be_changed = 'LeaseIsBreakingAndCannotBeChanged'
lease_is_broken_and_cannot_be_renewed = 'LeaseIsBrokenAndCannotBeRenewed'
lease_lost = 'LeaseLost'
lease_name_mismatch = 'LeaseNameMismatch'
lease_not_present_with_blob_operation = 'LeaseNotPresentWithBlobOperation'
lease_not_present_with_container_operation = 'LeaseNotPresentWithContainerOperation'
lease_not_present_with_lease_operation = 'LeaseNotPresentWithLeaseOperation'
max_blob_size_condition_not_met = 'MaxBlobSizeConditionNotMet'
md5_mismatch = 'Md5Mismatch'
message_not_found = 'MessageNotFound'
message_too_large = 'MessageTooLarge'
metadata_too_large = 'MetadataTooLarge'
missing_content_length_header = 'MissingContentLengthHeader'
missing_required_header = 'MissingRequiredHeader'
missing_required_query_parameter = 'MissingRequiredQueryParameter'
missing_required_xml_node = 'MissingRequiredXmlNode'
multiple_condition_headers_not_supported = 'MultipleConditionHeadersNotSupported'
no_authentication_information = 'NoAuthenticationInformation'
no_pending_copy_operation = 'NoPendingCopyOperation'
operation_not_allowed_on_incremental_copy_blob = 'OperationNotAllowedOnIncrementalCopyBlob'
operation_timed_out = 'OperationTimedOut'
out_of_range_input = 'OutOfRangeInput'
out_of_range_query_parameter_value = 'OutOfRangeQueryParameterValue'
parent_not_found = 'ParentNotFound'
path_already_exists = 'PathAlreadyExists'
path_conflict = 'PathConflict'
path_not_found = 'PathNotFound'
pending_copy_operation = 'PendingCopyOperation'
pop_receipt_mismatch = 'PopReceiptMismatch'
previous_snapshot_cannot_be_newer = 'PreviousSnapshotCannotBeNewer'
previous_snapshot_not_found = 'PreviousSnapshotNotFound'
previous_snapshot_operation_not_supported = 'PreviousSnapshotOperationNotSupported'
queue_already_exists = 'QueueAlreadyExists'
queue_being_deleted = 'QueueBeingDeleted'
queue_disabled = 'QueueDisabled'
queue_not_empty = 'QueueNotEmpty'
queue_not_found = 'QueueNotFound'
read_only_attribute = 'ReadOnlyAttribute'
rename_destination_parent_path_not_found = 'RenameDestinationParentPathNotFound'
request_body_too_large = 'RequestBodyTooLarge'
request_url_failed_to_parse = 'RequestUrlFailedToParse'
resource_already_exists = 'ResourceAlreadyExists'
resource_not_found = 'ResourceNotFound'
resource_type_mismatch = 'ResourceTypeMismatch'
sequence_number_condition_not_met = 'SequenceNumberConditionNotMet'
sequence_number_increment_too_large = 'SequenceNumberIncrementTooLarge'
server_busy = 'ServerBusy'
share_already_exists = 'ShareAlreadyExists'
share_being_deleted = 'ShareBeingDeleted'
share_disabled = 'ShareDisabled'
share_has_snapshots = 'ShareHasSnapshots'
share_not_found = 'ShareNotFound'
share_snapshot_count_exceeded = 'ShareSnapshotCountExceeded'
share_snapshot_in_progress = 'ShareSnapshotInProgress'
share_snapshot_operation_not_supported = 'ShareSnapshotOperationNotSupported'
sharing_violation = 'SharingViolation'
snaphot_operation_rate_exceeded = 'SnaphotOperationRateExceeded'
snapshot_count_exceeded = 'SnapshotCountExceeded'
snapshots_present = 'SnapshotsPresent'
source_condition_not_met = 'SourceConditionNotMet'
source_path_is_being_deleted = 'SourcePathIsBeingDeleted'
source_path_not_found = 'SourcePathNotFound'
system_in_use = 'SystemInUse'
target_condition_not_met = 'TargetConditionNotMet'
unauthorized_blob_overwrite = 'UnauthorizedBlobOverwrite'
unsupported_header = 'UnsupportedHeader'
unsupported_http_verb = 'UnsupportedHttpVerb'
unsupported_query_parameter = 'UnsupportedQueryParameter'
unsupported_rest_version = 'UnsupportedRestVersion'
unsupported_xml_node = 'UnsupportedXmlNode'
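
Example:

Comparing a caught storage error's error_code against these values (a minimal sketch, assuming these constants are members of azure.storage.blob.StorageErrorCode; the account URL, credential and container name are placeholders).
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobServiceClient, StorageErrorCode

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<sas-token>")
container = service.get_container_client("mycontainer")
try:
    container.create_container()
except HttpResponseError as error:
    # error_code on storage errors can be compared against these enum members
    if error.error_code == StorageErrorCode.container_already_exists:
        pass  # the container already exists; safe to continue
    else:
        raise
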
class azure.storage.blob.UserDelegationKey[source]

Represents a user delegation key, provided to the user by Azure Storage based on their Azure Active Directory access token.

The fields are saved as simple strings since the user does not have to interact with this object; to generate an identity SAS, the user can simply pass it to the right API.

Variables
  • signed_oid (str) – Object ID of this token.

  • signed_tid (str) – Tenant ID of the tenant that issued this token.

  • signed_start (str) – The datetime this token becomes valid.

  • signed_expiry (str) – The datetime this token expires.

  • signed_service (str) – What service this key is valid for.

  • signed_version (str) – The version identifier of the REST service that created this token.

  • value (str) – The user delegation key.
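
Example:

Requesting a user delegation key and using it to sign a blob SAS (a hedged sketch; the account, container and blob names and the azure-identity credential are placeholders, and get_user_delegation_key/generate_blob_sas are assumed from this package version).
from datetime import datetime, timedelta
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential=DefaultAzureCredential())
# The key is only valid for the requested time window.
delegation_key = service.get_user_delegation_key(
    key_start_time=datetime.utcnow(),
    key_expiry_time=datetime.utcnow() + timedelta(hours=1),
)
sas_token = generate_blob_sas(
    account_name="myaccount",
    container_name="mycontainer",
    blob_name="myblob.txt",
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)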

class azure.storage.blob.ExponentialRetry(initial_backoff=15, increment_base=3, retry_total=3, retry_to_secondary=False, random_jitter_range=3, **kwargs)[source]

Exponential retry.

Constructs an Exponential retry object. The initial_backoff is used for the first retry. Subsequent retries are delayed by initial_backoff + increment_base^retry_count seconds. For example, by default the first retry occurs after 15 seconds, the second after (15+3^1) = 18 seconds, and the third after (15+3^2) = 24 seconds.

Parameters
  • initial_backoff (int) – The initial backoff interval, in seconds, for the first retry.

  • increment_base (int) – The base, in seconds, to increment the initial_backoff by after the first retry.

  • retry_total (int) – The maximum number of retry attempts.

  • retry_to_secondary (bool) – Whether the request should be retried to the secondary endpoint, if able. This should only be enabled if RA-GRS accounts are used and potentially stale data can be handled.

  • random_jitter_range (int) – A number in seconds which indicates a range to jitter/randomize for the back-off interval. For example, a random_jitter_range of 3 results in a back-off interval x that varies between x-3 and x+3.
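
Example:

Configuring an exponential retry policy on a client (a sketch; the account URL and key are placeholders, and it assumes the storage clients accept a retry_policy keyword).
from azure.storage.blob import BlobServiceClient, ExponentialRetry

# First retry after 10s, then 10 + 2^1, 10 + 2^2, ... seconds, up to 5 attempts.
retry_policy = ExponentialRetry(initial_backoff=10, increment_base=2, retry_total=5)
service = BlobServiceClient(
    account_url="https://myaccount.blob.core.windows.net",
    credential="<account-key>",
    retry_policy=retry_policy,
)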

configure_retries(request)
get_backoff_time(settings)[source]

Calculates how long to sleep before retrying.

Returns

An integer indicating how long to wait before retrying the request, or None to indicate no retry should be performed.

Return type

int or None

increment(settings, request, response=None, error=None)

Increment the retry counters.

Parameters
  • response – A pipeline response object.

  • error – An error encountered during the request, or None if the response was received successfully.

Returns

Whether the retry attempts are exhausted.

send(request)

Abstract send method for a synchronous pipeline. Mutates the request.

Context content is dependent on the HttpTransport.

Parameters

request (PipelineRequest) – The pipeline request object

Returns

The pipeline response object.

Return type

PipelineResponse

sleep(settings, transport)
class azure.storage.blob.LinearRetry(backoff=15, retry_total=3, retry_to_secondary=False, random_jitter_range=3, **kwargs)[source]

Linear retry.

Constructs a Linear retry object.

Parameters
  • backoff (int) – The backoff interval, in seconds, between retries.

  • retry_total (int) – The maximum number of retry attempts.

  • retry_to_secondary (bool) – Whether the request should be retried to the secondary endpoint, if able. This should only be enabled if RA-GRS accounts are used and potentially stale data can be handled.

  • random_jitter_range (int) – A number in seconds which indicates a range to jitter/randomize for the back-off interval. For example, a random_jitter_range of 3 results in a back-off interval x that varies between x-3 and x+3.
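
Example:

Using a fixed back-off between retries instead (same placeholder values and retry_policy assumption as the ExponentialRetry example above).
from azure.storage.blob import BlobServiceClient, LinearRetry

# Wait roughly 5 seconds (plus jitter) between each of up to 4 retries.
service = BlobServiceClient(
    account_url="https://myaccount.blob.core.windows.net",
    credential="<account-key>",
    retry_policy=LinearRetry(backoff=5, retry_total=4),
)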

configure_retries(request)
get_backoff_time(settings)[source]

Calculates how long to sleep before retrying.

Returns

An integer indicating how long to wait before retrying the request, or None to indicate no retry should be performed.

Return type

int or None

increment(settings, request, response=None, error=None)

Increment the retry counters.

Parameters
  • response – A pipeline response object.

  • error – An error encountered during the request, or None if the response was received successfully.

Returns

Whether the retry attempts are exhausted.

send(request)

Abstract send method for a synchronous pipeline. Mutates the request.

Context content is dependent on the HttpTransport.

Parameters

request (PipelineRequest) – The pipeline request object

Returns

The pipeline response object.

Return type

PipelineResponse

sleep(settings, transport)
class azure.storage.blob.LocationMode[source]

Specifies the location the request should be sent to. This mode only applies for RA-GRS accounts which allow secondary read access. All other account types must use PRIMARY.

PRIMARY = 'primary'

Requests should be sent to the primary location.

SECONDARY = 'secondary'

Requests should be sent to the secondary location, if possible.

class azure.storage.blob.BlockState[source]

Block blob block types.

Committed = 'Committed'

Committed blocks.

Latest = 'Latest'

Latest blocks.

Uncommitted = 'Uncommitted'

Uncommitted blocks.

class azure.storage.blob.StandardBlobTier[source]

Specifies the blob tier to set the blob to. This is only applicable for block blobs on standard storage accounts.

Archive = 'Archive'

Archive

Cool = 'Cool'

Cool

Hot = 'Hot'

Hot
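
Example:

Moving a block blob to the Cool tier (a sketch; the connection string, container and blob names are placeholders, and BlobClient.set_standard_blob_tier is assumed from this package).
from azure.storage.blob import BlobClient, StandardBlobTier

blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "myblob.txt")
blob.set_standard_blob_tier(StandardBlobTier.Cool)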

class azure.storage.blob.PremiumPageBlobTier[source]

Specifies the page blob tier to set the blob to. This is only applicable to page blobs on premium storage accounts. Please take a look at: https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage#scalability-and-performance-targets for detailed information on the corresponding IOPS and throughput per PageBlobTier.

P10 = 'P10'

P10 Tier

P20 = 'P20'

P20 Tier

P30 = 'P30'

P30 Tier

P4 = 'P4'

P4 Tier

P40 = 'P40'

P40 Tier

P50 = 'P50'

P50 Tier

P6 = 'P6'

P6 Tier

P60 = 'P60'

P60 Tier

class azure.storage.blob.SequenceNumberAction[source]

Sequence number actions.

Increment = 'increment'

Increments the value of the sequence number by 1. If specifying this option, do not include the x-ms-blob-sequence-number header.

Max = 'max'

Sets the sequence number to be the higher of the value included with the request and the value currently stored for the blob.

Update = 'update'

Sets the sequence number to the value included with the request.
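
Example:

Updating a page blob's sequence number (a sketch; placeholder connection string and names, and the blob is assumed to be an existing page blob).
from azure.storage.blob import BlobClient, SequenceNumberAction

page_blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "mypageblob")
# Set an explicit value, then bump it by one.
page_blob.set_sequence_number(SequenceNumberAction.Update, sequence_number=7)
page_blob.set_sequence_number(SequenceNumberAction.Increment)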

class azure.storage.blob.PublicAccess[source]

Specifies whether data in the container may be accessed publicly and the level of access.

Blob = 'blob'

Specifies public read access for blobs. Blob data within this container can be read via anonymous request, but container data is not available. Clients cannot enumerate blobs within the container via anonymous request.

Container = 'container'

Specifies full public read access for container and blob data. Clients can enumerate blobs within the container via anonymous request, but cannot enumerate containers within the storage account.

OFF = 'off'

Specifies that there is no public read access for both the container and blobs within the container. Clients cannot enumerate the containers within the storage account as well as the blobs within the container.
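
Example:

Creating a container whose blobs are readable anonymously (placeholders for the account URL and credential).
from azure.storage.blob import BlobServiceClient, PublicAccess

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
service.create_container("public-container", public_access=PublicAccess.Blob)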

class azure.storage.blob.BlobAnalyticsLogging(**kwargs)[source]

Azure Analytics Logging settings.

Keyword Arguments
  • version (str) – The version of Storage Analytics to configure. The default value is 1.0.

  • delete (bool) – Indicates whether all delete requests should be logged. The default value is False.

  • read (bool) – Indicates whether all read requests should be logged. The default value is False.

  • write (bool) – Indicates whether all write requests should be logged. The default value is False.

  • retention_policy (RetentionPolicy) – Determines how long the associated data should persist. If not specified the retention policy will be disabled by default.
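
Example:

Enabling read/write/delete logging with seven-day retention (a sketch; placeholder account URL and key, and it assumes BlobServiceClient.set_service_properties accepts an analytics_logging keyword).
from azure.storage.blob import BlobAnalyticsLogging, BlobServiceClient, RetentionPolicy

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
logging_settings = BlobAnalyticsLogging(
    read=True, write=True, delete=True,
    retention_policy=RetentionPolicy(enabled=True, days=7),
)
service.set_service_properties(analytics_logging=logging_settings)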

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors considered are rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation error

Return type

list

class azure.storage.blob.Metrics(**kwargs)[source]

A summary of request statistics grouped by API in hour or minute aggregates for blobs.

Keyword Arguments
  • version (str) – The version of Storage Analytics to configure. The default value is 1.0.

  • enabled (bool) – Indicates whether metrics are enabled for the Blob service. The default value is False.

  • include_apis (bool) – Indicates whether metrics should generate summary statistics for called API operations.

  • retention_policy (RetentionPolicy) – Determines how long the associated data should persist. If not specified the retention policy will be disabled by default.
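
Example:

Enabling hourly and per-minute metrics with a retention policy (same placeholder client values as above; hour_metrics/minute_metrics are assumed keywords of set_service_properties).
from azure.storage.blob import BlobServiceClient, Metrics, RetentionPolicy

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
metrics = Metrics(enabled=True, include_apis=True, retention_policy=RetentionPolicy(enabled=True, days=7))
service.set_service_properties(hour_metrics=metrics, minute_metrics=metrics)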

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors considered are rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation error

Return type

list

class azure.storage.blob.RetentionPolicy(enabled=False, days=None)[source]

The retention policy which determines how long the associated data should persist.

Parameters
  • enabled (bool) – Indicates whether a retention policy is enabled for the storage service. The default value is False.

  • days (int) – Indicates the number of days that metrics or logging or soft-deleted data should be retained. All data older than this value will be deleted. If enabled=True, the number of days must be specified.
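
Example:

Using a retention policy to keep soft-deleted blobs for 14 days (a sketch; placeholder client values, and delete_retention_policy is assumed to be a keyword of set_service_properties).
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
service.set_service_properties(delete_retention_policy=RetentionPolicy(enabled=True, days=14))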

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors considered are rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation error

Return type

list

class azure.storage.blob.StaticWebsite(**kwargs)[source]

The properties that enable an account to host a static website.

Keyword Arguments
  • enabled (bool) – Indicates whether this account is hosting a static website. The default value is False.

  • index_document (str) – The default name of the index page under each directory.

  • error_document404_path (str) – The absolute path of the custom 404 page.

  • default_index_document_path (str) – Absolute path of the default index page.
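
Example:

Turning on static website hosting for the account (a sketch; placeholder client values and document names).
from azure.storage.blob import BlobServiceClient, StaticWebsite

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
website = StaticWebsite(enabled=True, index_document="index.html", error_document404_path="error/404.html")
service.set_service_properties(static_website=website)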

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors considered are rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation error

Return type

list

class azure.storage.blob.CorsRule(allowed_origins, allowed_methods, **kwargs)[source]

CORS is an HTTP feature that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as same-origin policy that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain.

Parameters
  • allowed_origins (list(str)) – A list of origin domains that will be allowed via CORS, or “*” to allow all domains. The list must contain at least one entry. Limited to 64 origin domains. Each allowed origin can have up to 256 characters.

  • allowed_methods (list(str)) – A list of HTTP methods that are allowed to be executed by the origin. The list must contain at least one entry. For Azure Storage, permitted methods are DELETE, GET, HEAD, MERGE, POST, OPTIONS or PUT.

Keyword Arguments
  • allowed_headers (list(str)) – Defaults to an empty list. A list of headers allowed to be part of the cross-origin request. Limited to 64 defined headers and 2 prefixed headers. Each header can be up to 256 characters.

  • exposed_headers (list(str)) – Defaults to an empty list. A list of response headers to expose to CORS clients. Limited to 64 defined headers and two prefixed headers. Each header can be up to 256 characters.

  • max_age_in_seconds (int) – The number of seconds that the client/browser should cache a preflight response.
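
Example:

Allowing cross-origin GET requests from a single domain (a sketch; placeholder client values, origin and headers).
from azure.storage.blob import BlobServiceClient, CorsRule

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
rule = CorsRule(
    allowed_origins=["https://www.contoso.com"],
    allowed_methods=["GET", "OPTIONS"],
    allowed_headers=["x-ms-meta-*"],
    max_age_in_seconds=300,
)
service.set_service_properties(cors=[rule])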

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors considered are rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation error

Return type

list

class azure.storage.blob.ContainerProperties(**kwargs)[source]

Blob container’s properties class.

Returned ContainerProperties instances expose these values through a dictionary interface, for example: container_props["last_modified"]. Additionally, the container name is available as container_props["name"].

Variables
  • last_modified (datetime) – A datetime object representing the last time the container was modified.

  • etag (str) – The ETag contains a value that you can use to perform operations conditionally.

  • lease (LeaseProperties) – Stores all the lease information for the container.

  • public_access (str) – Specifies whether data in the container may be accessed publicly and the level of access.

  • has_immutability_policy (bool) – Represents whether the container has an immutability policy.

  • has_legal_hold (bool) – Represents whether the container has a legal hold.

  • metadata (dict) – A dict with name-value pairs to associate with the container as metadata.

  • encryption_scope (ContainerEncryptionScope) – The default encryption scope configuration for the container.
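
Example:

Listing containers and reading a few properties through the dictionary interface (placeholder client values).
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
for container in service.list_containers(include_metadata=True):
    print(container["name"], container["last_modified"])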

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.BlobProperties(**kwargs)[source]

Blob Properties.

Variables
  • name (str) – The name of the blob.

  • container (str) – The container in which the blob resides.

  • snapshot (str) – Datetime value that uniquely identifies the blob snapshot.

  • blob_type (BlobType) – String indicating this blob’s type.

  • metadata (dict) – Name-value pairs associated with the blob as metadata.

  • last_modified (datetime) – A datetime object representing the last time the blob was modified.

  • etag (str) – The ETag contains a value that you can use to perform operations conditionally.

  • size (int) – The size of the content returned. If the entire blob was requested, the length of blob in bytes. If a subset of the blob was requested, the length of the returned subset.

  • content_range (str) – Indicates the range of bytes returned in the event that the client requested a subset of the blob.

  • append_blob_committed_block_count (int) – (For Append Blobs) Number of committed blocks in the blob.

  • is_append_blob_sealed (bool) –

    Indicate if the append blob is sealed or not.

    New in version 12.4.0.

  • page_blob_sequence_number (int) – (For Page Blobs) Sequence number for page blob used for coordinating concurrent writes.

  • server_encrypted (bool) – Set to true if the blob is encrypted on the server.

  • copy (CopyProperties) – Stores all the copy properties for the blob.

  • content_settings (ContentSettings) – Stores all the content settings for the blob.

  • lease (LeaseProperties) – Stores all the lease information for the blob.

  • blob_tier (StandardBlobTier) – Indicates the access tier of the blob. The hot tier is optimized for storing data that is accessed frequently. The cool storage tier is optimized for storing data that is infrequently accessed and stored for at least a month. The archive tier is optimized for storing data that is rarely accessed and stored for at least six months with flexible latency requirements.

  • rehydrate_priority (str) – Indicates the priority with which to rehydrate an archived blob

  • blob_tier_change_time (datetime) – Indicates when the access tier was last changed.

  • blob_tier_inferred (bool) – Indicates whether the access tier was inferred by the service. If false, it indicates that the tier was set explicitly.

  • deleted (bool) – Whether this blob was deleted.

  • deleted_time (datetime) – A datetime object representing the time at which the blob was deleted.

  • remaining_retention_days (int) – The number of days that the blob will be retained before being permanently deleted by the service.

  • creation_time (datetime) – Indicates when the blob was created, in UTC.

  • archive_status (str) – Archive status of blob.

  • encryption_key_sha256 (str) – The SHA-256 hash of the provided encryption key.

  • encryption_scope (str) – A predefined encryption scope used to encrypt the data on the service. An encryption scope can be created using the Management API and referenced here by name. If a default encryption scope has been defined at the container, this value will override it if the container-level scope is configured to allow overrides. Otherwise an error will be raised.

  • request_server_encrypted (bool) – Whether this blob is encrypted.

  • object_replication_source_properties (list(ObjectReplicationPolicy)) –

    Only present for blobs that have policy ids and rule ids applied to them.

    New in version 12.4.0.

  • object_replication_destination_policy (str) –

    Represents the Object Replication Policy Id that created this blob.

    New in version 12.4.0.

  • tag_count (int) –

    Tags count on this blob.

    New in version 12.4.0.

  • tags (dict(str, str)) –

    Key value pair of tags on this blob.

    New in version 12.4.0.
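
Example:

Reading a few of these properties from an existing blob (placeholder connection string and names).
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "myblob.txt")
props = blob.get_blob_properties()
print(props.name, props.size, props.last_modified, props.blob_type)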

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.BlobPrefix(*args, **kwargs)[source]

An Iterable of Blob properties.

Returned from walk_blobs when a delimiter is used. Can be thought of as a virtual blob directory.

Variables
  • name (str) – The prefix, or “directory name” of the blob.

  • service_endpoint (str) – The service URL.

  • prefix (str) – A blob name prefix being used to filter the list.

  • marker (str) – The continuation token of the current page of results.

  • results_per_page (int) – The maximum number of results retrieved per API call.

  • next_marker (str) – The continuation token to retrieve the next page of results.

  • location_mode (str) – The location mode being used to list results. The available options include “primary” and “secondary”.

  • current_page (list(BlobProperties)) – The current page of listed results.

  • container (str) – The container that the blobs are listed from.

  • delimiter (str) – A delimiting character used for hierarchy listing.

Parameters
  • command (callable) – Function to retrieve the next page of items.

  • prefix (str) – Filters the results to return only blobs whose names begin with the specified prefix.

  • results_per_page (int) – The maximum number of blobs to retrieve per call.

  • marker (str) – An opaque continuation token.

  • delimiter (str) – Used to capture blobs whose names begin with the same substring up to the appearance of the delimiter character. The delimiter may be a single character or a string.

  • location_mode – Specifies the location the request should be sent to. This mode only applies for RA-GRS accounts which allow secondary read access. Options include ‘primary’ or ‘secondary’.

Return an iterator of items.

args and kwargs will be passed to the PageIterator constructor directly, except page_iterator_class.
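
Example:

Walking a container hierarchically; each yielded item is either a BlobProperties or a BlobPrefix "virtual directory" (a sketch; placeholder connection string, container name and prefix).
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string("<connection-string>", "mycontainer")
for item in container.walk_blobs(name_starts_with="photos/", delimiter="/"):
    # BlobPrefix items have a name ending with the delimiter
    print(item.name)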

by_page(continuation_token: Optional[str] = None) → Iterator[Iterator[ReturnType]]

Get an iterator of pages of objects, instead of an iterator of objects.

Parameters

continuation_token (str) – An opaque continuation token. This value can be retrieved from the continuation_token field of a previous generator object. If specified, this generator will begin returning results from this point.

Returns

An iterator of pages (themselves iterator of objects)

get(key, default=None)
has_key(k)
items()
keys()
next()

Return the next item from the iterator. When exhausted, raise StopIteration

update(*args, **kwargs)
values()
class azure.storage.blob.FilteredBlob(**kwargs)[source]

Blob info from a Filter Blobs API call.

Variables
  • name – Blob name

  • container_name – Container name.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.LeaseProperties(**kwargs)[source]

Blob Lease Properties.

Variables
  • status (str) – The lease status of the blob. Possible values: locked|unlocked

  • state (str) – Lease state of the blob. Possible values: available|leased|expired|breaking|broken

  • duration (str) – When a blob is leased, specifies whether the lease is of infinite or fixed duration.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.ContentSettings(content_type=None, content_encoding=None, content_language=None, content_disposition=None, cache_control=None, content_md5=None, **kwargs)[source]

The content settings of a blob.

Parameters
  • content_type (str) – The content type specified for the blob. If no content type was specified, the default content type is application/octet-stream.

  • content_encoding (str) – If the content_encoding has previously been set for the blob, that value is stored.

  • content_language (str) – If the content_language has previously been set for the blob, that value is stored.

  • content_disposition (str) – content_disposition conveys additional information about how to process the response payload, and also can be used to attach additional metadata. If content_disposition has previously been set for the blob, that value is stored.

  • cache_control (str) – If the cache_control has previously been set for the blob, that value is stored.

  • content_md5 (str) – If the content_md5 has been set for the blob, this response header is stored so that the client can check for message content integrity.
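
Example:

Uploading a blob with explicit content settings (placeholder connection string and names).
from azure.storage.blob import BlobClient, ContentSettings

blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "page.html")
settings = ContentSettings(content_type="text/html", cache_control="max-age=3600")
blob.upload_blob(b"<html>...</html>", content_settings=settings, overwrite=True)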

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.CopyProperties(**kwargs)[source]

Blob Copy Properties.

These properties will be None if this blob has never been the destination in a Copy Blob operation, or if this blob has been modified after a concluded Copy Blob operation, for example, using Set Blob Properties, Upload Blob, or Commit Block List.

Variables
  • id (str) – String identifier for the last attempted Copy Blob operation where this blob was the destination blob.

  • source (str) – URL up to 2 KB in length that specifies the source blob used in the last attempted Copy Blob operation where this blob was the destination blob.

  • status (str) –

    State of the copy operation identified by Copy ID, with these values:
    success:

    Copy completed successfully.

    pending:

    Copy is in progress. Check copy_status_description if intermittent, non-fatal errors impede copy progress but don’t cause failure.

    aborted:

    Copy was ended by Abort Copy Blob.

    failed:

    Copy failed. See copy_status_description for failure details.

  • progress (str) – Contains the number of bytes copied and the total bytes in the source in the last attempted Copy Blob operation where this blob was the destination blob. Can show between 0 and Content-Length bytes copied.

  • completion_time (datetime) – Conclusion time of the last attempted Copy Blob operation where this blob was the destination blob. This value can specify the time of a completed, aborted, or failed copy attempt.

  • status_description (str) – Only appears when x-ms-copy-status is failed or pending. Describes cause of fatal or non-fatal copy operation failure.

  • incremental_copy (bool) – Copies the snapshot of the source page blob to a destination page blob. The snapshot is copied such that only the differential changes between the previously copied snapshot are transferred to the destination.

  • destination_snapshot (datetime) – Included if the blob is incremental copy blob or incremental copy snapshot, if x-ms-copy-status is success. Snapshot time of the last successful incremental copy snapshot for this blob.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.BlobBlock(block_id, state=<BlockState.Latest: 'Latest'>)[source]

BlockBlob Block class.

Parameters
  • block_id (str) – Block id.

  • state (str) – Block state. Possible values: committed|uncommitted

Variables

size (int) – Block size in bytes.
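
Example:

Staging two blocks and committing them as a block blob (a sketch; the block ids, data and client values are illustrative).
from azure.storage.blob import BlobBlock, BlobClient

blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "assembled.bin")
blob.stage_block(block_id="block-001", data=b"first part")
blob.stage_block(block_id="block-002", data=b"second part")
blob.commit_block_list([BlobBlock(block_id="block-001"), BlobBlock(block_id="block-002")])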

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.PageRange(start=None, end=None)[source]

Page Range for page blob.

Parameters
  • start (int) – Start of page range in bytes.

  • end (int) – End of page range in bytes.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.AccessPolicy(permission=None, expiry=None, start=None)[source]

Access Policy class used by the set and get access policy methods in each service.

A stored access policy can specify the start time, expiry time, and permissions for the Shared Access Signatures with which it’s associated. Depending on how you want to control access to your resource, you can specify all of these parameters within the stored access policy, and omit them from the URL for the Shared Access Signature. Doing so permits you to modify the associated signature’s behavior at any time, as well as to revoke it. Or you can specify one or more of the access policy parameters within the stored access policy, and the others on the URL. Finally, you can specify all of the parameters on the URL. In this case, you can use the stored access policy to revoke the signature, but not to modify its behavior.

Together the Shared Access Signature and the stored access policy must include all fields required to authenticate the signature. If any required fields are missing, the request will fail. Likewise, if a field is specified both in the Shared Access Signature URL and in the stored access policy, the request will fail with status code 400 (Bad Request).

Parameters
  • permission (str or ContainerSasPermissions) – The permissions associated with the shared access signature. The user is restricted to operations allowed by the permissions. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy.

  • expiry (datetime or str) – The time at which the shared access signature becomes invalid. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.

  • start (datetime or str) – The time at which the shared access signature becomes valid. If omitted, start time for this call is assumed to be the time when the storage service receives the request. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.
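
Example:

Storing an access policy on a container so that SAS tokens can reference it by id (a sketch; the policy id, times and client values are placeholders).
from datetime import datetime, timedelta
from azure.storage.blob import AccessPolicy, ContainerClient, ContainerSasPermissions

container = ContainerClient.from_connection_string("<connection-string>", "mycontainer")
policy = AccessPolicy(
    permission=ContainerSasPermissions(read=True, list=True),
    start=datetime.utcnow(),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
container.set_container_access_policy(signed_identifiers={"read-only-policy": policy})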

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage might optionally use a callback as parameter:

Key is the attribute name used in Python. Attr_desc is a dict of metadata. Currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key. Value is the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A dict JSON compatible object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors considered are rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor and last_rest_key_case_insensitive_extractor.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwargs is_xml=True.

Parameters

keep_readonly (bool) – If you want to serialize the readonly attributes

Returns

A dict JSON compatible object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation error

Return type

list

class azure.storage.blob.ContainerSasPermissions(read=False, write=False, delete=False, list=False, delete_previous_version=False, tag=False)[source]

ContainerSasPermissions class to be used with the generate_container_sas() function and for the AccessPolicies used with set_container_access_policy().

Parameters
  • read (bool) – Read the content, properties, metadata or block list of any blob in the container. Use any blob in the container as the source of a copy operation.

  • write (bool) – For any blob in the container, create or write content, properties, metadata, or block list. Snapshot or lease the blob. Resize the blob (page blob only). Use the blob as the destination of a copy operation within the same account. Note: You cannot grant permissions to read or write container properties or metadata, nor to lease a container, with a container SAS. Use an account SAS instead.

  • delete (bool) – Delete any blob in the container. Note: You cannot grant permissions to delete a container with a container SAS. Use an account SAS instead.

  • delete_previous_version (bool) – Delete the previous blob version for the versioning enabled storage account.

  • list (bool) – List blobs in the container.

  • tag (bool) – Set or get tags on the blobs in the container.
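
Example:

Signing a read/list container SAS with the account key (placeholders throughout; generate_container_sas is assumed from this package).
from datetime import datetime, timedelta
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas_token = generate_container_sas(
    account_name="myaccount",
    container_name="mycontainer",
    account_key="<account-key>",
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)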

classmethod from_string(permission)[source]

Create a ContainerSasPermissions from a string.

To specify read, write, delete, or list permissions you need only to include the first letter of the word in the string. E.g. For read and write permissions, you would provide a string “rw”.

Parameters

permission (str) – The string which dictates the read, write, delete, and list permissions.

Returns

A ContainerSasPermissions object

Return type

ContainerSasPermissions

class azure.storage.blob.BlobSasPermissions(read=False, add=False, create=False, write=False, delete=False, delete_previous_version=False, tag=True)[source]

BlobSasPermissions class to be used with the generate_blob_sas() function.

Parameters
  • read (bool) – Read the content, properties, metadata and block list. Use the blob as the source of a copy operation.

  • add (bool) – Add a block to an append blob.

  • create (bool) – Write a new blob, snapshot a blob, or copy a blob to a new blob.

  • write (bool) – Create or write content, properties, metadata, or block list. Snapshot or lease the blob. Resize the blob (page blob only). Use the blob as the destination of a copy operation within the same account.

  • delete (bool) – Delete the blob.

  • delete_previous_version (bool) – Delete the previous blob version for the versioning enabled storage account.

  • tag (bool) – Set or get tags on the blob.
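
Example:

Signing a read-only blob SAS with the account key (placeholders throughout).
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="myaccount",
    container_name="mycontainer",
    blob_name="myblob.txt",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
blob_url = "https://myaccount.blob.core.windows.net/mycontainer/myblob.txt?" + sas_token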

classmethod from_string(permission)[source]

Create a BlobSasPermissions from a string.

To specify read, add, create, write, or delete permissions you need only to include the first letter of the word in the string. E.g. For read and write permissions, you would provide a string “rw”.

Parameters

permission (str) – The string which dictates the read, add, create, write, or delete permissions.

Returns

A BlobSasPermissions object

Return type

BlobSasPermissions

class azure.storage.blob.ResourceTypes(service=False, container=False, object=False)[source]

Specifies the resource types that are accessible with the account SAS.

Parameters
  • service (bool) – Access to service-level APIs (e.g., Get/Set Service Properties, Get Service Stats, List Containers/Queues/Shares)

  • container (bool) – Access to container-level APIs (e.g., Create/Delete Container, Create/Delete Queue, Create/Delete Share, List Blobs/Files and Directories)

  • object (bool) – Access to object-level APIs for blobs, queue messages, and files (e.g. Put Blob, Query Entity, Get Messages, Create File, etc.)

classmethod from_string(string)[source]

Create a ResourceTypes from a string.

To specify service, container, or object you need only to include the first letter of the word in the string. E.g. for service and container, you would provide a string “sc”.

Parameters

string (str) – Specify service, container, or object in the string with the first letter of the word.

Returns

A ResourceTypes object

Return type

ResourceTypes

class azure.storage.blob.AccountSasPermissions(read=False, write=False, delete=False, list=False, add=False, create=False, update=False, process=False, delete_previous_version=False, **kwargs)[source]

AccountSasPermissions class to be used with the generate_account_sas() function and for the AccessPolicies used with set_*_acl. There are two types of SAS which may be used to grant resource access. One is to grant access to a specific resource (resource-specific). Another is to grant access to the entire service for a specific account and allow certain operations based on the permissions found here.

Parameters
  • read (bool) – Valid for all signed resources types (Service, Container, and Object). Permits read permissions to the specified resource type.

  • write (bool) – Valid for all signed resources types (Service, Container, and Object). Permits write permissions to the specified resource type.

  • delete (bool) – Valid for Container and Object resource types, except for queue messages.

  • delete_previous_version (bool) – Delete the previous blob version for the versioning enabled storage account.

  • list (bool) – Valid for Service and Container resource types only.

  • add (bool) – Valid for the following Object resource types only: queue messages, and append blobs.

  • create (bool) – Valid for the following Object resource types only: blobs and files. Users can create new blobs or files, but may not overwrite existing blobs or files.

  • update (bool) – Valid for the following Object resource types only: queue messages.

  • process (bool) – Valid for the following Object resource type only: queue messages.

Keyword Arguments
  • tag (bool) – To enable set or get tags on the blobs in the container.

  • filter_by_tags (bool) – To enable get blobs by tags, this should be used together with list permission.

classmethod from_string(permission)[source]

Create AccountSasPermissions from a string.

To specify read, write, delete, etc. permissions you need only to include the first letter of the word in the string. E.g. for read and write permissions you would provide a string “rw”.

Parameters

permission (str) – Specify permissions in the string with the first letter of the word.

Returns

An AccountSasPermissions object

Return type

AccountSasPermissions
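
Example:

Creating an account SAS that can list containers and read objects, then using it as the client credential (placeholders throughout; see generate_account_sas later in this document).
from datetime import datetime, timedelta
from azure.storage.blob import AccountSasPermissions, BlobServiceClient, ResourceTypes, generate_account_sas

sas_token = generate_account_sas(
    account_name="myaccount",
    account_key="<account-key>",
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential=sas_token)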

class azure.storage.blob.StorageStreamDownloader(clients=None, config=None, start_range=None, end_range=None, validate_content=None, encryption_options=None, max_concurrency=1, name=None, container=None, encoding=None, **kwargs)[source]

A streaming object to download from Azure Storage.

Variables
  • name (str) – The name of the blob being downloaded.

  • container (str) – The name of the container where the blob is.

  • properties (BlobProperties) – The properties of the blob being downloaded. If only a range of the data is being downloaded, this will be reflected in the properties.

  • size (int) – The size of the total data in the stream. This will be the byte range if specified, otherwise the total size of the blob.
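
Example:

Downloading a blob into memory and into an open stream (placeholder connection string and names; download_blob returns a StorageStreamDownloader).
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "myblob.bin")
data = blob.download_blob().readall()  # full contents as bytes

with open("local_copy.bin", "wb") as stream:
    blob.download_blob().readinto(stream)  # stream the contents to a file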

chunks()[source]
content_as_bytes(max_concurrency=1)[source]

Download the contents of this file.

This operation is blocking until all data is downloaded.

Keyword Arguments

max_concurrency (int) – The number of parallel connections with which to download.

Return type

bytes

content_as_text(max_concurrency=1, encoding='UTF-8')[source]

Download the contents of this blob, and decode as text.

This operation is blocking until all data is downloaded.

Keyword Arguments

max_concurrency (int) – The number of parallel connections with which to download.

Parameters

encoding (str) – Text encoding used to decode the downloaded bytes. Default is UTF-8.

Return type

str

download_to_stream(stream, max_concurrency=1)[source]

Download the contents of this blob to a stream.

Parameters

stream – The stream to download to. This can be an open file-handle, or any writable stream. The stream must be seekable if the download uses more than one parallel connection.

Returns

The properties of the downloaded blob.

Return type

Any

readall()[source]

Download the contents of this blob.

This operation is blocking until all data is downloaded.

Return type

bytes or str

readinto(stream)[source]

Download the contents of this file to a stream.

Parameters

stream – The stream to download to. This can be an open file-handle, or any writable stream. The stream must be seekable if the download uses more than one parallel connection.

Returns

The number of bytes read.

Return type

int

class azure.storage.blob.CustomerProvidedEncryptionKey(key_value, key_hash)[source]

All data in Azure Storage is encrypted at-rest using an account-level encryption key. In versions 2018-06-17 and newer, you can manage the key used to encrypt blob contents and application metadata per-blob by providing an AES-256 encryption key in requests to the storage service.

When you use a customer-provided key, Azure Storage does not manage or persist your key. When writing data to a blob, the provided key is used to encrypt your data before writing it to disk. A SHA-256 hash of the encryption key is written alongside the blob contents, and is used to verify that all subsequent operations against the blob use the same encryption key. This hash cannot be used to retrieve the encryption key or decrypt the contents of the blob. When reading a blob, the provided key is used to decrypt your data after reading it from disk. In both cases, the provided encryption key is securely discarded as soon as the encryption or decryption process completes.

Parameters
  • key_value (str) – Base64-encoded AES-256 encryption key value.

  • key_hash (str) – Base64-encoded SHA256 of the encryption key.

Variables

algorithm (str) – Specifies the algorithm to use when encrypting data using the given key. Must be AES256.
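
Example:

Uploading and downloading a blob with a customer-provided key (a sketch; the key is generated locally, the client values are placeholders, and upload_blob/download_blob are assumed to accept a cpk keyword).
import base64
import hashlib
import os

from azure.storage.blob import BlobClient, CustomerProvidedEncryptionKey

key_bytes = os.urandom(32)  # a locally generated AES-256 key
cpk = CustomerProvidedEncryptionKey(
    key_value=base64.b64encode(key_bytes).decode(),
    key_hash=base64.b64encode(hashlib.sha256(key_bytes).digest()).decode(),
)
blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "secret.bin")
blob.upload_blob(b"sensitive data", cpk=cpk, overwrite=True)
data = blob.download_blob(cpk=cpk).readall()  # the same key must be supplied to read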

class azure.storage.blob.RehydratePriority[source]

The priority with which an archived blob should be rehydrated.

high = 'High'
standard = 'Standard'
class azure.storage.blob.ContainerEncryptionScope(default_encryption_scope, **kwargs)[source]

The default encryption scope configuration for a container.

This scope is used implicitly for all future writes within the container, but can be overridden per blob operation.

New in version 12.2.0.

Parameters
  • default_encryption_scope (str) – Specifies the default encryption scope to set on the container and use for all future writes.

  • prevent_encryption_scope_override (bool) – If true, prevents any request from specifying a different encryption scope than the scope set on the container. Default value is false.
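
Example:

Creating a container with a default encryption scope (a sketch; the scope name and client values are placeholders, and create_container is assumed to accept a container_encryption_scope keyword).
from azure.storage.blob import BlobServiceClient, ContainerEncryptionScope

service = BlobServiceClient("https://myaccount.blob.core.windows.net", credential="<account-key>")
scope = ContainerEncryptionScope(default_encryption_scope="myencryptionscope", prevent_encryption_scope_override=True)
service.create_container("encrypted-container", container_encryption_scope=scope)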

class azure.storage.blob.BlobQueryError(error=None, is_fatal=False, description=None, position=None)[source]

The error happened during quick query operation.

Variables
  • error (str) – The name of the error.

  • is_fatal (bool) – If true, this error prevents further query processing. More result data may be returned, but there is no guarantee that all of the original data will be processed. If false, this error does not prevent further query processing.

  • description (str) – A description of the error.

  • position (int) – The blob offset at which the error occurred.

class azure.storage.blob.DelimitedJsonDialect(**kwargs)[source]

Defines the input or output JSON serialization for a blob data query.

keyword str delimiter

The line separator character, default value is ‘\n’.

class azure.storage.blob.DelimitedTextDialect(**kwargs)[source]

Defines the input or output delimited (CSV) serialization for a blob query request.

keyword str delimiter

Column separator, defaults to ‘,’.

keyword str quotechar

Field quote, defaults to ‘”’.

keyword str lineterminator

Record separator, defaults to ‘\n’.

keyword str escapechar

Escape char, defaults to empty.

keyword bool has_header

Whether the blob data includes headers in the first line. The default value is False, meaning that the data will be returned inclusive of the first line. If set to True, the data will be returned exclusive of the first line.

class azure.storage.blob.BlobQueryReader(name=None, container=None, errors=None, record_delimiter='\n', encoding=None, headers=None, response=None, error_cls=None)[source]

A streaming object to read query results.

Variables
  • name (str) – The name of the blob being queried.

  • container (str) – The name of the container where the blob is.

  • response_headers (dict) – The response_headers of the quick query request.

  • record_delimiter (bytes) – The delimiter used to separate lines, or records, within the data. The records method will return these lines via a generator.

readall() → Union[bytes, str][source]

Return all query results.

This operation is blocking until all data is downloaded. If an encoding has been configured, it will be used to decode individual records as they are received.

Return type

Union[bytes, str]

readinto(stream: IO) → None[source]

Download the query result to a stream.

Parameters

stream – The stream to download to. This can be an open file-handle, or any writable stream.

Returns

None

records() → Iterable[Union[bytes, str]][source]

Returns a record generator for the query result.

Records will be returned line by line. If an encoding has been configured, it will be used to decode individual records as they are received.

Return type

Iterable[Union[bytes, str]]

class azure.storage.blob.ObjectReplicationPolicy(**kwargs)[source]

Policy id and rule ids applied to a blob.

Variables
  • policy_id (str) – Policy id for the blob. A replication policy gets created (policy id) when creating a source/destination pair.

  • rules (list(ObjectReplicationRule)) – Within each policy there may be multiple replication rules, e.g. rule 1 = src/container/*.pdf to dst/container2/; rule 2 = src/container1/*.jpg to dst/container3.

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
class azure.storage.blob.ObjectReplicationRule(**kwargs)[source]

Policy id and rule ids applied to a blob.

Variables
  • rule_id (str) – Rule id.

  • status (str) – The status of the rule. It could be “Complete” or “Failed”

get(key, default=None)
has_key(k)
items()
keys()
update(*args, **kwargs)
values()
azure.storage.blob.upload_blob_to_url(blob_url: str, data: Union[Iterable[AnyStr], IO[AnyStr]], credential: Optional[Any] = None, **kwargs) → Dict[str, Any][source]

Upload data to a given URL

The data will be uploaded as a block blob.

Parameters
  • blob_url (str) – The full URI to the blob. This can also include a SAS token.

  • data (bytes or str or Iterable) – The data to upload. This can be bytes, text, an iterable or a file-like object.

  • credential – The credentials with which to authenticate. This is optional if the blob URL already has a SAS token. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. If the URL already has a SAS token, specifying an explicit credential will take priority.

Keyword Arguments
  • overwrite (bool) – Whether the blob to be uploaded should overwrite the current data. If True, upload_blob_to_url will overwrite any existing data. If set to False, the operation will fail with a ResourceExistsError.

  • max_concurrency (int) – The number of parallel connections with which to upload.

  • length (int) – Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

  • metadata (dict(str,str)) – Name-value pairs associated with the blob as metadata.

  • validate_content (bool) – If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https as https (the default) will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient upload algorithm will not be used, because computing the MD5 hash requires buffering entire blocks, and doing so defeats the purpose of the memory-efficient algorithm.

  • encoding (str) – Encoding to use if text is supplied as input. Defaults to UTF-8.

Returns

Blob-updated property dict (Etag and last modified)

Return type

dict(str, Any)
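
Example:

A hedged sketch; the blob URL, SAS token, and file name below are placeholders.
from azure.storage.blob import upload_blob_to_url

# Upload the contents of a local file as a block blob, overwriting any existing data.
with open("local-file.txt", "rb") as data:
    result = upload_blob_to_url(
        "https://<account>.blob.core.windows.net/<container>/<blob>?<sas-token>",
        data,
        overwrite=True
    )
print(result.get("etag"))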

azure.storage.blob.download_blob_from_url(blob_url: str, output: str, credential: Optional[Any] = None, **kwargs) → None[source]

Download the contents of a blob to a local file or stream.

Parameters
  • blob_url (str) – The full URI to the blob. This can also include a SAS token.

  • output (str or writable stream) – Where the data should be downloaded to. This can be either a file path or an open, writable IO handle.

  • credential – The credentials with which to authenticate. This is optional if the blob URL already has a SAS token or the blob is public. The value can be a SAS token string, an account shared access key, or an instance of a TokenCredentials class from azure.identity. If the URL already has a SAS token, specifying an explicit credential will take priority.

Keyword Arguments
  • overwrite (bool) – Whether the local file should be overwritten if it already exists. The default value is False - in which case a ValueError will be raised if the file already exists. If set to True, an attempt will be made to write to the existing file. If a stream handle is passed in, this value is ignored.

  • max_concurrency (int) – The number of parallel connections with which to download.

  • offset (int) – Start of byte range to use for downloading a section of the blob. Must be set if length is provided.

  • length (int) – Number of bytes to read from the stream. This is optional, but should be supplied for optimal performance.

  • validate_content (bool) – If true, calculates an MD5 hash for each chunk of the blob. The storage service checks the hash of the content that has arrived with the hash that was sent. This is primarily valuable for detecting bitflips on the wire if using http instead of https as https (the default) will already validate. Note that this MD5 hash is not stored with the blob. Also note that if enabled, the memory-efficient algorithm will not be used, because computing the MD5 hash requires buffering entire chunks, and doing so defeats the purpose of the memory-efficient algorithm.

Return type

None
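
Example:

A hedged sketch; the blob URL, SAS token, and output path below are placeholders.
from azure.storage.blob import download_blob_from_url

# Download a SAS-authenticated (or public) blob to a local file path.
download_blob_from_url(
    "https://<account>.blob.core.windows.net/<container>/<blob>?<sas-token>",
    output="downloaded-file.txt",
    overwrite=True
)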

azure.storage.blob.generate_account_sas(account_name: str, account_key: str, resource_types: Union[ResourceTypes, str], permission: Union[AccountSasPermissions, str], expiry: Optional[Union[datetime, str]], start: Optional[Union[datetime, str]] = None, ip: Optional[str] = None, **kwargs: Any) → str[source]

Generates a shared access signature for the blob service.

Use the returned signature with the credential parameter of any BlobServiceClient, ContainerClient or BlobClient.

Parameters
  • account_name (str) – The storage account name used to generate the shared access signature.

  • account_key (str) – The account key, also called shared key or access key, to generate the shared access signature.

  • resource_types (str or ResourceTypes) – Specifies the resource types that are accessible with the account SAS.

  • permission (str or AccountSasPermissions) – The permissions associated with the shared access signature. The user is restricted to operations allowed by the permissions. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy.

  • expiry (datetime or str) – The time at which the shared access signature becomes invalid. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.

  • start (datetime or str) – The time at which the shared access signature becomes valid. If omitted, start time for this call is assumed to be the time when the storage service receives the request. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.

  • ip (str) – Specifies an IP address or a range of IP addresses from which to accept requests. If the IP address from which the request originates does not match the IP address or address range specified on the SAS token, the request is not authenticated. For example, specifying ip=168.1.5.65 or ip=168.1.5.60-168.1.5.70 on the SAS restricts the request to those IP addresses.

Keyword Arguments

protocol (str) – Specifies the protocol permitted for a request made. The default value is https.

Returns

A Shared Access Signature (sas) token.

Return type

str

Example:

Generating a shared access signature.
# Create a SAS token to use to authenticate a new client
from datetime import datetime, timedelta
from azure.storage.blob import ResourceTypes, AccountSasPermissions, generate_account_sas

sas_token = generate_account_sas(
    blob_service_client.account_name,
    account_key=blob_service_client.credential.account_key,
    resource_types=ResourceTypes(object=True),
    permission=AccountSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1)
)
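
As a hedged follow-up, the generated token can then be supplied as the credential of a new client; the account URL below is a placeholder.

from azure.storage.blob import BlobServiceClient

# Authenticate a fresh client with the account SAS generated above.
sas_service_client = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=sas_token
)
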
azure.storage.blob.generate_container_sas(account_name: str, container_name: str, account_key: Optional[str] = None, user_delegation_key: Optional[UserDelegationKey] = None, permission: Optional[Union[ContainerSasPermissions, str]] = None, expiry: Optional[Union[datetime, str]] = None, start: Optional[Union[datetime, str]] = None, policy_id: Optional[str] = None, ip: Optional[str] = None, **kwargs: Any) → Any[source]

Generates a shared access signature for a container.

Use the returned signature with the credential parameter of any BlobServiceClient, ContainerClient or BlobClient.

Parameters
  • account_name (str) – The storage account name used to generate the shared access signature.

  • container_name (str) – The name of the container.

  • account_key (str) – The account key, also called shared key or access key, to generate the shared access signature. Either account_key or user_delegation_key must be specified.

  • user_delegation_key (UserDelegationKey) – Instead of an account shared key, the user could pass in a user delegation key. A user delegation key can be obtained from the service by authenticating with an AAD identity; this can be accomplished by calling get_user_delegation_key(). When present, the SAS is signed with the user delegation key instead.

  • permission (str or ContainerSasPermissions) – The permissions associated with the shared access signature. The user is restricted to operations allowed by the permissions. Permissions must be ordered read, write, delete, list. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy.

  • expiry (datetime or str) – The time at which the shared access signature becomes invalid. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.

  • start (datetime or str) – The time at which the shared access signature becomes valid. If omitted, start time for this call is assumed to be the time when the storage service receives the request. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.

  • policy_id (str) – A unique value up to 64 characters in length that correlates to a stored access policy. To create a stored access policy, use set_container_access_policy().

  • ip (str) – Specifies an IP address or a range of IP addresses from which to accept requests. If the IP address from which the request originates does not match the IP address or address range specified on the SAS token, the request is not authenticated. For example, specifying ip=168.1.5.65 or ip=168.1.5.60-168.1.5.70 on the SAS restricts the request to those IP addresses.

Keyword Arguments
  • protocol (str) – Specifies the protocol permitted for a request made. The default value is https.

  • cache_control (str) – Response header value for Cache-Control when resource is accessed using this shared access signature.

  • content_disposition (str) – Response header value for Content-Disposition when resource is accessed using this shared access signature.

  • content_encoding (str) – Response header value for Content-Encoding when resource is accessed using this shared access signature.

  • content_language (str) – Response header value for Content-Language when resource is accessed using this shared access signature.

  • content_type (str) – Response header value for Content-Type when resource is accessed using this shared access signature.

Returns

A Shared Access Signature (sas) token.

Return type

str

Example:

Generating a sas token.
# Use access policy to generate a sas token
from azure.storage.blob import generate_container_sas

sas_token = generate_container_sas(
    container_client.account_name,
    container_client.container_name,
    account_key=container_client.credential.account_key,
    policy_id='my-access-policy-id'
)
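
As a hedged follow-up, the container SAS can likewise authenticate a new ContainerClient; the account URL below is a placeholder.

from azure.storage.blob import ContainerClient

# Authenticate a container-scoped client with the SAS generated above.
sas_container_client = ContainerClient(
    account_url="https://<account>.blob.core.windows.net",
    container_name=container_client.container_name,
    credential=sas_token
)
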
azure.storage.blob.generate_blob_sas(account_name: str, container_name: str, blob_name: str, snapshot: Optional[str] = None, account_key: Optional[str] = None, user_delegation_key: Optional[UserDelegationKey] = None, permission: Optional[Union[BlobSasPermissions, str]] = None, expiry: Optional[Union[datetime, str]] = None, start: Optional[Union[datetime, str]] = None, policy_id: Optional[str] = None, ip: Optional[str] = None, **kwargs: Any) → Any[source]

Generates a shared access signature for a blob.

Use the returned signature with the credential parameter of any BlobServiceClient, ContainerClient or BlobClient.

Parameters
  • account_name (str) – The storage account name used to generate the shared access signature.

  • container_name (str) – The name of the container.

  • blob_name (str) – The name of the blob.

  • snapshot (str) – An optional blob snapshot ID.

  • account_key (str) – The account key, also called shared key or access key, to generate the shared access signature. Either account_key or user_delegation_key must be specified.

  • user_delegation_key (UserDelegationKey) – Instead of an account shared key, the user could pass in a user delegation key. A user delegation key can be obtained from the service by authenticating with an AAD identity; this can be accomplished by calling get_user_delegation_key(). When present, the SAS is signed with the user delegation key instead.

  • permission (str or BlobSasPermissions) – The permissions associated with the shared access signature. The user is restricted to operations allowed by the permissions. Permissions must be ordered read, write, delete, list. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy.

  • expiry (datetime or str) – The time at which the shared access signature becomes invalid. Required unless an id is given referencing a stored access policy which contains this field. This field must be omitted if it has been specified in an associated stored access policy. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.

  • start (datetime or str) – The time at which the shared access signature becomes valid. If omitted, start time for this call is assumed to be the time when the storage service receives the request. Azure will always convert values to UTC. If a date is passed in without timezone info, it is assumed to be UTC.

  • policy_id (str) – A unique value up to 64 characters in length that correlates to a stored access policy. To create a stored access policy, use set_container_access_policy().

  • ip (str) – Specifies an IP address or a range of IP addresses from which to accept requests. If the IP address from which the request originates does not match the IP address or address range specified on the SAS token, the request is not authenticated. For example, specifying ip=168.1.5.65 or ip=168.1.5.60-168.1.5.70 on the SAS restricts the request to those IP addresses.

Keyword Arguments
  • version_id (str) –

    An optional blob version ID. This parameter is only applicable to versioning-enabled accounts.

    New in version 12.4.0: This keyword argument was introduced in API version ‘2019-12-12’.

  • protocol (str) – Specifies the protocol permitted for a request made. The default value is https.

  • cache_control (str) – Response header value for Cache-Control when resource is accessed using this shared access signature.

  • content_disposition (str) – Response header value for Content-Disposition when resource is accessed using this shared access signature.

  • content_encoding (str) – Response header value for Content-Encoding when resource is accessed using this shared access signature.

  • content_language (str) – Response header value for Content-Language when resource is accessed using this shared access signature.

  • content_type (str) – Response header value for Content-Type when resource is accessed using this shared access signature.

Returns

A Shared Access Signature (sas) token.

Return type

str
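
Example:

A hedged sketch, assuming an existing BlobClient named blob_client that was authenticated with a shared key and whose url does not already carry a query string.
from datetime import datetime, timedelta
from azure.storage.blob import BlobClient, BlobSasPermissions, generate_blob_sas

# Create a read-only SAS for a single blob, valid for one hour.
sas_token = generate_blob_sas(
    blob_client.account_name,
    blob_client.container_name,
    blob_client.blob_name,
    account_key=blob_client.credential.account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1)
)

# Build a new client from the blob URL plus the SAS query string.
sas_blob_client = BlobClient.from_blob_url(f"{blob_client.url}?{sas_token}")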