Creates an instance of DataLakeFileClient from url and credential.
A URL string pointing to an Azure Storage Data Lake file, such as "https://myaccount.dfs.core.windows.net/filesystem/file". You can append a SAS if using AnonymousCredential, such as "https://myaccount.dfs.core.windows.net/filesystem/directory/file?sasString".
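For example, a minimal sketch using a token credential (the account, file system, and file names below are placeholders):

```ts
import { DataLakeFileClient } from "@azure/storage-file-datalake";
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder URL; substitute your own account, file system, and file names.
const fileClient = new DataLakeFileClient(
  "https://myaccount.dfs.core.windows.net/filesystem/file",
  new DefaultAzureCredential()
);
```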
Creates an instance of DataLakeFileClient from url and pipeline.
A URL string pointing to an Azure Storage Data Lake file, such as "https://myaccount.dfs.core.windows.net/filesystem/file". You can append a SAS if using AnonymousCredential, such as "https://myaccount.dfs.core.windows.net/filesystem/directory/file?sasString".
Call newPipeline() to create a default pipeline, or provide a customized pipeline.
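For example, a sketch building a default pipeline from a shared key credential (the account name and key are placeholders; StorageSharedKeyCredential is available in Node.js only):

```ts
import {
  DataLakeFileClient,
  newPipeline,
  StorageSharedKeyCredential,
} from "@azure/storage-file-datalake";

// Placeholder account name and key.
const pipeline = newPipeline(new StorageSharedKeyCredential("myaccount", "account-key"));
const fileClient = new DataLakeFileClient(
  "https://myaccount.dfs.core.windows.net/filesystem/file",
  pipeline
);
```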
Encoded URL string value for the corresponding blob endpoint.
Such as AnonymousCredential, StorageSharedKeyCredential or any credential from the @azure/identity package to authenticate requests to the service. You can also provide an object that implements the TokenCredential interface. If not specified, AnonymousCredential is used.
Encoded URL string value for the corresponding dfs endpoint.
StorageClient is a reference to the protocol-layer operations entry point, generated by the AutoRest generator.
Encoded URL string value.
Name of current file system.
Name of current path (directory or file).
Uploads data to be appended to a file. Data can only be appended to a file. To apply previously uploaded data to a file, call flush.
Content to be uploaded.
Append offset in bytes.
Length of content to append in bytes.
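A minimal append-then-flush sketch, assuming fileClient is a DataLakeFileClient as constructed above:

```ts
const content = "Hello, world!";
await fileClient.create();                           // create (or overwrite) the file
await fileClient.append(content, 0, content.length); // stage the data at offset 0
await fileClient.flush(content.length);              // commit; position equals final length
```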
Creates a file.
Resource type, must be "file" for DataLakeFileClient.
Creates a file.
Deletes the current path (directory or file).
Returns true if the Data Lake file represented by this client exists; false otherwise.
NOTE: Use this function with care, since an existing file might be deleted by other clients or applications; conversely, new files might be added by other clients or applications after this function completes.
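For example (a sketch; fileClient is assumed to exist, and the result may be stale by the time it is used):

```ts
if (await fileClient.exists()) {
  console.log("File exists at the time of the check.");
}
```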
Flushes (writes) previously appended data to a file.
File position to flush. This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file. It is required when uploading data to be appended to the file and when flushing previously uploaded data to the file. The value must be the position where the data is to be appended. Uploaded data is not immediately flushed, or written, to the file. To flush, the previously uploaded data must be contiguous, the position parameter must be specified and equal to the length of the file after all data has been written, and there must not be a request entity body included with the request.
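For example, a sketch staging two chunks and flushing at the total length to commit both (assumes fileClient refers to a freshly created, empty file):

```ts
const part1 = "Hello, ";
const part2 = "world!";
await fileClient.append(part1, 0, part1.length);
await fileClient.append(part2, part1.length, part2.length);
// The flush position must equal the file length once all data is written.
await fileClient.flush(part1.length + part2.length);
```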
Returns the access control data for a path (directory or file).
Gets a DataLakeLeaseClient that manages leases on the path (directory or file).
Returns all user-defined metadata, standard HTTP properties, and system properties for the path (directory or file).
WARNING: The metadata object returned in the response will have its keys in lowercase, even if they originally contained uppercase characters. This differs from the metadata keys returned by the methods of DataLakeFileSystemClient that list paths using the includeMetadata option, which will retain their original casing.
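For example (a sketch, assuming an existing fileClient):

```ts
const properties = await fileClient.getProperties();
// Metadata keys come back lowercased: metadata stored as { Color: "blue" }
// is returned here as { color: "blue" }.
console.log(properties.metadata, properties.contentLength, properties.lastModified);
```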
Moves a directory or file within the same file system.
Destination directory path like "directory" or file path "directory/file".
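For example (destination names below are placeholders):

```ts
// Move (rename) within the same file system; the path is relative to the file system root.
await fileClient.move("destination-directory/renamed-file");
```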
Moves a directory or file to another file system.
Destination file system like "filesystem".
Destination directory path like "directory" or file path "directory/file".
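For example (file system and path names below are placeholders):

```ts
// Move to a different file system in the same account.
await fileClient.move("other-filesystem", "destination-directory/renamed-file");
```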
Downloads a file from the service, including its metadata and properties.
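A sketch of reading the whole file into a string in Node.js, assuming an existing fileClient:

```ts
import { Readable } from "node:stream";

const response = await fileClient.read();
const chunks: Buffer[] = [];
// In Node.js the downloaded content is exposed as readableStreamBody.
for await (const chunk of response.readableStreamBody as Readable) {
  chunks.push(Buffer.from(chunk));
}
console.log(Buffer.concat(chunks).toString());
```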
ONLY AVAILABLE IN NODE.JS RUNTIME.
Reads a Data Lake file in parallel to a buffer. Offset and count are optional, pass 0 for both to read the entire file.
Warning: Buffers can only support files up to about one gigabyte on 32-bit systems or about two gigabytes on 64-bit systems due to limitations of Node.js/V8. For files larger than this size, consider readToFile.
Buffer to be filled; must have a length larger than count.
From which position of the Data Lake file to read.
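For example (a sketch, sizing the buffer from the file's properties):

```ts
const { contentLength } = await fileClient.getProperties();
const buffer = Buffer.alloc(contentLength!);
await fileClient.readToBuffer(buffer, 0); // fill the buffer starting at file offset 0
```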
ONLY AVAILABLE IN NODE.JS RUNTIME.
Reads a Data Lake file in parallel to a buffer. Offset and count are optional, pass 0 for both to read the entire file.
Warning: Buffers can only support files up to about one gigabyte on 32-bit systems or about two gigabytes on 64-bit systems due to limitations of Node.js/V8. For files larger than this size, consider readToFile.
From which position of the Data Lake file to read (in bytes).
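For example, with this overload the SDK allocates and returns the buffer:

```ts
// Per the description above, pass 0 for both offset and count to read the entire file.
const data = await fileClient.readToBuffer(0, 0);
```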
ONLY AVAILABLE IN NODE.JS RUNTIME.
Downloads a Data Lake file to a local file. Fails if the given file path already exists. Offset and count are optional, pass 0 and undefined respectively to download the entire file.
The response data for file read operation, but with readableStreamBody set to undefined since its content is already read and written into a local file at the specified path.
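For example (the local path is a placeholder and must not already exist):

```ts
await fileClient.readToFile("./downloaded-file.txt");
```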
Sets the access control data for a path (directory or file).
The POSIX access control list for the file or directory.
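A sketch granting the owning user read and write access; the object shape follows the package's PathAccessControlItem type:

```ts
await fileClient.setAccessControl([
  {
    accessControlType: "user",
    entityId: "", // an empty entityId targets the owning user
    defaultScope: false,
    permissions: { read: true, write: true, execute: false },
  },
]);
```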
Sets system properties on the path (directory or file).
If no value is provided, or no value is provided for the specified blob HTTP headers, these blob HTTP headers without a value will be cleared.
Sets user-defined metadata for the specified path (directory or file) as one or more name-value pairs.
If no option is provided, or if no metadata is defined in the parameter, the path metadata will be removed.
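For example (metadata names and values below are placeholders):

```ts
// Replace all user-defined metadata on the path.
await fileClient.setMetadata({ project: "demo", owner: "data-team" });

// Calling with no metadata clears existing metadata, per the note above.
await fileClient.setMetadata();
```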
Sets the file permissions on a path.
The POSIX access permissions for the file owner, the file owning group, and others.
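A sketch setting rwxr----- style permissions; the object shape follows the package's PathPermissions type:

```ts
await fileClient.setPermissions({
  owner: { read: true, write: true, execute: true },
  group: { read: true, write: false, execute: false },
  other: { read: false, write: false, execute: false },
  stickyBit: false,
  extendedAcls: false,
});
```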
Converts the current DataLakePathClient to a DataLakeDirectoryClient if the current path is a directory.
Converts the current DataLakePathClient to a DataLakeFileClient if the current path is a file.
Uploads a Buffer (Node.js), Blob, ArrayBuffer, or ArrayBufferView to a file.
Buffer(Node), Blob, ArrayBuffer or ArrayBufferView
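For example (a sketch; the payload is a placeholder):

```ts
// Creates the file and uploads the whole payload in a single call.
const data = Buffer.from("hello, data lake");
await fileClient.upload(data);
```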
ONLY AVAILABLE IN NODE.JS RUNTIME.
Uploads a local file to a Data Lake file.
Full path of the local file.
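For example (the local path is a placeholder):

```ts
await fileClient.uploadFile("./local-data.csv");
```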
ONLY AVAILABLE IN NODE.JS RUNTIME.
Uploads a Node.js Readable stream into a Data Lake file. This method will try to create a file, then start uploading it chunk by chunk. Please make sure the potential size of the stream doesn't exceed FILE_MAX_SIZE_BYTES and the potential number of chunks doesn't exceed BLOCK_BLOB_MAX_BLOCKS.
PERFORMANCE IMPROVEMENT TIPS: set the input stream's highWaterMark to the same value as the chunkSize option, which avoids Buffer.concat() operations.
Node.js Readable stream.
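For example (the local path is a placeholder):

```ts
import * as fs from "node:fs";

// Stream a local file into the Data Lake file chunk by chunk.
await fileClient.uploadStream(fs.createReadStream("./local-data.csv"));
```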
A DataLakeFileClient represents a URL to an Azure Storage Data Lake file.