
Class DataLakeFileClient

A DataLakeFileClient represents a URL to an Azure Storage file.

Hierarchy

Index

Constructors

constructor

Properties

accountName

accountName: string

Protected blobEndpointUrl

blobEndpointUrl: string

Encoded URL string value for corresponding blob endpoint.

credential

credential: StorageSharedKeyCredential | AnonymousCredential | TokenCredential

A credential used to authenticate requests to the service, such as AnonymousCredential, StorageSharedKeyCredential, or any credential from the @azure/identity package. You can also provide an object that implements the TokenCredential interface. If not specified, AnonymousCredential is used.

Protected dfsEndpointUrl

dfsEndpointUrl: string

Encoded URL string value for corresponding dfs endpoint.

Protected isHttps

isHttps: boolean

Protected pipeline

pipeline: Pipeline

Request policy pipeline. Internal use only.

Protected storageClientContext

storageClientContext: StorageClientContext

A reference to the protocol-layer operations entry point, which is generated by the AutoRest generator.

Protected storageClientContextToBlobEndpoint

storageClientContextToBlobEndpoint: StorageClientContext

A reference to the protocol-layer operations entry point, which is generated by the AutoRest generator, with its URL pointing to the blob endpoint.

url

url: string

Encoded URL string value.

Accessors

fileSystemName

  • get fileSystemName(): string

name

  • get name(): string

Methods

append

  • append(body: HttpRequestBody, offset: number, length: number, options?: FileAppendOptions): Promise<FileAppendResponse>

create

createIfNotExists

delete

deleteIfExists

exists

flush

  • flush(position: number, options?: FileFlushOptions): Promise<FileFlushResponse>
  • Flushes (writes) previously appended data to a file.

    Parameters

    • position: number

      File position to flush. This parameter allows the caller to upload data in parallel and control the order in which it is appended to the file. It is required when uploading data to be appended to the file and when flushing previously uploaded data to the file. The value must be the position where the data is to be appended. Uploaded data is not immediately flushed, or written, to the file. To flush, the previously uploaded data must be contiguous, the position parameter must be specified and equal to the length of the file after all data has been written, and there must not be a request entity body included with the request.

    • Default value options: FileFlushOptions = {}

      Optional. Options when flushing data.

    Returns Promise<FileFlushResponse>
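
    Example usage (Node.js), a minimal sketch of the append/flush workflow; it assumes an existing fileClient instance, as in the other examples on this page:

    // Create the file, append two chunks, then flush at the final length to commit them
    const content = "Hello, World!";
    const contentLength = Buffer.byteLength(content);
    await fileClient.create();
    await fileClient.append(content, 0, contentLength);
    await fileClient.append(content, contentLength, contentLength);
    await fileClient.flush(2 * contentLength);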

generateSasUrl

getAccessControl

getDataLakeLeaseClient

getProperties

move

query

  • Quick query for a JSON or CSV formatted file.

    Example usage (Node.js):

    // Query and convert a file to a string
    const queryResponse = await fileClient.query("select * from BlobStorage");
    const downloaded = (await streamToBuffer(queryResponse.readableStreamBody)).toString();
    console.log("Query file content:", downloaded);
    
    async function streamToBuffer(readableStream) {
      return new Promise((resolve, reject) => {
        const chunks = [];
        readableStream.on("data", (data) => {
          chunks.push(data instanceof Buffer ? data : Buffer.from(data));
        });
        readableStream.on("end", () => {
          resolve(Buffer.concat(chunks));
        });
        readableStream.on("error", reject);
      });
    }

    Parameters

    Returns Promise<FileReadResponse>

read

  • Downloads a file from the service, including its metadata and properties.

    • In Node.js, data is returned as a Readable stream readableStreamBody
    • In browsers, data is returned as a promise contentAsBlob
    see

    https://docs.microsoft.com/en-us/rest/api/storageservices/get-blob

    Example usage (Node.js):
    // Download and convert a file to a string
    const downloadResponse = await fileClient.read();
    const downloaded = await streamToBuffer(downloadResponse.readableStreamBody);
    console.log("Downloaded file content:", downloaded.toString());
    
    async function streamToBuffer(readableStream) {
      return new Promise((resolve, reject) => {
        const chunks = [];
        readableStream.on("data", (data) => {
          chunks.push(data instanceof Buffer ? data : Buffer.from(data));
        });
        readableStream.on("end", () => {
          resolve(Buffer.concat(chunks));
        });
        readableStream.on("error", reject);
      });
    }

    Example usage (browser):

    // Download and convert a file to a string
    const downloadResponse = await fileClient.read();
    const downloaded = await blobToString(await downloadResponse.contentAsBlob);
    console.log("Downloaded file content", downloaded);
    
    async function blobToString(blob: Blob): Promise<string> {
      const fileReader = new FileReader();
      return new Promise<string>((resolve, reject) => {
        fileReader.onloadend = (ev: any) => {
          resolve(ev.target!.result);
        };
        fileReader.onerror = reject;
        fileReader.readAsText(blob);
      });
    }

    Parameters

    • Default value offset: number = 0

      Optional. Offset to read file, default value is 0.

    • Optional count: undefined | number

      Optional. How many bytes to read, default will read from offset to the end.

    • Default value options: FileReadOptions = {}

      Optional. Options when reading file.

    Returns Promise<FileReadResponse>

readToBuffer

  • readToBuffer(buffer: Buffer, offset?: undefined | number, count?: undefined | number, options?: FileReadToBufferOptions): Promise<Buffer>
  • readToBuffer(offset?: undefined | number, count?: undefined | number, options?: FileReadToBufferOptions): Promise<Buffer>
  • ONLY AVAILABLE IN NODE.JS RUNTIME.

    Reads a Data Lake file in parallel to a buffer. Offset and count are optional, pass 0 for both to read the entire file.

    Warning: Buffers can only support files up to about one gigabyte on 32-bit systems or about two gigabytes on 64-bit systems due to limitations of Node.js/V8. For files larger than this size, consider readToFile.

    Parameters

    • buffer: Buffer

      Buffer to be filled; must have a length larger than count.

    • Optional offset: undefined | number

      From which position of the Data Lake file to read

    • Optional count: undefined | number

      How much data to read. Will read to the end when passing undefined.

    • Optional options: FileReadToBufferOptions

      -

    Returns Promise<Buffer>

  • ONLY AVAILABLE IN NODE.JS RUNTIME.

    Reads a Data Lake file in parallel to a buffer. Offset and count are optional, pass 0 for both to read the entire file.

    Warning: Buffers can only support files up to about one gigabyte on 32-bit systems or about two gigabytes on 64-bit systems due to limitations of Node.js/V8. For files larger than this size, consider readToFile.

    Parameters

    • Optional offset: undefined | number

      From which position of the Data Lake file to read (in bytes)

    • Optional count: undefined | number

      How much data (in bytes) to read. Will read to the end when passing undefined.

    • Optional options: FileReadToBufferOptions

      -

    Returns Promise<Buffer>
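
    Example usage (Node.js), a minimal sketch assuming an existing fileClient instance:

    // Read the entire file into a newly allocated buffer
    const buffer = await fileClient.readToBuffer();
    console.log("File content:", buffer.toString());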

readToFile

  • ONLY AVAILABLE IN NODE.JS RUNTIME.

    Downloads a Data Lake file to a local file. Fails if the given file path already exists. Offset and count are optional, pass 0 and undefined respectively to download the entire file.

    Parameters

    • filePath: string

      -

    • Default value offset: number = 0

      From which position of the file to download.

    • Optional count: undefined | number

      How much data to be downloaded. Will download to the end when passing undefined.

    • Default value options: FileReadOptions = {}

      Options to read Data Lake file.

    Returns Promise<FileReadResponse>

    The response data for the file read operation, but with readableStreamBody set to undefined, since its content is already read and written into a local file at the specified path.
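
    Example usage (Node.js), a minimal sketch; the local path is hypothetical and an existing fileClient instance is assumed:

    // Download the whole file to a local path (fails if the path already exists)
    await fileClient.readToFile("downloaded.txt");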

removeAccessControlRecursive

setAccessControl

setAccessControlRecursive

setExpiry

setHttpHeaders

setMetadata

setPermissions

toDirectoryClient

toFileClient

updateAccessControlRecursive

upload

  • upload(data: Buffer | Blob | ArrayBuffer | ArrayBufferView, options?: FileParallelUploadOptions): Promise<FileUploadResponse>
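
    Example usage (Node.js), a minimal sketch assuming an existing fileClient instance:

    // Upload a Buffer in a single call; the client creates the file and commits the data
    const data = Buffer.from("Hello, World!");
    await fileClient.upload(data);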

uploadFile

uploadStream

  • ONLY AVAILABLE IN NODE.JS RUNTIME.

    Uploads a Node.js Readable stream into a Data Lake file. This method will try to create a file, then start uploading it chunk by chunk. Please make sure the potential size of the stream doesn't exceed FILE_MAX_SIZE_BYTES and the potential number of chunks doesn't exceed BLOCK_BLOB_MAX_BLOCKS.

    PERFORMANCE IMPROVEMENT TIPS:

    • Set the input stream's highWaterMark to the same value as the options.chunkSize parameter; this avoids Buffer.concat() operations.

    Parameters

    Returns Promise<FileUploadResponse>
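
    Example usage (Node.js), a minimal sketch; the local file name and tuning values are hypothetical, and an existing fileClient instance is assumed:

    const fs = require("fs");

    // Match the stream's highWaterMark to options.chunkSize to avoid Buffer.concat() operations
    const chunkSize = 4 * 1024 * 1024; // 4 MiB
    const stream = fs.createReadStream("local-file.txt", { highWaterMark: chunkSize });
    await fileClient.uploadStream(stream, { chunkSize, maxConcurrency: 5 });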

Generated using TypeDoc