@azure/video-analyzer-edge

Azure Video Analyzer Edge client library for JavaScript

Azure Video Analyzer provides a platform to build intelligent video applications that span the edge and the cloud. The platform offers the capability to capture, record, and analyze live video and to publish the results (video and video analytics) to Azure services in the cloud or on the edge. It is designed to be an extensible platform, enabling you to connect different video analysis edge modules, such as Cognitive Services containers or custom edge modules built by you with open-source machine learning models or models trained with your own data. You can then use them to analyze live video without worrying about the complexity of building and running a live video pipeline.

Use the client library for Video Analyzer Edge to:

  • Simplify interactions with the Microsoft Azure IoT SDKs
  • Programmatically construct pipeline topologies and live pipelines

Product documentation | Direct methods | Source code

Getting started

Install the package

Install the Video Analyzer Edge client library for TypeScript with npm:

npm install @azure/video-analyzer-edge

Prerequisites

  • TypeScript v3.6.

  • You need an active Azure subscription and an IoT device connection string to use this package.

  • To interact with Azure IoT Hub, you will need to run npm install azure-iothub.

  • You will need to use the version of the SDK that corresponds to the version of the Video Analyzer edge module you are using.

    SDK            Video Analyzer edge module
    1.0.0-beta.x   1.0

Creating a pipeline topology and making requests

Please visit the Examples section below for starter code.

We guarantee that all client instance methods are thread-safe and independent of each other (guideline). This ensures that the recommendation of reusing client instances is always safe, even across threads.

Key concepts

Pipeline topology vs live pipeline

A pipeline topology is a blueprint or template for instantiating live pipelines. It declares the parameters of the pipeline and uses placeholders as their values. A live pipeline references a pipeline topology and supplies concrete values for those parameters. This way you can have multiple live pipelines that reference the same topology but use different parameter values. For more information, please visit pipeline topologies and live pipelines.
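
For example, two live pipelines can reference the same topology while pointing at different cameras, each supplying its own value for the rtspUrl parameter. The following is a minimal sketch using the LivePipeline type from this package; the pipeline names and camera URLs are illustrative, and "jsTestTopology" refers to the topology built in the Examples section below.

import { LivePipeline } from "@azure/video-analyzer-edge";

// Two live pipelines that reference the same "jsTestTopology" topology,
// each overriding the rtspUrl parameter with a different camera feed.
const camera1Pipeline: LivePipeline = {
  name: "camera1Pipeline",
  properties: {
    topologyName: "jsTestTopology",
    parameters: [{ name: "rtspUrl", value: "rtsp://camera1.example.com" }]
  }
};

const camera2Pipeline: LivePipeline = {
  name: "camera2Pipeline",
  properties: {
    topologyName: "jsTestTopology",
    parameters: [{ name: "rtspUrl", value: "rtsp://camera2.example.com" }]
  }
};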

Examples

Creating a pipeline topology

To create a pipeline topology, you need to define sources and sinks.

import {
  RtspSource,
  UnsecuredEndpoint,
  NodeInput,
  IotHubMessageSink,
  PipelineTopology
} from "@azure/video-analyzer-edge";

// RTSP source node; the ${...} placeholders refer to parameters declared on the topology.
const rtspSource: RtspSource = {
  name: "rtspSource",
  endpoint: {
    url: "${rtspUrl}",
    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
    credentials: {
      username: "${rtspUserName}",
      password: "${rtspPassword}",
      "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials"
    }
  } as UnsecuredEndpoint,
  "@type": "#Microsoft.VideoAnalyzer.RtspSource"
};

const nodeInput: NodeInput = {
  nodeName: "rtspSource"
};

const msgSink: IotHubMessageSink = {
  name: "msgSink",
  inputs: [nodeInput],
  hubOutputName: "${hubSinkOutputName}",
  "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink"
};

const pipelineTopology: PipelineTopology = {
  name: "jsTestTopology",
  properties: {
    description: "Continuous video recording to a Video Analyzer video",
    parameters: [
      { name: "rtspUserName", type: "String", default: "dummyUsername" },
      { name: "rtspPassword", type: "SecretString", default: "dummyPassword" },
      { name: "rtspUrl", type: "String" },
      { name: "hubSinkOutputName", type: "String" }
    ],
    sources: [rtspSource],
    sinks: [msgSink]
  }
};

Creating a live pipeline

To create a live pipeline instance, you need to have an existing pipeline topology.

import { LivePipeline } from "@azure/video-analyzer-edge";

const livePipeline: LivePipeline = {
  name: "jsLivePipelineTest", // any unique name for this live pipeline
  properties: {
    description: "Continuous video recording to a Video Analyzer video",
    topologyName: "jsTestTopology", // must match the name of an existing pipeline topology
    parameters: [{ name: "rtspUrl", value: "rtsp://sample.com" }]
  }
};

Invoking a direct method

To invoke a direct method on your device, you first need to define the request using the Video Analyzer Edge SDK and then send that method request using the IoT SDK's CloudToDeviceMethod.

import { createRequest } from "@azure/video-analyzer-edge";
import { Client } from "azure-iothub";

const deviceId = "lva-sample-device";
const moduleId = "mediaEdge"; // name of the Video Analyzer edge module on the device
const connectionString = "connectionString"; // your IoT Hub connection string
const iotHubClient = Client.fromConnectionString(connectionString); // requires the azure-iothub package

// pipelineTopology is the topology defined in the example above.
const pipelineTopologySetRequest = createRequest("pipelineTopologySet", pipelineTopology);
const setPipelineTopResponse = await iotHubClient.invokeDeviceMethod(deviceId, moduleId, {
  methodName: pipelineTopologySetRequest.methodName,
  payload: pipelineTopologySetRequest.payload
});
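
After the topology has been set on the device, the same pattern applies to the other direct methods. The following is a hedged sketch that sets and activates the live pipeline defined earlier; it assumes createRequest accepts "livePipelineSet" with a LivePipeline object and "livePipelineActivate" with the live pipeline's name, as in the package samples. Check the Direct methods documentation and the package samples for the exact payload each method expects.

// Register the live pipeline on the device (assumes the createRequest overload for "livePipelineSet").
const livePipelineSetRequest = createRequest("livePipelineSet", livePipeline);
await iotHubClient.invokeDeviceMethod(deviceId, moduleId, {
  methodName: livePipelineSetRequest.methodName,
  payload: livePipelineSetRequest.payload
});

// Activate it to start processing live video (assumes the activate request takes the pipeline's name;
// if your SDK version expects a different payload shape, adjust accordingly).
const livePipelineActivateRequest = createRequest("livePipelineActivate", livePipeline.name);
await iotHubClient.invokeDeviceMethod(deviceId, moduleId, {
  methodName: livePipelineActivateRequest.methodName,
  payload: livePipelineActivateRequest.payload
});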

Troubleshooting

  • When creating a method request, remember to check the spelling of the method name; a misspelled name will not be recognized by the Video Analyzer edge module.

Next steps

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

If you encounter any issues, please open an issue on our GitHub repository.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
