Base class for pipeline extension processors. Pipeline extensions allow for custom media analysis and processing to be plugged into the Video Analyzer pipeline.
File sink allows for video and audio content to be recorded on the file system on the edge device.
GRPC extension processor allows pipeline extension plugins to be connected to the pipeline over a gRPC channel. Extension plugins must act as a gRPC server. Please see https://aka.ms/ava-extension-grpc for details.
Defines values for GrpcExtensionDataTransferMode.
KnownGrpcExtensionDataTransferMode can be used interchangeably with GrpcExtensionDataTransferMode,
this enum contains the known values that the service supports.
embedded: Media samples are embedded into the gRPC messages. This mode is less efficient, but it requires a simpler implementation and can be used with plugins which are not on the same node as the Video Analyzer module.
sharedMemory: Media samples are made available through shared memory. This mode enables efficient data transfers, but it requires the extension plugin to be co-located on the same node and to share the same shared memory space.
HTTP extension processor allows pipeline extension plugins to be connected to the pipeline over the HTTP protocol. Extension plugins must act as an HTTP server. Please see https://aka.ms/ava-extension-http for details.
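As a hypothetical sketch of that contract (the port, request handling, and inference JSON shape below are assumptions; see https://aka.ms/ava-extension-http for the actual schema), an extension plugin is simply an HTTP server that accepts posted frames and replies with inference results:

```typescript
import * as http from "http";

// Hypothetical response builder; the real inference schema is defined by
// the Video Analyzer extension contract (see https://aka.ms/ava-extension-http).
function buildInferenceResponse(tag: string, confidence: number) {
  return {
    inferences: [
      { type: "classification", classification: { tag: { value: tag, confidence } } },
    ],
  };
}

const server = http.createServer((req, res) => {
  if (req.method !== "POST") {
    res.writeHead(405).end();
    return;
  }
  const chunks: Buffer[] = [];
  req.on("data", (chunk) => chunks.push(chunk));
  req.on("end", () => {
    // A real plugin would decode the image bytes in Buffer.concat(chunks)
    // and run a model; here we return a canned classification.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(buildInferenceResponse("vehicle", 0.92)));
  });
});

// server.listen(8080); // uncomment to serve; the pipeline posts frames here
```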
HTTP header credentials.
BMP image encoding.
JPEG image encoding.
PNG image encoding.
Raw image formatting.
Defines values for ImageFormatRawPixelFormat.
KnownImageFormatRawPixelFormat can be used interchangeably with ImageFormatRawPixelFormat,
this enum contains the known values that the service supports.
yuv420p: Planar YUV 4:2:0, 12bpp, (1 Cr and Cb sample per 2x2 Y samples).
rgb565be: Packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), big-endian.
rgb565le: Packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), little-endian.
rgb555be: Packed RGB 5:5:5, 16bpp, (msb)1X 5R 5G 5B(lsb), big-endian , X=unused/undefined.
rgb555le: Packed RGB 5:5:5, 16bpp, (msb)1X 5R 5G 5B(lsb), little-endian, X=unused/undefined.
rgb24: Packed RGB 8:8:8, 24bpp, RGBRGB.
bgr24: Packed RGB 8:8:8, 24bpp, BGRBGR.
argb: Packed ARGB 8:8:8:8, 32bpp, ARGBARGB.
rgba: Packed RGBA 8:8:8:8, 32bpp, RGBARGBA.
abgr: Packed ABGR 8:8:8:8, 32bpp, ABGRABGR.
bgra: Packed BGRA 8:8:8:8, 32bpp, BGRABGRA.
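The bits-per-pixel figures above determine the raw frame size. A small illustrative helper (not part of the SDK) that computes the byte size of one frame:

```typescript
// Bits per pixel for each raw pixel format listed above.
const bitsPerPixel: Record<string, number> = {
  yuv420p: 12,
  rgb565be: 16, rgb565le: 16, rgb555be: 16, rgb555le: 16,
  rgb24: 24, bgr24: 24,
  argb: 32, rgba: 32, abgr: 32, bgra: 32,
};

// Byte size of a single raw frame; assumes no row padding.
function rawFrameBytes(format: string, width: number, height: number): number {
  const bpp = bitsPerPixel[format];
  if (bpp === undefined) throw new Error(`unknown pixel format: ${format}`);
  return (width * height * bpp) / 8;
}
```

For example, a 1920x1080 yuv420p frame occupies 1920 * 1080 * 12 / 8 = 3,110,400 bytes, which is the amount of memory one frame consumes when using the sharedMemory transfer mode.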
Defines values for ImageScaleMode.
KnownImageScaleMode can be used interchangeably with ImageScaleMode,
this enum contains the known values that the service supports.
preserveAspectRatio: Preserves the same aspect ratio as the input image. If only one image dimension is provided, the second dimension is calculated based on the input image aspect ratio. When 2 dimensions are provided, the image is resized to fit the most constraining dimension, considering the input image size and aspect ratio.
pad: Pads the image with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the image is resized to the specified dimensions while not altering the content aspect ratio.
stretch: Stretches the original image so it is resized to the specified dimensions.
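A hypothetical helper illustrating how the three modes map input dimensions to output dimensions (the service performs this internally; `scaledSize` is not an SDK API):

```typescript
// Illustrative output-dimension calculation for each scale mode.
function scaledSize(
  mode: "preserveAspectRatio" | "pad" | "stretch",
  inW: number, inH: number, outW: number, outH: number
): { w: number; h: number } {
  switch (mode) {
    case "stretch":
    case "pad": // pad emits the requested canvas; content is letter/pillar-boxed inside it
      return { w: outW, h: outH };
    case "preserveAspectRatio": {
      // Fit inside the most constraining dimension, keeping the input aspect ratio.
      const scale = Math.min(outW / inW, outH / inH);
      return { w: Math.round(inW * scale), h: Math.round(inH * scale) };
    }
  }
}
```

For instance, fitting a 1920x1080 frame into a 640x640 box with preserveAspectRatio yields 640x360, while pad and stretch both emit 640x640 (pad adds black letterbox stripes rather than distorting the content).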
IoT Hub Message sink allows for pipeline messages to be published into the IoT Edge Hub. Published messages can then be delivered to the cloud and other modules via routes declared in the IoT Edge deployment manifest.
IoT Hub Message source allows for the pipeline to consume messages from the IoT Edge Hub. Messages can be routed from other IoT modules via routes declared in the IoT Edge deployment manifest.
Line crossing processor allows for the detection of tracked objects moving across one or more predefined lines. It must be downstream of an object tracker or of an AI extension node that generates a sequenceId for objects which are tracked across different frames of the video. Inference events are generated every time an object crosses from one side of the line to the other.
Defines values for LivePipelineState.
KnownLivePipelineState can be used interchangeably with LivePipelineState,
this enum contains the known values that the service supports.
inactive: The live pipeline is idle and not processing media.
activating: The live pipeline is transitioning into the active state.
active: The live pipeline is active and able to process media. If your data source is not available, for instance, if your RTSP camera is powered off or unreachable, the pipeline will still be active and periodically retrying the connection. Your Azure subscription will be billed for the duration in which the live pipeline is in the active state.
deactivating: The live pipeline is transitioning into the inactive state.
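The lifecycle described above is a simple cycle; as an illustrative model (the `nextState` and `isBilled` helpers are not part of the SDK), the transitions and the billing rule can be sketched as:

```typescript
// Illustrative model of the documented lifecycle:
// inactive -> activating -> active -> deactivating -> inactive.
type LivePipelineState = "inactive" | "activating" | "active" | "deactivating";

const nextState: Record<LivePipelineState, LivePipelineState> = {
  inactive: "activating",
  activating: "active",
  active: "deactivating",
  deactivating: "inactive",
};

// Billing applies only while the pipeline is in the active state.
function isBilled(state: LivePipelineState): boolean {
  return state === "active";
}
```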
Motion detection processor allows for motion detection on the video stream. It generates motion events whenever motion is present on the video.
Defines values for MotionDetectionSensitivity.
KnownMotionDetectionSensitivity can be used interchangeably with MotionDetectionSensitivity,
this enum contains the known values that the service supports.
low: Low sensitivity.
medium: Medium sensitivity.
high: High sensitivity.
Describes a line configuration.
Describes a closed polygon configuration.
Defines values for ObjectTrackingAccuracy.
KnownObjectTrackingAccuracy can be used interchangeably with ObjectTrackingAccuracy,
this enum contains the known values that the service supports.
low: Low accuracy.
medium: Medium accuracy.
high: High accuracy.
Object tracker processor allows for continuous tracking of one or more objects over a finite sequence of video frames. It must be used downstream of an object detector extension node, thus allowing the extension to be configured to perform inferences on sparse frames through the use of the 'maximumSamplesPerSecond' sampling property. The object tracker node will then track the detected objects over the frames in which the detector is not invoked, resulting in smoother tracking of detected objects across the continuum of video frames. The tracker will stop tracking objects which the upstream detector fails to detect in its subsequent detections.
Defines values for OutputSelectorOperator.
KnownOutputSelectorOperator can be used interchangeably with OutputSelectorOperator,
this enum contains the known values that the service supports.
is: The property is of the type defined by value.
isNot: The property is not of the type defined by value.
Defines values for OutputSelectorProperty.
KnownOutputSelectorProperty can be used interchangeably with OutputSelectorProperty,
this enum contains the known values that the service supports.
mediaType: The stream's MIME type or subtype: audio, video or application.
Defines values for ParameterType.
KnownParameterType can be used interchangeably with ParameterType,
this enum contains the known values that the service supports.
string: The parameter's value is a string.
secretString: The parameter's value is a string that holds sensitive information.
int: The parameter's value is a 32-bit signed integer.
double: The parameter's value is a 64-bit double-precision floating point.
bool: The parameter's value is a boolean value that is either true or false.
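A hypothetical coercion helper showing how a raw string value could be interpreted under each parameter type (illustrative only; the service performs its own validation):

```typescript
type ParameterType = "string" | "secretString" | "int" | "double" | "bool";

// Illustrative coercion of a raw string according to its declared type.
function parseParameterValue(type: ParameterType, raw: string): string | number | boolean {
  switch (type) {
    case "string":
    case "secretString":
      return raw;
    case "int": {
      const v = Number(raw);
      // Enforce the 32-bit signed integer range described above.
      if (!Number.isInteger(v) || v < -2147483648 || v > 2147483647) {
        throw new Error(`not a 32-bit signed integer: ${raw}`);
      }
      return v;
    }
    case "double":
      return Number(raw);
    case "bool":
      if (raw !== "true" && raw !== "false") throw new Error(`not a boolean: ${raw}`);
      return raw === "true";
  }
}
```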
A list of PEM formatted certificates.
All of the options for the type of request to send.
RTSP source allows for media from an RTSP camera or generic RTSP server to be ingested into a live pipeline.
Defines values for RtspTransport.
KnownRtspTransport can be used interchangeably with RtspTransport,
this enum contains the known values that the service supports.
http: HTTP transport. RTSP messages are exchanged over long running HTTP requests and RTP packets are interleaved within the HTTP channel.
tcp: TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.
A signal gate determines when to block (gate) incoming media, and when to allow it through. It gathers input events over the activationEvaluationWindow, and determines whether to open or close the gate. See https://aka.ms/ava-signalgate for more information.
Defines a Spatial Analysis custom operation. This requires the Azure Cognitive Services Spatial analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information.
Defines values for SpatialAnalysisOperationFocus.
KnownSpatialAnalysisOperationFocus can be used interchangeably with SpatialAnalysisOperationFocus,
this enum contains the known values that the service supports.
center: The center of the object.
bottomCenter: The bottom center of the object.
footprint: The footprint.
Defines a Spatial Analysis person count operation eventing configuration.
Defines values for SpatialAnalysisPersonCountEventTrigger.
KnownSpatialAnalysisPersonCountEventTrigger can be used interchangeably with SpatialAnalysisPersonCountEventTrigger,
this enum contains the known values that the service supports.
event: Event trigger.
interval: Interval trigger.
Defines a Spatial Analysis person count operation. This requires the Azure Cognitive Services Spatial analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information.
Defines a Spatial Analysis person distance operation eventing configuration.
Defines values for SpatialAnalysisPersonDistanceEventTrigger.
KnownSpatialAnalysisPersonDistanceEventTrigger can be used interchangeably with SpatialAnalysisPersonDistanceEventTrigger,
this enum contains the known values that the service supports.
event: Event trigger.
interval: Interval trigger.
Defines a Spatial Analysis person distance operation. This requires the Azure Cognitive Services Spatial analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information.
Defines a Spatial Analysis person line crossing operation eventing configuration.
Defines a Spatial Analysis person line crossing operation. This requires the Azure Cognitive Services Spatial analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information.
Defines a Spatial Analysis person crossing zone operation eventing configuration.
Defines values for SpatialAnalysisPersonZoneCrossingEventType.
KnownSpatialAnalysisPersonZoneCrossingEventType can be used interchangeably with SpatialAnalysisPersonZoneCrossingEventType,
this enum contains the known values that the service supports.
zoneCrossing: Zone crossing event type.
zoneDwellTime: Zone dwell time event type.
Defines a Spatial Analysis person zone crossing operation. This requires the Azure Cognitive Services Spatial analysis module to be deployed alongside the Video Analyzer module; please see https://aka.ms/ava-spatial-analysis for more information.
Base class for Azure Cognitive Services Spatial Analysis typed operations.
TLS endpoint describes an endpoint that the pipeline can connect to over TLS transport (data is encrypted in transit).
Unsecured endpoint describes an endpoint that the pipeline can connect to over clear transport (no encryption in transit).
Username and password credentials.
Video sink allows for video and audio to be recorded to the Video Analyzer service. The recorded video can be played from anywhere and further managed from the cloud. For security reasons, a given Video Analyzer edge module instance can only record content to new video entries, or to existing video entries previously recorded by the same module. Any attempt to record content to an existing video which was not created by the same module instance will result in failure to record.
Create a request to set a pipeline topology.
The string which determines the type of request. In this case a PipelineTopologySet request.
The data to send in the request. PipelineTopologySet requests require a pipeline topology.
Create a request to get a pipeline topology.
The string which determines the type of request. In this case a PipelineTopologyGet request.
The data to send in the request. PipelineTopologyGet requests require the name of a pipeline topology.
Create a request to list all pipeline topologies.
The string which determines the type of request. In this case a PipelineTopologyList request.
Create a request to delete a pipeline topology.
The string which determines the type of request. In this case a PipelineTopologyDelete request.
The data to send in the request. PipelineTopologyDelete requests require the name of a pipeline topology.
Create a request to set a live pipeline.
The string which determines the type of request. In this case a LivePipelineSet request.
The data to send in the request. LivePipelineSet requests require a live pipeline.
Create a request to get a live pipeline.
The string which determines the type of request. In this case a LivePipelineGet request.
The data to send in the request. LivePipelineGet requests require a live pipeline name.
Create a request to list all live pipelines.
The string which determines the type of request. In this case a LivePipelineList request.
Create a request to delete a live pipeline.
The string which determines the type of request. In this case a LivePipelineDelete request.
The data to send in the request. LivePipelineDelete requests require a live pipeline name.
Create a request to activate a live pipeline.
The string which determines the type of request. In this case a LivePipelineActivate request.
The data to send in the request. LivePipelineActivate requests require a live pipeline name.
Create a request to deactivate a live pipeline.
The string which determines the type of request. In this case a LivePipelineDeactivate request.
The data to send in the request. LivePipelineDeactivate requests require a live pipeline name.
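Each of the requests above is delivered to the Video Analyzer edge module as an IoT Hub direct method. A hypothetical sketch of the envelope shape (the method name casing, `@apiVersion` value, and timeout below are assumptions; consult the SDK for the exact contract):

```typescript
// Illustrative direct-method envelope; values are assumptions.
const API_VERSION = "1.1";

interface DirectMethodRequest {
  methodName: string;
  payload: Record<string, unknown>;
  responseTimeoutInSeconds: number;
}

function buildDirectMethodRequest(
  methodName: string,
  payload: Record<string, unknown> = {}
): DirectMethodRequest {
  return {
    methodName,
    payload: { "@apiVersion": API_VERSION, ...payload },
    responseTimeoutInSeconds: 30,
  };
}

// e.g. a LivePipelineActivate request for a pipeline named "pipeline1":
const activate = buildDirectMethodRequest("livePipelineActivate", { name: "pipeline1" });
```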
A processor that allows the pipeline topology to send video frames to a Cognitive Services Vision extension. Inference results are relayed to downstream nodes.