azure.mgmt.media.models module
exception azure.mgmt.media.models.ApiErrorException(deserialize, response, *args)
Bases: msrest.exceptions.HttpOperationError
Server responded with exception of type: ‘ApiError’.
- Parameters
deserialize – A deserializer
response – Server response to be deserialized.
class azure.mgmt.media.models.AacAudio(*, label: str = None, channels: int = None, sampling_rate: int = None, bitrate: int = None, profile=None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.Audio
Describes Advanced Audio Codec (AAC) audio encoding settings.
All required parameters must be populated in order to send to Azure.
- Parameters
label (str) – An optional label for the codec. The label can be used to control muxing behavior.
odatatype (str) – Required. Constant filled by server.
channels (int) – The number of channels in the audio.
sampling_rate (int) – The sampling rate to use for encoding in hertz.
bitrate (int) – The bitrate, in bits per second, of the output encoded audio.
profile (str or AacAudioProfile) – The encoding profile to be used when encoding audio with AAC. Possible values include: ‘AacLc’, ‘HeAacV1’, ‘HeAacV2’
class azure.mgmt.media.models.AbsoluteClipTime(*, time, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ClipTime
Specifies the clip time as an absolute time position in the media file. The absolute time can point to a different position depending on whether the media file starts from a timestamp of zero or not.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
time (timedelta) – Required. The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.
class azure.mgmt.media.models.AccountFilter(*, presentation_time_range=None, first_quality=None, tracks=None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ProxyResource
An Account Filter.
Variables are only populated by the server, and will be ignored when sending a request.
- Parameters
presentation_time_range (PresentationTimeRange) – The presentation time range.
first_quality (FirstQuality) – The first quality.
tracks (list[FilterTrackSelection]) – The tracks selection conditions.
class azure.mgmt.media.models.AkamaiAccessControl(*, akamai_signature_header_authentication_key_list=None, **kwargs)
Bases: msrest.serialization.Model
Akamai access control.
- Parameters
akamai_signature_header_authentication_key_list (list[AkamaiSignatureHeaderAuthenticationKey]) – The list of Akamai signature header authentication keys.
class azure.mgmt.media.models.AkamaiSignatureHeaderAuthenticationKey(*, identifier: str = None, base64_key: str = None, expiration=None, **kwargs)
Bases: msrest.serialization.Model
Akamai Signature Header authentication key.
class azure.mgmt.media.models.ApiError(*, error=None, **kwargs)
Bases: msrest.serialization.Model
The API error.
- Parameters
error (ODataError) – The error properties.
class azure.mgmt.media.models.Asset(*, alternate_id: str = None, description: str = None, container: str = None, storage_account_name: str = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ProxyResource
An Asset.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
asset_id (str) – The Asset ID.
created (datetime) – The creation date of the Asset.
last_modified (datetime) – The last modified date of the Asset.
storage_encryption_format (str or AssetStorageEncryptionFormat) – The Asset encryption format. One of None or MediaStorageEncryption. Possible values include: ‘None’, ‘MediaStorageClientEncryption’
- Parameters
alternate_id (str) – The alternate ID of the Asset.
description (str) – The Asset description.
container (str) – The name of the asset blob container.
storage_account_name (str) – The name of the storage account.
class azure.mgmt.media.models.AssetContainerSas(*, asset_container_sas_urls=None, **kwargs)
Bases: msrest.serialization.Model
The Asset Storage container SAS URLs.
class azure.mgmt.media.models.AssetFileEncryptionMetadata(*, asset_file_id: str, initialization_vector: str = None, asset_file_name: str = None, **kwargs)
Bases: msrest.serialization.Model
The Asset File Storage encryption metadata.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.AssetFilter(*, presentation_time_range=None, first_quality=None, tracks=None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ProxyResource
An Asset Filter.
Variables are only populated by the server, and will be ignored when sending a request.
- Parameters
presentation_time_range (PresentationTimeRange) – The presentation time range.
first_quality (FirstQuality) – The first quality.
tracks (list[FilterTrackSelection]) – The tracks selection conditions.
class azure.mgmt.media.models.AssetStreamingLocator(**kwargs)
Bases: msrest.serialization.Model
Properties of the Streaming Locator.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
asset_name (str) – Asset Name.
created (datetime) – The creation time of the Streaming Locator.
start_time (datetime) – The start time of the Streaming Locator.
end_time (datetime) – The end time of the Streaming Locator.
streaming_locator_id (str) – StreamingLocatorId of the Streaming Locator.
streaming_policy_name (str) – Name of the Streaming Policy used by this Streaming Locator.
default_content_key_policy_name (str) – Name of the default ContentKeyPolicy used by this Streaming Locator.
class azure.mgmt.media.models.Audio(*, label: str = None, channels: int = None, sampling_rate: int = None, bitrate: int = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.Codec
Defines the common properties for all audio codecs.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: AacAudio
All required parameters must be populated in order to send to Azure.
- Parameters
label (str) – An optional label for the codec. The label can be used to control muxing behavior.
odatatype (str) – Required. Constant filled by server.
channels (int) – The number of channels in the audio.
sampling_rate (int) – The sampling rate to use for encoding in hertz.
bitrate (int) – The bitrate, in bits per second, of the output encoded audio.
class azure.mgmt.media.models.AudioAnalyzerPreset(*, audio_language: str = None, experimental_options=None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.Preset
The Audio Analyzer preset applies a pre-defined set of AI-based analysis operations, including speech transcription. Currently, the preset supports processing of content with a single audio track.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: VideoAnalyzerPreset
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
audio_language (str) – The language for the audio payload in the input using the BCP-47 format of ‘language tag-region’ (e.g. ‘en-US’). If you know the language of your content, it is recommended that you specify it. If the language isn’t specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to ‘en-US’. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
experimental_options (dict[str, str]) – Dictionary containing key value pairs for parameters not exposed in the preset itself
class azure.mgmt.media.models.AudioOverlay(*, input_label: str, start=None, end=None, fade_in_duration=None, fade_out_duration=None, audio_gain_level: float = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.Overlay
Describes the properties of an audio overlay.
All required parameters must be populated in order to send to Azure.
- Parameters
input_label (str) – Required. The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG or PNG formats, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
start (timedelta) – The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds in to the input video. If not specified the overlay starts from the beginning of the input video.
end (timedelta) – The position in the input video at which the overlay ends. The value should be in ISO 8601 duration format. For example, PT30S to end the overlay at 30 seconds in to the input video. If not specified the overlay will be applied until the end of the input video if inputLoop is true. Else, if inputLoop is false, then overlay will last as long as the duration of the overlay media.
fade_in_duration (timedelta) – The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified the default behavior is to have no fade in (same as PT0S).
fade_out_duration (timedelta) – The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified the default behavior is to have no fade out (same as PT0S).
audio_gain_level (float) – The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.BuiltInStandardEncoderPreset(*, preset_name, **kwargs)
Bases: azure.mgmt.media.models._models_py3.Preset
Describes a built-in preset for encoding the input video with the Standard Encoder.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
preset_name (str or EncoderNamedPreset) – Required. The built-in preset to be used for encoding videos. Possible values include: ‘H264SingleBitrateSD’, ‘H264SingleBitrate720p’, ‘H264SingleBitrate1080p’, ‘AdaptiveStreaming’, ‘AACGoodQualityAudio’, ‘ContentAwareEncodingExperimental’, ‘ContentAwareEncoding’, ‘H264MultipleBitrate1080p’, ‘H264MultipleBitrate720p’, ‘H264MultipleBitrateSD’
class azure.mgmt.media.models.CbcsDrmConfiguration(*, fair_play=None, play_ready=None, widevine=None, **kwargs)
Bases: msrest.serialization.Model
Class to specify DRM configurations of CommonEncryptionCbcs scheme in Streaming Policy.
- Parameters
fair_play (StreamingPolicyFairPlayConfiguration) – FairPlay configurations
play_ready (StreamingPolicyPlayReadyConfiguration) – PlayReady configurations
widevine (StreamingPolicyWidevineConfiguration) – Widevine configurations
class azure.mgmt.media.models.CencDrmConfiguration(*, play_ready=None, widevine=None, **kwargs)
Bases: msrest.serialization.Model
Class to specify DRM configurations of CommonEncryptionCenc scheme in Streaming Policy.
- Parameters
play_ready (StreamingPolicyPlayReadyConfiguration) – PlayReady configurations
widevine (StreamingPolicyWidevineConfiguration) – Widevine configurations
class azure.mgmt.media.models.CheckNameAvailabilityInput(*, name: str = None, type: str = None, **kwargs)
Bases: msrest.serialization.Model
The input to the check name availability request.
class azure.mgmt.media.models.ClipTime(**kwargs)
Bases: msrest.serialization.Model
Base class for specifying a clip time. Use sub classes of this class to specify the time position in the media.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: AbsoluteClipTime
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.Codec(*, label: str = None, **kwargs)
Bases: msrest.serialization.Model
Describes the basic properties of all codecs.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: Audio, CopyVideo, Video, CopyAudio
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.CommonEncryptionCbcs(*, enabled_protocols=None, clear_tracks=None, content_keys=None, drm=None, **kwargs)
Bases: msrest.serialization.Model
Class for CommonEncryptionCbcs encryption scheme.
- Parameters
enabled_protocols (EnabledProtocols) – Representing supported protocols
clear_tracks (list[TrackSelection]) – Representing which tracks should not be encrypted
content_keys (StreamingPolicyContentKeys) – Representing default content key for each encryption scheme and separate content keys for specific tracks
drm (CbcsDrmConfiguration) – Configuration of DRMs for current encryption scheme
class azure.mgmt.media.models.CommonEncryptionCenc(*, enabled_protocols=None, clear_tracks=None, content_keys=None, drm=None, **kwargs)
Bases: msrest.serialization.Model
Class for CommonEncryptionCenc encryption scheme.
- Parameters
enabled_protocols (EnabledProtocols) – Representing supported protocols
clear_tracks (list[TrackSelection]) – Representing which tracks should not be encrypted
content_keys (StreamingPolicyContentKeys) – Representing default content key for each encryption scheme and separate content keys for specific tracks
drm (CencDrmConfiguration) – Configuration of DRMs for CommonEncryptionCenc encryption scheme
class azure.mgmt.media.models.ContentKeyPolicy(*, options, description: str = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ProxyResource
A Content Key Policy resource.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
policy_id (str) – The legacy Policy ID.
created (datetime) – The creation date of the Policy.
last_modified (datetime) – The last modified date of the Policy.
- Parameters
description (str) – A description for the Policy.
options (list[ContentKeyPolicyOption]) – Required. The Key Policy options.
class azure.mgmt.media.models.ContentKeyPolicyClearKeyConfiguration(**kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyConfiguration
Represents a configuration for non-DRM keys.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyConfiguration(**kwargs)
Bases: msrest.serialization.Model
Base class for Content Key Policy configuration. A derived class must be used to create a configuration.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: ContentKeyPolicyClearKeyConfiguration, ContentKeyPolicyUnknownConfiguration, ContentKeyPolicyWidevineConfiguration, ContentKeyPolicyPlayReadyConfiguration, ContentKeyPolicyFairPlayConfiguration
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyFairPlayConfiguration(*, ask: bytearray, fair_play_pfx_password: str, fair_play_pfx: str, rental_and_lease_key_type, rental_duration: int, offline_rental_configuration=None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyConfiguration
Specifies a configuration for FairPlay licenses.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
ask (bytearray) – Required. The key that must be used as FairPlay Application Secret key.
fair_play_pfx_password (str) – Required. The password encrypting FairPlay certificate in PKCS 12 (pfx) format.
fair_play_pfx (str) – Required. The Base64 representation of FairPlay certificate in PKCS 12 (pfx) format (including private key).
rental_and_lease_key_type (str or ContentKeyPolicyFairPlayRentalAndLeaseKeyType) – Required. The rental and lease key type. Possible values include: ‘Unknown’, ‘Undefined’, ‘DualExpiry’, ‘PersistentUnlimited’, ‘PersistentLimited’
rental_duration (long) – Required. The rental duration. Must be greater than or equal to 0.
offline_rental_configuration (ContentKeyPolicyFairPlayOfflineRentalConfiguration) – Offline rental policy
class azure.mgmt.media.models.ContentKeyPolicyFairPlayOfflineRentalConfiguration(*, playback_duration_seconds: int, storage_duration_seconds: int, **kwargs)
Bases: msrest.serialization.Model
ContentKeyPolicyFairPlayOfflineRentalConfiguration.
All required parameters must be populated in order to send to Azure.
- Parameters
playback_duration_seconds (long) – Required. Playback duration
storage_duration_seconds (long) – Required. Storage duration
class azure.mgmt.media.models.ContentKeyPolicyOpenRestriction(**kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyRestriction
Represents an open restriction. License or key will be delivered on every request.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyOption(*, configuration, restriction, name: str = None, **kwargs)
Bases: msrest.serialization.Model
Represents a policy option.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
policy_option_id (str) – The legacy Policy Option ID.
- Parameters
name (str) – The Policy Option description.
configuration (ContentKeyPolicyConfiguration) – Required. The key delivery configuration.
restriction (ContentKeyPolicyRestriction) – Required. The requirements that must be met to deliver keys with this configuration.
class azure.mgmt.media.models.ContentKeyPolicyPlayReadyConfiguration(*, licenses, response_custom_data: str = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyConfiguration
Specifies a configuration for PlayReady licenses.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
licenses (list[ContentKeyPolicyPlayReadyLicense]) – Required. The PlayReady licenses.
response_custom_data (str) – The custom response data.
class azure.mgmt.media.models.ContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader(**kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyPlayReadyContentKeyLocation
Specifies that the content key ID is in the PlayReady header.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier(*, key_id: str, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyPlayReadyContentKeyLocation
Specifies that the content key ID is specified in the PlayReady configuration.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.ContentKeyPolicyPlayReadyContentKeyLocation(**kwargs)
Bases: msrest.serialization.Model
Base class for content key ID location. A derived class must be used to represent the location.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: ContentKeyPolicyPlayReadyContentEncryptionKeyFromHeader, ContentKeyPolicyPlayReadyContentEncryptionKeyFromKeyIdentifier
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyPlayReadyExplicitAnalogTelevisionRestriction(*, best_effort: bool, configuration_data: int, **kwargs)
Bases: msrest.serialization.Model
Configures the Explicit Analog Television Output Restriction control bits. For further details see the PlayReady Compliance Rules.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.ContentKeyPolicyPlayReadyLicense(*, allow_test_devices: bool, license_type, content_key_location, content_type, begin_date=None, expiration_date=None, relative_begin_date=None, relative_expiration_date=None, grace_period=None, play_right=None, **kwargs)
Bases: msrest.serialization.Model
The PlayReady license.
All required parameters must be populated in order to send to Azure.
- Parameters
allow_test_devices (bool) – Required. A flag indicating whether test devices can use the license.
begin_date (datetime) – The begin date of the license.
expiration_date (datetime) – The expiration date of the license.
relative_begin_date (timedelta) – The relative begin date of the license.
relative_expiration_date (timedelta) – The relative expiration date of the license.
grace_period (timedelta) – The grace period of the license.
play_right (ContentKeyPolicyPlayReadyPlayRight) – The license PlayRight.
license_type (str or ContentKeyPolicyPlayReadyLicenseType) – Required. The license type. Possible values include: ‘Unknown’, ‘NonPersistent’, ‘Persistent’
content_key_location (ContentKeyPolicyPlayReadyContentKeyLocation) – Required. The content key location.
content_type (str or ContentKeyPolicyPlayReadyContentType) – Required. The PlayReady content type. Possible values include: ‘Unknown’, ‘Unspecified’, ‘UltraVioletDownload’, ‘UltraVioletStreaming’
class azure.mgmt.media.models.ContentKeyPolicyPlayReadyPlayRight(*, digital_video_only_content_restriction: bool, image_constraint_for_analog_component_video_restriction: bool, image_constraint_for_analog_computer_monitor_restriction: bool, allow_passing_video_content_to_unknown_output, first_play_expiration=None, scms_restriction: int = None, agc_and_color_stripe_restriction: int = None, explicit_analog_television_output_restriction=None, uncompressed_digital_video_opl: int = None, compressed_digital_video_opl: int = None, analog_video_opl: int = None, compressed_digital_audio_opl: int = None, uncompressed_digital_audio_opl: int = None, **kwargs)
Bases: msrest.serialization.Model
Configures the Play Right in the PlayReady license.
All required parameters must be populated in order to send to Azure.
- Parameters
first_play_expiration (timedelta) – The amount of time that the license is valid after the license is first used to play content.
scms_restriction (int) – Configures the Serial Copy Management System (SCMS) in the license. Must be between 0 and 3 inclusive.
agc_and_color_stripe_restriction (int) – Configures Automatic Gain Control (AGC) and Color Stripe in the license. Must be between 0 and 3 inclusive.
explicit_analog_television_output_restriction (ContentKeyPolicyPlayReadyExplicitAnalogTelevisionRestriction) – Configures the Explicit Analog Television Output Restriction in the license. Configuration data must be between 0 and 3 inclusive.
digital_video_only_content_restriction (bool) – Required. Enables the Digital Video Only Content Restriction in the license.
image_constraint_for_analog_component_video_restriction (bool) – Required. Enables the Image Constraint For Analog Component Video Restriction in the license.
image_constraint_for_analog_computer_monitor_restriction (bool) – Required. Enables the Image Constraint For Analog Computer Monitor Restriction in the license.
allow_passing_video_content_to_unknown_output (str or ContentKeyPolicyPlayReadyUnknownOutputPassingOption) – Required. Configures Unknown output handling settings of the license. Possible values include: ‘Unknown’, ‘NotAllowed’, ‘Allowed’, ‘AllowedWithVideoConstriction’
uncompressed_digital_video_opl (int) – Specifies the output protection level for uncompressed digital video.
compressed_digital_video_opl (int) – Specifies the output protection level for compressed digital video.
analog_video_opl (int) – Specifies the output protection level for analog video.
compressed_digital_audio_opl (int) – Specifies the output protection level for compressed digital audio.
uncompressed_digital_audio_opl (int) – Specifies the output protection level for uncompressed digital audio.
class azure.mgmt.media.models.ContentKeyPolicyProperties(*, options, description: str = None, **kwargs)
Bases: msrest.serialization.Model
The properties of the Content Key Policy.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
policy_id (str) – The legacy Policy ID.
created (datetime) – The creation date of the Policy.
last_modified (datetime) – The last modified date of the Policy.
- Parameters
description (str) – A description for the Policy.
options (list[ContentKeyPolicyOption]) – Required. The Key Policy options.
class azure.mgmt.media.models.ContentKeyPolicyRestriction(**kwargs)
Bases: msrest.serialization.Model
Base class for Content Key Policy restrictions. A derived class must be used to create a restriction.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: ContentKeyPolicyOpenRestriction, ContentKeyPolicyUnknownRestriction, ContentKeyPolicyTokenRestriction
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyRestrictionTokenKey(**kwargs)
Bases: msrest.serialization.Model
Base class for Content Key Policy key for token validation. A derived class must be used to create a token key.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: ContentKeyPolicySymmetricTokenKey, ContentKeyPolicyRsaTokenKey, ContentKeyPolicyX509CertificateTokenKey
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyRsaTokenKey(*, exponent: bytearray, modulus: bytearray, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyRestrictionTokenKey
Specifies an RSA key for token validation.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.ContentKeyPolicySymmetricTokenKey(*, key_value: bytearray, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyRestrictionTokenKey
Specifies a symmetric key for token validation.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.ContentKeyPolicyTokenClaim(*, claim_type: str = None, claim_value: str = None, **kwargs)
Bases: msrest.serialization.Model
Represents a token claim.
class azure.mgmt.media.models.ContentKeyPolicyTokenRestriction(*, issuer: str, audience: str, primary_verification_key, restriction_token_type, alternate_verification_keys=None, required_claims=None, open_id_connect_discovery_document: str = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyRestriction
Represents a token restriction. Provided token must match these requirements for successful license or key delivery.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
issuer (str) – Required. The token issuer.
audience (str) – Required. The audience for the token.
primary_verification_key (ContentKeyPolicyRestrictionTokenKey) – Required. The primary verification key.
alternate_verification_keys (list[ContentKeyPolicyRestrictionTokenKey]) – A list of alternative verification keys.
required_claims (list[ContentKeyPolicyTokenClaim]) – A list of required token claims.
restriction_token_type (str or ContentKeyPolicyRestrictionTokenType) – Required. The type of token. Possible values include: ‘Unknown’, ‘Swt’, ‘Jwt’
open_id_connect_discovery_document (str) – The OpenID connect discovery document.
class azure.mgmt.media.models.ContentKeyPolicyUnknownConfiguration(**kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyConfiguration
Represents a ContentKeyPolicyConfiguration that is unavailable in the current API version.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyUnknownRestriction(**kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyRestriction
Represents a ContentKeyPolicyRestriction that is unavailable in the current API version.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
class azure.mgmt.media.models.ContentKeyPolicyWidevineConfiguration(*, widevine_template: str, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyConfiguration
Specifies a configuration for Widevine licenses.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.ContentKeyPolicyX509CertificateTokenKey(*, raw_body: bytearray, **kwargs)
Bases: azure.mgmt.media.models._models_py3.ContentKeyPolicyRestrictionTokenKey
Specifies a certificate for token validation.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.CopyAudio(*, label: str = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.Codec
A codec flag, which tells the encoder to copy the input audio bitstream.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.CopyVideo(*, label: str = None, **kwargs)
Bases: azure.mgmt.media.models._models_py3.Codec
A codec flag, which tells the encoder to copy the input video bitstream without re-encoding.
All required parameters must be populated in order to send to Azure.
class azure.mgmt.media.models.CrossSiteAccessPolicies(*, client_access_policy: str = None, cross_domain_policy: str = None, **kwargs)
Bases: msrest.serialization.Model
The client access policy.
class azure.mgmt.media.models.DefaultKey(*, label: str = None, policy_name: str = None, **kwargs)
Bases: msrest.serialization.Model
Class to specify properties of default content key for each encryption scheme.
class azure.mgmt.media.models.Deinterlace(*, parity=None, mode=None, **kwargs)
Bases: msrest.serialization.Model
Describes the de-interlacing settings.
- Parameters
parity (str or DeinterlaceParity) – The field parity for de-interlacing, defaults to Auto. Possible values include: ‘Auto’, ‘TopFieldFirst’, ‘BottomFieldFirst’
mode (str or DeinterlaceMode) – The deinterlacing mode. Defaults to AutoPixelAdaptive. Possible values include: ‘Off’, ‘AutoPixelAdaptive’
class azure.mgmt.media.models.EdgePolicies(*, usage_data_collection_policy=None, **kwargs)
Bases: msrest.serialization.Model
EdgePolicies.
- Parameters
usage_data_collection_policy (EdgeUsageDataCollectionPolicy) –
class azure.mgmt.media.models.EdgeUsageDataCollectionPolicy(*, data_collection_frequency: str = None, data_reporting_frequency: str = None, max_allowed_unreported_usage_duration: str = None, event_hub_details=None, **kwargs)
Bases: msrest.serialization.Model
EdgeUsageDataCollectionPolicy.
- Parameters
data_collection_frequency (str) – Usage data collection frequency in ISO 8601 duration format, e.g. PT10M, PT5H.
data_reporting_frequency (str) – Usage data reporting frequency in ISO 8601 duration format, e.g. PT10M, PT5H.
max_allowed_unreported_usage_duration (str) – The maximum time the device may go without reporting usage data before its functionality is restricted.
event_hub_details (EdgeUsageDataEventHub) – Details of Event Hub where the usage will be reported.
-
class
azure.mgmt.media.models.
EdgeUsageDataEventHub
(*, name: str = None, namespace: str = None, token: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
EdgeUsageDataEventHub.
-
class
azure.mgmt.media.models.
EnabledProtocols
(*, download: bool, dash: bool, hls: bool, smooth_streaming: bool, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to specify which protocols are enabled.
All required parameters must be populated in order to send to Azure.
-
class
azure.mgmt.media.models.
EntityNameAvailabilityCheckOutput
(*, name_available: bool, reason: str = None, message: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The response from the check name availability request.
All required parameters must be populated in order to send to Azure.
-
class
azure.mgmt.media.models.
EnvelopeEncryption
(*, enabled_protocols=None, clear_tracks=None, content_keys=None, custom_key_acquisition_url_template: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class for EnvelopeEncryption encryption scheme.
- Parameters
enabled_protocols (EnabledProtocols) – Represents the supported protocols.
clear_tracks (list[TrackSelection]) – Represents which tracks should not be encrypted.
content_keys (StreamingPolicyContentKeys) – Represents the default content key for each encryption scheme and separate content keys for specific tracks.
custom_key_acquisition_url_template (str) – Template for the URL of the custom service delivering keys to end user players. Not required when using Azure Media Services for issuing keys. The template supports replaceable tokens that the service will update at runtime with the value specific to the request. The currently supported token values are {AlternativeMediaId}, which is replaced with the value of StreamingLocatorId.AlternativeMediaId, and {ContentKeyId}, which is replaced with the value of identifier of the key being requested.
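The runtime token substitution described for custom_key_acquisition_url_template can be illustrated with a pure-Python sketch. The template URL and identifier values below are made up for illustration; the actual substitution is performed server-side by the service.

```python
# Sketch of the {AlternativeMediaId}/{ContentKeyId} token substitution that the
# service performs at request time. URL and IDs here are illustrative only.
def expand_key_url(template: str, alternative_media_id: str, content_key_id: str) -> str:
    return (template
            .replace("{AlternativeMediaId}", alternative_media_id)
            .replace("{ContentKeyId}", content_key_id))

url = expand_key_url(
    "https://keys.example.com/{ContentKeyId}?altId={AlternativeMediaId}",
    alternative_media_id="alt-123",
    content_key_id="9d6a2b0f",
)
# url == "https://keys.example.com/9d6a2b0f?altId=alt-123"
```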
-
class
azure.mgmt.media.models.
FaceDetectorPreset
(*, resolution=None, experimental_options=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Preset
Describes all the settings to be used when analyzing a video in order to detect all the faces present.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
resolution (str or AnalysisResolution) – Specifies the maximum resolution at which your video is analyzed. The default behavior is “SourceResolution,” which will keep the input video at its original resolution when analyzed. Using “StandardDefinition” will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to “StandardDefinition” will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected. Possible values include: ‘SourceResolution’, ‘StandardDefinition’
experimental_options (dict[str, str]) – Dictionary containing key value pairs for parameters not exposed in the preset itself
-
class
azure.mgmt.media.models.
Filters
(*, deinterlace=None, rotation=None, crop=None, overlays=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Describes all the filtering operations, such as de-interlacing, rotation etc. that are to be applied to the input media before encoding.
- Parameters
deinterlace (Deinterlace) – The de-interlacing settings.
rotation (str or Rotation) – The rotation, if any, to be applied to the input video, before it is encoded. Default is Auto. Possible values include: ‘Auto’, ‘None’, ‘Rotate0’, ‘Rotate90’, ‘Rotate180’, ‘Rotate270’
crop (Rectangle) – The parameters for the rectangular window with which to crop the input video.
overlays (list[Overlay]) – The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
-
class
azure.mgmt.media.models.
FilterTrackPropertyCondition
(*, property, value: str, operation, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The class to specify one track property condition.
All required parameters must be populated in order to send to Azure.
- Parameters
property (str or FilterTrackPropertyType) – Required. The track property type. Possible values include: ‘Unknown’, ‘Type’, ‘Name’, ‘Language’, ‘FourCC’, ‘Bitrate’
value (str) – Required. The track property value.
operation (str or FilterTrackPropertyCompareOperation) – Required. The track property condition operation. Possible values include: ‘Equal’, ‘NotEqual’
-
class
azure.mgmt.media.models.
FilterTrackSelection
(*, track_selections, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Represents a list of FilterTrackPropertyConditions to select a track. The filters are combined using a logical AND operation.
All required parameters must be populated in order to send to Azure.
- Parameters
track_selections (list[FilterTrackPropertyCondition]) – Required. The track selections.
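The logical-AND combination of conditions can be sketched in pure Python. The track and condition dictionaries below are simplified stand-ins for the model classes, not the SDK's own types:

```python
# Sketch of how a FilterTrackSelection combines its FilterTrackPropertyConditions
# with a logical AND, using the Equal/NotEqual operations documented above.
def condition_matches(track: dict, prop: str, value: str, operation: str) -> bool:
    if operation == "Equal":
        return track.get(prop) == value
    if operation == "NotEqual":
        return track.get(prop) != value
    raise ValueError(f"unknown operation: {operation}")

def track_selected(track: dict, conditions: list) -> bool:
    # Every condition must hold (logical AND) for the track to be selected.
    return all(condition_matches(track, **c) for c in conditions)

track = {"Type": "Audio", "Language": "en"}
conditions = [
    {"prop": "Type", "value": "Audio", "operation": "Equal"},
    {"prop": "Language", "value": "fr", "operation": "NotEqual"},
]
# track_selected(track, conditions) is True
```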
-
class
azure.mgmt.media.models.
FirstQuality
(*, bitrate: int, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Filter First Quality.
All required parameters must be populated in order to send to Azure.
- Parameters
bitrate (int) – Required. The first quality bitrate.
-
class
azure.mgmt.media.models.
Format
(*, filename_pattern: str, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Base class for output.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: ImageFormat, MultiBitrateFormat
All required parameters must be populated in order to send to Azure.
- Parameters
filename_pattern (str) – Required. The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
odatatype (str) – Required. Constant filled by server.
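The macro expansion described for filename_pattern can be sketched in pure Python. This is only an approximation of the documented behavior (substitute known macros, collapse and remove any left unsubstituted), not the service's actual implementation:

```python
import re

# Pure-Python sketch of the filename macro expansion described above.
MACROS = ("Basename", "Extension", "Label", "Index", "Bitrate", "Codec")

def expand_filename(pattern: str, **values) -> str:
    out = pattern
    for name in MACROS:
        if name in values:
            out = out.replace("{%s}" % name, str(values[name]))
    # Any macro left unsubstituted is collapsed and removed from the filename.
    out = re.sub(r"\{(%s)\}" % "|".join(MACROS), "", out)
    return out

name = expand_filename("{Basename}_{Label}{Extension}",
                       Basename="video", Extension=".mp4")
# name == "video_.mp4"  ({Label} was not supplied, so it is removed)
```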
-
class
azure.mgmt.media.models.
H264Layer
(*, bitrate: int, width: str = None, height: str = None, label: str = None, max_bitrate: int = None, b_frames: int = None, frame_rate: str = None, slices: int = None, adaptive_bframe: bool = None, profile=None, level: str = None, buffer_window=None, reference_frames: int = None, entropy_mode=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.VideoLayer
Describes the settings to be used when encoding the input video into a desired output bitrate layer with the H.264 video codec.
All required parameters must be populated in order to send to Azure.
- Parameters
width (str) – The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
height (str) – The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
label (str) – The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
odatatype (str) – Required. Constant filled by server.
bitrate (int) – Required. The average bitrate in bits per second at which to encode the input video when generating this layer.
max_bitrate (int) – The maximum bitrate (in bits per second), at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
b_frames (int) – The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
frame_rate (str) – The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
slices (int) – The number of slices to be used when encoding this layer. If not specified, the default is zero, which means that the encoder will use a single slice for each frame.
adaptive_bframe (bool) – Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
profile (str or H264VideoProfile) – We currently support Baseline, Main, High, High422, High444. Default is Auto. Possible values include: ‘Auto’, ‘Baseline’, ‘Main’, ‘High’, ‘High422’, ‘High444’
level (str) – We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
buffer_window (timedelta) – The VBV buffer window length. The value should be in ISO 8601 format. The value should be in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
reference_frames (int) – The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
entropy_mode (str or EntropyMode) – The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level. Possible values include: ‘Cabac’, ‘Cavlc’
-
class
azure.mgmt.media.models.
H264Video
(*, label: str = None, key_frame_interval=None, stretch_mode=None, scene_change_detection: bool = None, complexity=None, layers=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Video
Describes all the properties for encoding a video with the H.264 codec.
All required parameters must be populated in order to send to Azure.
- Parameters
label (str) – An optional label for the codec. The label can be used to control muxing behavior.
odatatype (str) – Required. Constant filled by server.
key_frame_interval (timedelta) – The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
stretch_mode (str or StretchMode) – The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize. Possible values include: ‘None’, ‘AutoSize’, ‘AutoFit’
scene_change_detection (bool) – Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
complexity (str or H264Complexity) – Tells the encoder how to choose its encoding settings. The default value is Balanced. Possible values include: ‘Speed’, ‘Balanced’, ‘Quality’
layers (list[H264Layer]) – The collection of output H.264 layers to be produced by the encoder.
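The timedelta-typed fields above (key_frame_interval, and buffer_window on H264Layer) correspond to ISO 8601 time durations such as PT2S. As a rough sketch of that correspondence, here is a minimal parser for the simple PT...H...M...S forms used in these docs; full ISO 8601 durations (with date parts) are out of scope:

```python
import re
from datetime import timedelta

# Minimal sketch: convert simple ISO 8601 time durations (e.g. PT2S, PT5M30S)
# into the timedelta values that fields like key_frame_interval expect.
def parse_pt_duration(text: str) -> timedelta:
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", text)
    if not m or not any(m.groups()):
        raise ValueError(f"unsupported duration: {text}")
    hours, minutes, seconds = m.groups()
    return timedelta(hours=int(hours or 0),
                     minutes=int(minutes or 0),
                     seconds=float(seconds or 0))

# The documented default key frame interval, PT2S, is 2 seconds:
assert parse_pt_duration("PT2S") == timedelta(seconds=2)
```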
-
class
azure.mgmt.media.models.
Hls
(*, fragments_per_ts_segment: int = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The HLS configuration.
- Parameters
fragments_per_ts_segment (int) – The number of fragments per HTTP Live Streaming (HLS) segment.
-
class
azure.mgmt.media.models.
Image
(*, start: str, label: str = None, key_frame_interval=None, stretch_mode=None, step: str = None, range: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Video
Describes the basic properties for generating thumbnails from the input video.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: JpgImage, PngImage
All required parameters must be populated in order to send to Azure.
- Parameters
label (str) – An optional label for the codec. The label can be used to control muxing behavior.
odatatype (str) – Required. Constant filled by server.
key_frame_interval (timedelta) – The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
stretch_mode (str or StretchMode) – The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize. Possible values include: ‘None’, ‘AutoSize’, ‘AutoFit’
start (str) – Required. The position in the input video from where to start generating thumbnails. The value can be in absolute timestamp (ISO 8601, e.g: PT05S), or a frame count (For example, 10 for the 10th frame), or a relative value (For example, 1%). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video.
step (str) – The intervals at which thumbnails are generated. The value can be in absolute timestamp (ISO 8601, e.g: PT05S for one image every 5 seconds), or a frame count (For example, 30 for every 30 frames), or a relative value (For example, 1%).
range (str) – The position in the input video at which to stop generating thumbnails. The value can be in absolute timestamp (ISO 8601, e.g: PT5M30S to stop at 5 minutes and 30 seconds), or a frame count (For example, 300 to stop at the 300th frame), or a relative value (For example, 100%).
-
class
azure.mgmt.media.models.
ImageFormat
(*, filename_pattern: str, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Format
Describes the properties for an output image file.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: JpgFormat, PngFormat
All required parameters must be populated in order to send to Azure.
- Parameters
filename_pattern (str) – Required. The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
odatatype (str) – Required. Constant filled by server.
-
class
azure.mgmt.media.models.
IPAccessControl
(*, allow=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The IP access control.
-
class
azure.mgmt.media.models.
IPRange
(*, name: str = None, address: str = None, subnet_prefix_length: int = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The IP address range in the CIDR scheme.
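An IPRange's address plus subnet_prefix_length describes a CIDR block, so the standard-library ipaddress module can illustrate the membership semantics. This is a pure-Python sketch, not part of the SDK:

```python
import ipaddress

# Sketch: an IPRange with address + subnet_prefix_length is a CIDR block;
# ipaddress can test whether a client IP falls inside it.
def in_range(address: str, subnet_prefix_length: int, client_ip: str) -> bool:
    network = ipaddress.ip_network(f"{address}/{subnet_prefix_length}", strict=False)
    return ipaddress.ip_address(client_ip) in network

# in_range("10.0.0.0", 24, "10.0.0.42") is True
# in_range("10.0.0.0", 24, "10.0.1.1") is False
```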
-
class
azure.mgmt.media.models.
Job
(*, input, outputs, description: str = None, priority=None, correlation_data=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.ProxyResource
A Job resource type. The progress and state can be obtained by polling a Job or subscribing to events using EventGrid.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
created (datetime) – The UTC date and time when the Job was created, in ‘YYYY-MM-DDThh:mm:ssZ’ format.
state (str or JobState) – The current state of the job. Possible values include: ‘Canceled’, ‘Canceling’, ‘Error’, ‘Finished’, ‘Processing’, ‘Queued’, ‘Scheduled’
last_modified (datetime) – The UTC date and time when the Job was last updated, in ‘YYYY-MM-DDThh:mm:ssZ’ format.
start_time (datetime) – The UTC date and time at which this Job began processing.
end_time (datetime) – The UTC date and time at which this Job finished processing.
- Parameters
description (str) – Optional customer supplied description of the Job.
input (JobInput) – Required. The inputs for the Job.
outputs (list[JobOutput]) – Required. The outputs for the Job.
priority (str or Priority) – Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal. Possible values include: ‘Low’, ‘Normal’, ‘High’
correlation_data (dict[str, str]) – Customer provided key, value pairs that will be returned in Job and JobOutput state events.
-
class
azure.mgmt.media.models.
JobError
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
Details of JobOutput errors.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
code (str or JobErrorCode) – Error code describing the error. Possible values include: ‘ServiceError’, ‘ServiceTransientError’, ‘DownloadNotAccessible’, ‘DownloadTransientError’, ‘UploadNotAccessible’, ‘UploadTransientError’, ‘ConfigurationUnsupported’, ‘ContentMalformed’, ‘ContentUnsupported’
message (str) – A human-readable language-dependent representation of the error.
category (str or JobErrorCategory) – Helps with categorization of errors. Possible values include: ‘Service’, ‘Download’, ‘Upload’, ‘Configuration’, ‘Content’
retry (str or JobRetry) – Indicates that it may be possible to retry the Job. If retry is unsuccessful, please contact Azure support via Azure Portal. Possible values include: ‘DoNotRetry’, ‘MayRetry’
details (list[JobErrorDetail]) – An array of details about specific errors that led to this reported error.
-
class
azure.mgmt.media.models.
JobErrorDetail
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
Details of JobOutput errors.
Variables are only populated by the server, and will be ignored when sending a request.
-
class
azure.mgmt.media.models.
JobInput
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
Base class for inputs to a Job.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: JobInputClip, JobInputs
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
-
class
azure.mgmt.media.models.
JobInputAsset
(*, asset_name: str, files=None, start=None, end=None, label: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.JobInputClip
Represents an Asset for input into a Job.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
files (list[str]) – List of files. Required for JobInputHttp. Maximum of 4000 characters each.
start (ClipTime) – Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
end (ClipTime) – Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
label (str) – A label that is assigned to a JobInputClip, that is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label ‘xyz’ and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label ‘xyz’.
asset_name (str) – Required. The name of the input Asset.
-
class
azure.mgmt.media.models.
JobInputClip
(*, files=None, start=None, end=None, label: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.JobInput
Represents input files for a Job.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: JobInputAsset, JobInputHttp
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
files (list[str]) – List of files. Required for JobInputHttp. Maximum of 4000 characters each.
start (ClipTime) – Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
end (ClipTime) – Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
label (str) – A label that is assigned to a JobInputClip, that is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label ‘xyz’ and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label ‘xyz’.
-
class
azure.mgmt.media.models.
JobInputHttp
(*, files=None, start=None, end=None, label: str = None, base_uri: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.JobInputClip
Represents HTTPS job input.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
files (list[str]) – List of files. Required for JobInputHttp. Maximum of 4000 characters each.
start (ClipTime) – Defines a point on the timeline of the input media at which processing will start. Defaults to the beginning of the input media.
end (ClipTime) – Defines a point on the timeline of the input media at which processing will end. Defaults to the end of the input media.
label (str) – A label that is assigned to a JobInputClip, that is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label ‘xyz’ and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label ‘xyz’.
base_uri (str) – Base URI for HTTPS job input. It will be concatenated with the provided file names. If no base URI is given, the provided file list is assumed to consist of fully qualified URIs. Maximum length of 4000 characters.
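The base_uri concatenation can be sketched in pure Python. The URIs below are illustrative, and this is only an approximation of the documented behavior, not the service's resolution logic:

```python
# Sketch of how base_uri combines with the files list of a JobInputHttp:
# base_uri is concatenated with each file name; without it, each entry
# must already be a fully qualified URI.
def resolve_input_uris(files, base_uri=None):
    if base_uri is None:
        return list(files)
    return [base_uri + f for f in files]

uris = resolve_input_uris(["clip1.mp4", "clip2.mp4"],
                          base_uri="https://example.com/media/")
# uris == ["https://example.com/media/clip1.mp4",
#          "https://example.com/media/clip2.mp4"]
```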
-
class
azure.mgmt.media.models.
JobInputs
(*, inputs=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.JobInput
Describes a list of inputs to a Job.
All required parameters must be populated in order to send to Azure.
-
class
azure.mgmt.media.models.
JobOutput
(*, label: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Describes all the properties of a JobOutput.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: JobOutputAsset
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
error (JobError) – If the JobOutput is in the Error state, it contains the details of the error.
state (str or JobState) – Describes the state of the JobOutput. Possible values include: ‘Canceled’, ‘Canceling’, ‘Error’, ‘Finished’, ‘Processing’, ‘Queued’, ‘Scheduled’
progress (int) – If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
start_time (datetime) – The UTC date and time at which this Job Output began processing.
end_time (datetime) – The UTC date and time at which this Job Output finished processing.
- Parameters
label (str) – A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, whereby your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of ‘{presetName}_{outputIndex}’ will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
odatatype (str) – Required. Constant filled by server.
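The default-label rule, ‘{presetName}_{outputIndex}’, can be sketched in pure Python. The preset names below are illustrative placeholders, not values the service defines:

```python
# Sketch of the documented default JobOutput label, '{presetName}_{outputIndex}',
# applied to each output whose label was not supplied.
def default_output_labels(preset_names, labels=None):
    labels = labels or [None] * len(preset_names)
    return [
        label if label is not None else f"{preset}_{index}"
        for index, (preset, label) in enumerate(zip(preset_names, labels))
    ]

labels = default_output_labels(["AdaptiveStreaming", "Thumbnails"])
# labels == ["AdaptiveStreaming_0", "Thumbnails_1"]
```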
-
class
azure.mgmt.media.models.
JobOutputAsset
(*, asset_name: str, label: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.JobOutput
Represents an Asset used as a JobOutput.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
error (JobError) – If the JobOutput is in the Error state, it contains the details of the error.
state (str or JobState) – Describes the state of the JobOutput. Possible values include: ‘Canceled’, ‘Canceling’, ‘Error’, ‘Finished’, ‘Processing’, ‘Queued’, ‘Scheduled’
progress (int) – If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
start_time (datetime) – The UTC date and time at which this Job Output began processing.
end_time (datetime) – The UTC date and time at which this Job Output finished processing.
- Parameters
label (str) – A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, whereby your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of ‘{presetName}_{outputIndex}’ will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
odatatype (str) – Required. Constant filled by server.
asset_name (str) – Required. The name of the output Asset.
-
class
azure.mgmt.media.models.
JpgFormat
(*, filename_pattern: str, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.ImageFormat
Describes the settings for producing JPEG thumbnails.
All required parameters must be populated in order to send to Azure.
- Parameters
filename_pattern (str) – Required. The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
odatatype (str) – Required. Constant filled by server.
-
class
azure.mgmt.media.models.
JpgImage
(*, start: str, label: str = None, key_frame_interval=None, stretch_mode=None, step: str = None, range: str = None, layers=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Image
Describes the properties for producing a series of JPEG images from the input video.
All required parameters must be populated in order to send to Azure.
- Parameters
label (str) – An optional label for the codec. The label can be used to control muxing behavior.
odatatype (str) – Required. Constant filled by server.
key_frame_interval (timedelta) – The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
stretch_mode (str or StretchMode) – The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize. Possible values include: ‘None’, ‘AutoSize’, ‘AutoFit’
start (str) – Required. The position in the input video from where to start generating thumbnails. The value can be in absolute timestamp (ISO 8601, e.g: PT05S), or a frame count (For example, 10 for the 10th frame), or a relative value (For example, 1%). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video.
step (str) – The intervals at which thumbnails are generated. The value can be in absolute timestamp (ISO 8601, e.g: PT05S for one image every 5 seconds), or a frame count (For example, 30 for every 30 frames), or a relative value (For example, 1%).
range (str) – The position in the input video at which to stop generating thumbnails. The value can be in absolute timestamp (ISO 8601, e.g: PT5M30S to stop at 5 minutes and 30 seconds), or a frame count (For example, 300 to stop at the 300th frame), or a relative value (For example, 100%).
layers (list[JpgLayer]) – A collection of output JPEG image layers to be produced by the encoder.
-
class
azure.mgmt.media.models.
JpgLayer
(*, width: str = None, height: str = None, label: str = None, quality: int = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Layer
Describes the settings to produce a JPEG image from the input video.
All required parameters must be populated in order to send to Azure.
- Parameters
width (str) – The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
height (str) – The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
label (str) – The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
odatatype (str) – Required. Constant filled by server.
quality (int) – The compression quality of the JPEG output. Range is from 0-100 and the default is 70.
-
class
azure.mgmt.media.models.
Layer
(*, width: str = None, height: str = None, label: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The encoder can be configured to produce video and/or images (thumbnails) at different resolutions, by specifying a layer for each desired resolution. A layer represents the properties for the video or image at a resolution.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: VideoLayer, JpgLayer, PngLayer
All required parameters must be populated in order to send to Azure.
- Parameters
width (str) – The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
height (str) – The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
label (str) – The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
odatatype (str) – Required. Constant filled by server.
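The absolute-or-relative dimension rule for width and height can be sketched in pure Python. This is only an illustration of the documented "50%" semantics, not the encoder's own scaling logic:

```python
# Sketch of resolving a Layer's width/height, which may be absolute pixels
# ("640") or relative to the input video ("50%").
def resolve_dimension(value: str, input_pixels: int) -> int:
    if value.endswith("%"):
        return round(input_pixels * float(value[:-1]) / 100)
    return int(value)

# A 1920-pixel-wide input with width "50%" yields a 960-pixel-wide output:
assert resolve_dimension("50%", 1920) == 960
assert resolve_dimension("640", 1920) == 640
```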
-
class
azure.mgmt.media.models.
ListContainerSasInput
(*, permissions=None, expiry_time=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The parameters to the list SAS request.
- Parameters
permissions (str or AssetContainerPermission) – The permissions to set on the SAS URL. Possible values include: ‘Read’, ‘ReadWrite’, ‘ReadWriteDelete’
expiry_time (datetime) – The SAS URL expiration time. This must be less than 24 hours from the current time.
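Because expiry_time must be less than 24 hours from the current time, the limit can be checked client-side before issuing the list-SAS request. A minimal sketch; the helper name is illustrative and not part of the SDK:

```python
from datetime import datetime, timedelta, timezone

def clamp_sas_expiry(requested: datetime, now: datetime) -> datetime:
    # The service requires the SAS expiry to be less than 24 hours from the
    # current time, so clamp any later request down to that maximum.
    max_expiry = now + timedelta(hours=24)
    return min(requested, max_expiry)

now = datetime(2023, 1, 1, tzinfo=timezone.utc)
print(clamp_sas_expiry(now + timedelta(hours=48), now))  # clamped to now + 24h
```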
-
class
azure.mgmt.media.models.
ListContentKeysResponse
(*, content_keys=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class of response for listContentKeys action.
- Parameters
content_keys (list[StreamingLocatorContentKey]) – The ContentKeys used by the current Streaming Locator.
-
class
azure.mgmt.media.models.
ListEdgePoliciesInput
(*, device_id: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
ListEdgePoliciesInput.
- Parameters
device_id (str) – Unique identifier of the edge device.
-
class
azure.mgmt.media.models.
ListPathsResponse
(*, streaming_paths=None, download_paths=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class of response for listPaths action.
- Parameters
streaming_paths (list[StreamingPath]) – The Streaming Paths supported by the current Streaming Locator.
download_paths (list[str]) – The Download Paths supported by the current Streaming Locator.
-
class
azure.mgmt.media.models.
ListStreamingLocatorsResponse
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
The Streaming Locators associated with this Asset.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
streaming_locators (list[AssetStreamingLocator]) – The list of Streaming Locators.
-
class
azure.mgmt.media.models.
LiveEvent
(*, input, tags=None, location: str = None, description: str = None, preview=None, encoding=None, cross_site_access_policies=None, vanity_url: bool = None, stream_options=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.TrackedResource
The Live Event.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
provisioning_state (str) – The provisioning state of the Live Event.
resource_state (str or LiveEventResourceState) – The resource state of the Live Event. Possible values include: ‘Stopped’, ‘Starting’, ‘Running’, ‘Stopping’, ‘Deleting’
created (datetime) – The exact time the Live Event was created.
last_modified (datetime) – The exact time the Live Event was last modified.
- Parameters
location (str) – The Azure Region of the resource.
description (str) – The Live Event description.
input (LiveEventInput) – Required. The Live Event input.
preview (LiveEventPreview) – The Live Event preview.
encoding (LiveEventEncoding) – The Live Event encoding.
cross_site_access_policies (CrossSiteAccessPolicies) – The Live Event access policies.
vanity_url (bool) – Specifies whether to use a vanity URL with the Live Event. This value is specified at creation time and cannot be updated.
stream_options (list[str or StreamOptionsFlag]) – The options to use for the LiveEvent. This value is specified at creation time and cannot be updated.
-
class
azure.mgmt.media.models.
LiveEventActionInput
(*, remove_outputs_on_stop: bool = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The LiveEvent action input parameter definition.
- Parameters
remove_outputs_on_stop (bool) – A flag indicating whether to remove the LiveOutputs when the Live Event is stopped.
-
class
azure.mgmt.media.models.
LiveEventEncoding
(*, encoding_type=None, preset_name: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The Live Event encoding.
- Parameters
encoding_type (str or LiveEventEncodingType) – The encoding type for Live Event. This value is specified at creation time and cannot be updated. Possible values include: ‘None’, ‘Basic’, ‘Standard’, ‘Premium1080p’
preset_name (str) – The encoding preset name. This value is specified at creation time and cannot be updated.
-
class
azure.mgmt.media.models.
LiveEventEndpoint
(*, protocol: str = None, url: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The Live Event endpoint.
-
class
azure.mgmt.media.models.
LiveEventInput
(*, streaming_protocol, access_control=None, key_frame_interval_duration: str = None, access_token: str = None, endpoints=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The Live Event input.
All required parameters must be populated in order to send to Azure.
- Parameters
streaming_protocol (str or LiveEventInputProtocol) – Required. The streaming protocol for the Live Event. This is specified at creation time and cannot be updated. Possible values include: ‘FragmentedMP4’, ‘RTMP’
access_control (LiveEventInputAccessControl) – The access control for LiveEvent Input.
key_frame_interval_duration (str) – ISO 8601 timespan duration of the key frame interval.
access_token (str) – A unique identifier for a stream. This can be specified at creation time but cannot be updated. If omitted, the service will generate a unique value.
endpoints (list[LiveEventEndpoint]) – The input endpoints for the Live Event.
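key_frame_interval_duration is an ISO 8601 timespan string such as PT2S. A minimal sketch of producing that form from a Python timedelta; the helper is illustrative and not part of the SDK:

```python
from datetime import timedelta

def to_iso8601_duration(td: timedelta) -> str:
    # Minimal formatter for second-granularity timespans like PT2S or PT1M30S.
    total = int(td.total_seconds())
    minutes, seconds = divmod(total, 60)
    hours, minutes = divmod(minutes, 60)
    out = "PT"
    if hours:
        out += f"{hours}H"
    if minutes:
        out += f"{minutes}M"
    if seconds or out == "PT":
        out += f"{seconds}S"
    return out

print(to_iso8601_duration(timedelta(seconds=2)))   # PT2S
print(to_iso8601_duration(timedelta(seconds=90)))  # PT1M30S
```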
-
class
azure.mgmt.media.models.
LiveEventInputAccessControl
(*, ip=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The IP access control for Live Event Input.
- Parameters
ip (IPAccessControl) – The IP access control properties.
-
class
azure.mgmt.media.models.
LiveEventPreview
(*, endpoints=None, access_control=None, preview_locator: str = None, streaming_policy_name: str = None, alternative_media_id: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The Live Event preview.
- Parameters
endpoints (list[LiveEventEndpoint]) – The endpoints for preview.
access_control (LiveEventPreviewAccessControl) – The access control for LiveEvent preview.
preview_locator (str) – The identifier of the preview locator in Guid format. Specifying this at creation time allows the caller to know the preview locator url before the event is created. If omitted, the service will generate a random identifier. This value cannot be updated once the live event is created.
streaming_policy_name (str) – The name of streaming policy used for the LiveEvent preview. This value is specified at creation time and cannot be updated.
alternative_media_id (str) – An Alternative Media Identifier associated with the StreamingLocator created for the preview. This value is specified at creation time and cannot be updated. The identifier can be used in the CustomLicenseAcquisitionUrlTemplate or the CustomKeyAcquisitionUrlTemplate of the StreamingPolicy specified in the StreamingPolicyName field.
-
class
azure.mgmt.media.models.
LiveEventPreviewAccessControl
(*, ip=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The IP access control for Live Event preview.
- Parameters
ip (IPAccessControl) – The IP access control properties.
-
class
azure.mgmt.media.models.
LiveOutput
(*, asset_name: str, archive_window_length, description: str = None, manifest_name: str = None, hls=None, output_snap_time: int = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.ProxyResource
The Live Output.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
created (datetime) – The exact time the Live Output was created.
last_modified (datetime) – The exact time the Live Output was last modified.
provisioning_state (str) – The provisioning state of the Live Output.
resource_state (str or LiveOutputResourceState) – The resource state of the Live Output. Possible values include: ‘Creating’, ‘Running’, ‘Deleting’
- Parameters
description (str) – The description of the Live Output.
asset_name (str) – Required. The asset name.
archive_window_length (timedelta) – Required. ISO 8601 timespan duration of the archive window length. This is the duration for which the customer wants to retain the recorded content.
manifest_name (str) – The manifest file name. If not provided, the service will generate one automatically.
hls (Hls) – The HLS configuration.
output_snap_time (long) – The output snapshot time.
-
class
azure.mgmt.media.models.
Location
(*, name: str, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Location.
All required parameters must be populated in order to send to Azure.
- Parameters
name (str) – Required.
-
class
azure.mgmt.media.models.
MediaService
(*, tags=None, location: str = None, storage_accounts=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.TrackedResource
A Media Services account.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
- Parameters
-
class
azure.mgmt.media.models.
Metric
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
A metric emitted by service.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
display_name (str) – The metric display name.
display_description (str) – The metric display description.
unit (str or MetricUnit) – The metric unit. Possible values include: ‘Bytes’, ‘Count’, ‘Milliseconds’
aggregation_type (str or MetricAggregationType) – The metric aggregation type. Possible values include: ‘Average’, ‘Count’, ‘Total’
dimensions (list[MetricDimension]) – The metric dimensions.
-
class
azure.mgmt.media.models.
MetricDimension
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
A metric dimension.
Variables are only populated by the server, and will be ignored when sending a request.
-
class
azure.mgmt.media.models.
MetricProperties
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
Metric properties.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
service_specification (ServiceSpecification) – The service specifications.
-
class
azure.mgmt.media.models.
Mp4Format
(*, filename_pattern: str, output_files=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.MultiBitrateFormat
Describes the properties for an output ISO MP4 file.
All required parameters must be populated in order to send to Azure.
- Parameters
filename_pattern (str) – Required. The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - The base name of the input video {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
odatatype (str) – Required. Constant filled by server.
output_files (list[OutputFile]) – The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
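The macro substitution described for filename_pattern can be sketched as follows. This is a simplified illustration: it drops unsubstituted macros but does not reproduce the service's additional collapsing of leftover separators.

```python
import re

# The six macros supported in filename_pattern, per the parameter description.
MACROS = ("Basename", "Extension", "Label", "Index", "Bitrate", "Codec")

def expand_pattern(pattern: str, values: dict) -> str:
    # Substitute known macros; unsubstituted macros are simply removed here.
    def repl(match):
        key = match.group(1)
        return str(values[key]) if key in values else ""
    return re.sub(r"\{(%s)\}" % "|".join(MACROS), repl, pattern)

print(expand_pattern("{Basename}_{Label}_{Bitrate}{Extension}",
                     {"Basename": "video", "Label": "HD",
                      "Bitrate": 4500000, "Extension": ".mp4"}))
# video_HD_4500000.mp4
```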
-
class
azure.mgmt.media.models.
MultiBitrateFormat
(*, filename_pattern: str, output_files=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Format
Describes the properties for producing a collection of GOP aligned multi-bitrate files. The default behavior is to produce one output file for each video layer which is muxed together with all the audios. The exact output files produced can be controlled by specifying the outputFiles collection.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: Mp4Format, TransportStreamFormat
All required parameters must be populated in order to send to Azure.
- Parameters
filename_pattern (str) – Required. The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - The base name of the input video {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
odatatype (str) – Required. Constant filled by server.
output_files (list[OutputFile]) – The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
-
class
azure.mgmt.media.models.
NoEncryption
(*, enabled_protocols=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class for NoEncryption scheme.
- Parameters
enabled_protocols (EnabledProtocols) – The supported protocols.
-
class
azure.mgmt.media.models.
ODataError
(*, code: str = None, message: str = None, target: str = None, details=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Information about an error.
- Parameters
code (str) – A language-independent error name.
message (str) – The error message.
target (str) – The target of the error (for example, the name of the property in error).
details (list[ODataError]) – The error details.
-
class
azure.mgmt.media.models.
Operation
(*, name: str, display=None, origin: str = None, properties=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
An operation.
All required parameters must be populated in order to send to Azure.
- Parameters
name (str) – Required. The operation name.
display (OperationDisplay) – The operation display name.
origin (str) – Origin of the operation.
properties (MetricProperties) – Operation properties format.
-
class
azure.mgmt.media.models.
OperationDisplay
(*, provider: str = None, resource: str = None, operation: str = None, description: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Operation details.
-
class
azure.mgmt.media.models.
OutputFile
(*, labels, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Represents an output file produced.
All required parameters must be populated in order to send to Azure.
- Parameters
labels (list[str]) – Required. The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like ‘[v1, a1]’ tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
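The v1/a1 example above can be sketched with plain data; the layer names and the resolve_tracks helper are illustrative, not part of the SDK:

```python
def resolve_tracks(labels, layers):
    # Map each label in an OutputFile's labels list to the track it selects.
    return [layers[label] for label in labels]

# Encoder layers keyed by label, mirroring the v1/v2/a1 example above.
layers = {"v1": "1080p video", "v2": "720p video", "a1": "AAC audio"}

# Two output files: each labels list selects the tracks muxed into one file.
print(resolve_tracks(["v1", "a1"], layers))  # ['1080p video', 'AAC audio']
print(resolve_tracks(["v2", "a1"], layers))  # ['720p video', 'AAC audio']
```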
-
class
azure.mgmt.media.models.
Overlay
(*, input_label: str, start=None, end=None, fade_in_duration=None, fade_out_duration=None, audio_gain_level: float = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Base type for all overlays - image, audio or video.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: AudioOverlay, VideoOverlay
All required parameters must be populated in order to send to Azure.
- Parameters
input_label (str) – Required. The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG or PNG formats, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
start (timedelta) – The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 duration format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified the overlay starts from the beginning of the input video.
end (timedelta) – The position in the input video at which the overlay ends. The value should be in ISO 8601 duration format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified the overlay will be applied until the end of the input video if inputLoop is true. Else, if inputLoop is false, then the overlay will last as long as the duration of the overlay media.
fade_in_duration (timedelta) – The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified the default behavior is to have no fade in (same as PT0S).
fade_out_duration (timedelta) – The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified the default behavior is to have no fade out (same as PT0S).
audio_gain_level (float) – The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
odatatype (str) – Required. Constant filled by server.
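The default start and end behavior described above can be sketched as follows. This is a simplified illustration that ignores the inputLoop distinction; the helper is not part of the SDK:

```python
from datetime import timedelta

def overlay_window(video_length, start=None, end=None):
    # Resolve the effective overlay window using the documented defaults:
    # start at the beginning and end at the end of the input video.
    s = start if start is not None else timedelta(0)
    e = end if end is not None else video_length
    return s, min(e, video_length)

window = overlay_window(timedelta(minutes=2), start=timedelta(seconds=5))
print(window == (timedelta(seconds=5), timedelta(minutes=2)))  # True
```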
-
class
azure.mgmt.media.models.
PngFormat
(*, filename_pattern: str, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.ImageFormat
Describes the settings for producing PNG thumbnails.
All required parameters must be populated in order to send to Azure.
- Parameters
filename_pattern (str) – Required. The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - The base name of the input video {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
odatatype (str) – Required. Constant filled by server.
-
class
azure.mgmt.media.models.
PngImage
(*, start: str, label: str = None, key_frame_interval=None, stretch_mode=None, step: str = None, range: str = None, layers=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Image
Describes the properties for producing a series of PNG images from the input video.
All required parameters must be populated in order to send to Azure.
- Parameters
label (str) – An optional label for the codec. The label can be used to control muxing behavior.
odatatype (str) – Required. Constant filled by server.
key_frame_interval (timedelta) – The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
stretch_mode (str or StretchMode) – The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize. Possible values include: ‘None’, ‘AutoSize’, ‘AutoFit’
start (str) – Required. The position in the input video from where to start generating thumbnails. The value can be in absolute timestamp (ISO 8601, e.g. PT05S), or a frame count (For example, 10 for the 10th frame), or a relative value (For example, 1%). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video.
step (str) – The intervals at which thumbnails are generated. The value can be in absolute timestamp (ISO 8601, e.g. PT05S for one image every 5 seconds), or a frame count (For example, 30 for every 30 frames), or a relative value (For example, 1%).
range (str) – The position in the input video at which to stop generating thumbnails. The value can be in absolute timestamp (ISO 8601, e.g. PT5M30S to stop at 5 minutes and 30 seconds), or a frame count (For example, 300 to stop at the 300th frame), or a relative value (For example, 100%).
layers (list[PngLayer]) – A collection of output PNG image layers to be produced by the encoder.
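The interaction of start, step, and range can be sketched with timedeltas standing in for the absolute ISO 8601 values; the helper is illustrative, not part of the SDK:

```python
from datetime import timedelta

def thumbnail_times(start, step, stop):
    # Enumerate thumbnail positions from start up to stop, one step apart.
    t = start
    while t <= stop:
        yield t
        t += step

# start=PT05S, step=PT10S, range=PT35S -> thumbnails at 5s, 15s, 25s, 35s.
times = list(thumbnail_times(timedelta(seconds=5), timedelta(seconds=10),
                             timedelta(seconds=35)))
print(len(times))  # 4
```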
-
class
azure.mgmt.media.models.
PngLayer
(*, width: str = None, height: str = None, label: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Layer
Describes the settings to produce a PNG image from the input video.
All required parameters must be populated in order to send to Azure.
- Parameters
width (str) – The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
height (str) – The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
label (str) – The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
odatatype (str) – Required. Constant filled by server.
-
class
azure.mgmt.media.models.
PresentationTimeRange
(*, start_timestamp: int = None, end_timestamp: int = None, presentation_window_duration: int = None, live_backoff_duration: int = None, timescale: int = None, force_end_timestamp: bool = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The presentation time range. This is asset-related and is not recommended for an Account Filter.
- Parameters
start_timestamp (long) – The absolute start time boundary.
end_timestamp (long) – The absolute end time boundary.
presentation_window_duration (long) – The duration of the sliding presentation window, relative to the end.
live_backoff_duration (long) – The duration by which the right (live) edge is backed off from the end.
timescale (long) – The time scale of time stamps.
force_end_timestamp (bool) – Indicates whether the end timestamp must be present.
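The long timestamp fields are expressed in units of timescale. A sketch of the conversion, assuming the common default of 10,000,000 ticks per second (100-nanosecond units) — an assumption, not stated in this reference:

```python
def to_timescale_units(seconds: float, timescale: int = 10_000_000) -> int:
    # Convert a position in seconds to timescale ticks. 10,000,000 ticks per
    # second (100-nanosecond units) is assumed here as the default timescale.
    return round(seconds * timescale)

print(to_timescale_units(30))   # 300000000
print(to_timescale_units(5.5))  # 55000000
```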
-
class
azure.mgmt.media.models.
Preset
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
Base type for all Presets, which define the recipe or instructions on how the input media files should be processed.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: FaceDetectorPreset, AudioAnalyzerPreset, BuiltInStandardEncoderPreset, StandardEncoderPreset
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
-
class
azure.mgmt.media.models.
Provider
(*, provider_name: str, **kwargs)[source]¶ Bases:
msrest.serialization.Model
A resource provider.
All required parameters must be populated in order to send to Azure.
- Parameters
provider_name (str) – Required. The provider name.
-
class
azure.mgmt.media.models.
ProxyResource
(**kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Resource
The resource model definition for an ARM proxy resource.
Variables are only populated by the server, and will be ignored when sending a request.
-
class
azure.mgmt.media.models.
Rectangle
(*, left: str = None, top: str = None, width: str = None, height: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Describes the properties of a rectangular window applied to the input media before processing it.
- Parameters
left (str) – The number of pixels from the left-margin. This can be an absolute pixel value (e.g. 100), or relative to the size of the video (For example, 50%).
top (str) – The number of pixels from the top-margin. This can be an absolute pixel value (e.g. 100), or relative to the size of the video (For example, 50%).
width (str) – The width of the rectangular region in pixels. This can be an absolute pixel value (e.g. 100), or relative to the size of the video (For example, 50%).
height (str) – The height of the rectangular region in pixels. This can be an absolute pixel value (e.g. 100), or relative to the size of the video (For example, 50%).
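Resolving an absolute-or-relative value against the video size can be sketched as follows; the helper is illustrative, not part of the SDK:

```python
def resolve_dimension(value: str, reference: int) -> int:
    # "100" is an absolute pixel value; "50%" is relative to the video size.
    if value.endswith("%"):
        return round(reference * float(value[:-1]) / 100)
    return int(value)

print(resolve_dimension("50%", 1920))  # 960
print(resolve_dimension("100", 1920))  # 100
```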
-
class
azure.mgmt.media.models.
Resource
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
The core properties of ARM resources.
Variables are only populated by the server, and will be ignored when sending a request.
-
class
azure.mgmt.media.models.
ServiceSpecification
(**kwargs)[source]¶ Bases:
msrest.serialization.Model
The service metric specifications.
Variables are only populated by the server, and will be ignored when sending a request.
-
class
azure.mgmt.media.models.
StandardEncoderPreset
(*, codecs, formats, filters=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Preset
Describes all the settings to be used when encoding the input video with the Standard Encoder.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
filters (Filters) – One or more filtering operations that are applied to the input media before encoding.
codecs (list[Codec]) – Required. The list of codecs to be used when encoding the input video.
formats (list[Format]) – Required. The list of outputs to be produced by the encoder.
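A sketch of how a preset composes codecs and formats, using plain dicts in place of the SDK model classes; the @odata.type strings and layer fields are assumptions modeled on the Media Services REST schema, not taken from this reference:

```python
# Illustrative shape of a StandardEncoderPreset payload: a list of codecs
# (audio and video) plus a list of output formats, as described above.
preset = {
    "@odata.type": "#Microsoft.Media.StandardEncoderPreset",  # assumed value
    "codecs": [
        {"@odata.type": "#Microsoft.Media.AacAudio", "bitrate": 128000},
        {"@odata.type": "#Microsoft.Media.H264Video",
         "layers": [{"bitrate": 3600000, "label": "HD"}]},
    ],
    "formats": [
        {"@odata.type": "#Microsoft.Media.Mp4Format",
         "filenamePattern": "{Basename}_{Label}_{Bitrate}{Extension}"},
    ],
}
print(sorted(k for k in preset if not k.startswith("@")))  # ['codecs', 'formats']
```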
-
class
azure.mgmt.media.models.
StorageAccount
(*, type, id: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The storage account details.
All required parameters must be populated in order to send to Azure.
- Parameters
id (str) – The ID of the storage account resource. Media Services relies on tables and queues as well as blobs, so the primary storage account must be a Standard Storage account (either Microsoft.ClassicStorage or Microsoft.Storage). Blob-only storage accounts can be added as secondary storage accounts.
type (str or StorageAccountType) – Required. The type of the storage account. Possible values include: ‘Primary’, ‘Secondary’
-
class
azure.mgmt.media.models.
StorageEncryptedAssetDecryptionData
(*, key: bytearray = None, asset_file_encryption_metadata=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Data needed to decrypt asset files encrypted with legacy storage encryption.
- Parameters
key (bytearray) – The Asset File storage encryption key.
asset_file_encryption_metadata (list[AssetFileEncryptionMetadata]) – Asset File encryption metadata.
-
class
azure.mgmt.media.models.
StreamingEndpoint
(*, scale_units: int, tags=None, location: str = None, description: str = None, availability_set_name: str = None, access_control=None, max_cache_age: int = None, custom_host_names=None, cdn_enabled: bool = None, cdn_provider: str = None, cdn_profile: str = None, cross_site_access_policies=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.TrackedResource
The StreamingEndpoint.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
host_name (str) – The StreamingEndpoint host name.
provisioning_state (str) – The provisioning state of the StreamingEndpoint.
resource_state (str or StreamingEndpointResourceState) – The resource state of the StreamingEndpoint. Possible values include: ‘Stopped’, ‘Starting’, ‘Running’, ‘Stopping’, ‘Deleting’, ‘Scaling’
free_trial_end_time (datetime) – The free trial expiration time.
created (datetime) – The exact time the StreamingEndpoint was created.
last_modified (datetime) – The exact time the StreamingEndpoint was last modified.
- Parameters
location (str) – The Azure Region of the resource.
description (str) – The StreamingEndpoint description.
scale_units (int) – Required. The number of scale units. Use the Scale operation to adjust this value.
availability_set_name (str) – The name of the AvailabilitySet used with this StreamingEndpoint for high availability streaming. This value can only be set at creation time.
access_control (StreamingEndpointAccessControl) – The access control definition of the StreamingEndpoint.
max_cache_age (long) – The maximum cache age.
custom_host_names (list[str]) – The custom host names of the StreamingEndpoint.
cdn_enabled (bool) – The CDN enabled flag.
cdn_provider (str) – The CDN provider name.
cdn_profile (str) – The CDN profile name.
cross_site_access_policies (CrossSiteAccessPolicies) – The StreamingEndpoint access policies.
-
class
azure.mgmt.media.models.
StreamingEndpointAccessControl
(*, akamai=None, ip=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
StreamingEndpoint access control definition.
- Parameters
akamai (AkamaiAccessControl) – The Akamai access control.
ip (IPAccessControl) – The IP access control of the StreamingEndpoint.
-
class
azure.mgmt.media.models.
StreamingEntityScaleUnit
(*, scale_unit: int = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Scale units definition.
- Parameters
scale_unit (int) – The scale unit number of the StreamingEndpoint.
-
class
azure.mgmt.media.models.
StreamingLocator
(*, asset_name: str, streaming_policy_name: str, start_time=None, end_time=None, streaming_locator_id: str = None, default_content_key_policy_name: str = None, content_keys=None, alternative_media_id: str = None, filters=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.ProxyResource
A Streaming Locator resource.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
- Parameters
asset_name (str) – Required. Asset Name
start_time (datetime) – The start time of the Streaming Locator.
end_time (datetime) – The end time of the Streaming Locator.
streaming_locator_id (str) – The StreamingLocatorId of the Streaming Locator.
streaming_policy_name (str) – Required. Name of the Streaming Policy used by this Streaming Locator. Either specify the name of Streaming Policy you created or use one of the predefined Streaming Policies. The predefined Streaming Policies available are: ‘Predefined_DownloadOnly’, ‘Predefined_ClearStreamingOnly’, ‘Predefined_DownloadAndClearStreaming’, ‘Predefined_ClearKey’, ‘Predefined_MultiDrmCencStreaming’ and ‘Predefined_MultiDrmStreaming’
default_content_key_policy_name (str) – Name of the default ContentKeyPolicy used by this Streaming Locator.
content_keys (list[StreamingLocatorContentKey]) – The ContentKeys used by this Streaming Locator.
alternative_media_id (str) – The Alternative Media ID of this Streaming Locator.
filters (list[str]) – A list of asset or account filters which apply to this Streaming Locator.
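The predefined policy names above can be kept as a lookup set for client-side validation; an illustrative sketch, the helper is not part of the SDK:

```python
# The predefined Streaming Policies listed above. Anything else passed as
# streaming_policy_name must be a Streaming Policy you created yourself.
PREDEFINED_STREAMING_POLICIES = {
    "Predefined_DownloadOnly",
    "Predefined_ClearStreamingOnly",
    "Predefined_DownloadAndClearStreaming",
    "Predefined_ClearKey",
    "Predefined_MultiDrmCencStreaming",
    "Predefined_MultiDrmStreaming",
}

def is_predefined_policy(name: str) -> bool:
    return name in PREDEFINED_STREAMING_POLICIES

print(is_predefined_policy("Predefined_ClearKey"))      # True
print(is_predefined_policy("MyCustomStreamingPolicy"))  # False
```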
-
class
azure.mgmt.media.models.
StreamingLocatorContentKey
(*, id: str, label_reference_in_streaming_policy: str = None, value: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class for content key in Streaming Locator.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Parameters
- Variables
type (str or StreamingLocatorContentKeyType) – Encryption type of Content Key. Possible values include: ‘CommonEncryptionCenc’, ‘CommonEncryptionCbcs’, ‘EnvelopeEncryption’
policy_name (str) – The ContentKeyPolicy used by the Content Key.
tracks (list[TrackSelection]) – The tracks which use this Content Key.
-
class
azure.mgmt.media.models.
StreamingPath
(*, streaming_protocol, encryption_scheme, paths=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class of paths for streaming.
All required parameters must be populated in order to send to Azure.
- Parameters
streaming_protocol (str or StreamingPolicyStreamingProtocol) – Required. Streaming protocol. Possible values include: ‘Hls’, ‘Dash’, ‘SmoothStreaming’, ‘Download’
encryption_scheme (str or EncryptionScheme) – Required. Encryption scheme. Possible values include: ‘NoEncryption’, ‘EnvelopeEncryption’, ‘CommonEncryptionCenc’, ‘CommonEncryptionCbcs’
paths (list[str]) – The streaming paths for each protocol and encryptionScheme pair.
-
class
azure.mgmt.media.models.
StreamingPolicy
(*, default_content_key_policy_name: str = None, envelope_encryption=None, common_encryption_cenc=None, common_encryption_cbcs=None, no_encryption=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.ProxyResource
A Streaming Policy resource.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
- Parameters
default_content_key_policy_name (str) – The default ContentKeyPolicy used by the current Streaming Policy.
envelope_encryption (EnvelopeEncryption) – The configuration of EnvelopeEncryption.
common_encryption_cenc (CommonEncryptionCenc) – The configuration of CommonEncryptionCenc.
common_encryption_cbcs (CommonEncryptionCbcs) – The configuration of CommonEncryptionCbcs.
no_encryption (NoEncryption) – The configuration of NoEncryption.
-
class
azure.mgmt.media.models.
StreamingPolicyContentKey
(*, label: str = None, policy_name: str = None, tracks=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to specify properties of content key.
- Parameters
label (str) – Label can be used to specify Content Key when creating a Streaming Locator
policy_name (str) – Policy used by Content Key
tracks (list[TrackSelection]) – Tracks which use this content key
-
class
azure.mgmt.media.models.
StreamingPolicyContentKeys
(*, default_key=None, key_to_track_mappings=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to specify properties of all content keys in Streaming Policy.
- Parameters
default_key (DefaultKey) – Default content key for an encryption scheme
key_to_track_mappings (list[StreamingPolicyContentKey]) – Mappings that represent tracks which need a separate content key
-
class
azure.mgmt.media.models.
StreamingPolicyFairPlayConfiguration
(*, allow_persistent_license: bool, custom_license_acquisition_url_template: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to specify configurations of FairPlay in Streaming Policy.
All required parameters must be populated in order to send to Azure.
- Parameters
custom_license_acquisition_url_template (str) – Template for the URL of the custom service delivering licenses to end user players. Not required when using Azure Media Services for issuing licenses. The template supports replaceable tokens that the service will update at runtime with the value specific to the request. The currently supported token values are {AlternativeMediaId}, which is replaced with the value of StreamingLocatorId.AlternativeMediaId, and {ContentKeyId}, which is replaced with the identifier of the key being requested.
allow_persistent_license (bool) – Required. Whether all licenses are to be persistent or not
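The custom_license_acquisition_url_template mechanism described above is a straightforward token substitution performed by the service at request time. A plain-Python sketch of that substitution (the template URL and identifiers below are invented for illustration, not real service values):

```python
# Illustrative sketch of the token substitution described above; the
# template URL and identifiers are invented, not real service values.
def expand_license_template(template, content_key_id, alternative_media_id):
    """Replace the documented {ContentKeyId} and {AlternativeMediaId} tokens."""
    return (template
            .replace("{ContentKeyId}", content_key_id)
            .replace("{AlternativeMediaId}", alternative_media_id))

template = ("https://licenses.example.com/fairplay"
            "?kid={ContentKeyId}&amid={AlternativeMediaId}")
url = expand_license_template(
    template,
    "00000000-0000-0000-0000-000000000001",
    "alt-media-123",
)
```

The same two tokens are documented for the PlayReady and Widevine configurations below.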
-
class
azure.mgmt.media.models.
StreamingPolicyPlayReadyConfiguration
(*, custom_license_acquisition_url_template: str = None, play_ready_custom_attributes: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to specify configurations of PlayReady in Streaming Policy.
- Parameters
custom_license_acquisition_url_template (str) – Template for the URL of the custom service delivering licenses to end user players. Not required when using Azure Media Services for issuing licenses. The template supports replaceable tokens that the service will update at runtime with the value specific to the request. The currently supported token values are {AlternativeMediaId}, which is replaced with the value of StreamingLocatorId.AlternativeMediaId, and {ContentKeyId}, which is replaced with the value of identifier of the key being requested.
play_ready_custom_attributes (str) – Custom attributes for PlayReady
-
class
azure.mgmt.media.models.
StreamingPolicyWidevineConfiguration
(*, custom_license_acquisition_url_template: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to specify configurations of Widevine in Streaming Policy.
- Parameters
custom_license_acquisition_url_template (str) – Template for the URL of the custom service delivering licenses to end user players. Not required when using Azure Media Services for issuing licenses. The template supports replaceable tokens that the service will update at runtime with the value specific to the request. The currently supported token values are {AlternativeMediaId}, which is replaced with the value of StreamingLocatorId.AlternativeMediaId, and {ContentKeyId}, which is replaced with the value of identifier of the key being requested.
-
class
azure.mgmt.media.models.
SubscriptionMediaService
(*, tags=None, location: str = None, storage_accounts=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.TrackedResource
A Media Services account.
Variables are only populated by the server, and will be ignored when sending a request.
- Variables
- Parameters
tags (dict[str, str]) – Resource tags.
location (str) – The Azure Region of the resource.
storage_accounts (list[StorageAccount]) – The storage accounts for this resource.
-
class
azure.mgmt.media.models.
SyncStorageKeysInput
(*, id: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
The input to the sync storage keys request.
- Parameters
id (str) – The ID of the storage account resource.
-
class
azure.mgmt.media.models.
TrackedResource
(*, tags=None, location: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Resource
The resource model definition for an ARM tracked resource.
Variables are only populated by the server, and will be ignored when sending a request.
-
class
azure.mgmt.media.models.
TrackPropertyCondition
(*, property, operation, value: str = None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to specify one track property condition.
All required parameters must be populated in order to send to Azure.
- Parameters
property (str or TrackPropertyType) – Required. Track property type. Possible values include: ‘Unknown’, ‘FourCC’
operation (str or TrackPropertyCompareOperation) – Required. Track property condition operation. Possible values include: ‘Unknown’, ‘Equal’
value (str) – Track property value
-
class
azure.mgmt.media.models.
TrackSelection
(*, track_selections=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Class to select a track.
- Parameters
track_selections (list[TrackPropertyCondition]) – A list of track property conditions that can be used to select track(s)
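As a rough illustration of how a list of TrackPropertyCondition values identifies tracks, the sketch below treats the conditions in one selection as ANDed together (an assumption made for illustration; the service defines the exact semantics):

```python
# Plain-Python sketch, not the SDK: evaluate TrackPropertyCondition-style
# dicts against a track's properties. Conditions are ANDed here, which is
# an assumption made for illustration.
def condition_matches(track, condition):
    actual = track.get(condition["property"])
    if condition["operation"] == "Equal":
        return actual == condition["value"]
    raise ValueError("unsupported operation: %s" % condition["operation"])

def selection_matches(track, conditions):
    return all(condition_matches(track, c) for c in conditions)

track = {"FourCC": "avc1"}
conditions = [{"property": "FourCC", "operation": "Equal", "value": "avc1"}]
matched = selection_matches(track, conditions)
```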
-
class
azure.mgmt.media.models.
Transform
(*, outputs, description: str = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.ProxyResource
A Transform encapsulates the rules or instructions for generating desired outputs from input media, such as by transcoding or by extracting insights. After the Transform is created, it can be applied to input media by creating Jobs.
Variables are only populated by the server, and will be ignored when sending a request.
All required parameters must be populated in order to send to Azure.
- Variables
created (datetime) – The UTC date and time when the Transform was created, in ‘YYYY-MM-DDThh:mm:ssZ’ format.
last_modified (datetime) – The UTC date and time when the Transform was last updated, in ‘YYYY-MM-DDThh:mm:ssZ’ format.
- Parameters
description (str) – An optional verbose description of the Transform.
outputs (list[TransformOutput]) – Required. An array of one or more TransformOutputs that the Transform should generate.
-
class
azure.mgmt.media.models.
TransformOutput
(*, preset, on_error=None, relative_priority=None, **kwargs)[source]¶ Bases:
msrest.serialization.Model
Describes the properties of a TransformOutput, which are the rules to be applied while generating the desired output.
All required parameters must be populated in order to send to Azure.
- Parameters
on_error (str or OnErrorType) – A Transform can define more than one output. This property defines what the service should do when one output fails: either continue to produce other outputs, or stop the other outputs. The overall Job state will not reflect failures of outputs that are specified with ‘ContinueJob’. The default is ‘StopProcessingJob’. Possible values include: ‘StopProcessingJob’, ‘ContinueJob’
relative_priority (str or Priority) – Sets the relative priority of the TransformOutputs within a Transform. This sets the priority that the service uses for processing TransformOutputs. The default priority is Normal. Possible values include: ‘Low’, ‘Normal’, ‘High’
preset (Preset) – Required. Preset that describes the operations that will be used to modify, transcode, or extract insights from the source file to generate the output.
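Combining Transform and TransformOutput, the REST-style body of a Transform might look like the following sketch. The description string is invented; ‘AdaptiveStreaming’ is one of the EncoderNamedPreset values documented later in this module, wrapped in the BuiltInStandardEncoderPreset odata type:

```python
# Sketch of a Transform request body with a single TransformOutput.
# The description is invented; "AdaptiveStreaming" is one of the
# EncoderNamedPreset values documented in this module.
transform_body = {
    "properties": {
        "description": "Adaptive bitrate encode",
        "outputs": [
            {
                "onError": "StopProcessingJob",   # the documented default
                "relativePriority": "Normal",     # the documented default
                "preset": {
                    "@odata.type": "#Microsoft.Media.BuiltInStandardEncoderPreset",
                    "presetName": "AdaptiveStreaming",
                },
            }
        ],
    }
}
```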
-
class
azure.mgmt.media.models.
TransportStreamFormat
(*, filename_pattern: str, output_files=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.MultiBitrateFormat
Describes the properties for generating an MPEG-2 Transport Stream (ISO/IEC 13818-1) output video file(s).
All required parameters must be populated in order to send to Azure.
- Parameters
filename_pattern (str) – Required. The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - The base name of the input video. {Extension} - The appropriate extension for this format. {Label} - The label assigned to the codec/layer. {Index} - A unique index for thumbnails. Only applicable to thumbnails. {Bitrate} - The audio/video bitrate. Not applicable to thumbnails. {Codec} - The type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
odatatype (str) – Required. Constant filled by server.
output_files (list[OutputFile]) – The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
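The macro behavior of filename_pattern can be sketched as a simple substitution, with any leftover macros removed, as the description above states (the macro values here are invented examples):

```python
import re

# Sketch of how the filename_pattern macros expand; values are invented.
def expand_filename_pattern(pattern, values):
    """Substitute known {Macro} tokens; unsubstituted macros are removed."""
    out = pattern
    for macro, value in values.items():
        out = out.replace("{" + macro + "}", str(value))
    # Any remaining macros are collapsed and removed, as the docs describe.
    return re.sub(r"\{[A-Za-z]+\}", "", out)

name = expand_filename_pattern(
    "{Basename}_{Bitrate}{Extension}",
    {"Basename": "video1", "Bitrate": 3400000, "Extension": ".ts"},
)
# name == "video1_3400000.ts"
```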
-
class
azure.mgmt.media.models.
Video
(*, label: str = None, key_frame_interval=None, stretch_mode=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Codec
Describes the basic properties for encoding the input video.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: Image, H264Video
All required parameters must be populated in order to send to Azure.
- Parameters
label (str) – An optional label for the codec. The label can be used to control muxing behavior.
odatatype (str) – Required. Constant filled by server.
key_frame_interval (timedelta) – The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
stretch_mode (str or StretchMode) – The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize. Possible values include: ‘None’, ‘AutoSize’, ‘AutoFit’
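Durations such as key_frame_interval are ISO 8601 period strings. A small helper sketch for whole-second values, where the [1, 30] second bound follows the description above:

```python
from datetime import timedelta

# Sketch: express a key_frame_interval as an ISO 8601 duration string.
# Only whole-second durations are handled, which covers the documented
# [1, 30] second range for key frame intervals.
def to_iso8601_seconds(delta):
    seconds = int(delta.total_seconds())
    if not 1 <= seconds <= 30:
        raise ValueError("key frame interval must be in [1, 30] seconds")
    return "PT%dS" % seconds

interval = to_iso8601_seconds(timedelta(seconds=2))
# interval == "PT2S", the documented default
```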
-
class
azure.mgmt.media.models.
VideoAnalyzerPreset
(*, audio_language: str = None, experimental_options=None, insights_to_extract=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.AudioAnalyzerPreset
A video analyzer preset that extracts insights (rich metadata) from both audio and video, and outputs a JSON format file.
All required parameters must be populated in order to send to Azure.
- Parameters
odatatype (str) – Required. Constant filled by server.
audio_language (str) – The language for the audio payload in the input using the BCP-47 format of ‘language tag-region’ (e.g: ‘en-US’). If you know the language of your content, it is recommended that you specify it. If the language isn’t specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription would fall back to ‘en-US’. The list of supported languages is available here: https://go.microsoft.com/fwlink/?linkid=2109463
experimental_options (dict[str, str]) – Dictionary containing key value pairs for parameters not exposed in the preset itself
insights_to_extract (str or InsightsType) – Defines the type of insights that you want the service to generate. The allowed values are ‘AudioInsightsOnly’, ‘VideoInsightsOnly’, and ‘AllInsights’. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated. Similarly if the input is video only, then only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only; or use VideoInsightsOnly if you expect some of your inputs to be audio only. Your Jobs in such conditions would error out. Possible values include: ‘AudioInsightsOnly’, ‘VideoInsightsOnly’, ‘AllInsights’
-
class
azure.mgmt.media.models.
VideoLayer
(*, bitrate: int, width: str = None, height: str = None, label: str = None, max_bitrate: int = None, b_frames: int = None, frame_rate: str = None, slices: int = None, adaptive_bframe: bool = None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Layer
Describes the settings to be used when encoding the input video into a desired output bitrate layer.
You probably want to use the sub-classes and not this class directly. Known sub-classes are: H264Layer
All required parameters must be populated in order to send to Azure.
- Parameters
width (str) – The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in width as the input.
height (str) – The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example 50% means the output video has half as many pixels in height as the input.
label (str) – The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
odatatype (str) – Required. Constant filled by server.
bitrate (int) – Required. The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
max_bitrate (int) – The maximum bitrate (in bits per second), at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
b_frames (int) – The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
frame_rate (str) – The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (For example, 30000/1001), or in the form of a number (For example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
slices (int) – The number of slices to be used when encoding this layer. If not specified, the default is zero, which means that the encoder will use a single slice for each frame.
adaptive_bframe (bool) – Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
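The width and height values accept either absolute pixels or a percentage of the input. A sketch of that interpretation (the input dimensions are example values):

```python
# Sketch: resolve a VideoLayer width/height that may be absolute pixels
# ("1280") or a percentage of the input ("50%"). Values are examples.
def resolve_dimension(value, input_dimension):
    if value.endswith("%"):
        return input_dimension * int(value[:-1]) // 100
    return int(value)

width = resolve_dimension("50%", 1920)   # half the input width
height = resolve_dimension("720", 1080)  # absolute pixel value
```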
-
class
azure.mgmt.media.models.
VideoOverlay
(*, input_label: str, start=None, end=None, fade_in_duration=None, fade_out_duration=None, audio_gain_level: float = None, position=None, opacity: float = None, crop_rectangle=None, **kwargs)[source]¶ Bases:
azure.mgmt.media.models._models_py3.Overlay
Describes the properties of a video overlay.
All required parameters must be populated in order to send to Azure.
- Parameters
input_label (str) – Required. The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG or PNG formats, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
start (timedelta) – The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds in to the input video. If not specified the overlay starts from the beginning of the input video.
end (timedelta) – The position in the input video at which the overlay ends. The value should be in ISO 8601 duration format. For example, PT30S to end the overlay at 30 seconds in to the input video. If not specified the overlay will be applied until the end of the input video if inputLoop is true. Else, if inputLoop is false, then overlay will last as long as the duration of the overlay media.
fade_in_duration (timedelta) – The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified the default behavior is to have no fade in (same as PT0S).
fade_out_duration (timedelta) – The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified the default behavior is to have no fade out (same as PT0S).
audio_gain_level (float) – The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
odatatype (str) – Required. Constant filled by server.
position (Rectangle) – The location in the input video where the overlay is applied.
opacity (float) – The opacity of the overlay. This is a value in the range [0 - 1.0]. Default is 1.0, which means the overlay is opaque.
crop_rectangle (Rectangle) – An optional rectangular window used to crop the overlay image or video.
-
class
azure.mgmt.media.models.
AccountFilterPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
AccountFilter
objects.
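Each of the *Paged containers in this module behaves the same way: iterating the container transparently follows the service's next-page link. A stand-alone sketch of that pattern (not the actual msrest implementation; the page data is invented):

```python
# Minimal sketch of next-link paging, in the spirit of msrest.paging.Paged.
# The page data is invented; a real container calls the service per page.
class PagedSketch:
    def __init__(self, pages):
        self._pages = pages  # list of (items, next_link) tuples

    def __iter__(self):
        next_link = 0  # stand-in for the first next-page URL
        while next_link is not None:
            items, next_link = self._pages[next_link]
            for item in items:
                yield item

pages = [(["filter-a", "filter-b"], 1), (["filter-c"], None)]
all_items = list(PagedSketch(pages))
```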
-
class
azure.mgmt.media.models.
OperationPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
Operation
objects.
-
class
azure.mgmt.media.models.
MediaServicePaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
MediaService
objects.
-
class
azure.mgmt.media.models.
SubscriptionMediaServicePaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
SubscriptionMediaService
objects.
-
class
azure.mgmt.media.models.
AssetPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
Asset
objects.
-
class
azure.mgmt.media.models.
AssetFilterPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
AssetFilter
objects.
-
class
azure.mgmt.media.models.
ContentKeyPolicyPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
ContentKeyPolicy
objects.
-
class
azure.mgmt.media.models.
TransformPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
Transform
objects.
-
class
azure.mgmt.media.models.
JobPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
Job
objects.
-
class
azure.mgmt.media.models.
StreamingPolicyPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
StreamingPolicy
objects.
-
class
azure.mgmt.media.models.
StreamingLocatorPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
StreamingLocator
objects.
-
class
azure.mgmt.media.models.
LiveEventPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
LiveEvent
objects.
-
class
azure.mgmt.media.models.
LiveOutputPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
LiveOutput
objects.
-
class
azure.mgmt.media.models.
StreamingEndpointPaged
(*args, **kwargs)[source]¶ Bases:
msrest.paging.Paged
A paging container for iterating over a list of
StreamingEndpoint
objects.
-
class
azure.mgmt.media.models.
FilterTrackPropertyType
[source]¶ -
An enumeration.
-
bitrate
= 'Bitrate'¶ The bitrate.
-
four_cc
= 'FourCC'¶ The fourCC.
-
language
= 'Language'¶ The language.
-
name
= 'Name'¶ The name.
-
type
= 'Type'¶ The type.
-
unknown
= 'Unknown'¶ The unknown track property type.
-
-
class
azure.mgmt.media.models.
FilterTrackPropertyCompareOperation
[source]¶ -
An enumeration.
-
equal
= 'Equal'¶ The equal operation.
-
not_equal
= 'NotEqual'¶ The not equal operation.
-
-
class
azure.mgmt.media.models.
MetricUnit
[source]¶ -
An enumeration.
-
bytes
= 'Bytes'¶ The number of bytes.
-
count
= 'Count'¶ The count.
-
milliseconds
= 'Milliseconds'¶ The number of milliseconds.
-
-
class
azure.mgmt.media.models.
MetricAggregationType
[source]¶ -
An enumeration.
-
average
= 'Average'¶ The average.
-
count
= 'Count'¶ The count of a number of items, usually requests.
-
total
= 'Total'¶ The sum.
-
-
class
azure.mgmt.media.models.
StorageAccountType
[source]¶ -
An enumeration.
-
primary
= 'Primary'¶ The primary storage account for the Media Services account.
-
secondary
= 'Secondary'¶ A secondary storage account for the Media Services account.
-
-
class
azure.mgmt.media.models.
AssetStorageEncryptionFormat
[source]¶ -
An enumeration.
-
media_storage_client_encryption
= 'MediaStorageClientEncryption'¶ The Asset is encrypted with Media Services client-side encryption.
-
none
= 'None'¶ The Asset does not use client-side storage encryption (this is the only allowed value for new Assets).
-
-
class
azure.mgmt.media.models.
AssetContainerPermission
[source]¶ -
An enumeration.
-
read
= 'Read'¶ The SAS URL will allow read access to the container.
-
read_write
= 'ReadWrite'¶ The SAS URL will allow read and write access to the container.
-
read_write_delete
= 'ReadWriteDelete'¶ The SAS URL will allow read, write and delete access to the container.
-
-
class
azure.mgmt.media.models.
ContentKeyPolicyPlayReadyUnknownOutputPassingOption
[source]¶ -
An enumeration.
-
allowed
= 'Allowed'¶ Passing the video portion of protected content to an Unknown Output is allowed.
-
allowed_with_video_constriction
= 'AllowedWithVideoConstriction'¶ Passing the video portion of protected content to an Unknown Output is allowed but with constrained resolution.
-
not_allowed
= 'NotAllowed'¶ Passing the video portion of protected content to an Unknown Output is not allowed.
-
unknown
= 'Unknown'¶ Represents a ContentKeyPolicyPlayReadyUnknownOutputPassingOption that is unavailable in current API version.
-
-
class
azure.mgmt.media.models.
ContentKeyPolicyPlayReadyLicenseType
[source]¶ -
An enumeration.
-
non_persistent
= 'NonPersistent'¶ Non persistent license.
-
persistent
= 'Persistent'¶ Persistent license. Allows offline playback.
-
unknown
= 'Unknown'¶ Represents a ContentKeyPolicyPlayReadyLicenseType that is unavailable in current API version.
-
-
class
azure.mgmt.media.models.
ContentKeyPolicyPlayReadyContentType
[source]¶ -
An enumeration.
-
ultra_violet_download
= 'UltraVioletDownload'¶ Ultraviolet download content type.
-
ultra_violet_streaming
= 'UltraVioletStreaming'¶ Ultraviolet streaming content type.
-
unknown
= 'Unknown'¶ Represents a ContentKeyPolicyPlayReadyContentType that is unavailable in current API version.
-
unspecified
= 'Unspecified'¶ Unspecified content type.
-
-
class
azure.mgmt.media.models.
ContentKeyPolicyRestrictionTokenType
[source]¶ -
An enumeration.
-
jwt
= 'Jwt'¶ JSON Web Token.
-
swt
= 'Swt'¶ Simple Web Token.
-
unknown
= 'Unknown'¶ Represents a ContentKeyPolicyRestrictionTokenType that is unavailable in current API version.
-
-
class
azure.mgmt.media.models.
ContentKeyPolicyFairPlayRentalAndLeaseKeyType
[source]¶ -
An enumeration.
-
dual_expiry
= 'DualExpiry'¶ Dual expiry for offline rental.
-
persistent_limited
= 'PersistentLimited'¶ Content key can be persisted and the valid duration is limited by the Rental Duration value.
-
persistent_unlimited
= 'PersistentUnlimited'¶ Content key can be persisted with an unlimited duration.
-
undefined
= 'Undefined'¶ Key duration is not specified.
-
unknown
= 'Unknown'¶ Represents a ContentKeyPolicyFairPlayRentalAndLeaseKeyType that is unavailable in current API version.
-
-
class
azure.mgmt.media.models.
AacAudioProfile
[source]¶ -
An enumeration.
-
aac_lc
= 'AacLc'¶ Specifies that the output audio is to be encoded into AAC Low Complexity profile (AAC-LC).
-
he_aac_v1
= 'HeAacV1'¶ Specifies that the output audio is to be encoded into HE-AAC v1 profile.
-
he_aac_v2
= 'HeAacV2'¶ Specifies that the output audio is to be encoded into HE-AAC v2 profile.
-
-
class
azure.mgmt.media.models.
AnalysisResolution
[source]¶ -
An enumeration.
-
source_resolution
= 'SourceResolution'¶
-
standard_definition
= 'StandardDefinition'¶
-
-
class
azure.mgmt.media.models.
StretchMode
[source]¶ -
An enumeration.
-
auto_fit
= 'AutoFit'¶ Pad the output (with either letterbox or pillar box) to honor the output resolution, while ensuring that the active video region in the output has the same aspect ratio as the input. For example, if the input is 1920x1080 and the encoding preset asks for 1280x1280, then the output will be at 1280x1280, which contains an inner rectangle of 1280x720 at an aspect ratio of 16:9, and pillar box regions 280 pixels wide at the left and right.
-
auto_size
= 'AutoSize'¶ Override the output resolution, and change it to match the display aspect ratio of the input, without padding. For example, if the input is 1920x1080 and the encoding preset asks for 1280x1280, then the value in the preset is overridden, and the output will be at 1280x720, which maintains the input aspect ratio of 16:9.
-
none
= 'None'¶ Strictly respect the output resolution without considering the pixel aspect ratio or display aspect ratio of the input video.
-
-
class
azure.mgmt.media.models.
DeinterlaceParity
[source]¶ -
An enumeration.
-
auto
= 'Auto'¶ Automatically detect the order of fields
-
bottom_field_first
= 'BottomFieldFirst'¶ Apply bottom field first processing of input video.
-
top_field_first
= 'TopFieldFirst'¶ Apply top field first processing of input video.
-
-
class
azure.mgmt.media.models.
DeinterlaceMode
[source]¶ -
An enumeration.
-
auto_pixel_adaptive
= 'AutoPixelAdaptive'¶ Apply automatic pixel adaptive de-interlacing on each frame in the input video.
-
off
= 'Off'¶ Disables de-interlacing of the source video.
-
-
class
azure.mgmt.media.models.
Rotation
[source]¶ -
An enumeration.
-
auto
= 'Auto'¶ Automatically detect and rotate as needed.
-
none
= 'None'¶ Do not rotate the video. If the output format supports it, any metadata about rotation is kept intact.
-
rotate0
= 'Rotate0'¶ Do not rotate the video but remove any metadata about the rotation.
-
rotate180
= 'Rotate180'¶ Rotate 180 degrees clockwise.
-
rotate270
= 'Rotate270'¶ Rotate 270 degrees clockwise.
-
rotate90
= 'Rotate90'¶ Rotate 90 degrees clockwise.
-
-
class
azure.mgmt.media.models.
H264VideoProfile
[source]¶ -
An enumeration.
-
auto
= 'Auto'¶ Tells the encoder to automatically determine the appropriate H.264 profile.
-
baseline
= 'Baseline'¶ Baseline profile
-
high
= 'High'¶ High profile.
-
high422
= 'High422'¶ High 4:2:2 profile.
-
high444
= 'High444'¶ High 4:4:4 predictive profile.
-
main
= 'Main'¶ Main profile
-
-
class
azure.mgmt.media.models.
EntropyMode
[source]¶ -
An enumeration.
-
cabac
= 'Cabac'¶ Context Adaptive Binary Arithmetic Coder (CABAC) entropy encoding.
-
cavlc
= 'Cavlc'¶ Context Adaptive Variable Length Coder (CAVLC) entropy encoding.
-
-
class
azure.mgmt.media.models.
H264Complexity
[source]¶ -
An enumeration.
-
balanced
= 'Balanced'¶ Tells the encoder to use settings that achieve a balance between speed and quality.
-
quality
= 'Quality'¶ Tells the encoder to use settings that are optimized to produce higher quality output at the expense of slower overall encode time.
-
speed
= 'Speed'¶ Tells the encoder to use settings that are optimized for faster encoding. Quality is sacrificed to decrease encoding time.
-
-
class
azure.mgmt.media.models.
EncoderNamedPreset
[source]¶ -
An enumeration.
-
aac_good_quality_audio
= 'AACGoodQualityAudio'¶ Produces a single MP4 file containing only stereo audio encoded at 192 kbps.
-
adaptive_streaming
= 'AdaptiveStreaming'¶ Produces a set of GOP aligned MP4 files with H.264 video and stereo AAC audio. Auto-generates a bitrate ladder based on the input resolution and bitrate. The auto-generated preset will never exceed the input resolution and bitrate. For example, if the input is 720p at 3 Mbps, output will remain 720p at best, and will start at rates lower than 3 Mbps. The output will have video and audio in separate MP4 files, which is optimal for adaptive streaming.
-
content_aware_encoding
= 'ContentAwareEncoding'¶ Produces a set of GOP-aligned MP4s by using content-aware encoding. Given any input content, the service performs an initial lightweight analysis of the input content, and uses the results to determine the optimal number of layers, appropriate bitrate and resolution settings for delivery by adaptive streaming. This preset is particularly effective for low and medium complexity videos, where the output files will be at lower bitrates but at a quality that still delivers a good experience to viewers. The output will contain MP4 files with video and audio interleaved.
-
content_aware_encoding_experimental
= 'ContentAwareEncodingExperimental'¶ Exposes an experimental preset for content-aware encoding. Given any input content, the service attempts to automatically determine the optimal number of layers, appropriate bitrate and resolution settings for delivery by adaptive streaming. The underlying algorithms will continue to evolve over time. The output will contain MP4 files with video and audio interleaved.
-
h264_multiple_bitrate1080p
= 'H264MultipleBitrate1080p'¶ Produces a set of 8 GOP-aligned MP4 files, ranging from 6000 kbps to 400 kbps, and stereo AAC audio. Resolution starts at 1080p and goes down to 360p.
-
h264_multiple_bitrate720p
= 'H264MultipleBitrate720p'¶ Produces a set of 6 GOP-aligned MP4 files, ranging from 3400 kbps to 400 kbps, and stereo AAC audio. Resolution starts at 720p and goes down to 360p.
-
h264_multiple_bitrate_sd
= 'H264MultipleBitrateSD'¶ Produces a set of 5 GOP-aligned MP4 files, ranging from 1600kbps to 400 kbps, and stereo AAC audio. Resolution starts at 480p and goes down to 360p.
-
h264_single_bitrate1080p
= 'H264SingleBitrate1080p'¶ Produces an MP4 file where the video is encoded with H.264 codec at 6750 kbps and a picture height of 1080 pixels, and the stereo audio is encoded with AAC-LC codec at 64 kbps.
-
h264_single_bitrate720p
= 'H264SingleBitrate720p'¶ Produces an MP4 file where the video is encoded with H.264 codec at 4500 kbps and a picture height of 720 pixels, and the stereo audio is encoded with AAC-LC codec at 64 kbps.
-
h264_single_bitrate_sd
= 'H264SingleBitrateSD'¶ Produces an MP4 file where the video is encoded with H.264 codec at 2200 kbps and a picture height of 480 pixels, and the stereo audio is encoded with AAC-LC codec at 64 kbps.
-
-
class
azure.mgmt.media.models.
InsightsType
[source]¶ -
An enumeration.
-
all_insights
= 'AllInsights'¶ Generate both audio and video insights. Fails if either audio or video Insights fail.
-
audio_insights_only
= 'AudioInsightsOnly'¶ Generate audio only insights. Ignore video even if present. Fails if no audio is present.
-
video_insights_only
= 'VideoInsightsOnly'¶ Generate video only insights. Ignore audio if present. Fails if no video is present.
-
-
class
azure.mgmt.media.models.
OnErrorType
[source]¶ -
An enumeration.
-
continue_job
= 'ContinueJob'¶ Tells the service that if this TransformOutput fails, then allow any other TransformOutput to continue.
-
stop_processing_job
= 'StopProcessingJob'¶ Tells the service that if this TransformOutput fails, then any other incomplete TransformOutputs can be stopped.
-
-
class
azure.mgmt.media.models.
Priority
[source]¶ -
An enumeration.
-
high
= 'High'¶ Used for TransformOutputs that should take precedence over others.
-
low
= 'Low'¶ Used for TransformOutputs that can be generated after Normal and High priority TransformOutputs.
-
normal
= 'Normal'¶ Used for TransformOutputs that can be generated at Normal priority.
-
-
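OnErrorType and Priority together govern how sibling TransformOutputs behave: Priority orders which outputs take precedence, and OnErrorType decides whether a failure stops the incomplete siblings. A self-contained sketch of that logic using stdlib enums mirroring the documented values (the output names and helper functions are illustrative, not part of the SDK):

```python
from enum import Enum

class OnErrorType(str, Enum):
    continue_job = 'ContinueJob'
    stop_processing_job = 'StopProcessingJob'

class Priority(str, Enum):
    high = 'High'
    normal = 'Normal'
    low = 'Low'

# High takes precedence over Normal, and Normal over Low, per the docstrings.
_RANK = {Priority.high: 0, Priority.normal: 1, Priority.low: 2}

def schedule(outputs):
    """Order (name, priority) pairs the way the service would prioritize them."""
    return [name for name, prio in sorted(outputs, key=lambda o: _RANK[o[1]])]

def siblings_should_stop(failed_output_on_error):
    """StopProcessingJob allows the service to stop incomplete sibling outputs."""
    return failed_output_on_error is OnErrorType.stop_processing_job

# Hypothetical output names, for illustration only.
outputs = [('thumbnails', Priority.low),
           ('adaptive-set', Priority.high),
           ('audio-only', Priority.normal)]
print(schedule(outputs))  # highest priority first
```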
class
azure.mgmt.media.models.
JobErrorCode
[source]¶ -
An enumeration.
-
configuration_unsupported
= 'ConfigurationUnsupported'¶ There was a problem with the combination of input files and the configuration settings applied; fix the configuration settings and retry with the same input, or change the input to match the configuration.
-
content_malformed
= 'ContentMalformed'¶ There was a problem with the input content (for example, zero byte files, or corrupt/non-decodable files); check the input files.
-
content_unsupported
= 'ContentUnsupported'¶ There was a problem with the format of the input (not valid media file, or an unsupported file/codec), check the validity of the input files.
-
download_not_accessible
= 'DownloadNotAccessible'¶ While trying to download the input files, the files were not accessible; check the availability of the source.
-
download_transient_error
= 'DownloadTransientError'¶ While trying to download the input files, there was an issue during transfer (storage service, network errors); see the details and check your source.
-
service_error
= 'ServiceError'¶ Fatal service error; please contact support.
-
service_transient_error
= 'ServiceTransientError'¶ Transient error; please retry. If the retry is unsuccessful, please contact support.
-
upload_not_accessible
= 'UploadNotAccessible'¶ While trying to upload the output files, the destination was not reachable; check the availability of the destination.
-
upload_transient_error
= 'UploadTransientError'¶ While trying to upload the output files, there was an issue during transfer (storage service, network errors); see the details and check your destination.
-
-
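The three *TransientError codes describe conditions that may clear on their own, so a client can reasonably retry those before touching its inputs or configuration. A self-contained sketch of that classification, using a stdlib enum mirroring the documented values (the helper is illustrative, not part of the SDK):

```python
from enum import Enum

class JobErrorCode(str, Enum):
    configuration_unsupported = 'ConfigurationUnsupported'
    content_malformed = 'ContentMalformed'
    content_unsupported = 'ContentUnsupported'
    download_not_accessible = 'DownloadNotAccessible'
    download_transient_error = 'DownloadTransientError'
    service_error = 'ServiceError'
    service_transient_error = 'ServiceTransientError'
    upload_not_accessible = 'UploadNotAccessible'
    upload_transient_error = 'UploadTransientError'

# Transient codes per the docstrings above: transfer hiccups and
# temporary service issues, rather than problems with the content itself.
TRANSIENT = {
    JobErrorCode.download_transient_error,
    JobErrorCode.service_transient_error,
    JobErrorCode.upload_transient_error,
}

def worth_retrying(code: JobErrorCode) -> bool:
    """True when a simple retry (no input/config change) makes sense."""
    return code in TRANSIENT

assert worth_retrying(JobErrorCode.download_transient_error)
assert not worth_retrying(JobErrorCode.configuration_unsupported)
```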
class
azure.mgmt.media.models.
JobErrorCategory
[source]¶ -
An enumeration.
-
configuration
= 'Configuration'¶ The error is configuration related.
-
content
= 'Content'¶ The error is related to data in the input files.
-
download
= 'Download'¶ The error is download related.
-
service
= 'Service'¶ The error is service related.
-
upload
= 'Upload'¶ The error is upload related.
-
-
class
azure.mgmt.media.models.
JobRetry
[source]¶ -
An enumeration.
-
do_not_retry
= 'DoNotRetry'¶ The issue needs to be investigated; resubmit the job with corrections, or retry once the underlying issue has been corrected.
-
may_retry
= 'MayRetry'¶ The issue may be resolved by waiting for a period of time and resubmitting the same Job.
-
-
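A client can use this hint directly: resubmit with backoff only when the service reports MayRetry, and treat DoNotRetry as "investigate first". A minimal self-contained sketch (stdlib enum mirroring the documented values; handle_failure and its parameters are illustrative, not part of the SDK):

```python
from enum import Enum
import time

class JobRetry(str, Enum):
    do_not_retry = 'DoNotRetry'
    may_retry = 'MayRetry'

def handle_failure(retry_hint, resubmit, attempts=3, base_delay=1.0):
    """Resubmit only when the service hints the issue may clear on its own."""
    if retry_hint != JobRetry.may_retry:
        return False  # DoNotRetry: investigate and correct before resubmitting
    for attempt in range(attempts):
        time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        if resubmit():
            return True
    return False

# base_delay=0 here only so the sketch runs instantly.
assert handle_failure(JobRetry.do_not_retry, lambda: True) is False
assert handle_failure(JobRetry.may_retry, lambda: True, base_delay=0.0) is True
```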
class
azure.mgmt.media.models.
JobState
[source]¶ -
An enumeration.
-
canceled
= 'Canceled'¶ The job was canceled. This is a final state for the job.
-
canceling
= 'Canceling'¶ The job is in the process of being canceled. This is a transient state for the job.
-
error
= 'Error'¶ The job has encountered an error. This is a final state for the job.
-
finished
= 'Finished'¶ The job is finished. This is a final state for the job.
-
processing
= 'Processing'¶ The job is processing. This is a transient state for the job.
-
queued
= 'Queued'¶ The job is in a queued state, waiting for resources to become available. This is a transient state.
-
scheduled
= 'Scheduled'¶ The job is being scheduled to run on an available resource. This is a transient state, between queued and processing states.
-
-
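The docstrings above partition the states: Canceled, Error, and Finished are final, while Canceling, Processing, Queued, and Scheduled are transient. A polling loop only needs that distinction, sketched here with a stdlib enum mirroring the documented values (is_terminal is illustrative, not part of the SDK):

```python
from enum import Enum

class JobState(str, Enum):
    canceled = 'Canceled'
    canceling = 'Canceling'
    error = 'Error'
    finished = 'Finished'
    processing = 'Processing'
    queued = 'Queued'
    scheduled = 'Scheduled'

# Final states per the docstrings; everything else is transient.
FINAL_STATES = {JobState.canceled, JobState.error, JobState.finished}

def is_terminal(state: JobState) -> bool:
    """True when polling can stop because the job will not change state again."""
    return state in FINAL_STATES

assert is_terminal(JobState.finished)
assert not is_terminal(JobState.scheduled)
```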
class
azure.mgmt.media.models.
TrackPropertyType
[source]¶ -
An enumeration.
-
four_cc
= 'FourCC'¶ Track FourCC
-
unknown
= 'Unknown'¶ Unknown track property
-
-
class
azure.mgmt.media.models.
TrackPropertyCompareOperation
[source]¶ -
An enumeration.
-
equal
= 'Equal'¶ Equal operation
-
unknown
= 'Unknown'¶ Unknown track property compare operation
-
-
class
azure.mgmt.media.models.
StreamingLocatorContentKeyType
[source]¶ -
An enumeration.
-
common_encryption_cbcs
= 'CommonEncryptionCbcs'¶ Common Encryption using CBCS
-
common_encryption_cenc
= 'CommonEncryptionCenc'¶ Common Encryption using CENC
-
envelope_encryption
= 'EnvelopeEncryption'¶ Envelope Encryption
-
-
class
azure.mgmt.media.models.
StreamingPolicyStreamingProtocol
[source]¶ -
An enumeration.
-
dash
= 'Dash'¶ DASH protocol
-
download
= 'Download'¶ Download protocol
-
hls
= 'Hls'¶ HLS protocol
-
smooth_streaming
= 'SmoothStreaming'¶ SmoothStreaming protocol
-
-
class
azure.mgmt.media.models.
EncryptionScheme
[source]¶ -
An enumeration.
-
common_encryption_cbcs
= 'CommonEncryptionCbcs'¶ CommonEncryptionCbcs scheme
-
common_encryption_cenc
= 'CommonEncryptionCenc'¶ CommonEncryptionCenc scheme
-
envelope_encryption
= 'EnvelopeEncryption'¶ EnvelopeEncryption scheme
-
no_encryption
= 'NoEncryption'¶ NoEncryption scheme
-
-
class
azure.mgmt.media.models.
LiveOutputResourceState
[source]¶ -
An enumeration.
-
creating
= 'Creating'¶ The live output is being created.
-
deleting
= 'Deleting'¶ The live output is being deleted.
-
running
= 'Running'¶ The live output is running.
-
-
class
azure.mgmt.media.models.
LiveEventInputProtocol
[source]¶ -
An enumeration.
-
fragmented_mp4
= 'FragmentedMP4'¶ Smooth Streaming (fragmented MP4) input.
-
rtmp
= 'RTMP'¶ RTMP input.
-
-
class
azure.mgmt.media.models.
LiveEventEncodingType
[source]¶ -
An enumeration.
-
basic
= 'Basic'¶
-
none
= 'None'¶
-
standard
= 'Standard'¶
-
-
class
azure.mgmt.media.models.
LiveEventResourceState
[source]¶ -
An enumeration.
-
deleting
= 'Deleting'¶ The live event is being deleted.
-
running
= 'Running'¶ The live event is running and can receive input streams.
-
starting
= 'Starting'¶ The live event is being started.
-
stopped
= 'Stopped'¶ The live event is stopped. This is the initial state after creation.
-
stopping
= 'Stopping'¶ The live event is being stopped.
-