azure.ai.ml package

class azure.ai.ml.AmlToken(**kwargs)[source]

AML Token identity configuration.

All required parameters must be populated in order to send to Azure.

Variables

identity_type (str or IdentityConfigurationType) – Required. Specifies the type of identity framework. Constant filled by server. Possible values include: “Managed”, “AMLToken”, “UserIdentity”.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally use a callback as a parameter:

The callback receives three arguments: key, the attribute name used in Python; attr_desc, a dict of metadata that currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key; and value, the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A JSON-compatible dict object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor are used.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – Whether to serialize the read-only attributes

Returns

A JSON-compatible dict object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.ai.ml.Input(*, type: str = 'uri_folder', path: str = None, mode: str = 'ro_mount', optional: bool = None, description: str = None, **kwargs)[source]
class azure.ai.ml.Input(*, type: str = 'number', default: float = None, min: float = None, max: float = None, optional: bool = None, description: str = None, **kwargs)
class azure.ai.ml.Input(*, type: str = 'integer', default: int = None, min: int = None, max: int = None, optional: bool = None, description: str = None, **kwargs)
class azure.ai.ml.Input(*, type: str = 'string', default: str = None, optional: bool = None, description: str = None, **kwargs)
class azure.ai.ml.Input(*, type: str = 'boolean', default: bool = None, optional: bool = None, description: str = None, **kwargs)

Define an input of a Component or Job.

Defaults to a uri_folder Input.

Parameters
  • type (str) – The type of the data input. Possible values include: ‘uri_folder’, ‘uri_file’, ‘mltable’, ‘mlflow_model’, ‘custom_model’, ‘integer’, ‘number’, ‘string’, ‘boolean’

  • path (str) – The path to which the input is pointing. Could be pointing to local data, cloud data, a registered name, etc.

  • mode (str) – The mode of the data input. Possible values are: ‘ro_mount’: Read-only mount the data, ‘download’: Download the data to the compute target, ‘direct’: Pass in the URI as a string

  • default (Union[str, integer, float, bool]) – The default value of this input. When a default is set, the input will be optional

  • min (Union[integer, float]) – The min value – if a smaller value is passed to a job, the job execution will fail

  • max (Union[integer, float]) – The max value – if a larger value is passed to a job, the job execution will fail

  • optional (bool) – Determine if this input is optional

  • description (str) – Description of the input

get(key: Any, default: Optional[Any] = None) → Any
has_key(k: Any) → bool
items() → list
keys() → list
update(*args: Any, **kwargs: Any) → None
values() → list
class azure.ai.ml.MLClient(credential: azure.identity._credentials.chained.ChainedTokenCredential, subscription_id: str, resource_group_name: str, workspace_name: Optional[str] = None, **kwargs)[source]

A client class to interact with Azure ML services.

Use this client to manage Azure ML resources, e.g. workspaces, jobs, models, and so on.

Initialize the Azure ML client.

Parameters
  • credential (ChainedTokenCredential) – Credential to use for authentication.

  • subscription_id (str) – Azure subscription ID.

  • resource_group_name (str) – Azure resource group.

  • workspace_name (str, optional) – Workspace to use in the client; optional for operations that are not workspace-dependent. Defaults to None.

begin_create_or_update(entity: Union[azure.ai.ml.entities._workspace.workspace.Workspace, azure.ai.ml.entities._compute.compute.Compute, azure.ai.ml.entities._deployment.online_deployment.OnlineDeployment, azure.ai.ml.entities._endpoint.online_endpoint.OnlineEndpoint, azure.ai.ml.entities._deployment.batch_deployment.BatchDeployment, azure.ai.ml.entities._endpoint.batch_endpoint.BatchEndpoint], **kwargs) → azure.core.polling._poller.LROPoller[source]

Creates or updates an Azure ML resource asynchronously.

Parameters

entity (Union[azure.ai.ml.entities.Workspace, azure.ai.ml.entities.Compute, azure.ai.ml.entities.OnlineDeployment, azure.ai.ml.entities.OnlineEndpoint, azure.ai.ml.entities.BatchDeployment, azure.ai.ml.entities.BatchEndpoint]) – The resource to create or update.

Returns

The resource after the create/update operation

Return type

Optional[Union[azure.ai.ml.entities.Workspace, azure.ai.ml.entities.Compute, azure.ai.ml.entities.OnlineDeployment, azure.ai.ml.entities.OnlineEndpoint, azure.ai.ml.entities.BatchDeployment, azure.ai.ml.entities.BatchEndpoint]]

create_or_update(entity: Union[azure.ai.ml.entities._job.job.Job, azure.ai.ml.entities._builders.base_node.BaseNode, azure.ai.ml.entities._assets._artifacts.model.Model, azure.ai.ml.entities._assets.environment.Environment, azure.ai.ml.entities._component.component.Component, azure.ai.ml.entities._datastore.datastore.Datastore], **kwargs) → Union[azure.ai.ml.entities._job.job.Job, azure.ai.ml.entities._assets._artifacts.model.Model, azure.ai.ml.entities._assets.environment.Environment, azure.ai.ml.entities._component.component.Component, azure.ai.ml.entities._datastore.datastore.Datastore][source]

Creates or updates an Azure ML resource.

Parameters

entity (Union[azure.ai.ml.entities.Job, azure.ai.ml.entities.Model, azure.ai.ml.entities.Environment, azure.ai.ml.entities.Component, azure.ai.ml.entities.Datastore]) – The resource to create or update.

Returns

The created or updated resource

Return type

Union[azure.ai.ml.entities.Job, azure.ai.ml.entities.Model, azure.ai.ml.entities.Environment, azure.ai.ml.entities.Component, azure.ai.ml.entities.Datastore]

classmethod from_config(credential: azure.identity._credentials.chained.ChainedTokenCredential, path: Optional[Union[os.PathLike, str]] = None, _file_name=None, **kwargs) → azure.ai.ml._ml_client.MLClient[source]

Return an MLClient object connected to an existing Azure Machine Learning workspace.

Reads workspace configuration from a file. Throws an exception if the config file can’t be found.

The method provides a simple way to reuse the same workspace across multiple Python notebooks or projects. Users can save the workspace Azure Resource Manager (ARM) properties using the [workspace.write_config](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py) method, and use this method to load the same workspace in different Python notebooks or projects without retyping the workspace ARM properties.

Parameters
  • credential (azure.identity.ChainedTokenCredential) – The credential object for the workspace.

  • path (str) – The path to the config file or starting directory to search. The parameter defaults to starting the search in the current directory.

  • _file_name (str) – Allows overriding the config file name to search for when path is a directory path.

  • kwargs (dict) – A dictionary of additional configuration parameters.

Returns

The MLClient object for an existing Azure ML workspace.

Return type

MLClient

property batch_deployments

A collection of batch deployment related operations

Returns

Batch Deployment operations

Return type

BatchDeploymentOperations

property batch_endpoints

A collection of batch endpoint related operations

Returns

Batch Endpoint operations

Return type

BatchEndpointOperations

property components

A collection of component related operations

Returns

Component operations

Return type

ComponentOperations

property compute

A collection of compute related operations

Returns

Compute operations

Return type

ComputeOperations

property connections

A collection of workspace connection related operations

Returns

Workspace Connections operations

Return type

WorkspaceConnectionsOperations

property data

A collection of data related operations

Returns

Data operations

Return type

DataOperations

property datasets

A collection of dataset related operations

Returns

Dataset operations

Return type

DatasetOperations

property datastores

A collection of datastore related operations

Returns

Datastore operations

Return type

DatastoreOperations

property environments

A collection of environment related operations

Returns

Environment operations

Return type

EnvironmentOperations

property jobs

A collection of job related operations

Returns

Job operations

Return type

JobOperations

property models

A collection of model related operations

Returns

Model operations

Return type

ModelOperations

property online_deployments

A collection of online deployment related operations

Returns

Online Deployment operations

Return type

OnlineDeploymentOperations

property online_endpoints

A collection of online endpoint related operations

Returns

Online Endpoint operations

Return type

OnlineEndpointOperations

property workspace_name

The name of the workspace in which workspace-dependent operations will be executed.

Returns

Default workspace name

Return type

Optional[str]

property workspaces

A collection of workspace related operations

Returns

Workspace operations

Return type

WorkspaceOperations

class azure.ai.ml.ManagedIdentity(*, client_id: Optional[str] = None, object_id: Optional[str] = None, resource_id: Optional[str] = None, **kwargs)[source]

Managed identity configuration.

All required parameters must be populated in order to send to Azure.

Variables
  • identity_type (str or IdentityConfigurationType) – Required. Specifies the type of identity framework. Constant filled by server. Possible values include: “Managed”, “AMLToken”, “UserIdentity”.

  • client_id (str) – Specifies a user-assigned identity by client ID. For system-assigned, do not set this field.

  • object_id (str) – Specifies a user-assigned identity by object ID. For system-assigned, do not set this field.

  • resource_id (str) – Specifies a user-assigned identity by ARM resource ID. For system-assigned, do not set this field.

Keyword Arguments
  • client_id (str) – Specifies a user-assigned identity by client ID. For system-assigned, do not set this field.

  • object_id (str) – Specifies a user-assigned identity by object ID. For system-assigned, do not set this field.

  • resource_id (str) – Specifies a user-assigned identity by ARM resource ID. For system-assigned, do not set this field.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally use a callback as a parameter:

The callback receives three arguments: key, the attribute name used in Python; attr_desc, a dict of metadata that currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key; and value, the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A JSON-compatible dict object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor are used.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – Whether to serialize the read-only attributes

Returns

A JSON-compatible dict object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

class azure.ai.ml.MpiDistribution(*, process_count_per_instance: Optional[int] = None, **kwargs)[source]

MPI distribution configuration.

Parameters

process_count_per_instance (int) – Number of processes per MPI node.

Keyword Arguments

process_count_per_instance (int) – Number of processes per MPI node.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally use a callback as a parameter:

The callback receives three arguments: key, the attribute name used in Python; attr_desc, a dict of metadata that currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key; and value, the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A JSON-compatible dict object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor are used.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – Whether to serialize the read-only attributes

Returns

A JSON-compatible dict object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

type

A data descriptor that transforms a value from snake_case to CamelCase in the setter, and from CamelCase to snake_case in the getter. When the optional private_name is provided, the descriptor will set private_name in the object’s __dict__.

class azure.ai.ml.Output(type='uri_folder', path=None, mode='rw_mount', description=None)[source]
class azure.ai.ml.Output(type='uri_file', path=None, mode='rw_mount', description=None)

Define an output of a Component or Job.

Parameters
  • type (str) – The type of the data output. Possible values include: ‘uri_folder’, ‘uri_file’, ‘mltable’, ‘mlflow_model’, ‘custom_model’, and user-defined types.

  • path (str) – The path to which the output is pointing. Needs to point to a cloud path.

  • mode (str) – The mode of the data output. Possible values are: ‘rw_mount’: Read-write mount the data, ‘upload’: Upload the data from the compute target, ‘direct’: Pass in the URI as a string

  • description (str) – Description of the output

get(key: Any, default: Optional[Any] = None) → Any
has_key(k: Any) → bool
items() → list
keys() → list
update(*args: Any, **kwargs: Any) → None
values() → list
class azure.ai.ml.PyTorchDistribution(*, process_count_per_instance: Optional[int] = None, **kwargs)[source]

PyTorch distribution configuration.

Parameters

process_count_per_instance (int) – Number of processes per node.

Keyword Arguments

process_count_per_instance (int) – Number of processes per node.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally use a callback as a parameter:

The callback receives three arguments: key, the attribute name used in Python; attr_desc, a dict of metadata that currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key; and value, the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A JSON-compatible dict object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor are used.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – Whether to serialize the read-only attributes

Returns

A JSON-compatible dict object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

type

A data descriptor that transforms a value from snake_case to CamelCase in the setter, and from CamelCase to snake_case in the getter. When the optional private_name is provided, the descriptor will set private_name in the object’s __dict__.

class azure.ai.ml.TensorFlowDistribution(*, parameter_server_count: Optional[int] = 0, worker_count: Optional[int] = None, **kwargs)[source]

TensorFlow distribution configuration.

Variables
  • parameter_server_count (int) – Number of parameter server tasks.

  • worker_count (int) – Number of workers. If not specified, will default to the instance count.

Keyword Arguments
  • parameter_server_count (int) – Number of parameter server tasks.

  • worker_count (int) – Number of workers. If not specified, will default to the instance count.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally use a callback as a parameter:

The callback receives three arguments: key, the attribute name used in Python; attr_desc, a dict of metadata that currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key; and value, the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A JSON-compatible dict object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor are used.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – Whether to serialize the read-only attributes

Returns

A JSON-compatible dict object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

type

A data descriptor that transforms a value from snake_case to CamelCase in the setter, and from CamelCase to snake_case in the getter. When the optional private_name is provided, the descriptor will set private_name in the object’s __dict__.

class azure.ai.ml.UserIdentity(**kwargs)[source]

User identity configuration.

All required parameters must be populated in order to send to Azure.

Variables

identity_type (str or IdentityConfigurationType) – Required. Specifies the type of identity framework. Constant filled by server. Possible values include: “Managed”, “AMLToken”, “UserIdentity”.

as_dict(keep_readonly=True, key_transformer=<function attribute_transformer>, **kwargs)

Return a dict that can be serialized to JSON using json.dump.

Advanced usage can optionally use a callback as a parameter:

The callback receives three arguments: key, the attribute name used in Python; attr_desc, a dict of metadata that currently contains ‘type’ with the msrest type and ‘key’ with the RestAPI encoded key; and value, the current value in this object.

The string returned will be used to serialize the key. If the return type is a list, this is considered a hierarchical result dict.

See the three examples in this file:

  • attribute_transformer

  • full_restapi_key_transformer

  • last_restapi_key_transformer

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

key_transformer (function) – A key transformer function.

Returns

A JSON-compatible dict object

Return type

dict

classmethod deserialize(data, content_type=None)

Parse a str using the RestAPI syntax and return a model.

Parameters
  • data (str) – A str using RestAPI structure. JSON by default.

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod enable_additional_properties_sending()
classmethod from_dict(data, key_extractors=None, content_type=None)

Parse a dict using the given key extractors and return a model.

By default, the key extractors rest_key_case_insensitive_extractor, attribute_key_case_insensitive_extractor, and last_rest_key_case_insensitive_extractor are used.

Parameters
  • data (dict) – A dict using RestAPI structure

  • content_type (str) – JSON by default, set application/xml if XML.

Returns

An instance of this model

Raises

DeserializationError if something went wrong

classmethod is_xml_model()
serialize(keep_readonly=False, **kwargs)

Return the JSON that would be sent to Azure from this model.

This is an alias to as_dict(full_restapi_key_transformer, keep_readonly=False).

If you want XML serialization, you can pass the kwarg is_xml=True.

Parameters

keep_readonly (bool) – Whether to serialize the read-only attributes

Returns

A JSON-compatible dict object

Return type

dict

validate()

Validate this model recursively and return a list of ValidationError.

Returns

A list of validation errors

Return type

list

azure.ai.ml.command(*, name: Optional[str] = None, description: Optional[str] = None, tags: Optional[Dict] = None, properties: Optional[Dict] = None, display_name: Optional[str] = None, command: Optional[str] = None, experiment_name: Optional[str] = None, environment: Optional[Union[str, azure.ai.ml.entities._assets.environment.Environment]] = None, environment_variables: Optional[Dict] = None, distribution: Optional[Union[Dict, azure.ai.ml.entities._job.distribution.MpiDistribution, azure.ai.ml.entities._job.distribution.TensorFlowDistribution, azure.ai.ml.entities._job.distribution.PyTorchDistribution]] = None, compute: Optional[str] = None, inputs: Optional[Dict] = None, outputs: Optional[Dict] = None, instance_count: Optional[int] = None, instance_type: Optional[str] = None, timeout: Optional[int] = None, code: Optional[Union[os.PathLike, str]] = None, identity: Optional[Union[azure.ai.ml._restclient.v2022_02_01_preview.models._models_py3.ManagedIdentity, azure.ai.ml._restclient.v2022_02_01_preview.models._models_py3.AmlToken, azure.ai.ml._restclient.v2022_02_01_preview.models._models_py3.UserIdentity]] = None, **kwargs) → azure.ai.ml.entities._builders.command.Command[source]

Create a Command object, which can be used inside dsl.pipeline as a function and can also be created as a standalone command job.

Parameters
  • name (str) – Name of the command job or component created

  • description (str) – a friendly description of the command

  • tags (Dict) – Tags to be attached to this command

  • properties (dict[str, str]) – The asset property dictionary.

  • display_name (str) – a friendly name

  • experiment_name (str) – Name of the experiment under which the job will be created. If None is provided, the default is the current directory name. Ignored when the command is used as a pipeline step.

  • command (str) – the command string that will be run

  • environment (Union[str, azure.ai.ml.entities.Environment]) – the environment to use for this command

  • environment_variables (dict) – environment variables to set on the compute before this command is executed

  • distribution (Union[Dict, azure.ai.ml.MpiDistribution, azure.ai.ml.TensorFlowDistribution, azure.ai.ml.PyTorchDistribution]) – the distribution mode to use for this command

  • compute (str) – the name of the compute where the command job is executed (will not be used if the command is used as a component/function)

  • inputs (Dict) – a dict of inputs used by this command.

  • outputs (Dict) – the outputs of this command

  • instance_count – Optional number of instances or nodes used by the compute target. Defaults to 1.

  • instance_type – Optional type of VM used as supported by the compute target.

  • timeout – The number of seconds after which the job will be cancelled.

  • code (Union[str, os.PathLike]) – the code folder to run – typically a local folder that will be uploaded as the job is submitted

  • identity (Union[azure.ai.ml.ManagedIdentity, azure.ai.ml.AmlToken, azure.ai.ml.UserIdentity]) – The identity that the training job will use while running on compute.

azure.ai.ml.load_batch_deployment(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._deployment.batch_deployment.BatchDeployment[source]

Construct a batch deployment object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed batch deployment object.

Return type

BatchDeployment

azure.ai.ml.load_batch_endpoint(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._endpoint.batch_endpoint.BatchEndpoint[source]

Construct a batch endpoint object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed batch endpoint object.

Return type

BatchEndpoint

azure.ai.ml.load_component(path: Optional[Union[os.PathLike, str]] = None, **kwargs) → Union[azure.ai.ml.entities._component.command_component.CommandComponent, azure.ai.ml.entities._component.parallel_component.ParallelComponent][source]

Load a component from a local or remote source to a component function.

For example:

# Load a local component to a component function.
component_func = load_component(path="custom_component/component_spec.yaml")
# Load a remote component to a component function.
component_func = load_component(client=ml_client, name="my_component", version=1)

# Consuming the component func
component = component_func(param1=xxx, param2=xxx)
Parameters
  • path (str) – Local component yaml file.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

  • client (MLClient) – An MLClient instance.

  • name (str) – Name of the component.

  • version (str) – Version of the component.

  • kwargs (dict) – A dictionary of additional configuration parameters.

Returns

A function that can be called with parameters to get an azure.ai.ml.entities.Component

Return type

Union[CommandComponent, ParallelComponent]

azure.ai.ml.load_compute(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._compute.compute.Compute[source]

Construct a compute object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Loaded compute object.

Return type

Compute

azure.ai.ml.load_data(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._assets._artifacts.data.Data[source]

Construct a data object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed data object.

Return type

Data

azure.ai.ml.load_datastore(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._datastore.datastore.Datastore[source]

Construct a datastore object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Loaded datastore object.

Return type

Datastore

azure.ai.ml.load_environment(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._assets.environment.Environment[source]

Construct an environment object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed environment object.

Return type

Environment

azure.ai.ml.load_job(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._job.job.Job[source]

Construct a job object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Loaded job object.

Return type

Job
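For reference, a minimal command-job yaml of the kind load_job parses might look like the fragment below. Field names follow the v2 command job schema; the compute and environment references are placeholders, not real resources:

```yaml
# Illustrative command job spec for load_job; values are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: python train.py --epochs 10
code: ./src
environment: azureml:my-env:1
compute: azureml:cpu-cluster
```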

azure.ai.ml.load_model(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._assets._artifacts.model.Model[source]

Construct a model object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed model object.

Return type

Model

azure.ai.ml.load_online_deployment(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._deployment.online_deployment.OnlineDeployment[source]

Construct an online deployment object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed online deployment object.

Return type

OnlineDeployment

azure.ai.ml.load_online_endpoint(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._endpoint.online_endpoint.OnlineEndpoint[source]

Construct an online endpoint object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed online endpoint object.

Return type

OnlineEndpoint

azure.ai.ml.load_workspace(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._workspace.workspace.Workspace[source]

Load a workspace object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Loaded workspace object.

Return type

Workspace

azure.ai.ml.load_workspace_connection(path: Union[os.PathLike, str], **kwargs) → azure.ai.ml.entities._workspace.connections.workspace_connection.WorkspaceConnection[source]

Construct a workspace connection object from a yaml file.

Parameters
  • path (str) – Path to a local file as the source.

  • params_override (List[Dict]) – Fields to overwrite on top of the yaml file. Format is [{“field1”: “value1”}, {“field2”: “value2”}]

Returns

Constructed workspace connection object.

Return type

WorkspaceConnection

Submodules

azure.ai.ml.constants module

class azure.ai.ml.constants.ArmConstants[source]
APP_INSIGHTS = 'AppInsights'
APP_INSIGHTS_PARAMETER_NAME = 'components'
ARRAY = 'Array'
ASSET_PATH = 'assetPath'
AZURE_MGMT_APPINSIGHT_API_VERSION = '2015-05-01'
AZURE_MGMT_CONTAINER_REG_API_VERSION = '2019-05-01'
AZURE_MGMT_KEYVAULT_API_VERSION = '2019-09-01'
AZURE_MGMT_RESOURCE_API_VERSION = '2020-06-01'
AZURE_MGMT_STORAGE_API_VERSION = '2019-06-01'
BASE_TYPE = 'base'
CODE_PARAMETER_NAME = 'codes'
CODE_RESOURCE_NAME = 'codeDeploymentCopy'
CODE_TYPE = 'code'
CODE_VERSION_PARAMETER_NAME = 'codeVersions'
CODE_VERSION_RESOURCE_NAME = 'codeVersionDeploymentCopy'
CODE_VERSION_TYPE = 'code_version'
CONTAINER_REGISTRY_PARAMETER_NAME = 'registries'
DATASTORE_ID = 'datastoreId'
DEFAULT_VALUE = 'defaultValue'
DEPENDSON_PARAMETER_NAME = 'dependsOn'
DEPLOYMENTS_PARAMETER_NAME = 'onlineDeployments'
ENDPOINT_CREATE_OR_UPDATE_PARAMETER_NAME = 'endpointCreateOrUpdate'
ENDPOINT_IDENTITY_PARAMETER_NAME = 'onlineEndpointIdentity'
ENDPOINT_NAME_PARAMETER_NAME = 'onlineEndpointName'
ENDPOINT_PARAMETER_NAME = 'onlineEndpoint'
ENDPOINT_PROPERTIES_PARAMETER_NAME = 'onlineEndpointProperties'
ENDPOINT_PROPERTIES_TRAFFIC_UPDATE_PARAMETER_NAME = 'onlineEndpointPropertiesTrafficUpdate'
ENDPOINT_TAGS_PARAMETER_NAME = 'onlineEndpointTags'
ENVIRONMENT_PARAMETER_NAME = 'environments'
ENVIRONMENT_TYPE = 'environment'
ENVIRONMENT_VERSION_RESOURCE_NAME = 'environmentVersionDeploymentCopy'
ENVIRONMENT_VERSION_TYPE = 'environment_version'
KEY_VAULT = 'KeyVault'
KEY_VAULT_PARAMETER_NAME = 'vaults'
LOCATION_PARAMETER_NAME = 'location'
MODEL_PARAMETER_NAME = 'models'
MODEL_RESOURCE_NAME = 'modelDeploymentCopy'
MODEL_TYPE = 'model'
MODEL_VERSION_PARAMETER_NAME = 'modelVersions'
MODEL_VERSION_RESOURCE_NAME = 'modelVersionDeploymentCopy'
MODEL_VERSION_TYPE = 'model_version'
NAME = 'name'
OBJECT = 'Object'
ONLINE_DEPLOYMENT_RESOURCE_NAME = 'onlineDeploymentCopy'
ONLINE_DEPLOYMENT_TYPE = 'online_deployment'
ONLINE_ENDPOINT_RESOURCE_NAME = 'onlineEndpointCopy'
ONLINE_ENDPOINT_TYPE = 'online_endpoint'
OPERATION_CREATE = 'create'
OPERATION_UPDATE = 'update'
PROPERTIES_PARAMETER_NAME = 'properties'
SKU = 'sku'
STORAGE = 'StorageAccount'
STORAGE_ACCOUNT_PARAMETER_NAME = 'storageAccounts'
STRING = 'String'
TAGS = 'tags'
TRAFFIC_PARAMETER_NAME = 'trafficRules'
UPDATE_ONLINE_ENDPOINT_TYPE = 'update_online_endpoint'
UPDATE_RESOURCE_NAME = 'updateEndpointWithTraffic'
VERSION = 'version'
WORKSPACE = 'Workspace'
WORKSPACE_BASE = 'workspace_base'
WORKSPACE_PARAM = 'workspace_param'
WORKSPACE_PARAMETER_NAME = 'workspaceName'
class azure.ai.ml.constants.AssetTypes[source]
CUSTOM_MODEL = 'custom_model'
MLFLOW_MODEL = 'mlflow_model'
MLTABLE = 'mltable'
TRITON_MODEL = 'triton_model'
URI_FILE = 'uri_file'
URI_FOLDER = 'uri_folder'
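The AssetTypes values are the strings expected in the type field of asset yaml files. A sketch of a data-asset yaml using URI_FOLDER — field names follow the v2 data schema, and the name and path values are illustrative:

```yaml
# Illustrative data asset spec; 'type' takes one of the AssetTypes values.
name: my-dataset
version: "1"
type: uri_folder
path: ./data
```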
class azure.ai.ml.constants.AutoMLConstants[source]
ALLOWED_ALGORITHMS_YAML = 'allowed_training_algorithms'
AUTO = 'auto'
BLOCKED_ALGORITHMS_YAML = 'blocked_training_algorithms'
CLASSIFICATION_YAML = 'classification'
COUNTRY_OR_REGION_YAML = 'country_or_region_for_holidays'
CUSTOM = 'custom'
DATASET_YAML = 'dataset'
DATA_YAML = 'data'
ENSEMBLE_MODEL_DOWNLOAD_TIMEOUT_YAML = 'ensemble_model_download_timeout_minutes'
FEATURIZATION_YAML = 'featurization'
FORECASTING_YAML = 'forecasting'
GENERAL_YAML = 'general'
LIMITS_YAML = 'limits'
MAX_TRIALS_YAML = 'max_trials'
MODE = 'mode'
OFF = 'off'
REGRESSION_YAML = 'regression'
SWEEP_YAML = 'sweep'
TARGET_LAGS = 'target_lags'
TASK_TYPE_YAML = 'task'
TERMINATION_POLICY_TYPE_YAML = 'type'
TEST_DATA_SETTINGS_YAML = 'test'
TIMEOUT_YAML = 'timeout_minutes'
TIME_SERIES_ID_COLUMN_NAMES = 'time_series_id_column_names'
TRAINING_DATA_SETTINGS_YAML = 'training'
TRAINING_YAML = 'training'
TRANSFORMER_PARAMS = 'transformer_params'
TRIAL_TIMEOUT_YAML = 'trial_timeout_minutes'
VALIDATION_DATASET_SIZE_YAML = 'validation_dataset_size'
VALIDATION_DATA_SETTINGS_YAML = 'validation'
class azure.ai.ml.constants.AzureMLResourceType[source]
BATCH_DEPLOYMENT = 'batch_deployments'
BATCH_ENDPOINT = 'batch_endpoints'
CODE = 'codes'
COMPONENT = 'components'
COMPUTE = 'computes'
DATA = 'data'
DATASET = 'datasets'
DATASTORE = 'datastores'
DEPLOYMENT = 'deployments'
ENVIRONMENT = 'environments'
JOB = 'jobs'
MODEL = 'models'
NAMED_TYPES = {'computes', 'datastores', 'jobs', 'online_deployments', 'online_endpoints', 'workspaces'}
ONLINE_DEPLOYMENT = 'online_deployments'
ONLINE_ENDPOINT = 'online_endpoints'
VERSIONED_TYPES = {'codes', 'components', 'data', 'datasets', 'environments', 'models'}
VIRTUALCLUSTER = 'virtualclusters'
WORKSPACE = 'workspaces'
WORKSPACE_CONNECTION = 'workspace_connections'
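NAMED_TYPES and VERSIONED_TYPES partition resource types by how they are addressed: versioned assets need a name plus a version, named resources only a name. A small standalone check using the set values shown above (constants copied inline; a real program would import AzureMLResourceType instead, and needs_version is a hypothetical helper):

```python
# Values copied from AzureMLResourceType above.
VERSIONED_TYPES = {'codes', 'components', 'data', 'datasets', 'environments', 'models'}
NAMED_TYPES = {'computes', 'datastores', 'jobs', 'online_deployments', 'online_endpoints', 'workspaces'}

def needs_version(resource_type: str) -> bool:
    """True when the resource is addressed as name:version."""
    return resource_type in VERSIONED_TYPES

model_needs = needs_version('models')   # versioned asset -> True
job_needs = needs_version('jobs')       # named-only resource -> False
```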
class azure.ai.ml.constants.BatchDeploymentOutputAction[source]
APPEND_ROW = 'append_row'
SUMMARY_ONLY = 'summary_only'
class azure.ai.ml.constants.CommonYamlFields[source]
TYPE = 'type'
class azure.ai.ml.constants.ComponentJobConstants[source]
INPUT_DESTINATION_FORMAT = 'jobs.{}.inputs.{}'
INPUT_PATTERN = '^\\$\\{\\{parent\\.(inputs|jobs)\\.(.*?)\\}\\}$'
LEGACY_INPUT_PATTERN = '^\\$\\{\\{(inputs|jobs)\\.(.*?)\\}\\}$'
LEGACY_OUTPUT_PATTERN = '^\\$\\{\\{outputs\\.(.*?)\\}\\}$'
OUTPUT_DESTINATION_FORMAT = 'jobs.{}.outputs.{}'
OUTPUT_PATTERN = '^\\$\\{\\{parent\\.outputs\\.(.*?)\\}\\}$'
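These patterns match pipeline binding expressions such as ${{parent.inputs.x}}. A quick demonstration with the standard re module, using INPUT_PATTERN copied verbatim from above:

```python
import re

# INPUT_PATTERN copied from ComponentJobConstants above.
INPUT_PATTERN = '^\\$\\{\\{parent\\.(inputs|jobs)\\.(.*?)\\}\\}$'

# Group 1 is the binding source (inputs or jobs), group 2 the dotted path.
match = re.match(INPUT_PATTERN, '${{parent.inputs.training_data}}')
source, path = match.groups()
```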
class azure.ai.ml.constants.ComponentSource[source]

Indicates where the component was constructed.

BUILDER = 'BUILDER'
DSL = 'DSL'
REST = 'REST'
SDK = 'SDK'
YAML = 'YAML'
class azure.ai.ml.constants.ComputeDefaults[source]
ADMIN_USER = 'azureuser'
IDLE_TIME = 1800
MAX_NODES = 4
MIN_NODES = 0
PRIORITY = 'Dedicated'
VMSIZE = 'STANDARD_DS3_V2'
class azure.ai.ml.constants.ComputeTier[source]
DEDICATED = 'dedicated'
LOWPRIORITY = 'low_priority'
class azure.ai.ml.constants.ComputeType[source]
AMLCOMPUTE = 'amlcompute'
COMPUTEINSTANCE = 'computeinstance'
KUBERNETES = 'kubernetes'
MANAGED = 'managed'
VIRTUALMACHINE = 'virtualmachine'
class azure.ai.ml.constants.DataType[source]
DATAFLOW = 'Dataflow'
SIMPLE = 'Simple'
class azure.ai.ml.constants.DeploymentType[source]
K8S = 'Kubernetes'
MANAGED = 'Managed'
class azure.ai.ml.constants.DistributionType[source]
MPI = 'mpi'
PYTORCH = 'pytorch'
TENSORFLOW = 'tensorflow'
class azure.ai.ml.constants.DockerTypes[source]
BUILD = 'Build'
IMAGE = 'Image'
class azure.ai.ml.constants.EndpointDeploymentLogContainerType[source]
INFERENCE_SERVER = 'inference-server'
INFERENCE_SERVER_REST = 'InferenceServer'
STORAGE_INITIALIZER = 'storage-initializer'
STORAGE_INITIALIZER_REST = 'StorageInitializer'
class azure.ai.ml.constants.EndpointGetLogsFields[source]
LINES = 5000
class azure.ai.ml.constants.EndpointInvokeFields[source]
AUTHORIZATION = 'Authorization'
DEFAULT_HEADER = {'Content-Type': 'application/json'}
MODEL_DEPLOYMENT = 'azureml-model-deployment'
class azure.ai.ml.constants.EndpointKeyType[source]
PRIMARY_KEY_TYPE = 'primary'
SECONDARY_KEY_TYPE = 'secondary'
class azure.ai.ml.constants.EndpointYamlFields[source]
BATCH_JOB_DATASET = 'dataset'
BATCH_JOB_INPUT_DATA = 'input_data'
BATCH_JOB_INSTANCE_COUNT = 'compute.instance_count'
BATCH_JOB_NAME = 'job_name'
BATCH_JOB_OUTPUT_DATSTORE = 'output_dataset.datastore_id'
BATCH_JOB_OUTPUT_PATH = 'output_dataset.path'
CODE = 'code'
CODE_CONFIGURATION = 'code_configuration'
COMPUTE = 'compute'
INSTANCE_COUNT = 'instance_count'
MAXIMUM = 'max_instances'
MINIMUM = 'min_instances'
MINI_BATCH_SIZE = 'mini_batch_size'
NAME = 'name'
POLLING_INTERVAL = 'polling_interval'
PROVISIONING_STATE = 'provisioning_state'
RETRY_SETTINGS = 'retry_settings'
SCALE_SETTINGS = 'scale_settings'
SCALE_TYPE = 'scale_type'
SCORING_SCRIPT = 'scoring_script'
SCORING_URI = 'scoring_uri'
SKU_DEFAULT = 'Standard_F4s_v2'
SWAGGER_URI = 'swagger_uri'
TARGET_UTILIZATION_PERCENTAGE = 'target_utilization_percentage'
TRAFFIC_NAME = 'traffic'
TYPE = 'type'
class azure.ai.ml.constants.GitProperties[source]
ENV_BRANCH = 'AZUREML_GIT_BRANCH'
ENV_BUILD_ID = 'AZUREML_GIT_BUILD_ID'
ENV_BUILD_URI = 'AZUREML_GIT_BUILD_URI'
ENV_COMMIT = 'AZUREML_GIT_COMMIT'
ENV_DIRTY = 'AZUREML_GIT_DIRTY'
ENV_REPOSITORY_URI = 'AZUREML_GIT_REPOSITORY_URI'
PROP_BUILD_ID = 'azureml.git.build_id'
PROP_BUILD_URI = 'azureml.git.build_uri'
PROP_DIRTY = 'azureml.git.dirty'
PROP_MLFLOW_GIT_BRANCH = 'mlflow.source.git.branch'
PROP_MLFLOW_GIT_COMMIT = 'mlflow.source.git.commit'
PROP_MLFLOW_GIT_REPO_URL = 'mlflow.source.git.repoURL'
class azure.ai.ml.constants.HttpResponseStatusCode[source]
NOT_FOUND = 404
class azure.ai.ml.constants.IdentityType[source]
BOTH = 'system_assigned,user_assigned'
SYSTEM_ASSIGNED = 'system_assigned'
USER_ASSIGNED = 'user_assigned'
class azure.ai.ml.constants.InputOutputModes[source]
DIRECT = 'direct'
DOWNLOAD = 'download'
EVAL_DOWNLOAD = 'eval_download'
EVAL_MOUNT = 'eval_mount'
MOUNT = 'mount'
RO_MOUNT = 'ro_mount'
RW_MOUNT = 'rw_mount'
UPLOAD = 'upload'
class azure.ai.ml.constants.JobComputePropertyFields[source]
AISUPERCOMPUTER = 'AISuperComputer'
SINGULARITY = 'Singularity'
class azure.ai.ml.constants.JobLimitsType[source]
SWEEP = 'Sweep'
class azure.ai.ml.constants.JobLogPattern[source]
COMMAND_JOB_LOG_PATTERN = 'azureml-logs/[\\d]{2}.+\\.txt'
COMMON_RUNTIME_ALL_USER_LOG_PATTERN = 'user_logs/std_log.*\\.txt'
COMMON_RUNTIME_STREAM_LOG_PATTERN = 'user_logs/std_log[\\D]*[0]*(?:_ps)?\\.txt'
PIPELINE_JOB_LOG_PATTERN = 'logs/azureml/executionlogs\\.txt'
SWEEP_JOB_LOG_PATTERN = 'azureml-logs/hyperdrive\\.txt'
class azure.ai.ml.constants.JobServices[source]
STUDIO = 'Studio'
class azure.ai.ml.constants.JobType[source]
AUTOML = 'automl'
BASE = 'base'
COMMAND = 'command'
COMPONENT = 'component'
PARALLEL = 'parallel'
PIPELINE = 'pipeline'
SWEEP = 'sweep'
class azure.ai.ml.constants.LROConfigurations[source]
MAX_WAIT_COUNT = 400
POLLING_TIMEOUT = 720
POLL_INTERVAL = 5
SLEEP_TIME = 5
class azure.ai.ml.constants.LegacyAssetTypes[source]
PATH = 'path'
class azure.ai.ml.constants.LocalEndpointConstants[source]
AZUREML_APP_PATH = '/var/azureml-app/'
CONDA_ENV_BIN_PATH = '/opt/miniconda/envs/inf-conda-env/bin'
CONDA_ENV_NAME = 'inf-conda-env'
CONDA_ENV_PYTHON_PATH = '/opt/miniconda/envs/inf-conda-env/bin/python'
CONDA_FILE_NAME = 'conda.yml'
CONTAINER_EXITED = 'exited'
DEFAULT_STARTUP_WAIT_TIME_SECONDS = 15
DOCKER_PORT = '5001'
ENDPOINT_STATE_FAILED = 'Failed'
ENDPOINT_STATE_LOCATION = 'local'
ENDPOINT_STATE_SUCCEEDED = 'Succeeded'
ENVVAR_KEY_AML_APP_ROOT = 'AML_APP_ROOT'
ENVVAR_KEY_AZUREML_ENTRY_SCRIPT = 'AZUREML_ENTRY_SCRIPT'
ENVVAR_KEY_AZUREML_INFERENCE_PYTHON_PATH = 'AZUREML_INFERENCE_PYTHON_PATH'
ENVVAR_KEY_AZUREML_MODEL_DIR = 'AZUREML_MODEL_DIR'
LABEL_KEY_AZUREML_LOCAL_ENDPOINT = 'azureml-local-endpoint'
LABEL_KEY_AZUREML_PORT = 'azureml-port'
LABEL_KEY_DEPLOYMENT_JSON = 'deployment-data'
LABEL_KEY_DEPLOYMENT_NAME = 'deployment'
LABEL_KEY_ENDPOINT_JSON = 'endpoint-data'
LABEL_KEY_ENDPOINT_NAME = 'endpoint'
class azure.ai.ml.constants.LoggingLevel[source]
DEBUG = 'DEBUG'
INFO = 'INFO'
WARN = 'WARNING'
class azure.ai.ml.constants.ModelType[source]
CUSTOM = 'CustomModel'
MLFLOW = 'MLFlowModel'
TRITON = 'TritonModel'
class azure.ai.ml.constants.NodeType[source]
AUTOML = 'automl'
COMMAND = 'command'
PARALLEL = 'parallel'
SWEEP = 'sweep'
class azure.ai.ml.constants.OnlineEndpointConfigurations[source]
MAX_NAME_LENGTH = 32
MIN_NAME_LENGTH = 3
NAME_REGEX_PATTERN = '^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$'
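Together these constants define the endpoint naming rule: 3-32 characters, starting with a letter, ending with a letter or digit, hyphens allowed in between. An illustrative validator built from the values above — is_valid_endpoint_name is a hypothetical helper, not part of the SDK:

```python
import re

# Constants copied from OnlineEndpointConfigurations above.
MAX_NAME_LENGTH = 32
MIN_NAME_LENGTH = 3
NAME_REGEX_PATTERN = '^[a-zA-Z]([-a-zA-Z0-9]*[a-zA-Z0-9])?$'

def is_valid_endpoint_name(name: str) -> bool:
    """Hypothetical client-side check mirroring the naming rule."""
    if not MIN_NAME_LENGTH <= len(name) <= MAX_NAME_LENGTH:
        return False
    return re.match(NAME_REGEX_PATTERN, name) is not None

ok = is_valid_endpoint_name('my-endpoint-1')
bad = is_valid_endpoint_name('1-bad-name')   # must start with a letter
```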
class azure.ai.ml.constants.OperationStatus[source]
CANCELED = 'Canceled'
FAILED = 'Failed'
RUNNING = 'Running'
SUCCEEDED = 'Succeeded'
class azure.ai.ml.constants.OrderString[source]
CREATED_AT = 'createdtime asc'
CREATED_AT_DESC = 'createdtime desc'
class azure.ai.ml.constants.ParallelTaskType[source]
FUNCTION = 'function'
MODEL = 'model'
class azure.ai.ml.constants.PipelineConstants[source]
CODE = 'code'
CONTINUE_ON_STEP_FAILURE = 'continue_on_step_failure'
DATASTORE_REST = 'Datastore'
DEFAULT_COMPUTE = 'default_compute'
DEFAULT_DATASTORE = 'default_datastore'
DEFAULT_DATASTORE_REST = 'defaultDatastoreName'
DEFAULT_DATASTORE_SDK = 'default_datastore_name'
ENVIRONMENT = 'environment'
REUSED_FLAG_FIELD = 'azureml.isreused'
REUSED_FLAG_TRUE = 'true'
REUSED_JOB_ID = 'azureml.reusedrunid'
class azure.ai.ml.constants.PublicNetworkAccess[source]
DISABLED = 'Disabled'
ENABLED = 'Enabled'
class azure.ai.ml.constants.SearchSpace[source]
CHOICE = 'choice'
LOGNORMAL = 'lognormal'
LOGUNIFORM = 'loguniform'
NORMAL = 'normal'
NORMAL_LOGNORMAL = ['normal', 'lognormal']
QLOGNORMAL = 'qlognormal'
QLOGUNIFORM = 'qloguniform'
QNORMAL = 'qnormal'
QNORMAL_QLOGNORMAL = ['qnormal', 'qlognormal']
QUNIFORM = 'quniform'
QUNIFORM_QLOGUNIFORM = ['quniform', 'qloguniform']
RANDINT = 'randint'
UNIFORM = 'uniform'
UNIFORM_LOGUNIFORM = ['uniform', 'loguniform']
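The SearchSpace strings name the sampling distributions a sweep job accepts; the Q* variants quantize and the LOG* variants sample in log space. A toy sampler covering two of the distributions, with the constant values inlined so the sketch runs standalone — the real sampling happens service-side, so this is purely illustrative:

```python
import random

# String values copied from SearchSpace above.
CHOICE = 'choice'
UNIFORM = 'uniform'

def sample(dist: str, *args, rng: random.Random):
    """Toy sampler for two of the SearchSpace distributions."""
    if dist == CHOICE:
        return rng.choice(args[0])        # args[0] is the candidate list
    if dist == UNIFORM:
        low, high = args
        return rng.uniform(low, high)
    raise ValueError(f'unsupported distribution: {dist}')

rng = random.Random(0)
lr = sample(UNIFORM, 0.001, 0.1, rng=rng)
batch = sample(CHOICE, [16, 32, 64], rng=rng)
```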
class azure.ai.ml.constants.TimeZone(value)[source]

Time zones that a job or compute instance schedule accepts.

AFGHANISTANA_STANDARD_TIME = 'Afghanistan Standard Time'
ALASKAN_STANDARD_TIME = 'Alaskan Standard Time'
ALEUTIAN_STANDARD_TIME = 'Aleutian Standard Time'
ALTAI_STANDARD_TIME = 'Altai Standard Time'
ARABIAN_STANDARD_TIME = 'Arabian Standard Time'
ARABIC_STANDARD_TIME = 'Arabic Standard Time'
ARAB_STANDARD_TIME = 'Arab Standard Time'
ARGENTINA_STANDARD_TIME = 'Argentina Standard Time'
ASTRAKHAN_STANDARD_TIME = 'Astrakhan Standard Time'
ATLANTIC_STANDARD_TIME = 'Atlantic Standard Time'
AUS_CENTRAL_STANDARD_TIME = 'AUS Central Standard Time'
AUS_CENTRAL_W_STANDARD_TIME = 'Aus Central W. Standard Time'
AUS_EASTERN_STANDARD_TIME = 'AUS Eastern Standard Time'
AZERBAIJAN_STANDARD_TIME = 'Azerbaijan Standard Time'
AZORES_STANDARD_TIME = 'Azores Standard Time'
BAHIA_STANDARD_TIME = 'Bahia Standard Time'
BANGLADESH_STANDARD_TIME = 'Bangladesh Standard Time'
BELARUS_STANDARD_TIME = 'Belarus Standard Time'
BOUGAINVILLE_STANDARD_TIME = 'Bougainville Standard Time'
CANADA_CENTRAL_STANDARD_TIME = 'Canada Central Standard Time'
CAPE_VERDE_STANDARD_TIME = 'Cape Verde Standard Time'
CAUCASUS_STANDARD_TIME = 'Caucasus Standard Time'
CENTRAL_AMERICA_STANDARD_TIME = 'Central America Standard Time'
CENTRAL_ASIA_STANDARD_TIME = 'Central Asia Standard Time'
CENTRAL_BRAZILIAN_STANDARD_TIME = 'Central Brazilian Standard Time'
CENTRAL_EUROPEAN_STANDARD_TIME = 'Central European Standard Time'
CENTRAL_EUROPE_STANDARD_TIME = 'Central Europe Standard Time'
CENTRAL_PACIFIC_STANDARD_TIME = 'Central Pacific Standard Time'
CENTRAL_STANDARD_TIME = 'Central Standard Time'
CENTRAL_STANDARD_TIME_MEXICO = 'Central Standard Time (Mexico)'
CEN_AUSTRALIA_STANDARD_TIME = 'Cen. Australia Standard Time'
CHATHAM_ISLANDS_STANDARD_TIME = 'Chatham Islands Standard Time'
CHINA_STANDARD_TIME = 'China Standard Time'
CUBA_STANDARD_TIME = 'Cuba Standard Time'
DATELINE_STANDARD_TIME = 'Dateline Standard Time'
EASTERN_STANDARD_TIME = 'Eastern Standard Time'
EASTERN_STANDARD_TIME_MEXICO = 'Eastern Standard Time (Mexico)'
EASTER_ISLAND_STANDARD_TIME = 'Easter Island Standard Time'
EGYPT_STANDARD_TIME = 'Egypt Standard Time'
EKATERINBURG_STANDARD_TIME = 'Ekaterinburg Standard Time'
E_AFRICA_STANDARD_TIME = 'E. Africa Standard Time'
E_AUSTRALIAN_STANDARD_TIME = 'E. Australia Standard Time'
E_EUROPE_STANDARD_TIME = 'E. Europe Standard Time'
E_SOUTH_AMERICAN_STANDARD_TIME = 'E. South America Standard Time'
FIJI_STANDARD_TIME = 'Fiji Standard Time'
FLE_STANDARD_TIME = 'FLE Standard Time'
GEORGIAN_STANDARD_TIME = 'Georgian Standard Time'
GMT_STANDARD_TIME = 'GMT Standard Time'
GREENLAND_STANDARD_TIME = 'Greenland Standard Time'
GREENWICH_STANDARD_TIME = 'Greenwich Standard Time'
GTB_STANDARD_TIME = 'GTB Standard Time'
HAITI_STANDARD_TIME = 'Haiti Standard Time'
HAWAIIAN_STANDARD_TIME = 'Hawaiian Standard Time'
INDIA_STANDARD_TIME = 'India Standard Time'
IRAN_STANDARD_TIME = 'Iran Standard Time'
ISRAEL_STANDARD_TIME = 'Israel Standard Time'
JORDAN_STANDARD_TIME = 'Jordan Standard Time'
KALININGRAD_STANDARD_TIME = 'Kaliningrad Standard Time'
KAMCHATKA_STANDARD_TIME = 'Kamchatka Standard Time'
KOREA_STANDARD_TIME = 'Korea Standard Time'
LIBYA_STANDARD_TIME = 'Libya Standard Time'
LINE_ISLANDS_STANDARD_TIME = 'Line Islands Standard Time'
LORDE_HOWE_STANDARD_TIME = 'Lord Howe Standard Time'
MAGADAN_STANDARD_TIME = 'Magadan Standard Time'
MARQUESAS_STANDARD_TIME = 'Marquesas Standard Time'
MAURITIUS_STANDARD_TIME = 'Mauritius Standard Time'
MIDDLE_EAST_STANDARD_TIME = 'Middle East Standard Time'
MID_ATLANTIC_STANDARD_TIME = 'Mid-Atlantic Standard Time'
MONTEVIDEO_STANDARD_TIME = 'Montevideo Standard Time'
MOROCCO_STANDARD_TIME = 'Morocco Standard Time'
MOUNTAIN_STANDARD_TIME = 'Mountain Standard Time'
MOUNTAIN_STANDARD_TIME_MEXICO = 'Mountain Standard Time (Mexico)'
MYANMAR_STANDARD_TIME = 'Myanmar Standard Time'
NAMIBIA_STANDARD_TIME = 'Namibia Standard Time'
NEPAL_STANDARD_TIME = 'Nepal Standard Time'
NEWFOUNDLAND_STANDARD_TIME = 'Newfoundland Standard Time'
NEW_ZEALAND_STANDARD_TIME = 'New Zealand Standard Time'
NORFOLK_STANDARD_TIME = 'Norfolk Standard Time'
NORTH_ASIA_EAST_STANDARD_TIME = 'North Asia East Standard Time'
NORTH_ASIA_STANDARD_TIME = 'North Asia Standard Time'
NORTH_KOREA_STANDARD_TIME = 'North Korea Standard Time'
N_CENTRAL_ASIA_STANDARD_TIME = 'N. Central Asia Standard Time'
PACIFIC_SA_STANDARD_TIME = 'Pacific SA Standard Time'
PACIFIC_STANDARD_TIME = 'Pacific Standard Time'
PACIFIC_STANDARD_TIME_MEXICO = 'Pacific Standard Time (Mexico)'
PAKISTAN_STANDARD_TIME = 'Pakistan Standard Time'
PARAGUAY_STANDARD_TIME = 'Paraguay Standard Time'
ROMANCE_STANDARD_TIME = 'Romance Standard Time'
RUSSIAN_STANDARD_TIME = 'Russian Standard Time'
RUSSIA_TIME_ZONE_10 = 'Russia Time Zone 10'
RUSSIA_TIME_ZONE_11 = 'Russia Time Zone 11'
RUSSIA_TIME_ZONE_3 = 'Russia Time Zone 3'
SAINT_PIERRE_STANDARD_TIME = 'Saint Pierre Standard Time'
SAKHALIN_STANDARD_TIME = 'Sakhalin Standard Time'
SAMOA_STANDARD_TIME = 'Samoa Standard Time'
SA_EASTERN_STANDARD_TIME = 'SA Eastern Standard Time'
SA_PACIFIC_STANDARD_TIME = 'SA Pacific Standard Time'
SA_WESTERN_STANDARD_TIME = 'SA Western Standard Time'
SE_ASIA_STANDARD_TIME = 'SE Asia Standard Time'
SINGAPORE_STANDARD_TIME = 'Singapore Standard Time'
SOUTH_AFRICA_STANDARD_TIME = 'South Africa Standard Time'
SRI_LANKA_STANDARD_TIME = 'Sri Lanka Standard Time'
SYRIA_STANDARD_TIME = 'Syria Standard Time'
TAIPEI_STANDARD_TIME = 'Taipei Standard Time'
TASMANIA_STANDARD_TIME = 'Tasmania Standard Time'
TOCANTINS_STANDARD_TIME = 'Tocantins Standard Time'
TOKYO_STANDARD_TIME = 'Tokyo Standard Time'
TOMSK_STANDARD_TIME = 'Tomsk Standard Time'
TONGA__STANDARD_TIME = 'Tonga Standard Time'
TRANSBAIKAL_STANDARD_TIME = 'Transbaikal Standard Time'
TURKEY_STANDARD_TIME = 'Turkey Standard Time'
TURKS_AND_CAICOS_STANDARD_TIME = 'Turks And Caicos Standard Time'
ULAANBAATAR_STANDARD_TIME = 'Ulaanbaatar Standard Time'
US_EASTERN_STANDARD_TIME = 'US Eastern Standard Time'
US_MOUNTAIN_STANDARD_TIME = 'US Mountain Standard Time'
UTC = 'UTC'
UTC_02 = 'UTC-02'
UTC_08 = 'UTC-08'
UTC_09 = 'UTC-09'
UTC_11 = 'UTC-11'
UTC_12 = 'UTC+12'
VLADIVOSTOK_STANDARD_TIME = 'Vladivostok Standard Time'
VenezuelaStandardTime = 'Venezuela Standard Time'
WEST_ASIA_STANDARD_TIME = 'West Asia Standard Time'
WEST_BANK_STANDARD_TIME = 'West Bank Standard Time'
WEST_PACIFIC_STANDARD_TIME = 'West Pacific Standard Time'
W_AUSTRALIA_STANDARD_TIME = 'W. Australia Standard Time'
W_CENTEAL_AFRICA_STANDARD_TIME = 'W. Central Africa Standard Time'
W_EUROPE_STANDARD_TIME = 'W. Europe Standard Time'
W_MONGOLIA_STANDARD_TIME = 'W. Mongolia Standard Time'
YAKUTSK_STANDARD_TIME = 'Yakutsk Standard Time'
class azure.ai.ml.constants.WorkspaceResourceConstants[source]
ENCRYPTION_STATUS_ENABLED = 'Enabled'
class azure.ai.ml.constants.YAMLRefDocLinks[source]
AML_COMPUTE = 'https://aka.ms/ml-cli-v2-compute-aml-yaml-reference'
BATCH_DEPLOYMENT = 'https://aka.ms/ml-cli-v2-deployment-batch-yaml-reference'
BATCH_ENDPOINT = 'https://aka.ms/ml-cli-v2-endpoint-batch-yaml-reference'
COMMAND_COMPONENT = 'https://aka.ms/ml-cli-v2-component-command-yaml-reference'
COMMAND_JOB = 'https://aka.ms/ml-cli-v2-job-command-yaml-reference'
COMPUTE_INSTANCE = 'https://aka.ms/ml-cli-v2-compute-aml-yaml-reference'
DATA = 'https://aka.ms/ml-cli-v2-data-yaml-reference'
DATASET = 'https://aka.ms/ml-cli-v2-dataset-yaml-reference'
DATASTORE_BLOB = 'https://aka.ms/ml-cli-v2-datastore-blob-yaml-reference'
DATASTORE_DATA_LAKE_GEN_1 = 'https://aka.ms/ml-cli-v2-datastore-data-lake-gen1-yaml-reference'
DATASTORE_DATA_LAKE_GEN_2 = 'https://aka.ms/ml-cli-v2-datastore-data-lake-gen2-yaml-reference'
DATASTORE_FILE = 'https://aka.ms/ml-cli-v2-datastore-file-yaml-reference'
ENVIRONMENT = 'https://aka.ms/ml-cli-v2-environment-yaml-reference'
KUBERNETES_ONLINE_DEPLOYMENT = 'https://aka.ms/ml-cli-v2-deployment-kubernetes-online-yaml-reference'
MANAGED_ONLINE_DEPLOYMENT = 'https://aka.ms/ml-cli-v2-deployment-managed-online-yaml-reference'
MODEL = 'https://aka.ms/ml-cli-v2-model-yaml-reference'
ONLINE_ENDPOINT = 'https://aka.ms/ml-cli-v2-endpoint-online-yaml-reference'
PARALLEL_COMPONENT = 'https://aka.ms/ml-cli-v2-component-parallel-yaml-reference'
PARALLEL_JOB = 'https://aka.ms/ml-cli-v2-job-parallel-yaml-reference'
PIPELINE_JOB = 'https://aka.ms/ml-cli-v2-job-pipeline-yaml-reference'
SWEEP_JOB = 'https://aka.ms/ml-cli-v2-job-sweep-yaml-reference'
VIRTUAL_MACHINE_COMPUTE = 'https://aka.ms/ml-cli-v2-compute-vm-yaml-reference'
WORKSPACE = 'https://aka.ms/ml-cli-v2-workspace-yaml-reference'
class azure.ai.ml.constants.YAMLRefDocSchemaNames[source]
AML_COMPUTE = 'AMLCompute'
BATCH_DEPLOYMENT = 'BatchDeployment'
BATCH_ENDPOINT = 'BatchEndpoint'
COMMAND_COMPONENT = 'CommandComponent'
COMMAND_JOB = 'CommandJob'
COMPUTE_INSTANCE = 'ComputeInstance'
DATA = 'Data'
DATASET = 'Dataset'
DATASTORE_BLOB = 'AzureBlobDatastore'
DATASTORE_DATA_LAKE_GEN_1 = 'AzureDataLakeGen1Datastore'
DATASTORE_DATA_LAKE_GEN_2 = 'AzureDataLakeGen2Datastore'
DATASTORE_FILE = 'AzureFileDatastore'
ENVIRONMENT = 'Environment'
KUBERNETES_ONLINE_DEPLOYMENT = 'KubernetesOnlineDeployment'
MANAGED_ONLINE_DEPLOYMENT = 'ManagedOnlineDeployment'
MODEL = 'Model'
ONLINE_ENDPOINT = 'OnlineEndpoint'
PARALLEL_COMPONENT = 'ParallelComponent'
PARALLEL_JOB = 'ParallelJob'
PIPELINE_JOB = 'PipelineJob'
SWEEP_JOB = 'SweepJob'
VIRTUAL_MACHINE_COMPUTE = 'VirtualMachineCompute'
WORKSPACE = 'Workspace'

azure.ai.ml.template_code module

azure.ai.ml.template_code.template_main()[source]