azure.ai.ml.dsl package

azure.ai.ml.dsl.pipeline(*, name: Optional[str] = None, version: Optional[str] = None, display_name: Optional[str] = None, description: Optional[str] = None, experiment_name: Optional[str] = None, tags: Optional[Dict[str, str]] = None, continue_on_step_failure: Optional[bool] = None, **kwargs)[source]

Build a pipeline that contains all component nodes defined in this function. Currently, only single-layer pipelines are supported.

Note

The following pseudo-code shows how to create a pipeline using this decorator.

from azure.ai.ml import Input
from azure.ai.ml.dsl import pipeline

# Define a pipeline with the decorator
@pipeline(name='sample_pipeline', description='pipeline description')
def sample_pipeline_func(pipeline_input, pipeline_str_param):
    # component1 and component2 will be added to the current pipeline
    component1 = component1_func(input1=pipeline_input, param1='literal')
    component2 = component2_func(input1=component1.outputs.output1, param1=pipeline_str_param)
    # A decorated pipeline function needs to return outputs.
    # In this case, the pipeline has two outputs: component1's output1 and component2's output1,
    # renamed here to 'pipeline_output1' and 'pipeline_output2'.
    return {
        'pipeline_output1': component1.outputs.output1,
        'pipeline_output2': component2.outputs.output1
    }

# This call returns a pipeline job with nodes=[component1, component2].
pipeline_job = sample_pipeline_func(
    pipeline_input=Input(type='uri_folder', path='./local-data'),
    pipeline_str_param='literal'
)
ml_client.jobs.create_or_update(pipeline_job, experiment_name="pipeline_samples")
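
The component functions referenced above (component1_func and component2_func) and the ml_client used for submission are not defined in this snippet. A minimal sketch of one way to obtain them, assuming local component YAML files and placeholder workspace details (all paths and identifiers below are hypothetical):

from azure.ai.ml import MLClient, load_component
from azure.identity import DefaultAzureCredential

# Load component functions from local YAML definitions (hypothetical files).
component1_func = load_component(source="./component1.yml")
component2_func = load_component(source="./component2.yml")

# Build an MLClient for job submission; workspace details are placeholders.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)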
Parameters
  • name (str) – The name of the pipeline component. Defaults to the function name.

  • version (str) – The version of the pipeline component. Defaults to "1".

  • display_name (str) – The display name of the pipeline component. Defaults to the function name.

  • description (str) – The description of the built pipeline.

  • experiment_name (str) – The name of the experiment the job will be created under. If None is provided, the experiment will be set to the current directory name.

  • tags (dict[str, str]) – The tags of the pipeline component.

  • continue_on_step_failure (bool) – Flag indicating whether to continue pipeline execution if a step fails.

  • kwargs (dict) – A dictionary of additional configuration parameters.
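
As an illustration of how these parameters fit together, the following sketch decorates a pipeline function with explicit name, version, display_name, description, experiment_name, tags, and continue_on_step_failure values; all values are placeholders chosen for illustration, and component1_func is assumed to be a loaded component function as in the example above:

from azure.ai.ml.dsl import pipeline

# Illustrative use of the decorator parameters; every value here is a placeholder.
@pipeline(
    name="sample_pipeline",
    version="1",
    display_name="Sample pipeline",
    description="A pipeline illustrating the decorator parameters.",
    experiment_name="pipeline_samples",
    tags={"team": "demo", "stage": "dev"},
    continue_on_step_failure=True,
)
def parameterized_pipeline_func(pipeline_input):
    # component1_func is assumed to be a loaded component function.
    component1 = component1_func(input1=pipeline_input, param1="literal")
    return {"pipeline_output1": component1.outputs.output1}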