Is your feature request related to a problem? Please describe
KFP Operator supports Pipeline Dependencies, which allows users to split larger machine learning pipelines into sub-pipelines that can then be reused by multiple dependent pipelines. For example, a "data creation" pipeline could ingest and transform a dataset ready for training, and multiple other pipelines could then reuse the dataset it produces. This avoids repeating work and reduces the duration of the dependent pipelines.
The KFP Operator also provides Run Completion Events, which let users react to pipeline events and include details about artefacts created by training pipelines.
At the moment, Run Completion Events do not include any detail about whether a run refers to a sub-pipeline or a larger dependent pipeline. Client components that react to events have to either know what each pipeline does, or assume that all pipelines are part of a larger dependent pipeline. Some clients might only want to react to events from larger dependent pipelines, rather than sub-pipelines. For example, a client might continually serve a model produced by a training pipeline by reacting to events whenever a new model is pushed. Such a client will want to ignore sub-pipelines that do not push a serving model, but at the moment it needs to know which pipelines are sub-pipelines and which are not.
Describe the solution you would like
Run Completion Events should be populated with a field describing whether the run was for a sub-pipeline or a larger dependent pipeline.
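As an illustration only, the event payload might gain a `pipelineType` (or similarly named) field. The sketch below is a minimal Go representation of such an event; everything apart from the proposed field simply mirrors the kind of detail the issue says events already carry, and all names are assumptions rather than the operator's actual schema:

```go
// Hypothetical sketch of a run completion event carrying the proposed field.
// Names other than PipelineType are illustrative, not the operator's schema.
package events

// PipelineType distinguishes sub-pipelines from larger dependent pipelines.
type PipelineType string

const (
	PipelineTypeSubPipeline PipelineType = "sub-pipeline"
	PipelineTypeDependent   PipelineType = "dependent"
)

// Artifact is a minimal stand-in for the artefact details already present in events.
type Artifact struct {
	Name     string `json:"name"`
	Location string `json:"location"`
}

// RunCompletionEvent is an illustrative event shape; only PipelineType is new.
type RunCompletionEvent struct {
	Status                string       `json:"status"`
	PipelineName          string       `json:"pipelineName"`
	RunConfigurationName  string       `json:"runConfigurationName,omitempty"`
	ServingModelArtifacts []Artifact   `json:"servingModelArtifacts,omitempty"`
	PipelineType          PipelineType `json:"pipelineType"` // proposed new field
}
```

A boolean such as `isSubPipeline` would work equally well; an enum-style value just leaves room for other pipeline types later.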
Describe alternative solutions you have considered
There might be a programmatic way to determine whether a run was part of a sub-pipeline or not, which should be investigated.
Acceptance Criteria
Run Completion Events contain details about the type of pipeline the run derives from
Additional context
Can a sub-pipeline be detected automatically, e.g. by traversing the runConfigurations field from triggers? (See the sketch after these questions.)
Should we add the ability to attach a type to artifacts, e.g. servedModel, dataset, etc.? This might also allow us to detect a sub-pipeline.
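If the runConfigurations trigger field is the detection mechanism, the check could look something like the following. This is a sketch only: the RunConfig type and its fields are simplified stand-ins for the operator's resources, not its actual API, and the rule assumed here is "a pipeline is a sub-pipeline if another RunConfiguration is triggered by its completion":

```go
// Hypothetical detection sketch based on the runConfigurations trigger field.
package detection

// RunConfig is a simplified stand-in for a RunConfiguration resource.
type RunConfig struct {
	Name     string
	Pipeline string
	// Names of other RunConfigurations whose completion triggers this one,
	// mirroring the runConfigurations field from triggers mentioned above.
	TriggeringRunConfigurations []string
}

// IsSubPipeline reports whether the RunConfiguration named runConfigName is
// referenced as a dependency (i.e. a trigger source) by any other RunConfiguration.
func IsSubPipeline(runConfigName string, all []RunConfig) bool {
	for _, rc := range all {
		for _, dep := range rc.TriggeringRunConfigurations {
			if dep == runConfigName {
				return true
			}
		}
	}
	return false
}
```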