diff --git "a/how-to-guides.txt" "b/how-to-guides.txt" --- "a/how-to-guides.txt" +++ "b/how-to-guides.txt" @@ -1,19 +1,17 @@ === File: docs/book/introduction.md === -# ZenML Documentation Summary +# ZenML Overview -**ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines. It decouples infrastructure from code, facilitating collaboration among developers. +**ZenML** is an open-source MLOps framework designed for building portable, production-ready machine learning pipelines. It separates infrastructure from code, facilitating collaboration among developers. -## Key Features - -### For MLOps Platform Engineers -- **ZenML Pro**: Offers a control plane for managed ZenML instances, including features like CI/CD and RBAC. -- **Self-hosted Deployment**: Deploy on any cloud provider using Terraform. +## For MLOps Platform Engineers +- **ZenML Pro**: Offers a managed instance with features like CI/CD, Model Control Plane, and RBAC. +- **Self-hosted Deployment**: Deploy on any cloud provider using Terraform utilities. ```bash zenml stack register <STACK_NAME> --provider aws zenml stack deploy --provider gcp ``` -- **Standardization**: Standardize MLOps tools across your organization by registering environments as ZenML stacks. +- **Standardization**: Register environments as ZenML stacks for consistent ML workflows. ```bash zenml orchestrator register kfp_orchestrator -f kubeflow zenml stack register production --orchestrator kubeflow ... @@ -23,17 +21,17 @@ zenml stack set gcp python run.py # Run in GCP zenml stack set aws - python run.py # Now in AWS + python run.py # Run in AWS ``` -### For Data Scientists +## For Data Scientists - **Local Development**: Develop models locally and switch to production seamlessly. ```bash python run.py # Local development zenml stack set production python run.py # Production run ``` -- **Pythonic SDK**: Use decorators to create pipelines. +- **Pythonic SDK**: Use decorators to create ZenML pipelines. ```python from zenml import pipeline, step @@ -43,25 +41,25 @@ @step def step_2(input_one: str, input_two: str) -> None: - print(input_one + ' ' + input_two) + print(f"{input_one} {input_two}") @pipeline def my_pipeline(): step_2(input_one="hello", input_two=step_1()) - + my_pipeline() ``` -- **Automatic Metadata Tracking**: Tracks metadata of runs and versions datasets/models. +- **Automatic Metadata Tracking**: ZenML tracks metadata and versions datasets and models. -### For ML Engineers -- **ML Lifecycle Management**: Manage ML workflows and environments easily. +## For ML Engineers +- **ML Lifecycle Management**: Manage ML workflows and environments efficiently. ```bash zenml stack set staging python run.py # Test on staging zenml stack set production python run.py # Run in production ``` -- **Reproducibility**: Automatically tracks and versions all components. +- **Reproducibility**: Automatically track and version stacks, pipelines, and artifacts. - **Automated Deployments**: Define workflows as ZenML pipelines for easy deployment. 
```python from zenml.integrations.seldon.steps import seldon_model_deployer_step @@ -74,11 +72,9 @@ ``` ## Additional Resources -- **For MLOps Engineers**: [ZenML Pro](getting-started/zenml-pro/README.md), [Cloud Orchestration](user-guide/production-guide/cloud-orchestration.md) -- **For Data Scientists**: [Core Concepts](getting-started/core-concepts.md), [Starter Guide](user-guide/starter-guide/) -- **For ML Engineers**: [How To](./how-to/pipeline-development/build-pipelines/README.md), [Examples](https://github.com/zenml-io/zenml-projects) - -Explore more at [ZenML Live Demo](https://www.zenml.io/live-demo). +- **For MLOps Engineers**: [Production Guide](user-guide/production-guide/cloud-orchestration.md), [Component Guide](./component-guide/README.md), [FAQ](reference/faq.md). +- **For Data Scientists**: [Core Concepts](getting-started/core-concepts.md), [Starter Guide](user-guide/starter-guide/), [Quickstart in Colab](https://colab.research.google.com/github/zenml-io/zenml/blob/main/examples/quickstart/notebooks/quickstart.ipynb). +- **For ML Engineers**: [Starter Guide](user-guide/starter-guide/), [How To](./how-to/pipeline-development/build-pipelines/README.md), [Examples](https://github.com/zenml-io/zenml-projects). ================================================== @@ -86,78 +82,59 @@ Explore more at [ZenML Live Demo](https://www.zenml.io/live-demo). # Overview of ZenML MLOps Components and Integrations -ZenML categorizes MLOps tools into stack components to streamline understanding and implementation in your pipeline. These stack components serve specific functions and standardize workflows. Key components include: - -| **Type of Stack Component** | **Description** | -|-----------------------------|------------------| -| [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | -| [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | -| [Container Registry](container-registries/container-registries.md) | Stores container images | -| [Data Validator](data-validators/data-validators.md) | Validates data and models | -| [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | -| [Model Deployer](model-deployers/model-deployers.md) | Manages online model serving | -| [Step Operator](step-operators/step-operators.md) | Executes pipeline steps in specific environments | -| [Alerter](alerters/alerters.md) | Sends alerts via specified channels | -| [Image Builder](image-builders/image-builders.md) | Builds container images | -| [Annotator](annotators/annotators.md) | Labels and annotates data | -| [Model Registry](model-registries/model-registries.md) | Manages ML models | -| [Feature Store](feature-stores/feature-stores.md) | Manages data/features | - -Each ZenML pipeline requires at least an orchestrator and an artifact store; other components are optional as the pipeline matures. +ZenML categorizes MLOps tools into stack components to streamline understanding and usage in ML pipelines. These stack components standardize workflows and can be implemented through custom components or existing integrations. -## Custom Component Flavors +## Stack Components +ZenML supports the following stack components, each serving a specific role in the MLOps process: -You can create custom components by writing your own component flavors. Refer to the [guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for more details. 
+| **Component Type** | **Description** | +|--------------------------|----------------------------------------------------| +| [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | +| [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts created by pipelines | +| [Container Registry](container-registries/container-registries.md) | Stores container images | +| [Data Validator](data-validators/data-validators.md) | Validates data and models | +| [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | +| [Model Deployer](model-deployers/model-deployers.md) | Handles online model serving | +| [Step Operator](step-operators/step-operators.md) | Executes steps in specialized environments | +| [Alerter](alerters/alerters.md) | Sends alerts through specified channels | +| [Image Builder](image-builders/image-builders.md) | Builds container images | +| [Annotator](annotators/annotators.md) | Labels and annotates data | +| [Model Registry](model-registries/model-registries.md) | Manages ML models | +| [Feature Store](feature-stores/feature-stores.md) | Manages data/features | -## Integrations +Each ZenML pipeline requires at least an orchestrator and an artifact store, while other components are optional based on MLOps maturity. -ZenML enhances MLOps pipelines by integrating with various tools, allowing flexibility and avoiding vendor lock-in. Examples include: +## Custom Component Flavors +Users can create custom components by writing their own component flavors. For guidance, refer to the [general guide](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific component guides. -- Orchestrators: [Airflow](orchestrators/airflow.md), [Kubeflow](orchestrators/kubeflow.md) -- Experiment Trackers: [MLflow Tracking](experiment-trackers/mlflow.md), [Weights & Biases](experiment-trackers/wandb.md) -- Model Deployment: [MLflow](model-deployers/mlflow.md), [Seldon Core](model-deployers/seldon.md) +## Integrations +ZenML enhances MLOps processes by integrating with various tools, allowing flexibility and reducing vendor lock-in. Examples include: -ZenML consolidates MLOps tools, enabling easy transitions between them as requirements change. +- **Orchestrators**: [Airflow](orchestrators/airflow.md), [Kubeflow](orchestrators/kubeflow.md) +- **Experiment Trackers**: [MLflow Tracking](experiment-trackers/mlflow.md), [Weights & Biases](experiment-trackers/wandb.md) +- **Model Deployers**: Transition from local [MLflow](model-deployers/mlflow.md) to [Seldon Core](model-deployers/seldon.md) on Kubernetes. ### Available Integrations +A comprehensive list of supported integrations can be found on the [ZenML integrations page](https://zenml.io/integrations) or in the [GitHub integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). -A comprehensive list of supported ZenML integrations can be found on the [integrations webpage](https://zenml.io/integrations) or in the [GitHub integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). - -### Installing ZenML Integrations - +### Installing Integrations To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` +This command installs preferred versions via pip. 
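As a quick sanity check after installing, the CLI can also report which integrations are available and whether they are installed (a minimal sketch; output format varies by ZenML version):

```bash
# Show all available integrations and their installation status
zenml integration list

# See the full set of integration subcommands
zenml integration --help
```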
-This command installs preferred versions via pip: - -```bash -pip install kubeflow==<PREFERRED_VERSION> mlflow==<PREFERRED_VERSION> seldon==<PREFERRED_VERSION> -``` - -The `-y` flag confirms installations without prompts. Use `zenml integration --help` for a complete list of CLI commands. - -#### Using `uv` for Package Installation - -You can use [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: - -```bash -zenml integration install --uv kubeflow mlflow seldon -``` - -### Upgrading ZenML Integrations - -Upgrade all integrations to their latest versions with: +### Upgrade Integrations +To upgrade integrations, use: ```bash zenml integration upgrade mlflow pytorch -y ``` +This command upgrades specified integrations or all installed ones if none are specified. ### Community Contributions - -ZenML welcomes contributions for new integrations. Check the [roadmap](https://zenml.io/roadmap) and refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for more details. +ZenML welcomes community contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and the [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more details. ================================================== @@ -165,51 +142,45 @@ ZenML welcomes contributions for new integrations. Check the [roadmap](https://z ### Overview of ZenML Integrations -ZenML enhances MLOps pipelines by integrating with various tools across different categories, allowing for streamlined ML workflows. Users can orchestrate pipelines with tools like [Airflow](orchestrators/airflow.md) or [Kubeflow](orchestrators/kubeflow.md), track experiments using [MLflow Tracking](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md), and deploy models on Kubernetes with [Seldon Core](model-deployers/seldon.md). ZenML facilitates management of MLOps tools in one place, enabling flexibility and avoiding vendor lock-in. +ZenML enhances MLOps pipelines by integrating with various tools across different categories, allowing for streamlined ML workflows. Users can orchestrate pipelines with tools like [Airflow](orchestrators/airflow.md) or [Kubeflow](orchestrators/kubeflow.md), track experiments using [MLflow Tracking](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md), and deploy models on Kubernetes with [Seldon Core](model-deployers/seldon.md). ZenML provides flexibility with no vendor lock-in, enabling easy tool transitions as requirements evolve. ### Available Integrations -A comprehensive list of supported ZenML integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) or in the [integrations directory on GitHub](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). +A comprehensive list of supported ZenML integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) or in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations) on GitHub. 
### Installing ZenML Integrations -To install integrations, use the command: +To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` -This command installs the preferred versions of the integrations via pip: +This command installs the preferred versions via pip: ```bash pip install kubeflow==<PREFERRED_VERSION> mlflow==<PREFERRED_VERSION> seldon==<PREFERRED_VERSION> ``` -The `-y` flag confirms all installations without prompts. You can view available CLI commands with `zenml integration --help`. +The `-y` flag auto-confirms installation prompts. For a complete list of CLI commands, run `zenml integration --help`. Direct installation of dependencies is possible, but compatibility with ZenML is not guaranteed. ### Using `uv` for Package Installation -You can opt to use [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: - -```bash -zenml integration install --uv kubeflow mlflow seldon -``` - -Ensure `uv` is installed, as this is an experimental feature. +You can utilize [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag to the installation command. Ensure `uv` is installed, as this is an experimental feature. More details on using `uv` with PyTorch are available in the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). ### Upgrading ZenML Integrations -To upgrade integrations to their latest versions, use: +Upgrade all integrations to their latest versions with: ```bash zenml integration upgrade mlflow pytorch -y ``` -The `-y` flag confirms upgrades without prompts. If no integrations are specified, all installed integrations will be upgraded. +The `-y` flag confirms upgrade prompts, and if no integrations are specified, all installed integrations will be upgraded. ### Community Contributions -ZenML encourages community contributions for new integrations. Check the public [roadmap](https://zenml.io/roadmap) for prioritized tools and refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for details on contributing. +ZenML is open to community contributions for new integrations. Check the public [roadmap](https://zenml.io/roadmap) for prioritized tools and refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more information on contributing. ================================================== @@ -217,46 +188,46 @@ ZenML encourages community contributions for new integrations. Check the public # Overview of MLOps Components -MLOps can be overwhelming due to the multitude of tools available. ZenML categorizes these tools into **Stacks and Stack Components** to clarify their roles in the MLOps pipeline. Each stack component serves a specific function and standardizes the workflow for teams. Users can implement custom stack components or utilize built-in integrations. +MLOps can be overwhelming due to the multitude of tools available. ZenML categorizes these tools into **Stacks and Stack Components** to clarify their roles in your MLOps pipeline. These components standardize workflows and can be implemented through custom solutions or built-in integrations. 
## Supported Stack Components -| **Type** | **Description** | -|-------------------------|---------------------------------------------------------| -| [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | -| [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts created by pipelines | -| [Container Registry](./container-registries/container-registries.md) | Stores container images | -| [Step Operator](./step-operators/step-operators.md) | Executes individual steps in runtime environments | -| [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | -| [Feature Store](./feature-stores/feature-stores.md) | Manages data/features | -| [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | -| [Alerter](./alerters/alerters.md) | Sends alerts through specified channels | -| [Annotator](./annotators/annotators.md) | Labels and annotates data | -| [Data Validator](./data-validators/data-validators.md) | Validates data and models | -| [Image Builder](./image-builders/image-builders.md) | Builds container images | -| [Model Registry](./model-registries/model-registries.md) | Manages and interacts with ML models | - -Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store; other components are optional and can be added as needed. +| **Type of Stack Component** | **Description** | +|------------------------------|-----------------| +| [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | +| [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | +| [Container Registry](./container-registries/container-registries.md) | Stores container images | +| [Step Operator](./step-operators/step-operators.md) | Executes steps in specific environments | +| [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | +| [Feature Store](./feature-stores/feature-stores.md) | Manages data/features | +| [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | +| [Alerter](./alerters/alerters.md) | Sends alerts via specified channels | +| [Annotator](./annotators/annotators.md) | Labels and annotates data | +| [Data Validator](./data-validators/data-validators.md) | Validates data and models | +| [Image Builder](./image-builders/image-builders.md) | Builds container images | +| [Model Registry](./model-registries/model-registries.md) | Manages ML models | + +Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional based on MLOps maturity. ## Custom Component Flavors -Users can create custom components by writing their own component **flavors**. For more details, refer to the [guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for component types, such as the [custom orchestrator guide](orchestrators/custom.md). +You can create custom behaviors in ZenML by writing your own component **flavors**. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for certain component types, such as the [custom orchestrator guide](orchestrators/custom.md). 
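As a concrete sketch of that minimal stack, the commands below register the two mandatory components with ZenML's built-in `local` flavors and combine them into a stack (component and stack names are illustrative):

```bash
# Register the two mandatory components using the built-in local flavors
zenml orchestrator register local_orchestrator --flavor=local
zenml artifact-store register local_store --flavor=local

# Combine them into a stack and make it the active stack
zenml stack register minimal_stack \
    -o local_orchestrator \
    -a local_store \
    --set
```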
==================================================

=== File: docs/book/component-guide/model-registries/custom.md ===

-### Custom Model Registry Development
+### Developing a Custom Model Registry in ZenML

#### Overview
-This documentation provides guidance on developing a custom model registry in ZenML. Familiarity with ZenML's component flavor concepts is recommended before proceeding.
+To create a custom model registry in ZenML, it's essential to understand the general concepts of custom component flavors. Refer to the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge.

#### Important Notes
-- The Model Registry component is new and may undergo API changes.
-- Feedback on the base abstraction is encouraged via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues/new/choose).
+- The `BaseModelRegistry` is an abstract class that must be subclassed to create a custom model registry.
+- The API is still evolving; feedback is encouraged via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues/new/choose).

#### Base Abstraction
-The `BaseModelRegistry` is an abstract class for creating custom model registries. It provides a basic interface for model registration and versioning.
+The `BaseModelRegistry` class provides a basic interface for model registration and versioning:

```python
from abc import ABC, abstractmethod
@@ -277,19 +248,19 @@ class BaseModelRegistry(StackComponent, ABC):
    @abstractmethod
    def delete_model(self, name: str) -> None:
-        """Deletes a registered model."""
+        """Deletes a model."""

    @abstractmethod
    def update_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel:
-        """Updates a registered model."""
+        """Updates a model."""

    @abstractmethod
    def get_model(self, name: str) -> RegisteredModel:
-        """Retrieves a registered model."""
+        """Retrieves a model."""

    @abstractmethod
    def list_models(self, name: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> List[RegisteredModel]:
-        """Lists registered models."""
+        """Lists models."""

    @abstractmethod
    def register_model_version(self, name: str, version: Optional[str] = None, **kwargs: Any) -> RegistryModelVersion:
@@ -320,21 +291,21 @@ class BaseModelRegistry(StackComponent, ABC):
        """Gets the URI for a model version."""
```

-#### Creating a Custom Model Registry
-1. Understand core concepts of model registries.
-2. Inherit from `BaseModelRegistry` and implement abstract methods.
-3. Create a `ModelRegistryConfig` class extending `BaseModelRegistryConfig` for additional parameters.
+#### Steps to Build a Custom Model Registry
+1. Understand core concepts of model registries [here](./model-registries.md#model-registry-concepts-and-terminology).
+2. Subclass `BaseModelRegistry` and implement its abstract methods.
+3. Create a `ModelRegistryConfig` class inheriting from `BaseModelRegistryConfig` for additional parameters.
4. Combine implementation and configuration by inheriting from `BaseModelRegistryFlavor`.

-Register your custom model registry using:
+Register your custom model registry with:

```shell
zenml model-registry flavor register <MODEL-REGISTRY-FLAVOR-SOURCE-PATH>
```

-#### Workflow Integration
-- **CustomModelRegistryFlavor** is used during flavor creation.
-- **CustomModelRegistryConfig** is utilized for validating user inputs during registration.
-- **CustomModelRegistry** is invoked when the component is in use, allowing separation of configuration and implementation. +#### Key Considerations +- The `CustomModelRegistryFlavor` is used during flavor creation. +- The `CustomModelRegistryConfig` is utilized for validating user input during registration. +- The `CustomModelRegistry` is employed when the component is in use, separating configuration from implementation. For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). @@ -342,25 +313,25 @@ For a complete implementation example, refer to the [MLFlowModelRegistry](https: === File: docs/book/component-guide/model-registries/mlflow.md === -# MLflow Model Registry Summary +# MLflow Model Registry Overview -## Overview -MLflow is a tool for tracking experiments, managing models, and deploying them across environments. The MLflow model registry allows for managing and tracking ML models and artifacts, providing a user interface for browsing. +MLflow is a tool for tracking experiments, managing models, and deploying them across environments. ZenML integrates with MLflow, providing an Experiment Tracker and Model Deployer. The MLflow model registry manages and tracks ML models and artifacts, offering a user interface for browsing. ## Use Cases -- Track different model versions during development and deployment. -- Manage model deployments across various environments. -- Monitor and compare model performance over time. -- Simplify model deployment to production or staging environments. +The MLflow model registry is beneficial for: +- Tracking different model versions during development and deployment. +- Managing model deployments across various environments. +- Monitoring and comparing model performance over time. +- Simplifying model deployment to production or staging environments. -## Installation +## Deployment To use the MLflow model registry, install the MLflow integration: ```shell zenml integration install mlflow -y ``` -Register the MLflow model registry component in your stack: +Register the MLflow model registry component: ```shell zenml model-registry register mlflow_model_registry --flavor=mlflow @@ -370,6 +341,8 @@ zenml stack register custom_stack -r mlflow_model_registry ... --set **Note:** The MLflow model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version 2.2.1 or higher due to a critical vulnerability in older versions. ## Usage +You can register models in ZenML pipelines or manually via the CLI. + ### Register Models in a Pipeline Use the `mlflow_register_model_step` to register a logged model: @@ -383,13 +356,13 @@ def mlflow_registry_training_pipeline(): mlflow_register_model_step(model=model, name="tensorflow-mnist-model") ``` -**Parameters:** +**Parameters for `mlflow_register_model_step`:** - `name`: Required model name. - `version`: Model version. -- `trained_model_name`: Name of the model artifact in MLflow. +- `trained_model_name`: Name of the model artifact. - `model_source_uri`: Path to the model. - `description`: Model version description. -- `metadata`: Metadata for the model version. +- `metadata`: Metadata list for the model version. 
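For illustration, here is a hedged sketch of the step with the optional parameters filled in (the `trainer` step and all values are hypothetical; depending on your ZenML version, `metadata` may need to be a `ModelRegistryModelMetadata` object rather than a plain mapping):

```python
from typing import Any

from zenml import pipeline, step
from zenml.integrations.mlflow.steps.mlflow_registry import mlflow_register_model_step

@step
def trainer() -> Any:
    """Hypothetical training step, assumed to log its model to MLflow."""
    ...

@pipeline
def mlflow_registry_training_pipeline():
    model = trainer()
    mlflow_register_model_step(
        model=model,
        name="tensorflow-mnist-model",  # required registry name
        description="MNIST classifier, 98.88% test accuracy",  # version description
        metadata={"accuracy": "0.9888"},  # assumed to accept a simple mapping here
    )
```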
### Register Models via CLI To manually register a model version: @@ -398,38 +371,43 @@ To manually register a model version: zenml model-registry models register-version Tensorflow-model \ --description="A new version with accuracy 98.88%" \ -v 1 \ - --model-uri="file:///.../model" \ + --model-uri="file:///.../mlruns/.../artifacts/model" \ -m key1 value1 -m key2 value2 \ --zenml-pipeline-name="mlflow_training_pipeline" \ --zenml-step-name="trainer" ``` -### Deploy a Registered Model -After registration, models can be deployed as prediction services. Refer to the MLflow model deployer documentation for details. +### Deploy Registered Models +After registration, deploy models as prediction services. Refer to the MLflow model deployer documentation for details. ### Interact with Registered Models -- List all registered models: +List all registered models: + ```shell zenml model-registry models list ``` -- List all versions of a specific model: +List all versions of a specific model: + ```shell zenml model-registry models list-versions tensorflow-mnist-model ``` -- Get details of a specific model version: +Get details of a specific model version: + ```shell zenml model-registry models get-version tensorflow-mnist-model -v 1 ``` -- Delete a registered model or specific version: +### Deleting Models +To delete a registered model or a specific version: + ```shell zenml model-registry models delete REGISTERED_MODEL_NAME zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION ``` -For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). +For further details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== @@ -437,44 +415,49 @@ For more details, refer to the [ZenML SDK documentation](https://sdkdocs.zenml.i # Model Registries -Model registries are centralized solutions for managing and tracking machine learning models throughout their development and deployment stages. They facilitate version control and reproducibility by storing metadata such as version, configuration, and metrics. In ZenML, model registries are Stack Components that simplify the retrieval, loading, and deployment of trained models, along with providing pipeline information for reproduction. +Model registries are centralized solutions for managing and tracking machine learning models throughout their development and deployment stages. They store metadata such as version, configuration, and metrics, facilitating reproducibility and streamlined management of trained models. In ZenML, model registries are Stack Components that enable easy retrieval, loading, and deployment of models, along with information on the training pipeline. ### Key Concepts -- **RegisteredModel**: A logical grouping of models tracking different versions, including metadata like name, description, and tags. It can be user-created or automatically generated upon logging a new model. +- **RegisteredModel**: A logical grouping of models for tracking different versions, including metadata like name, description, and tags. It can be user-created or automatically generated. 
-- **RegistryModelVersion**: A specific model version identified by a unique version number or string, containing metadata and a reference to the logged model artifact. It also includes references to the pipeline name, run ID, and step name. +- **RegistryModelVersion**: A specific version of a model with a unique identifier, containing metadata and a reference to the model artifact. It also includes pipeline-related information (pipeline name, run ID, step name). -- **ModelVersionStage**: Represents the lifecycle state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`. +- **ModelVersionStage**: Represents the state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`, tracking the model's lifecycle. ### When to Use -ZenML's Artifact Store manages pipeline artifacts programmatically, but model registries provide a visual interface for tracking model metadata, especially with remote orchestrators. They are ideal for managing model states centrally and facilitating easy retrieval and deployment. +ZenML's Artifact Store manages pipeline artifacts but lacks a visual interface. Model registries provide a visual way to manage model metadata, especially when using a remote orchestrator. They are ideal for centralized management of model states and easy retrieval and deployment. -### Model Registry Integration +### Architecture -Model registries are optional stack components integrated with various flavors: -| Model Registry | Flavor | Integration | Notes | -|----------------|--------|-------------|-------| -| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Add MLflow as Model Registry | -| [Custom Implementation](custom.md) | _custom_ | | _custom_ | +Model registries fit into the ZenML stack, enhancing interaction with logged models and their states. -To list available flavors, use: +#### Model Registry Flavors + +Model Registries are optional components with various integrations: + +| Model Registry | Flavor | Integration | Notes | +|-------------------------|------------|-------------|--------------------------------------| +| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Add MLflow as Model Registry | +| [Custom Implementation](custom.md) | _custom_ | | Custom implementation available | + +To view available flavors, use: ```shell zenml model-registry flavor list ``` ### Usage -Model registries require an experiment tracker. If not using one, models can still be stored in ZenML, but retrieval must be manual. To use model registries: -1. Register a model registry in your stack matching the experiment tracker flavor. +Model registries are optional and require an experiment tracker. Without an experiment tracker, models can still be stored, but retrieval must be manual. To use model registries: + +1. Register a model registry in your stack, matching the flavor of your experiment tracker. 2. Register trained models via: - Built-in pipeline step - ZenML CLI - Model registry UI -3. Retrieve and load models for deployment or experimentation. -For further details, refer to the [documentation on fetching runs](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md). +You can then retrieve and load models for deployment or further experimentation. 
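As a sketch of that retrieval path (the model name and version are illustrative, and accessor names may vary slightly across registry flavors and ZenML versions):

```python
from zenml import step
from zenml.client import Client

@step
def fetch_model_version() -> None:
    """Look up a registered model version via the active stack's registry."""
    model_registry = Client().active_stack.model_registry
    # Hypothetical name/version for illustration
    version = model_registry.get_model_version(
        name="tensorflow-mnist-model",
        version="1",
    )
    print(version.model_source_uri)  # URI of the stored model artifact
```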
================================================== @@ -482,18 +465,20 @@ For further details, refer to the [documentation on fetching runs](../../how-to/ ### Develop a Custom Model Deployer -ZenML provides a `Model Deployer` stack component for deploying and managing trained machine-learning models. It interacts with deployment tools and can serve as a model registry, allowing users to list, suspend, resume, or delete models. +ZenML provides a `Model Deployer` stack component for deploying and managing machine-learning models. It interacts with deployment tools and can serve as a model registry, allowing users to list, suspend, resume, or delete deployed models. #### Base Abstraction -The model deployer is built on three key criteria: + +The `Model Deployer` is built on three main criteria: 1. **Efficient Deployment**: It manages model deployment according to the serving infrastructure's requirements, holding necessary configuration attributes. -2. **Continuous Deployment Logic**: It updates existing model servers instead of creating new ones for each model version (via the `deploy_model` method), usable in ZenML pipeline steps or ad-hoc deployments. -3. **BaseService Registry**: It acts as a registry for remote model servers, capable of recreating `BaseService` instances from external configurations. +2. **Continuous Deployment**: It implements logic to update existing model servers instead of creating new ones for each model version, using the `deploy_model` method. +3. **BaseService Registry**: It acts as a registry for `BaseService` instances, allowing the recreation of model server configurations, especially for Kubernetes resources. + +The model deployer includes methods for lifecycle management of remote model servers: `stop_model_server`, `start_model_server`, and `delete_model_server`. -The model deployer includes lifecycle management methods for remote servers (`stop_model_server`, `start_model_server`, `delete_model_server`). 
+#### Interface Example -#### Interface Code ```python from abc import ABC, abstractmethod from typing import Dict, Optional, Type @@ -511,20 +496,20 @@ class BaseModelDeployer(StackComponent, ABC): @abstractmethod def perform_deploy_model(self, id: UUID, config: ServiceConfig, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT) -> BaseService: """Deploy a model.""" - + @staticmethod @abstractmethod def get_model_server_info(service: BaseService) -> Dict[str, Optional[str]]: """Extract model server properties.""" - + @abstractmethod def perform_stop_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT, force: bool = False) -> BaseService: """Stop a model server.""" - + @abstractmethod def perform_start_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT) -> BaseService: """Start a model server.""" - + @abstractmethod def perform_delete_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT, force: bool = False) -> None: """Delete a model server.""" @@ -533,104 +518,127 @@ class BaseModelDeployerFlavor(Flavor): @property @abstractmethod def name(self): - """Returns the flavor name.""" - + """Flavor name.""" + @property def type(self) -> StackComponentType: return StackComponentType.MODEL_DEPLOYER - + @property def config_class(self) -> Type[BaseModelDeployerConfig]: return BaseModelDeployerConfig - + @property @abstractmethod def implementation_class(self) -> Type[BaseModelDeployer]: - """The class implementing the model deployer.""" + """Implementation class.""" ``` #### Building Custom Model Deployers -To create a custom model deployer flavor: -1. Inherit from `BaseModelDeployer` and implement abstract methods. +To create a custom model deployer: + +1. Inherit from `BaseModelDeployer` and implement the abstract methods. 2. Create a configuration class inheriting from `BaseModelDeployerConfig`. -3. Combine both by inheriting from `BaseModelDeployerFlavor` and provide a `name`. +3. Combine both in a class inheriting from `BaseModelDeployerFlavor`, providing a name. 4. Create a service class inheriting from `BaseService`. -Register the flavor using: +Register the flavor via CLI: + ```shell zenml model-deployer flavor register <path.to.MyModelDeployerFlavor> ``` -Example: + +Example registration: + ```shell zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor ``` +Ensure ZenML is initialized at the repository root for proper flavor resolution. + +After registration, list available flavors: + +```shell +zenml model-deployer flavor list +``` + #### Important Notes -- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. -- The `CustomModelDeployerFlavor` is used during flavor creation, while `CustomModelDeployerConfig` is used for registration and validation. -- The `CustomModelDeployer` is utilized when the component is in use, allowing for separation of configuration and implementation. + +- The `CustomModelDeployerFlavor` is used during flavor creation. +- The `CustomModelDeployerConfig` is utilized for stack component registration and validation. +- The `CustomModelDeployer` is invoked when the component is in use, allowing separation of configuration and implementation. + +This structure enables flexibility in registering flavors and components independently of their implementation dependencies. 
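As a rough skeleton of steps 1-3 above (module, class, flavor, and attribute names are hypothetical, and the abstract methods from the interface still need real implementations):

```python
from typing import Type

from zenml.model_deployers.base_model_deployer import (
    BaseModelDeployer,
    BaseModelDeployerConfig,
    BaseModelDeployerFlavor,
)

class MyModelDeployerConfig(BaseModelDeployerConfig):
    """Configuration for the hypothetical deployer."""

    deployment_url: str  # illustrative extra attribute

class MyModelDeployer(BaseModelDeployer):
    """Would implement perform_deploy_model, perform_stop_model, etc."""

class MyModelDeployerFlavor(BaseModelDeployerFlavor):
    @property
    def name(self) -> str:
        return "my_deployer"  # flavor name used on the CLI

    @property
    def config_class(self) -> Type[BaseModelDeployerConfig]:
        return MyModelDeployerConfig

    @property
    def implementation_class(self) -> Type[BaseModelDeployer]:
        return MyModelDeployer
```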
==================================================

=== File: docs/book/component-guide/model-deployers/model-deployers.md ===

-# Model Deployers
+# Model Deployers Overview

-Model Deployment involves making machine learning models available for predictions on real-world data, either through batch or real-time predictions. Model deployers serve as components in the ZenML stack, enabling model serving via APIs (HTTP or GRPC) for real-time inference or processing batches of data for offline inference.
+Model deployment makes machine learning models available for predictions on real-world data. There are two primary types of predictions: batch predictions (for large datasets) and real-time predictions (for individual data points). Model deployers are components in the ZenML stack responsible for serving models in either mode.

-## Use Cases
-Model deployers are optional in ZenML and can be used in both development and production environments. They are primarily designed for real-time inference, facilitating the continuous training and deployment of models.
+## Key Concepts
+
+- **Online Serving**: Hosting models as a managed web service accessible via API endpoints (HTTP or GRPC).
+- **Batch Inference**: Making predictions on a batch of observations, typically storing results in files or databases.

-## Architecture
-Model deployers integrate into the ZenML stack, allowing seamless deployment to various environments (local servers, Kubernetes, cloud).
+## Usage
+
+Model deployers are optional in the ZenML stack, primarily used for real-time inference in development or production environments (local, Kubernetes, or cloud). They enable continuous training and deployment pipelines.

-### Types of Model Deployers
-ZenML includes a `local` MLflow model deployer and supports various integrations for production environments:
+## Model Deployer Flavors

-| Model Deployer | Flavor | Integration | Notes |
-|----------------|----------|----------------|-------------------------------------|
-| MLflow | `mlflow` | `mlflow` | Deploys ML Model locally |
-| BentoML | `bentoml`| `bentoml` | Deploys locally or for production |
-| Seldon Core | `seldon` | `seldon Core` | Deploys models on Kubernetes |
-| Hugging Face | `huggingface` | `huggingface` | Deploys on Hugging Face Endpoints |
-| Databricks | `databricks` | `databricks` | Deploys to Databricks Inference |
-| vLLM | `vllm` | `vllm` | Deploys LLMs locally |
-| Custom | _custom_ | | Custom implementation available |
+ZenML provides various model deployers:
+
+| Model Deployer | Flavor | Integration | Notes |
+|----------------|----------|-------------------|----------------------------------------|
+| MLflow | `mlflow` | `mlflow` | Deploys ML Model locally |
+| BentoML | `bentoml`| `bentoml` | Deploys models locally or in production|
+| Seldon Core | `seldon` | `seldon` | Deploys models in Kubernetes |
+| Hugging Face | `huggingface` | `huggingface` | Deploys models on Hugging Face |
+| Databricks | `databricks` | `databricks` | Deploys models to Databricks |
+| vLLM | `vllm` | `vllm` | Deploys LLMs locally |
+| Custom | _custom_ | | Custom implementation possible |

### Configuration Example
-Model deployers require specific configurations for interaction with the serving tool:
+To configure MLflow and Seldon Core deployers:

-```shell
-# Configure MLflow model deployer
+```bash
+# MLflow
zenml model-deployer register mlflow --flavor=mlflow

-# Configure Seldon Core model deployer
+# Seldon Core
zenml model-deployer register seldon --flavor=seldon \
--kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \ ---base_url=http://your-url-here +--base_url=http://your-url ``` -## Model Deployer Functions -Model deployers manage the lifecycle of model servers, allowing actions such as starting, stopping, and deleting servers. Key methods include: +## Role in ZenML Stack -- `deploy_model`: Deploys a model and returns a Service object. -- `find_model_server`: Lists deployed model servers. -- `stop_model_server`: Stops a running model server. -- `start_model_server`: Starts a stopped model server. -- `delete_model_server`: Deletes a model server. +- **Seamless Deployment**: Facilitates model deployment to various environments, managing configuration attributes (hostnames, URLs, credentials). +- **Lifecycle Management**: Manages model server lifecycle (start, stop, delete, update). Key methods include: + - `deploy_model`: Deploys a model and returns a Service object. + - `find_model_server`: Lists deployed model servers. + - `stop_model_server`, `start_model_server`, `delete_model_server`: Manage server states. ### Service Object -The Service object represents a deployed model server, containing `config` (deployment attributes) and `status` (operational status). -### Interaction Example -To interact with a model deployer: +Represents a deployed model server, containing: +- `config`: Deployment configuration attributes. +- `status`: Operational status (last error, prediction URL, deployment status). + +### Example of Interacting with a Model Deployer ```python from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() -services = model_deployer.find_model_server(pipeline_name="LLM_pipeline", pipeline_step_name="huggingface_model_deployer_step", model_name="LLAMA-7B") +services = model_deployer.find_model_server(pipeline_name="LLM_pipeline", + pipeline_step_name="huggingface_model_deployer_step", + model_name="LLAMA-7B") if services: if services[0].is_running: @@ -638,22 +646,33 @@ if services: else: model_deployer.start_model_server(services[0]) else: - service = model_deployer.deploy_model(pipeline_name="LLM_pipeline", pipeline_step_name="huggingface_model_deployer_step", model_name="LLAMA-7B", model_uri="s3://your-model-uri", ...) 
+ service = model_deployer.deploy_model(pipeline_name="LLM_pipeline", + pipeline_step_name="huggingface_model_deployer_step", + model_name="LLAMA-7B", + model_uri="s3://your-uri", + task="text-classification") print(f"Model server {service.config['model_name']} is deployed at {service.status['prediction_url']}") ``` -## CLI Interaction -You can manage model servers via the CLI: +## Interacting with Deployed Models via CLI -```shell -$ zenml model-deployer models list -$ zenml model-deployer models describe <UUID> -$ zenml model-deployer models get-url <UUID> -$ zenml model-deployer models delete <UUID> +You can manage deployed models using CLI commands: + +```bash +# List models +zenml model-deployer models list + +# Describe a model +zenml model-deployer models describe <UUID> + +# Get prediction URL +zenml model-deployer models get-url <UUID> + +# Delete a model +zenml model-deployer models delete <UUID> ``` -## Accessing Prediction URL -The prediction URL can also be accessed programmatically: +In Python, you can retrieve the prediction URL from the metadata of the deployment step: ```python from zenml.client import Client @@ -663,48 +682,48 @@ deployer_step = pipeline_run.steps["<NAME_OF_MODEL_DEPLOYER_STEP>"] deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value ``` -ZenML integrations provide standard pipeline steps for continuous model deployment, ensuring efficient management and re-creation of model serving conditions. +ZenML integrations provide standard pipeline steps for continuous model deployment, ensuring efficient management of model serving configurations. ================================================== === File: docs/book/component-guide/model-deployers/bentoml.md === -### Summary of BentoML Documentation for Local Model Deployment +### Summary: Deploying Models Locally with BentoML **BentoML Overview** -BentoML is an open-source framework for serving machine learning models, enabling deployment locally, in the cloud, or on Kubernetes. The BentoML Model Deployer allows for local HTTP server deployment and management of BentoML models. +BentoML is an open-source framework for serving machine learning models, allowing deployment locally, in the cloud, or on Kubernetes. The BentoML Model Deployer facilitates the deployment and management of models and Bentos on a local HTTP server. -**Deployment Options** -- **Local Development**: Deploy models for testing and production. -- **Containerized Services**: Deploy models in a containerized environment. -- **Cloud Deployment**: Use tools like Yatai and `bentoctl` (deprecated) for cloud deployment. +**Deployment Paths** +- **Local HTTP Server**: For development and production. +- **Containerized Service**: For more complex deployments. +- **Yatai and `bentoctl`**: Tools for deploying Bentos to Kubernetes and cloud platforms, though `bentoctl` is deprecated. **When to Use BentoML Model Deployer** - Standardize model deployment within an organization. -- Simplify the deployment process while preparing for production. +- Simplify the deployment process while preparing for production-ready solutions. -**Getting Started with Deployment** -1. **Install BentoML Integration**: +**Getting Started** +1. Install the required integration: ```bash zenml integration install bentoml -y ``` -2. **Register Model Deployer**: +2. Register the BentoML model deployer: ```bash zenml model-deployer register bentoml_deployer --flavor=bentoml ``` -**Using the Model Deployer** -1. 
**Create a BentoML Service**: Define how your model will be served. +**Using BentoML** +1. **Create a BentoML Service**: Define how the model will be served. ```python import bentoml from bentoml.validators import DType, Shape import numpy as np import torch - @bentoml.service(name="MNISTService") + @bentoml.service(name=SERVICE_NAME) class MNISTService: def __init__(self): - self.model = bentoml.pytorch.load_model("MODEL_NAME") + self.model = bentoml.pytorch.load_model(MODEL_NAME) self.model.eval() @bentoml.api() @@ -714,70 +733,78 @@ BentoML is an open-source framework for serving machine learning models, enablin return to_numpy(output_tensor) ``` -2. **Build Your Own Bento**: Use the `bento_builder_step` or manually build the Bento. +2. **Build Your Own Bento**: Use the `bento_builder_step` or create a custom function to build the Bento. ```python - context = get_step_context() - labels = {"model_uri": model.uri, "bento_uri": os.path.join(context.get_output_artifact_uri(), "DEFAULT_BENTO_FILENAME")} - model = load_artifact_from_response(model) - bentoml.pytorch.save_model(model_name, model, labels=labels) - bento = bentos.build(service=service, models=[model_name]) + from zenml import step + + @step + def my_bento_builder(model) -> bento.Bento: + model = load_artifact_from_response(model) + bentoml.pytorch.save_model(model_name, model) + bento = bentos.build(service=service, models=[model_name]) + return bento ``` -3. **Deploy the Bento**: - - **Local HTTP Server**: - ```python - @pipeline - def bento_deployer_pipeline(): - deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001) - ``` - - **Containerized Service**: - ```python - @pipeline - def bento_deployer_pipeline(): - deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001, deployment_type="container", image="my-custom-image") - ``` +3. **Bento Builder Step**: Integrate the built-in step in your ZenML pipeline. + ```python + from zenml import pipeline + from zenml.integrations.bentoml.steps import bento_builder_step + + @pipeline + def bento_builder_pipeline(): + bento = bento_builder_step(model=model, model_name="pytorch_mnist", service="service.py:CLASS_NAME") + ``` + +4. **BentoML Deployer Step**: Deploy the bento bundle locally or as a container. + ```python + from zenml import pipeline + from zenml.integrations.bentoml.steps import bentoml_model_deployer_step + + @pipeline + def bento_deployer_pipeline(): + deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001) + ``` **Predicting with Deployed Model** -Use the BentoML client to send requests: +Use the BentoML client to send requests to the deployed model: ```python @step def predictor(inference_data: Dict[str, List], service: BentoMLDeploymentService) -> None: service.start(timeout=10) for img, data in inference_data.items(): prediction = service.predict("predict_ndarray", np.array(data)) - result = to_labels(prediction[0]) ``` **From Local to Cloud with `bentoctl`** -`bentoctl` (deprecated) was a CLI tool for deploying models to cloud services like AWS Lambda, Google Cloud Run, etc. For more details, refer to the [BentoML documentation](https://docs.bentoml.org). +`bentoctl` is deprecated but was used for deploying models to cloud services like AWS Lambda, Google Cloud, and Azure. -This summary provides a concise overview of deploying models locally with BentoML, including installation, service creation, bento building, and deployment steps. 
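Putting the snippets above together, an end-to-end sketch of a pipeline that trains, builds, and deploys a Bento might look like this (the `trainer` step is hypothetical, and `service.py:MNISTService` assumes the service class shown earlier lives in `service.py`):

```python
from typing import Any

from zenml import pipeline, step
from zenml.integrations.bentoml.steps import (
    bento_builder_step,
    bentoml_model_deployer_step,
)

@step
def trainer() -> Any:
    """Hypothetical step returning a trained PyTorch model."""
    ...

@pipeline
def bentoml_end_to_end_pipeline():
    model = trainer()
    bento = bento_builder_step(
        model=model,
        model_name="pytorch_mnist",
        service="service.py:MNISTService",  # module:class of the BentoML service
    )
    bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001)
```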
+For more detailed attributes and configurations, refer to the [BentoML documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-bentoml/#zenml.integrations.bentoml.model_deployers.bentoml_model_deployer). ================================================== === File: docs/book/component-guide/model-deployers/seldon.md === -### Summary: Deploying Models to Kubernetes with Seldon Core +### Summary of Seldon Core Documentation for Kubernetes Model Deployment **Overview:** -Seldon Core is a source-available model serving platform designed for deploying machine learning models as REST/GRPC microservices. It offers features such as monitoring, logging, model explainers, outlier detection, and advanced deployment strategies (A/B testing, canary deployments). It simplifies real-time inference with built-in model server implementations. +Seldon Core is a production-grade, source-available model serving platform designed for deploying machine learning models as REST/GRPC microservices. It offers features like monitoring, logging, model explainers, outlier detection, and deployment strategies (A/B testing, canary deployments). It simplifies real-time inference with built-in model server implementations. -**Important Notes:** -- **MacOS Support:** Currently, Seldon Core model deployer integration is not supported on MacOS. +**Platform Limitations:** +- **MacOS**: Seldon Core model deployer integration is not supported. **When to Use Seldon Core:** -- For deploying models on Kubernetes. -- To manage model lifecycle with zero downtime (updates, scaling, monitoring). -- To utilize advanced API endpoints (REST/GRPC). -- For complex deployment processes with custom transformers and routers. +- Deploy models on Kubernetes. +- Manage model lifecycle with zero downtime. +- Utilize advanced API endpoints (REST/GRPC). +- Implement complex deployment strategies and custom inference graphs. **Deployment Prerequisites:** 1. Access to a Kubernetes cluster (recommended to use a Service Connector). -2. Seldon Core must be installed and running in the Kubernetes cluster. +2. Seldon Core must be preinstalled in the cluster. 3. Models should be stored in persistent shared storage (e.g., AWS S3, GCS). -**Installation Steps for Seldon Core on EKS:** -1. Configure EKS cluster access: +**Installation Steps for EKS:** +1. Configure EKS access: ```bash aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks ``` @@ -803,69 +830,42 @@ Seldon Core is a source-available model serving platform designed for deploying ```bash kubectl apply -f iris.yaml ``` - Example `iris.yaml`: - ```yaml - apiVersion: machinelearning.seldon.io/v1 - kind: SeldonDeployment - metadata: - name: iris-model - namespace: default - spec: - name: iris - predictors: - - graph: - implementation: SKLEARN_SERVER - modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris - name: classifier - name: default - replicas: 1 - ``` -6. Extract the prediction API URL: - ```bash - export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') - ``` -7. Send a test prediction request: - ```bash - curl -X POST http://$INGRESS_HOST/seldon/default/iris-model/api/v1.0/predictions \ - -H 'Content-Type: application/json' \ - -d '{ "data": { "ndarray": [[1,2,3,4]] } }' - ``` **Service Connector Setup:** -- Use Service Connectors for authentication to Kubernetes clusters. -- Options include AWS, GCP, Azure, or generic Kubernetes connectors. 
+- Use Service Connectors for authentication and resource management. - Register a Service Connector: + ```bash + zenml service-connector register -i + ``` +- Example for AWS: ```bash zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type kubernetes-cluster --resource-name <EKS_CLUSTER_NAME> --auto-configure ``` **Model Deployer Registration:** -1. Register the Seldon Core Model Deployer: +- Register the Seldon Core Model Deployer: ```bash zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=seldon \ --kubernetes_namespace=<KUBERNETES-NAMESPACE> \ --base_url=http://$INGRESS_HOST ``` -2. Connect to the Kubernetes cluster: +- Connect to the Kubernetes cluster: ```bash - zenml model-deployer connect <MODEL_DEPLOYER_NAME> --connector <CONNECTOR_ID> --resource-id <CLUSTER_NAME> + zenml model-deployer connect <MODEL_DEPLOYER_NAME> -i ``` -**Managing Authentication:** -- Use explicit credentials for the Artifact Store to ensure the Seldon Core Model Deployer can authenticate. -- Configure a ZenML secret for custom storage: - ```bash - zenml secret create s3-seldon-secret --rclone_config_s3_type="s3" --rclone_config_s3_access_key_id="<AWS-ACCESS-KEY-ID>" --rclone_config_s3_secret_access_key="<AWS-SECRET-ACCESS-KEY>" - ``` +**Authentication Management:** +- Seldon Core requires access to persistent storage for models. +- Use explicit credentials for Artifact Stores to ensure access. +- Custom secrets can be created for specific storage services. **Custom Code Deployment:** -- Define a custom prediction function and deploy it with the model: +- Define a custom prediction function: ```python def custom_predict(model: Any, request: Array_Like) -> Array_Like: # Custom prediction logic - ... ``` -- Use the `seldon_custom_model_deployer_step` to deploy: +- Use `seldon_custom_model_deployer_step` to deploy: ```python seldon_custom_model_deployer_step( model=model, @@ -882,7 +882,10 @@ Seldon Core is a source-available model serving platform designed for deploying ) ``` -This summary captures the essential steps and configurations needed to deploy models using Seldon Core on Kubernetes, ensuring that critical technical details are retained. +**Configuration Options:** +- `model_name`, `replicas`, `implementation`, `parameters`, `resources`, and `serviceAccount` can be configured in `SeldonDeploymentConfig`. + +For detailed configurations and advanced setups, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-seldon/#zenml.integrations.seldon.model_deployers). ================================================== @@ -891,11 +894,11 @@ This summary captures the essential steps and configurations needed to deploy mo ### Summary: Deploying Models Locally with MLflow **MLflow Model Deployer Overview** -- The MLflow Model Deployer is part of the ZenML Model Deployer stack, enabling local deployment and management of MLflow models on a running MLflow server. -- **Warning**: Currently, it is not production-ready and is intended for local development only. +- The MLflow Model Deployer is part of the ZenML stack for deploying and managing MLflow models on a local MLflow server. +- Currently, it is intended for local development only and is not production-ready. -**Use Cases** -- Ideal for easy local model deployment and real-time predictions without complex infrastructure (e.g., Kubernetes). +**When to Use MLflow Model Deployer** +- Ideal for local model deployment and real-time predictions without complex infrastructure (e.g., Kubernetes). 
- For more complex deployments, consider other Model Deployer flavors. **Installation and Registration** @@ -909,73 +912,71 @@ This summary captures the essential steps and configurations needed to deploy mo ``` **Deployment Process** -- **Deploying a Logged Model**: - - Ensure the model is logged in MLflow. Use the model URI from the artifact path or model registry. - -Example code for deploying a known model URI: -```python -from zenml import step, get_step_context -from zenml.client import Client +- Models must be logged in the MLflow experiment tracker before deployment. +- Use the model URI from the artifact path or registered model name/version. -@step -def deploy_model() -> Optional[MLFlowDeploymentService]: - zenml_client = Client() - model_deployer = zenml_client.active_stack.model_deployer - mlflow_deployment_config = MLFlowDeploymentConfig( - name="mlflow-model-deployment-example", - description="An example of deploying a model using the MLflow Model Deployer", - pipeline_name=get_step_context().pipeline_name, - pipeline_step_name=get_step_context().step_name, - model_uri="runs:/<run_id>/model" or "models:/<model_name>/<model_version>", - model_name="model", - workers=1, - mlserver=False, - timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT - ) - service = model_deployer.deploy_model(config=mlflow_deployment_config) - return service -``` +**Example Code for Deployment** +1. **Deploying a Known Model URI:** + ```python + from zenml import step, get_step_context + from zenml.client import Client -- **Deploying Without Known URI**: - - Retrieve the model URI from the current run using the MLflow client. + @step + def deploy_model() -> Optional[MLFlowDeploymentService]: + zenml_client = Client() + model_deployer = zenml_client.active_stack.model_deployer + mlflow_deployment_config = MLFlowDeploymentConfig( + name="mlflow-model-deployment-example", + description="Example of deploying a model", + pipeline_name=get_step_context().pipeline_name, + pipeline_step_name=get_step_context().step_name, + model_uri="runs:/<run_id>/model", + model_name="model", + workers=1, + mlserver=False, + timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT + ) + service = model_deployer.deploy_model(config=mlflow_deployment_config) + return service + ``` -Example code: -```python -from zenml import step, get_step_context -from zenml.client import Client -from mlflow.tracking import MlflowClient, artifact_utils +2. 
**Deploying an Unknown Model URI:**
+   ```python
+   from zenml import step, get_step_context
+   from zenml.client import Client
+   from mlflow.tracking import MlflowClient, artifact_utils

-@step
-def deploy_model() -> Optional[MLFlowDeploymentService]:
-    zenml_client = Client()
-    model_deployer = zenml_client.active_stack.model_deployer
-    experiment_tracker = zenml_client.active_stack.experiment_tracker
-    mlflow_run_id = experiment_tracker.get_run_id(
-        experiment_name=get_step_context().pipeline_name,
-        run_name=get_step_context().run_name,
-    )
-    client = MlflowClient()
-    model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path="model")
-    mlflow_deployment_config = MLFlowDeploymentConfig(
-        name="mlflow-model-deployment-example",
-        description="An example of deploying a model using the MLflow Model Deployer",
-        pipeline_name=get_step_context().pipeline_name,
-        pipeline_step_name=get_step_context().step_name,
-        model_uri=model_uri,
-        model_name="model",
-        workers=1,
-        mlserver=False,
-        timeout=300,
-    )
-    service = model_deployer.deploy_model(config=mlflow_deployment_config)
-    return service
-```
+   @step
+   def deploy_model() -> Optional[MLFlowDeploymentService]:
+       zenml_client = Client()
+       model_deployer = zenml_client.active_stack.model_deployer
+       experiment_tracker = zenml_client.active_stack.experiment_tracker
+       mlflow_run_id = experiment_tracker.get_run_id(
+           experiment_name=get_step_context().pipeline_name,
+           run_name=get_step_context().run_name,
+       )
+       client = MlflowClient()
+       model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path="model")
+       mlflow_deployment_config = MLFlowDeploymentConfig(
+           name="mlflow-model-deployment-example",
+           description="Example of deploying a model",
+           pipeline_name=get_step_context().pipeline_name,
+           pipeline_step_name=get_step_context().step_name,
+           model_uri=model_uri,
+           model_name="model",
+           workers=1,
+           mlserver=False,
+           timeout=300,
+       )
+       service = model_deployer.deploy_model(config=mlflow_deployment_config)
+       return service
+   ```

**Configuration Options for `MLFlowDeploymentService`**
- `name`, `description`, `pipeline_name`, `pipeline_step_name`, `model_name`, `model_uri`, `workers`, `mlserver`, `timeout`.

-**Running Inference**
-1. **Load a Prediction Service**:
+**Running Inference on Deployed Models**
+1. **Load Prediction Service:**
   ```python
   import json
   import requests
@@ -985,16 +986,16 @@ def deploy_model() -> Optional[MLFlowDeploymentService]:

   @step(enable_cache=False)
   def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, model_name: str = "model") -> dict:
       model_deployer = MLFlowModelDeployer.get_active_model_deployer()
-      existing_services = model_deployer.find_model_server(pipeline_name, pipeline_step_name, model_name)
+      existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name)
       if not existing_services:
           raise RuntimeError("No running service found.")
       service = existing_services[0]
-      payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}})
+      payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}, "params": {"temperature": 0.5, "max_tokens": 20}})
       response = requests.post(url=service.get_prediction_url(), data=payload, headers={"Content-Type": "application/json"})
       return response.json()
   ```

-2. **Use the Service for Inference**:
+2. 
**Using Service for Inference:** ```python from typing_extensions import Annotated import numpy as np @@ -1003,7 +1004,8 @@ def deploy_model() -> Optional[MLFlowDeploymentService]: @step def predictor(service: MLFlowDeploymentService, data: np.ndarray) -> Annotated[np.ndarray, "predictions"]: - return service.predict(data).argmax(axis=-1) + prediction = service.predict(data).argmax(axis=-1) + return prediction ``` For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_deployers). @@ -1014,78 +1016,95 @@ For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/int ### Summary: Deploying Models to Hugging Face Inference Endpoints -**Hugging Face Inference Endpoints** offers a secure and managed solution for deploying models from the Hugging Face Hub, including `transformers`, `sentence-transformers`, and `diffusers`. This service eliminates the need for managing containers and GPUs, providing dedicated autoscaling infrastructure. +Hugging Face Inference Endpoints offers a managed solution to deploy `transformers`, `sentence-transformers`, and `diffusers` models on secure, autoscaling infrastructure. This service simplifies deployment without requiring management of containers or GPUs. -#### When to Use Hugging Face Model Deployer: -- Deploy models on secure infrastructure. -- Prefer a fully-managed production solution without container management. +#### When to Use Hugging Face Model Deployer +- Deploy models on dedicated infrastructure. +- Prefer a fully-managed production solution for inference. - Aim to create production-ready APIs with minimal MLOps involvement. -- Seek cost-effectiveness by paying only for used compute resources. -- Require enterprise security with offline endpoints linked to VPCs. +- Require cost-effective solutions, paying only for used compute resources. +- Need enterprise security for offline endpoints connected to Virtual Private Clouds (VPCs). -#### Deployment Steps: -1. **Install Hugging Face Integration**: +For local deployment, consider using the [MLflow Model Deployer](mlflow.md). + +#### Deployment Steps +1. **Install Hugging Face ZenML Integration:** ```bash zenml integration install huggingface -y ``` -2. **Register the Model Deployer**: +2. **Register the Model Deployer:** ```bash zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=huggingface --token=<YOUR_HF_TOKEN> --namespace=<YOUR_HF_NAMESPACE> ``` - - `token`: Hugging Face authentication token. - - `namespace`: Username or organization name for endpoint creation. -3. **Update Stack**: + - `token`: Hugging Face authentication token (manage via [Hugging Face settings](https://huggingface.co/settings/tokens)). + - `namespace`: Username, organization name, or `*` for endpoint creation. + +3. **Update Your Stack:** ```bash zenml stack update <CUSTOM_STACK_NAME> --model-deployer=<MODEL_DEPLOYER_NAME> ``` -#### Using the Model Deployer: -- **Deploying a Model**: - Utilize the `huggingface_model_deployer_step` with `HuggingFaceServiceConfig`: - ```python - from zenml import pipeline - from zenml.config import DockerSettings - from zenml.integrations.huggingface.services import HuggingFaceServiceConfig - from zenml.integrations.huggingface.steps import huggingface_model_deployer_step +#### Using the Model Deployer +- Use the pre-built `huggingface_model_deployer_step` for deployment. +- Run batch inference using `HuggingFaceDeploymentService`. 
- @pipeline(enable_cache=True) - def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): - service_config = HuggingFaceServiceConfig(model_name=model_name) - huggingface_model_deployer_step(service_config=service_config, timeout=timeout) - ``` +##### Example: Deploying a Model +```python +from zenml import pipeline +from zenml.config import DockerSettings +from zenml.integrations.huggingface.services import HuggingFaceServiceConfig +from zenml.integrations.huggingface.steps import huggingface_model_deployer_step - **Configurable Attributes**: - - `model_name`, `endpoint_name`, `repository`, `framework`, `accelerator`, `instance_size`, `instance_type`, `region`, `vendor`, `token`, `account_id`, `min_replica`, `max_replica`, `revision`, `task`, `custom_image`, `namespace`, `endpoint_type`. +docker_settings = DockerSettings(required_integrations=["huggingface"]) -- **Running Inference**: - Example of loading a prediction service and making predictions: - ```python - from zenml import step, pipeline - from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer - from zenml.integrations.huggingface.services import HuggingFaceDeploymentService +@pipeline(enable_cache=True, settings={"docker": docker_settings}) +def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): + service_config = HuggingFaceServiceConfig(model_name=model_name) + huggingface_model_deployer_step(service_config=service_config, timeout=timeout) +``` - @step - def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> HuggingFaceDeploymentService: - model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() - existing_services = model_deployer.find_model_server(pipeline_name, pipeline_step_name, model_name, running) - if not existing_services: - raise RuntimeError("No running service found.") - return existing_services[0] +##### Configurable Attributes in `HuggingFaceServiceConfig` +- `model_name`: Model name. +- `endpoint_name`: Inference endpoint name (prefixed with `zenml-`). +- `repository`: User or organization namespace. +- `framework`: ML framework (e.g., `"pytorch"`). +- `accelerator`: Hardware for inference (e.g., `"cpu"`). +- `instance_size`: Size of the hosting instance. +- `region`: Cloud region for the endpoint. +- `vendor`: Cloud provider (e.g., `"aws"`). +- `token`: Hugging Face authentication token. +- `min_replica`/`max_replica`: Scaling configuration. +- `task`: Supported ML task (e.g., `"text-classification"`). +- `endpoint_type`: Type of endpoint (`"protected"`, `"public"`, or `"private"`). - @step - def predictor(service: HuggingFaceDeploymentService, data: str) -> str: - return service.predict(data) +#### Running Inference on a Provisioned Endpoint +```python +from zenml import step, pipeline +from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer +from zenml.integrations.huggingface.services import HuggingFaceDeploymentService - @pipeline - def huggingface_deployment_inference_pipeline(pipeline_name: str): - inference_data = ... 
-   model_service = prediction_service_loader(pipeline_name)
-   predictions = predictor(model_service, inference_data)
-   ```
+@step(enable_cache=False)
+def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> HuggingFaceDeploymentService:
+    model_deployer = HuggingFaceModelDeployer.get_active_model_deployer()
+    existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running)
+    if not existing_services:
+        raise RuntimeError("No running endpoint found.")
+    return existing_services[0]
+
+@step
+def predictor(service: HuggingFaceDeploymentService, data: str) -> str:
+    return service.predict(data)

-For further details and a complete list of attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and the Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957).
+@pipeline
+def huggingface_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "huggingface_model_deployer_step"):
+    inference_data = ...
+    model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name)
+    predictions = predictor(model_deployment_service, inference_data)
+```
+
+For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/).

==================================================

=== File: docs/book/component-guide/model-deployers/databricks.md ===

@@ -1094,42 +1113,50 @@ For further details and a complete list of attributes, refer to the [SDK Docs](h

### Summary of Databricks Model Serving Documentation

**Overview:**
-Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs. It offers managed, autoscaling infrastructure, eliminating the need for users to manage containers and GPUs.
+Databricks Model Serving (or Mosaic AI Model Serving) provides a unified interface for deploying, governing, and querying AI models as REST APIs. It offers dedicated, autoscaling infrastructure managed by Databricks, eliminating the need to handle containers and GPUs directly.

-**When to Use:**
-- If already utilizing Databricks for data and ML workloads.
-- To deploy AI models without managing infrastructure.
-- For enterprise security with offline endpoints connected to Virtual Private Clouds (VPCs).
-- To create production-ready APIs with minimal MLOps involvement.
+**When to Use Databricks Model Deployer:**
+- You are using Databricks for data and ML workloads.
+- You prefer deploying AI models without managing containers and GPUs.
+- You require dedicated, autoscaling infrastructure for model deployment.
+- Enterprise security is essential, with models deployed to secure offline endpoints connected to Virtual Private Clouds (VPCs).
+- You aim to create production-ready APIs with minimal infrastructure or MLOps involvement.

-**Deployment Steps:**
-1. Install Databricks integration:
-   ```bash
-   zenml integration install databricks -y
-   ```
-2. Register the model deployer:
-   ```bash
-   zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=databricks --host=<HOST> --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}}
-   ```
-   - It is recommended to create a Databricks service account for authentication.
+**Installation and Registration:**
+To deploy models using Databricks Model Deployer, install the ZenML integration:

-3. Update your stack to use the model deployer:
-   ```bash
-   zenml stack update <CUSTOM_STACK_NAME> --model-deployer=<MODEL_DEPLOYER_NAME>
-   ```
+```bash
+zenml integration install databricks -y
+```

-**Configuration:**
-In `DatabricksServiceConfig`, you can configure:
-- `model_name`: Name of the model in the Databricks Model Registry.
-- `model_version`: Version of the model.
-- `workload_size`: Can be `Small`, `Medium`, or `Large`.
-- `scale_to_zero_enabled`: Enables/disables scaling to zero.
+Register the model deployer:
+
+```bash
+zenml model-deployer register <MODEL_DEPLOYER_NAME> --flavor=databricks --host=<HOST> --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}}
+```
+
+**Service Account Recommendation:**
+Create a Databricks service account with necessary permissions to generate `client_id` and `client_secret` for authentication.
+
+Update your ZenML stack to use the model deployer:
+
+```bash
+zenml stack update <CUSTOM_STACK_NAME> --model-deployer=<MODEL_DEPLOYER_NAME>
+```
+
+**Configuration Options:**
+Within `DatabricksServiceConfig`, you can configure:
+- `model_name`: Identifier for the model in the Databricks Model Registry.
+- `model_version`: Version identifier for the model.
+- `workload_size`: Size of the workload (`Small`, `Medium`, `Large`).
+- `scale_to_zero_enabled`: Boolean to enable/disable scaling to zero.
 - `env_vars`: Environment variables for the model serving container.
-- `workload_type`: Types include `CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, or `MULTIGPU_MEDIUM`.
-- `endpoint_secret_name`: Secret for endpoint security.
+- `workload_type`: Type of workload (`CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, or `MULTIGPU_MEDIUM`).
+- `endpoint_secret_name`: Secret name for endpoint security.

**Inference Example:**
-To run inference on a provisioned endpoint:
+To run inference on a provisioned endpoint, use the following code structure:
+
```python
from zenml import step, pipeline
from zenml.integrations.databricks.model_deployers import DatabricksModelDeployer
@@ -1138,9 +1165,11 @@ from zenml.integrations.databricks.services import DatabricksDeploymentService

@step(enable_cache=False)
def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> DatabricksDeploymentService:
    model_deployer = DatabricksModelDeployer.get_active_model_deployer()
    existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running)
+
    if not existing_services:
-        raise RuntimeError(f"No running inference endpoint found.")
+        raise RuntimeError("No running Databricks inference endpoint found.")
+
    return existing_services[0]

@step
@@ -1150,11 +1179,11 @@ def predictor(service: DatabricksDeploymentService, data: str) -> str:
    return service.predict(data)

@pipeline
def databricks_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "databricks_model_deployer_step"):
    inference_data = ...
- model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name) + model_deployment_service = prediction_service_loader(pipeline_name, pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` -For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers). +For additional details and attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers). ================================================== @@ -1162,15 +1191,14 @@ For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/int ### vLLM Documentation Summary -**vLLM Overview** -[vLLM](https://docs.vllm.ai/en/latest/) is a library designed for efficient LLM inference and serving, offering features such as: -- High throughput for large language models with an OpenAI-compatible API -- Continuous batching of requests -- Quantization options: GPTQ, AWQ, INT4, INT8, FP8 -- Advanced features: PagedAttention, Speculative decoding, Chunked pre-fill +**vLLM** is a library for efficient LLM inference and serving, ideal for: -**Deployment Steps** -To deploy models using vLLM, follow these steps: +- Deploying large language models with high throughput and an OpenAI-compatible API server. +- Continuous request batching. +- Model quantization (GPTQ, AWQ, INT4, INT8, FP8). +- Advanced features like PagedAttention, Speculative decoding, and Chunked pre-fill. + +#### Deployment Steps 1. **Install vLLM Integration**: ```bash @@ -1182,13 +1210,11 @@ To deploy models using vLLM, follow these steps: zenml model-deployer register vllm_deployer --flavor=vllm ``` -This sets up a local vLLM deployment server as a daemon process. +This creates a local vLLM deployment server running as a daemon. -**Usage Example** -For practical implementation, refer to the [deployment pipeline example](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25). +#### Usage -**Deploying an LLM** -Use the `vllm_model_deployer_step` in your pipeline as shown below: +To deploy an LLM, use the `vllm_model_deployer_step` within a ZenML pipeline. Here’s a concise example: ```python from zenml import pipeline @@ -1198,19 +1224,22 @@ from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentServi @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "GPT2"]: - service = vllm_model_deployer_step(model=model, timeout=timeout) - return service + return vllm_model_deployer_step(model=model, timeout=timeout) ``` -**Configuration Options** +#### Configuration Options + Within `VLLMDeploymentService`, you can configure: -- `model`: Hugging Face model name or path -- `tokenizer`: Hugging Face tokenizer name or path (default: model name) -- `served_model_name`: API model name (default: same as `model`) -- `trust_remote_code`: Trust code from Hugging Face -- `tokenizer_mode`: Options: ['auto', 'slow', 'mistral'] -- `dtype`: Data type for weights/activations: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32'] -- `revision`: Specific model version (branch, tag, or commit id; defaults to latest) + +- `model`: Hugging Face model name or path. +- `tokenizer`: Hugging Face tokenizer name or path (defaults to model name). 
+- `served_model_name`: API model name (defaults to model argument). +- `trust_remote_code`: Trust code from Hugging Face. +- `tokenizer_mode`: Options: ['auto', 'slow', 'mistral']. +- `dtype`: Data type for weights/activations: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']. +- `revision`: Specific model version (branch name, tag, or commit id; defaults to latest). + +For a practical example, refer to the [deployment pipeline](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25) and running a GPT-2 model using vLLM. ================================================== @@ -1220,9 +1249,10 @@ Within `VLLMDeploymentService`, you can configure: Before creating a custom alerter, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational concepts. -#### Base Abstraction +### Base Abstraction The base alerter class defines two abstract methods: + - `post(message: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message to a chat service, returning `True` if successful. - `ask(question: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message and waits for approval, returning `True` only if approved. @@ -1235,9 +1265,9 @@ class BaseAlerter(StackComponent, ABC): return True ``` -#### Building Your Custom Alerter +### Building Your Own Custom Alerter -1. **Create a Custom Class**: Inherit from `BaseAlerter` and implement `post()` and `ask()`. +1. **Create a Custom Alerter Class**: Inherit from `BaseAlerter` and implement `post()` and `ask()`. ```python from typing import Optional @@ -1287,9 +1317,9 @@ class MyAlerterFlavor(BaseAlerterFlavor): return MyAlerter ``` -#### Registering the Flavor +### Registering Your Custom Alerter -Register your new flavor via the CLI: +Register your flavor via the CLI: ```shell zenml alerter flavor register <path.to.MyAlerterFlavor> @@ -1301,23 +1331,23 @@ For example: zenml alerter flavor register flavors.my_flavor.MyAlerterFlavor ``` -**Note**: Ensure ZenML is initialized at the root of your repository to avoid resolution issues. +**Note**: Ensure ZenML is initialized at the root of your repository for proper flavor resolution. -#### Listing Available Alerter Flavors +### Verifying the Registration -To view registered alerter flavors: +List available alerter flavors: ```shell zenml alerter flavor list ``` -#### Important Considerations +### Important Considerations -- The **MyAlerterFlavor** is used during flavor creation. -- The **MyAlerterConfig** is utilized during stack component registration for validation. -- The **MyAlerter** is invoked when the component is in use, allowing separation of configuration and implementation. +- **MyAlerterFlavor** is used during flavor creation. +- **MyAlerterConfig** is utilized during stack component registration for validation. +- **MyAlerter** is invoked when the component is in use, allowing separation of configuration and implementation. -This design enables registration of flavors and components even if their dependencies are not installed locally. +This design enables registration of flavors and components independently of their implementation dependencies. 
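To make the pattern concrete, here is a minimal sketch of a complete custom alerter. The class name and its stdout/stdin behavior are illustrative assumptions, and the import path `zenml.alerter.base_alerter` is assumed rather than taken from the snippets above; only the `post()`/`ask()` interface comes from the base abstraction described in this section.

```python
# Minimal sketch of a custom alerter; names and behavior are illustrative.
# Assumes BaseAlerter/BaseAlerterStepParameters live in zenml.alerter.base_alerter.
from typing import Optional

from zenml.alerter.base_alerter import BaseAlerter, BaseAlerterStepParameters


class StdoutAlerter(BaseAlerter):
    """Hypothetical alerter that posts to stdout and asks via stdin."""

    def post(self, message: str, params: Optional[BaseAlerterStepParameters] = None) -> bool:
        # "Post" the message by printing it; a real flavor would call a chat API.
        print(f"[ALERT] {message}")
        return True

    def ask(self, question: str, params: Optional[BaseAlerterStepParameters] = None) -> bool:
        # Block until a human answers; return True only on explicit approval.
        answer = input(f"[APPROVAL] {question} (y/n): ")
        return answer.strip().lower() == "y"
```

Paired with a matching config and flavor class as shown above, this implementation would be invoked only when the component is actually used in a stack.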
================================================== @@ -1325,29 +1355,32 @@ This design enables registration of flavors and components even if their depende ### Discord Alerter Overview -The `DiscordAlerter` allows sending automated messages to a Discord channel from ZenML pipelines. It includes two main steps: +The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. It includes two key steps: 1. **`discord_alerter_post_step`**: Sends a message to a Discord channel and returns success status. -2. **`discord_alerter_ask_step`**: Sends a message and waits for user feedback, returning `True` only if the user approves the action in Discord. +2. **`discord_alerter_ask_step`**: Sends a message and waits for user feedback, returning `True` only if the user approves. #### Use Cases - Immediate notifications for failures (e.g., model performance issues). -- Human-in-the-loop integration for critical pipeline steps (e.g., model deployment). +- Human-in-the-loop integration for critical steps (e.g., model deployment). ### Requirements -To use `DiscordAlerter`, install the Discord integration: + +To use the `DiscordAlerter`, install the Discord integration: ```shell zenml integration install discord -y ``` ### Setting Up a Discord Bot + 1. Create a Discord workspace and channel. -2. Create a Discord App with a bot. -3. Obtain the bot token (reset if necessary) and ensure it has permissions to send/receive messages. +2. Create a Discord App with a bot in your server. +3. Copy the bot token (reset if necessary) and ensure it has permissions to send and receive messages. + +### Registering a Discord Alerter -### Registering the Discord Alerter -Register the `discord` alerter in ZenML: +Register the `discord` alerter with the following command: ```shell zenml alerter register discord_alerter \ @@ -1356,152 +1389,171 @@ zenml alerter register discord_alerter \ --default_discord_channel_id=<DISCORD_CHANNEL_ID> ``` -Add the alerter to your stack: +Add it to your stack: ```shell zenml stack register ... -al discord_alerter ``` -**Parameters:** -- **`DISCORD_CHANNEL_ID`**: Copy from the channel settings (enable Developer Mode if not visible). -- **`DISCORD_TOKEN`**: Found during bot setup. +#### Channel ID and Token +- **DISCORD_CHANNEL_ID**: Right-click the text channel and select 'Copy Channel ID' (enable Developer Mode if not visible). +- **DISCORD_TOKEN**: Find instructions for setting up the bot and inviting it [here](https://discordpy.readthedocs.io/en/latest/discord.html). -**Permissions Required:** +**Permissions Needed**: - Read Messages/View Channels - Send Messages -- Send Messages in Threads ### Using the Discord Alerter -Import and use the steps in your pipeline: + +Import and use the steps in your pipeline. A formatter step is typically needed to generate the message string. Example usage: ```python from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step from zenml import step, pipeline @step -def my_formatter_step(artifact_to_be_communicated) -> str: - return f"Here is my artifact {artifact_to_be_communicated}!" +def my_formatter_step(artifact) -> str: + return f"Here is my artifact {artifact}!" @pipeline def my_pipeline(...): ... - artifact_to_be_communicated = ... - message = my_formatter_step(artifact_to_be_communicated) + artifact = ... + message = my_formatter_step(artifact) approved = discord_alerter_ask_step(message) - ... # Behavior based on `approved` + ... 
# Conditional behavior based on `approved` if __name__ == "__main__": my_pipeline() ``` -For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). +For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). ================================================== === File: docs/book/component-guide/alerters/slack.md === -### Slack Alerter Documentation Summary +# Slack Alerter Documentation Summary -**Overview**: The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. +The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. -#### Setup Instructions +## Setup Instructions -1. **Create a Slack App**: - - Set up a Slack workspace and create a Slack App with a bot. - - Grant the following permissions in the `OAuth & Permissions` tab: - - `chat:write` - - `channels:read` - - `channels:history` - - Invite the app to the desired channel using `/invite` or through channel settings. +### Create a Slack App +1. **Create a Slack App** in your workspace via [Slack API](https://api.slack.com/apps?new_app=1). +2. **Set Permissions** in the `OAuth & Permissions` tab: + - `chat:write` + - `channels:read` + - `channels:history` +3. **Invite the App** to your channel using `/invite` or through channel settings. -2. **Registering a Slack Alerter in ZenML**: - - Install the Slack integration: - ```shell - zenml integration install slack -y - ``` - - Create a secret and register the alerter: - ```shell - zenml secret create slack_token --oauth_token=<SLACK_TOKEN> - zenml alerter register slack_alerter \ - --flavor=slack \ - --slack_token={{slack_token.oauth_token}} \ - --slack_channel_id=<SLACK_CHANNEL_ID> - ``` - - Add the `slack_alerter` to your stack: - ```shell - zenml stack register ... -al slack_alerter --set - ``` +### Registering a Slack Alerter in ZenML +1. **Install the Slack Integration**: + ```shell + zenml integration install slack -y + ``` +2. **Create a Secret and Register the Alerter**: + ```shell + zenml secret create slack_token --oauth_token=<SLACK_TOKEN> + zenml alerter register slack_alerter \ + --flavor=slack \ + --slack_token={{slack_token.oauth_token}} \ + --slack_channel_id=<SLACK_CHANNEL_ID> + ``` + - `<SLACK_CHANNEL_ID>`: Found in channel details (starts with `C...`). + - `<SLACK_TOKEN>`: Found in app settings under `OAuth & Permissions`. -#### Usage +3. **Add Alerter to Stack**: + ```shell + zenml stack register ... -al slack_alerter --set + ``` -1. 
**Direct Methods**: - - Use `post()` and `ask()` methods from the active alerter: - ```python - from zenml import pipeline, step - from zenml.client import Client +## Usage in ZenML - @step - def post_statement() -> None: - Client().active_stack.alerter.post("Step finished!") +### Direct Methods: `post()` and `ask()` +```python +from zenml import pipeline, step +from zenml.client import Client - @step - def ask_question() -> bool: - return Client().active_stack.alerter.ask("Should I continue?") +@step +def post_statement() -> None: + Client().active_stack.alerter.post("Step finished!") - @pipeline(enable_cache=False) - def my_pipeline(): - post_statement() - ask_question() +@step +def ask_question() -> bool: + return Client().active_stack.alerter.ask("Should I continue?") - if __name__ == "__main__": - my_pipeline() - ``` +@pipeline(enable_cache=False) +def my_pipeline(): + post_statement() + ask_question() -2. **Custom Settings**: - - Modify channel ID at runtime: - ```python - @step(settings={"alerter": {"slack_channel_id": <SLACK_CHANNEL_ID>}}) - def post_statement() -> None: - Client().active_stack.alerter.post("Posting to another channel!") - ``` +if __name__ == "__main__": + my_pipeline() +``` +*Note: `ask()` defaults to `False` on error.* -3. **Using `SlackAlerterParameters` and `SlackAlerterPayload`**: - - Customize messages with additional information: - ```python - from zenml import pipeline, step, get_step_context - from zenml.client import Client - from zenml.integrations.slack.alerters.slack_alerter import ( - SlackAlerterParameters, SlackAlerterPayload - ) +### Custom Settings +```python +@step(settings={"alerter": {"slack_channel_id": <SLACK_CHANNEL_ID>}}) +def post_statement() -> None: + Client().active_stack.alerter.post("Posting to another channel!") +``` - @step - def post_statement() -> None: - params = SlackAlerterParameters( - payload=SlackAlerterPayload( - pipeline_name=get_step_context().pipeline.name, - step_name=get_step_context().step_run.name, - stack_name=Client().active_stack.name, - ), - ) - Client().active_stack.alerter.post("Message with pipeline info.", params=params) - ``` +### Using `SlackAlerterParameters` and `SlackAlerterPayload` +```python +from zenml import pipeline, step, get_step_context +from zenml.client import Client +from zenml.integrations.slack.alerters.slack_alerter import ( + SlackAlerterParameters, SlackAlerterPayload +) -4. **Predefined Steps**: - - Use built-in steps for simplicity: - ```python - from zenml import pipeline - from zenml.integrations.slack.steps import slack_alerter_post_step, slack_alerter_ask_step +@step +def post_statement() -> None: + params = SlackAlerterParameters( + payload=SlackAlerterPayload( + pipeline_name=get_step_context().pipeline.name, + step_name=get_step_context().step_run.name, + stack_name=Client().active_stack.name, + ), + ) + Client().active_stack.alerter.post( + message="Message with pipeline info.", + params=params + ) + +@step +def ask_question() -> bool: + message = ":tada: Should I continue? (Y/N)" + blocks = [{"type": "header", "text": {"type": "plain_text", "text": message, "emoji": True}}] + params = SlackAlerterParameters(blocks=blocks, approve_msg_options=["Y"], disapprove_msg_options=["N"]) + return Client().active_stack.alerter.ask(question=message, params=params) - @pipeline(enable_cache=False) - def my_pipeline(): - slack_alerter_post_step("Posting a statement.") - slack_alerter_ask_step("Asking a question. 
Should I continue?") +@pipeline(enable_cache=False) +def my_pipeline(): + post_statement() + ask_question() - if __name__ == "__main__": - my_pipeline() - ``` +if __name__ == "__main__": + my_pipeline() +``` + +### Predefined Steps +```python +from zenml import pipeline +from zenml.integrations.slack.steps.slack_alerter_post_step import slack_alerter_post_step +from zenml.integrations.slack.steps.slack_alerter_ask_step import slack_alerter_ask_step + +@pipeline(enable_cache=False) +def my_pipeline(): + slack_alerter_post_step("Posting a statement.") + slack_alerter_ask_step("Asking a question. Should I continue?") + +if __name__ == "__main__": + my_pipeline() +``` -For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). +For further details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). ================================================== @@ -1509,38 +1561,32 @@ For detailed attributes and configurations, refer to the [SDK Docs](https://sdkd ### Alerters Overview -**Alerters** enable sending messages to chat services (e.g., Slack, Discord, Mattermost) from ZenML pipelines, facilitating immediate notifications for failures, monitoring, and human-in-the-loop ML. +**Alerters** enable sending messages to chat services (e.g., Slack, Discord) from pipelines for notifications on failures, monitoring, and human-in-the-loop ML. -### Available Alerter Integrations +#### Available Alerter Integrations - **SlackAlerter**: Interacts with Slack channels. - **DiscordAlerter**: Interacts with Discord channels. - **Custom Implementation**: Extend the alerter abstraction for other chat services. -| Alerter | Flavor | Integration | Notes | -|---------|----------|-------------|---------------------------------------------| -| Slack | `slack` | `slack` | Interacts with a Slack channel | -| Discord | `discord`| `discord` | Interacts with a Discord channel | -| Custom | _custom_ | | Provide your own implementation | - To view available alerter flavors, use: ```shell zenml alerter flavor list ``` -### Using Alerters in ZenML +#### Using Alerters with ZenML -1. Register an alerter component: +1. **Register an Alerter**: ```shell zenml alerter register <ALERTER_NAME> ... ``` -2. Add the alerter to your stack: +2. **Add to Stack**: ```shell zenml stack register ... -al <ALERTER_NAME> ``` -3. Import and use the standard steps from the respective integration in your pipelines. +3. **Import and Use Standard Steps**: After registration, import the standard steps from the respective integration for use in your pipelines. ================================================== @@ -1548,89 +1594,103 @@ zenml alerter flavor list ### Azure Container Registry Overview -The Azure Container Registry (ACR) is integrated with ZenML for storing container images. It is suitable for scenarios where components of your stack need to pull or push images, and you have access to Azure. +The Azure Container Registry (ACR) is integrated with ZenML for storing container images. It is suitable for scenarios where components of your stack need to pull or push container images and when you have access to Azure. ### Deployment Steps -1. **Create Registry**: - - Navigate to [Azure Portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry). 
- - Select subscription, resource group, location, and registry name, then click `Review + Create`. -2. **Registry URI Format**: - ``` - <REGISTRY_NAME>.azurecr.io - ``` - - Example: `zenmlregistry.azurecr.io` +1. **Create ACR**: + - Go to [Azure Portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry). + - Select a subscription, resource group, location, and registry name. + - Click `Review + Create`. -3. **Find Registry URI**: - - Search for `container registries` in Azure Portal and select your registry. +2. **Find Registry URI**: + - Format: `<REGISTRY_NAME>.azurecr.io` + - Access via Azure Portal: Search for `container registries`, select your registry, and use the name to construct the URI. -### Usage Requirements -- **Docker**: Must be installed and running. -- **Registry URI**: Obtainable from the previous section. +### Usage + +To use the ACR, ensure you have: +- Docker installed and running. +- The registry URI from the previous step. -### Registering the Container Registry +**Register the ACR**: ```shell zenml container-registry register <NAME> --flavor=azure --uri=<REGISTRY_URI> zenml stack update -c <NAME> ``` ### Authentication Methods -Authentication is essential for using ACR in pipelines. -#### Local Authentication (Quick Start) -- Requires Azure CLI installed. -- Log in to the registry: -```shell -az acr login --name=<REGISTRY_NAME> -``` -- **Note**: Local authentication is not portable across environments. +Authentication is necessary for using ACR in pipelines: -#### Azure Service Connector (Recommended) -- Provides auto-configuration and enhanced security. -- Register a connector: -```sh -zenml service-connector register --type azure -i -``` -- Non-interactive registration example: -```sh -zenml service-connector register <CONNECTOR_NAME> --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type docker-registry --resource-id <REGISTRY_URI> -``` +1. **Local Authentication** (quick setup): + - Requires Azure CLI installed. + - Log in using: + ```shell + az acr login --name=<REGISTRY_NAME> + ``` + - **Note**: Not portable across environments. + +2. **Azure Service Connector** (recommended): + - Provides auto-configuration and security for Azure resources. + - Register using: + ```sh + zenml service-connector register --type azure -i + ``` + - Non-interactive setup with Service Principal: + ```sh + zenml service-connector register <CONNECTOR_NAME> --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type docker-registry --resource-id <REGISTRY_URI> + ``` ### Connecting ACR to Service Connector + +After setting up a Service Connector, register and connect the ACR: ```sh +zenml container-registry register <CONTAINER_REGISTRY_NAME> -f azure --uri=<REGISTRY_URL> zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i ``` -- Non-interactive version: +Non-interactive connection: ```sh zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID> ``` -### Using ACR in ZenML Stack +### Final Steps + +To use the ACR in a ZenML Stack: ```sh zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... 
--set ``` -### Local Docker Client Authentication -To temporarily authenticate your local Docker client: +**Local Docker Authentication**: +If needed, temporarily authenticate your local Docker client: ```sh zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry --resource-id <CONTAINER_REGISTRY_URI> ``` ### Additional Resources -For more details on configurable attributes of the Azure container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry). + +For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/custom.md === -### Develop a Custom Container Registry +### Developing a Custom Container Registry in ZenML #### Overview -Before creating a custom container registry, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational concepts. +To develop a custom container registry in ZenML, it's essential to understand the base abstractions and the process involved in creating and registering a new flavor. #### Base Abstraction -ZenML's container registries have a basic abstraction with a configuration that includes a `uri` and a non-abstract method `prepare_image_push` for image validation. +ZenML's container registries are defined by a basic abstraction that includes: +- **Base Configuration**: Contains a `uri`. +- **Base Class**: Implements a non-abstract `prepare_image_push` method for validation. + +**Key Classes:** +1. `BaseContainerRegistryConfig`: Holds configuration with a `uri`. +2. `BaseContainerRegistry`: Contains methods for preparing and pushing Docker images. +3. `BaseContainerRegistryFlavor`: Defines the structure for flavors, including properties for `name`, `type`, `config_class`, and `implementation_class`. 
+**Code Snippet:** ```python from abc import abstractmethod from typing import Type @@ -1640,111 +1700,108 @@ from zenml.stack.authentication_mixin import AuthenticationConfigMixin, Authenti from zenml.utils import docker_utils class BaseContainerRegistryConfig(AuthenticationConfigMixin): - """Base config for a container registry.""" uri: str class BaseContainerRegistry(AuthenticationMixin): - """Base class for all ZenML container registries.""" def prepare_image_push(self, image_name: str) -> None: - """Prepare for image push.""" - + pass + def push_image(self, image_name: str) -> str: - """Push a Docker image.""" if not image_name.startswith(self.config.uri): - raise ValueError(f"Image `{image_name}` does not belong to registry `{self.config.uri}`.") + raise ValueError(f"Docker image `{image_name}` does not belong to registry `{self.config.uri}`.") self.prepare_image_push(image_name) return docker_utils.push_image(image_name) class BaseContainerRegistryFlavor(Flavor): - """Base flavor for container registries.""" @property @abstractmethod def name(self) -> str: - """Returns the flavor name.""" - + pass + @property def type(self) -> StackComponentType: - """Returns the flavor type.""" return StackComponentType.CONTAINER_REGISTRY @property def config_class(self) -> Type[BaseContainerRegistryConfig]: - """Config class for this flavor.""" return BaseContainerRegistryConfig @property def implementation_class(self) -> Type[BaseContainerRegistry]: - """Implementation class.""" return BaseContainerRegistry ``` -#### Building Your Own Container Registry -To create a custom flavor for a container registry: -1. Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any pre-push validation. -2. If additional configuration is needed, inherit from `BaseContainerRegistryConfig`. -3. Combine both by inheriting from `BaseContainerRegistryFlavor`. - -Register your implementation via the CLI: +#### Steps to Build Your Custom Container Registry +1. **Create a Custom Class**: Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for any necessary validations. +2. **Define Configuration**: Create a class inheriting from `BaseContainerRegistryConfig` for additional settings. +3. **Combine Implementation and Configuration**: Inherit from `BaseContainerRegistryFlavor`. +**Registering the Flavor**: +Use the CLI to register your custom flavor: ```shell zenml container-registry flavor register <path.to.MyContainerRegistryFlavor> ``` - -For example: - +Example: ```shell zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor ``` -#### Important Notes -- Ensure ZenML is initialized at the root of your repository for proper flavor resolution. -- After registration, verify the new flavor with: +**Important Note**: Initialize ZenML at the root of your repository to ensure proper path resolution. +#### Verifying Registration +To confirm your flavor is registered, list available flavors: ```shell zenml container-registry flavor list ``` #### Workflow Integration -- The **CustomContainerRegistryFlavor** is used during flavor creation. -- The **CustomContainerRegistryConfig** is utilized for validating stack component registration. -- The **CustomContainerRegistry** is invoked when the component is in use, allowing separation of configuration and implementation. +- **CustomContainerRegistryFlavor**: Used during flavor creation. +- **CustomContainerRegistryConfig**: Validates user input during registration. 
+- **CustomContainerRegistry**: Engaged when the component is utilized. -This design enables registration of flavors and components independently of their implementation dependencies. +This design separates flavor configuration from implementation, allowing for registration without local dependencies on the implementation. ================================================== === File: docs/book/component-guide/container-registries/dockerhub.md === -### DockerHub Container Registry in ZenML +### DockerHub Container Registry Overview -**Overview:** -DockerHub is a built-in container registry in ZenML for storing container images. +**DockerHub** is a built-in container registry in ZenML for storing container images. + +#### When to Use DockerHub +- If components of your stack require pulling or pushing container images. +- If you have a DockerHub account. -**When to Use:** -- If components of your stack need to pull/push container images. -- If you have a DockerHub account. If not, consider other container registry options. +For alternatives, refer to other [container registry flavors](./container-registries.md#container-registry-flavors). -**Deployment Steps:** -1. Create a DockerHub account. -2. By default, images are published in a **public** repository. For **private** repositories, create one on DockerHub before running the pipeline. -3. The repository name is based on the remote orchestrator or step operator in your stack. +#### Deployment Steps +1. **Create a DockerHub Account**: Required to use the registry. +2. **Repository Types**: + - **Public**: Default for images built in ZenML. + - **Private**: Must create a private repository on DockerHub before running the pipeline. -**Finding the Registry URI:** -The URI format is: +#### Finding the Registry URI +The DockerHub registry URI can be in one of the following formats: ```shell <ACCOUNT_NAME> # or docker.io/<ACCOUNT_NAME> ``` -**Examples:** +**Examples**: - `zenml` - `my-username` - `docker.io/zenml` - `docker.io/my-username` -**Usage:** -1. Ensure Docker is installed and running. -2. Use the registry URI to register the container registry: +To determine your URI, use your DockerHub account name in the format `docker.io/<ACCOUNT_NAME>`. + +#### Using DockerHub in ZenML +Prerequisites: +- **Docker** installed and running. +- Registry URI obtained from the previous section. + +**Register the Container Registry**: ```shell zenml container-registry register <NAME> \ --flavor=dockerhub \ @@ -1753,81 +1810,94 @@ zenml container-registry register <NAME> \ # Update the active stack zenml stack update -c <NAME> ``` -3. Log in to DockerHub for image access: + +**Login to DockerHub**: ```shell docker login ``` -**Credentials:** Use your DockerHub account name and password or a personal access token. +Use your DockerHub account name and either your password or a [personal access token](https://docs.docker.com/docker-hub/access-tokens/). -**Further Information:** -For detailed attributes of the DockerHub container registry, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry). +For more details, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry). 
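If you need to authenticate without an interactive prompt (for example, in CI), the standard Docker CLI pattern below works with DockerHub. This is generic Docker tooling rather than a ZenML command, and the `DOCKERHUB_TOKEN` variable and account name are placeholders:

```shell
# Hypothetical CI snippet: log in to DockerHub non-interactively using a
# personal access token stored in an environment variable.
echo "$DOCKERHUB_TOKEN" | docker login --username <ACCOUNT_NAME> --password-stdin docker.io
```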
================================================== === File: docs/book/component-guide/container-registries/default.md === -### Summary of Default Container Registry Documentation +### Summary: Storing Container Images Locally with ZenML -#### Overview -The Default Container Registry in ZenML allows for local or remote container registry usage. It supports any URI format. +#### Default Container Registry +The Default Container Registry in ZenML supports any container registry URI format, ideal for local or unsupported remote registries. -#### When to Use -Use the Default Container Registry for a **local** registry or a remote registry not covered by other flavors. +#### Usage +- **When to Use**: For local container registries or unsupported remote registries. +- **Local Registry URI Format**: Use `localhost:<PORT>`, e.g., `localhost:5000`. -#### Local Registry URI Format -To specify a local registry, use: -```shell -localhost:<PORT> -# Examples: -localhost:5000 -localhost:8000 -localhost:9999 -``` +#### Setup Requirements +1. **Docker**: Must be installed and running. +2. **Registry URI**: Follow the local registry URI format. + +#### Registering the Container Registry +To register and use the Default Container Registry in your active stack: -#### Usage Steps -1. Ensure Docker is installed and running. -2. Register the container registry: ```shell zenml container-registry register <NAME> --flavor=default --uri=<REGISTRY_URI> zenml stack update -c <NAME> ``` -3. Set up authentication if using a private registry. #### Authentication Methods -- **Local Authentication**: Quick setup using Docker client credentials. - ```shell - docker login --username <USERNAME> --password-stdin <REGISTRY_URI> - ``` - *Note: Not portable across environments.* +- **Private Registries**: Configure authentication to log in. +- **Local Authentication**: Quick setup using local Docker credentials. Use: -- **Docker Service Connector (Recommended)**: Allows for better credential management. - - Register using: - ```sh - zenml service-connector register --type docker -i - ``` - - Non-interactive example: - ```sh - zenml service-connector register <CONNECTOR_NAME> --type docker --username=<USERNAME> --password=<PASSWORD_OR_TOKEN> - ``` +```shell +docker login --username <USERNAME> --password-stdin <REGISTRY_URI> +``` + +**Note**: Local authentication is not portable across environments. For portability, use a Docker Service Connector. + +#### Docker Service Connector (Recommended) +For private registries, leverage the Docker Service Connector for authentication: + +1. **Register**: + +```sh +zenml service-connector register --type docker -i +``` + +2. **Non-Interactive Registration**: + +```sh +zenml service-connector register <CONNECTOR_NAME> --type docker --username=<USERNAME> --password=<PASSWORD_OR_TOKEN> +``` + +3. **List Resources**: + +```sh +zenml service-connector list-resources --connector-type docker --resource-id <REGISTRY_URI> +``` + +4. **Connect Container Registry**: -#### Connecting to the Container Registry -1. Register the container registry: ```sh zenml container-registry register <CONTAINER_REGISTRY_NAME> -f default --uri=<REGISTRY_URL> +zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i ``` -2. 
Connect via Docker Service Connector: + +**Non-Interactive Connection**: + ```sh zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID> ``` #### Final Steps To use the Default Container Registry in a ZenML Stack: + ```sh zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set ``` -#### Local Docker Client Authentication -If the Default Container Registry is linked to a Service Connector, you can temporarily authenticate your local Docker client: +#### Local Client Authentication +After connecting to a Service Connector, local Docker client authentication may be needed: + ```sh zenml service-connector login <CONNECTOR_NAME> ``` @@ -1840,107 +1910,79 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr ### Google Cloud Container Registry Overview -The Google Cloud Container Registry (GCP Container Registry) is integrated with ZenML and utilizes the Google Artifact Registry. **Important:** Google Container Registry is being phased out in favor of Artifact Registry, which will fully replace it by March 18, 2025. - -### When to Use +The Google Cloud Container Registry (GCP) utilizes the Google Artifact Registry, which is replacing the Google Container Registry (GCR). Users are encouraged to transition to Artifact Registry as GCR will be shut down after March 18, 2025. -Use the GCP Container Registry if: -- Your stack components require pulling or pushing container images. -- You have access to GCP. +### When to Use GCP Container Registry +- If components of your stack need to pull or push container images. +- If you have access to GCP. ### Deployment Steps - -1. **Enable Google Artifact Registry**: [Enable here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com). -2. **Create a Docker Repository**: [Create here](https://console.cloud.google.com/artifacts). +1. Enable Google Artifact Registry [here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com). +2. Create a `Docker` repository [here](https://console.cloud.google.com/artifacts). ### Registry URI Format - -The GCP Container Registry URI format is: - +The URI format for the GCP container registry is: ```shell <REGION>-docker.pkg.dev/<PROJECT_ID>/<REPOSITORY_NAME> ``` - **Examples:** - `europe-west1-docker.pkg.dev/zenml/my-repo` - `southamerica-east1-docker.pkg.dev/zenml/zenml-test` -To find your registry URI, select the repository in the Google Cloud Console and copy the URL. - -### Using the GCP Container Registry - +### Using GCP Container Registry Prerequisites: -- **Docker** installed and running. +- Docker installed and running. - Obtain the registry URI as described above. To register the container registry: - ```shell zenml container-registry register <NAME> --flavor=gcp --uri=<REGISTRY_URI> zenml stack update -c <NAME> ``` ### Authentication Methods +Authentication is required to use the GCP Container Registry: -Authentication is required to use the GCP Container Registry. The recommended method is through a **GCP Service Connector** for better security and convenience. Alternatively, you can use **Local Authentication** for quick local setups. - -#### Local Authentication - -1. Install and configure the GCP CLI. -2. Configure Docker for Google Container Registry: - +#### Local Authentication (Quick Start) +- Install and set up GCP CLI. 
+- Configure Docker for GCR:
```shell
gcloud auth configure-docker
```
-
-For Google Artifact Registry:
-
+- For Artifact Registry:
```shell
gcloud auth configure-docker <REGION>-docker.pkg.dev
```
-
**Note:** Local authentication is not portable across environments.

#### GCP Service Connector (Recommended)
-
-Register a GCP Service Connector:
-
+For better security and convenience, use a GCP Service Connector:
```sh
zenml service-connector register --type gcp -i
```
-
-Or non-interactively:
-
+To auto-configure:
```sh
zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type docker-registry --auto-configure
```

-### Connecting GCP Container Registry to Service Connector
-
-To connect the GCP Container Registry to a Service Connector:
-
+### Connecting GCP Container Registry
+To connect the GCP Container Registry to the remote registry through a Service Connector:
```sh
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f gcp --uri=<REGISTRY_URL>
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
```
-
For non-interactive connection:
-
```sh
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
```

-### Using the GCP Container Registry in ZenML Stack
-
+### Using the Container Registry in ZenML Stack
To register and set a stack with the new container registry:
-
```sh
zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set
```

-### Additional Resources
-
-For more detailed configurations and attributes of the GCP container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry).
+For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry).

==================================================

@@ -1948,90 +1990,90 @@ For more detailed configurations and attributes of the GCP container registry, r

### Amazon Elastic Container Registry (ECR) Overview

-Amazon ECR is a container registry service integrated with ZenML's AWS integration for storing container images. Use it when components of your stack need to pull or push images and you have access to AWS ECR.
+Amazon ECR is the container registry used with the ZenML `aws` integration to store container images.

-### Deployment Steps
+#### When to Use
+Use AWS ECR if:
+- Your stack components need to pull or push container images.
+- You have access to AWS ECR.

-1. **Create a Repository**:
-   - Go to the [ECR website](https://console.aws.amazon.com/ecr).
+#### Deployment Steps
+1. Create an AWS account to automatically activate ECR.
+2. Create a repository:
+   - Visit the [ECR website](https://console.aws.amazon.com/ecr).
   - Select the correct region.
   - Click on `Create repository` and create a private repository.

-2. **URI Format**:
-   The ECR URI format is:
-   ```
-   <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
-   ```
-   Example:
-   ```
-   123456789.dkr.ecr.eu-west-2.amazonaws.com
-   ```
+#### URI Format
+The ECR URI format is:
+```
+<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
+```
+Example:
+```
+123456789.dkr.ecr.eu-west-2.amazonaws.com
+```
+To determine your URI:
+- Get your `Account ID` from the AWS console.
+- Choose a region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints).

-3. **Get Your URI**:
-   - Find your `Account ID` in the AWS console.
- - Choose a region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints). - - Construct your URI using the format above. +#### Usage +Prerequisites: +- Install the ZenML `aws` integration: + ```shell + zenml integration install aws + ``` +- Install and run Docker. +- Obtain the registry URI. -### Using the AWS Container Registry +To register the container registry: +```shell +zenml container-registry register <NAME> --flavor=aws --uri=<REGISTRY_URI> +zenml stack update -c <NAME> +``` -1. **Install ZenML AWS Integration**: +#### Authentication Methods +Authentication is required to use AWS ECR: +1. **Local Authentication** (quick setup): + - Install and configure AWS CLI. + - Log in to the ECR: ```shell - zenml integration install aws + aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <REGISTRY_URI> ``` + **Note**: This method is not portable across environments. -2. **Install Docker**. - -3. **Register the Container Registry**: +2. **AWS Service Connector** (recommended): + - Register a service connector: ```shell - zenml container-registry register <NAME> --flavor=aws --uri=<REGISTRY_URI> - zenml stack update -c <NAME> - ``` - -### Authentication Methods - -- **Local Authentication** (quick setup): - - Requires AWS CLI installed and configured. - - Log in to the container registry: - ```shell - aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <REGISTRY_URI> - ``` - -- **AWS Service Connector** (recommended): - - Register a service connector: - ```sh - zenml service-connector register --type aws -i - ``` - - Auto-configure an AWS Service Connector: - ```sh - zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type docker-registry --auto-configure - ``` - -### Connecting the AWS Container Registry - -1. **Connect to ECR**: - ```sh - zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i - ``` - Or non-interactively: - ```sh - zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID> + zenml service-connector register --type aws -i ``` - -2. **Register a ZenML Stack**: - ```sh - zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set + - Auto-configure a connector: + ```shell + zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type docker-registry --auto-configure ``` -### Local Docker Client Authentication +#### Connecting to ECR +After setting up a service connector, register the AWS container registry: +```shell +zenml container-registry register <CONTAINER_REGISTRY_NAME> -f aws --uri=<REGISTRY_URL> +zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i +``` +For non-interactive connection: +```shell +zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID> +``` -To manually interact with the remote registry: -```sh +#### Final Steps +Use the AWS Container Registry in a ZenML Stack: +```shell +zenml stack register <STACK_NAME> -c <CONTAINER_REGISTRY_NAME> ... --set +``` +To authenticate your local Docker client temporarily: +```shell zenml service-connector login <CONNECTOR_NAME> --resource-type docker-registry ``` -### Additional Resources - -For more details on configurable attributes of the AWS container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry). 
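Taken together, the commands above can be sketched as one end-to-end setup. This is illustrative only: the connector name `aws_ecr`, the registry name `ecr_registry`, the stack name `aws_stack`, and the URI are hypothetical placeholders, and the `...` stands for the remaining stack components you would add:

```shell
# Register an AWS Service Connector, auto-configured from local AWS credentials
zenml service-connector register aws_ecr --type aws --resource-type docker-registry --auto-configure

# Register the ECR container registry and attach the connector
zenml container-registry register ecr_registry -f aws --uri=123456789.dkr.ecr.eu-west-2.amazonaws.com
zenml container-registry connect ecr_registry --connector aws_ecr

# Use the registry in a stack and make it active
zenml stack register aws_stack -c ecr_registry ... --set
```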
+For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry). ================================================== @@ -2039,17 +2081,18 @@ For more details on configurable attributes of the AWS container registry, refer ### GitHub Container Registry Overview -The GitHub Container Registry is integrated with ZenML for storing container images. It is ideal for projects using GitHub where components need to pull or push container images. +The GitHub Container Registry, integrated with ZenML, allows for the storage of container images. #### When to Use -- If components of your stack need to interact with container images. -- If your projects are hosted on GitHub. +Utilize the GitHub Container Registry if: +- Your stack components require pulling or pushing container images. +- You are using GitHub for your projects. For alternatives, refer to other container registry flavors. #### Deployment -The GitHub container registry is enabled by default upon creating a GitHub account. +The registry is enabled by default upon creating a GitHub account. #### Registry URI Format -The URI follows this format: +The URI follows this structure: ```shell ghcr.io/<USER_OR_ORGANIZATION_NAME> ``` @@ -2058,51 +2101,56 @@ ghcr.io/<USER_OR_ORGANIZATION_NAME> - `ghcr.io/my-username` - `ghcr.io/my-organization` -To find your registry URI, replace `<USER_OR_ORGANIZATION_NAME>` with your GitHub username or organization name. +To determine your registry URI, replace `<USER_OR_ORGANIZATION_NAME>` with your GitHub username or organization name. #### Usage Requirements -- **Docker**: Must be installed and running. -- **Registry URI**: Refer to the URI format above. -- **Docker Client Configuration**: Authenticate using a personal access token. Follow the [authentication guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry). +To use the GitHub Container Registry, ensure you have: +- Docker installed and running. +- The correct registry URI (see format above). +- A configured Docker client for pulling and pushing images, including creating a personal access token for authentication. #### Registering the Container Registry -To register and update your active stack, use: +To register and use the GitHub container registry in your active stack: ```shell -zenml container-registry register <NAME> --flavor=github --uri=<REGISTRY_URI> +zenml container-registry register <NAME> \ + --flavor=github \ + --uri=<REGISTRY_URI> + +# Update the active stack zenml stack update -c <NAME> ``` -For detailed attributes and configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry). +For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/container-registries.md === -# Container Registries +### Container Registries Container registries are crucial for storing Docker images used in remote MLOps stacks, enabling the containerization of machine learning pipeline code for isolated execution. 
-### When to Use +#### When to Use -A container registry is necessary when components of your stack need to push or pull container images, particularly for ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation of the specific component for registry requirements. +A container registry is necessary when components of your stack need to push or pull container images, particularly for ZenML's remote orchestrators, step operators, and model deployers. Check the documentation of the specific component to determine if a container registry is required. -### Container Registry Flavors +#### Container Registry Flavors ZenML offers several container registry flavors: -- **Default Flavor**: Accepts any URI without validation, suitable for local or unsupported remote registries. -- **Specific Flavors**: Validate the URI and ensure push capabilities. +- **Default Flavor**: Accepts any URI without validation; suitable for local or unsupported remote registries. +- **Specific Flavors**: Validates URIs and performs checks for push permissions. -**Recommendation**: Use specific container registry flavors for enhanced URI validation. +**Recommendation**: Use specific flavors for enhanced URI validation. -| Container Registry | Flavor | Integration | URI Example | -|--------------------|--------|-------------|-------------| -| [DefaultContainerRegistry](default.md) | `default` | _built-in_ | - | -| [DockerHubContainerRegistry](dockerhub.md) | `dockerhub` | _built-in_ | docker.io/zenml | -| [GCPContainerRegistry](gcp.md) | `gcp` | _built-in_ | gcr.io/zenml | -| [AzureContainerRegistry](azure.md) | `azure` | _built-in_ | zenml.azurecr.io | -| [GitHubContainerRegistry](github.md) | `github` | _built-in_ | ghcr.io/zenml | -| [AWSContainerRegistry](aws.md) | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com | +| Container Registry | Flavor | Integration | URI Example | +|-----------------------------------|----------|--------------|-----------------------------------------| +| [DefaultContainerRegistry](default.md) | `default` | _built-in_ | - | +| [DockerHubContainerRegistry](dockerhub.md) | `dockerhub` | _built-in_ | docker.io/zenml | +| [GCPContainerRegistry](gcp.md) | `gcp` | _built-in_ | gcr.io/zenml | +| [AzureContainerRegistry](azure.md) | `azure` | _built-in_ | zenml.azurecr.io | +| [GitHubContainerRegistry](github.md) | `github` | _built-in_ | ghcr.io/zenml | +| [AWSContainerRegistry](aws.md) | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com | To view available container registry flavors, use the command: @@ -2116,11 +2164,13 @@ zenml container-registry flavor list ### Develop a Custom Feature Store -**Overview**: Feature stores enable data teams to serve data through an offline store and an online low-latency store, maintaining synchronization between them. They also provide a centralized registry for features and feature schemas for team or organizational use. +**Overview**: Feature stores enable data teams to serve data through both an offline store and an online low-latency store, maintaining synchronization between them. They also provide a centralized registry for features and feature schemas for team or organizational use. + +**Prerequisites**: Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) to understand ZenML's component flavor concepts. 
-**Important Note**: The base abstraction for feature stores is currently in development, and extension is not possible at this time. Users should refer to the list of available feature stores for integration into their stack.
+**Current Status**: The base abstraction for feature stores is still under development, so feature stores cannot be extended yet. In the meantime, refer to the list of available feature stores for options you can use in your stack.

-**Recommendation**: Before implementing a custom feature store, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts.
+

==================================================

@@ -2128,63 +2178,59 @@ zenml container-registry flavor list

# Feature Stores

-Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between them. They provide a centralized registry for features and feature schemas, addressing the issue of train-serve skew, where training and serving data diverge.
+Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between the two. They provide a centralized registry for features and feature schemas, facilitating access for teams and organizations. A feature store addresses the issue of train-serve skew, where training and serving data diverge.

### When to Use It
-Feature stores are optional in the ZenML Stack and should be utilized for:
+Feature stores are optional components in the ZenML Stack, ideal for:
- Productionalizing new features
-- Reusing existing features across pipelines and models
+- Reusing existing features across multiple pipelines and models
- Ensuring consistency between training and serving data
- Providing a central registry of features and feature schemas

### Available Feature Stores
-ZenML integrates with various feature stores, notably:
-| Feature Store | Flavor | Integration | Notes |
-|-----------------------------|----------|-------------|------------------------------------------|
-| [FeastFeatureStore](feast.md) | `feast` | `feast` | Connects ZenML with existing Feast |
-| [Custom Implementation](custom.md) | _custom_ | | Allows for custom feature store solutions |
+ZenML integrates with various feature stores, including:
+| Feature Store | Flavor | Integration | Notes |
+|-----------------------------|---------|-------------|-----------------------------------------------|
+| [FeastFeatureStore](feast.md) | `feast` | `feast` | Connect ZenML with existing Feast |
+| [Custom Implementation](custom.md) | _custom_ | | Extend the feature store abstraction |

-To view available feature store flavors, use:
+To view available feature store flavors, use the command:
```shell
zenml feature-store flavor list
```

### How to Use It
-The feature store implementation in ZenML is based on the Feast integration. Refer to the [Feast documentation](feast.md#how-do-you-use-it) for usage details.
+The feature store implementation in ZenML is based on the Feast integration, following the usage guidelines outlined on the [Feast page](feast.md#how-do-you-use-it).
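Before turning to the Feast-specific page below, a minimal sketch of the general access pattern: a step resolves whichever feature store component is registered in the active stack. The step name here is illustrative; only the `Client().active_stack.feature_store` lookup is taken from the usage shown on the Feast page:

```python
from zenml import step
from zenml.client import Client

@step
def check_feature_store() -> str:
    # The feature store component is resolved from the active ZenML stack.
    feature_store = Client().active_stack.feature_store
    if feature_store is None:
        raise RuntimeError("No feature store is configured in the active stack.")
    return feature_store.name
```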
================================================== === File: docs/book/component-guide/feature-stores/feast.md === -### Summary of Feast Feature Store Documentation - -**Feast Overview** -- Feast (Feature Store) is a system for managing and serving machine learning features for production models. -- It supports low-latency online data serving for real-time predictions and offline data serving for batch scoring and model training. +### Feast Feature Store Overview -**Use Cases** -- Access offline/batch data for model training. -- Access online data during inference. - -**Deployment** -- Users must have a Feast feature store to connect with ZenML. If not, refer to the [Feast Documentation](https://docs.feast.dev/how-to-guides/feast-snowflake-gcp-aws/deploy-a-feature-store) for deployment. -- Install the Feast integration in ZenML: - -```shell -zenml integration install feast -``` +Feast (Feature Store) is a system for managing and serving machine learning features to production models. It supports low-latency online feature serving for real-time predictions and offline feature serving for batch scoring or model training. -- Register the feature store as a ZenML stack component: +### Use Cases +- **Training**: Access offline/batch data for model training. +- **Inference**: Access online data during model inference. -```shell -zenml feature-store register feast_store --flavor=feast --feast_repo="<PATH/TO/FEAST/REPO>" -zenml stack register ... -f feast_store -``` +### Deployment +To deploy Feast with ZenML: +1. Ensure you have a Feast feature store. If not, follow the [Feast Documentation](https://docs.feast.dev/how-to-guides/feast-snowflake-gcp-aws/deploy-a-feature-store). +2. Install the Feast integration in ZenML: + ```shell + zenml integration install feast + ``` +3. Register the feature store in your ZenML stack: + ```shell + zenml feature-store register feast_store --flavor=feast --feast_repo="<PATH/TO/FEAST/REPO>" + zenml stack register ... -f feast_store + ``` -**Usage** -- Online data retrieval is currently limited to local settings and not supported in deployed models. +### Usage +**Note**: Online data retrieval is supported locally but not in deployed models. -To retrieve features from a registered feature store, create a step: +To retrieve features from a registered feature store, create a step that interfaces with it: ```python from datetime import datetime @@ -2196,7 +2242,7 @@ from zenml.client import Client def get_historical_features(entity_dict, features, full_feature_names=False) -> pd.DataFrame: feature_store = Client().active_stack.feature_store if not feature_store: - raise DoesNotExistException("Feast feature store not available.") + raise DoesNotExistException("Feast feature store component not available.") entity_dict["event_timestamp"] = [datetime.fromisoformat(val) for val in entity_dict["event_timestamp"]] entity_df = pd.DataFrame.from_dict(entity_dict) @@ -2222,13 +2268,11 @@ features = [ @pipeline def my_pipeline(): my_features = get_historical_features(entity_dict, features) - ... ``` -**Note** -- ZenML uses Pydantic for serialization, which limits data types to basic types; thus, conversion is necessary for complex types like `DataFrame` or `datetime`. - -For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). 
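To make the serialization constraint noted below concrete, a hypothetical `entity_dict` would pass timestamps as ISO-formatted strings, which the step above converts back with `datetime.fromisoformat`. The `driver_id` entity column is purely illustrative and depends on your Feast repository:

```python
entity_dict = {
    "driver_id": [1001, 1002],  # illustrative entity column
    # Timestamps travel as ISO strings because step inputs are
    # Pydantic-serialized; the step converts them back to datetimes.
    "event_timestamp": [
        "2023-01-01T12:00:00",
        "2023-01-02T12:00:00",
    ],
}
```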
+### Important Notes +- ZenML uses Pydantic for input serialization, which limits data types to basic types (e.g., cannot handle `DataFrame` or `datetime` directly). +- For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). ================================================== @@ -2236,34 +2280,35 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr # Step Operators -The step operator allows execution of individual pipeline steps in specialized environments optimized for specific workloads, providing access to resources like GPUs or distributed processing frameworks (e.g., Spark). +The step operator allows execution of individual pipeline steps in specialized environments optimized for specific workloads, such as those requiring GPUs or distributed processing frameworks like [Spark](https://spark.apache.org/). ### Comparison to Orchestrators -The orchestrator is a mandatory component that executes all pipeline steps in order and manages features like scheduling. In contrast, the step operator executes individual steps in separate environments when the orchestrator's environment is insufficient. +The orchestrator is a mandatory component that executes all pipeline steps in order and manages scheduling. In contrast, the step operator executes individual steps in separate environments when the orchestrator's environment is insufficient. ### When to Use It -Use a step operator when pipeline steps require resources unavailable in the orchestrator's runtime. For example, if a step needs a GPU for training a model and the orchestrator runs on a cluster without GPU nodes, a step operator like SageMaker, Vertex, or AzureML should be used. +Use a step operator when pipeline steps require resources unavailable in the orchestrator's runtime environment. For example, if a step needs GPU resources for training a computer vision model, but the orchestrator (e.g., a [Kubeflow orchestrator](../orchestrators/kubeflow.md)) lacks GPU nodes, a step operator like [SageMaker](sagemaker.md), [Vertex](vertex.md), or [AzureML](azureml.md) should be used. 
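As a concrete sketch of this split (the step operator name `sagemaker_op` is a hypothetical placeholder for whatever you registered), only the resource-hungry step is routed to the step operator while the rest of the pipeline stays on the orchestrator:

```python
from zenml import pipeline, step
from zenml.config import ResourceSettings

@step(
    step_operator="sagemaker_op",  # hypothetical registered step operator
    settings={"resources": ResourceSettings(gpu_count=1, memory="16GB")},
)
def train_model() -> None:
    """GPU-heavy training runs in the step operator's environment."""
    ...

@step
def evaluate_model() -> None:
    """Lightweight evaluation stays in the orchestrator's environment."""
    ...

@pipeline
def training_pipeline():
    train_model()
    evaluate_model()
```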
### Step Operator Flavors -ZenML provides integrations for executing steps on major cloud providers: - -| Step Operator | Flavor | Integration | Notes | -|----------------|-------------|-------------|-------------------------------------| -| AzureML | `azureml` | `azure` | Executes steps using AzureML | -| Kubernetes | `kubernetes`| `kubernetes`| Executes steps using Kubernetes Pods| -| Modal | `modal` | `modal` | Executes steps using Modal | -| SageMaker | `sagemaker` | `aws` | Executes steps using SageMaker | -| Spark | `spark` | `spark` | Executes steps in a distributed manner using Spark on Kubernetes | -| Vertex | `vertex` | `gcp` | Executes steps using Vertex AI | -| Custom | _custom_ | | Allows custom step operator implementation | - -To view available flavors, run: +ZenML provides the following step operators for major cloud providers: + +| Step Operator | Flavor | Integration | Notes | +|---------------|-------------|-------------|-----------------------------------------| +| [AzureML](azureml.md) | `azureml` | `azure` | Executes steps using AzureML | +| [Kubernetes](kubernetes.md) | `kubernetes` | `kubernetes` | Executes steps using Kubernetes Pods | +| [Modal](modal.md) | `modal` | `modal` | Executes steps using Modal | +| [SageMaker](sagemaker.md) | `sagemaker` | `aws` | Executes steps using SageMaker | +| [Spark](spark-kubernetes.md) | `spark` | `spark` | Executes steps using Spark on Kubernetes | +| [Vertex](vertex.md) | `vertex` | `gcp` | Executes steps using Vertex AI | +| [Custom Implementation](custom.md) | _custom_ | | Allows custom step operator implementation | + +To view available flavors, use: + ```shell zenml step-operator flavor list ``` ### How to Use It -You don't need to interact directly with ZenML step operators. Simply specify the desired step operator in the `@step` decorator of your step: +You don't need to directly interact with ZenML step operators in your code. Simply specify the desired step operator in the `@step` decorator of your step, as shown below: ```python from zenml import step @@ -2274,10 +2319,10 @@ def my_step(...) -> ...: ``` #### Specifying Per-Step Resources -For additional hardware resources, specify them in your steps as described in the relevant documentation. +For additional hardware resources, specify them in your steps as described [here](../../how-to/pipeline-development/training-with-gpus/README.md). #### Enabling CUDA for GPU-Backed Hardware -To run steps on a GPU, follow the instructions to enable CUDA for full acceleration, which requires additional settings customization. +To run steps on a GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full GPU acceleration. ================================================== @@ -2289,12 +2334,7 @@ To run steps on a GPU, follow the instructions to enable CUDA for full accelerat To create a custom step operator in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction -The `BaseStepOperator` is the abstract class for implementing step operators, requiring subclasses to define the `launch` method. This method executes a synchronous job with the provided `entrypoint_command`. - -**Key Classes:** -- `BaseStepOperatorConfig`: Base configuration for step operators. -- `BaseStepOperator`: Abstract class requiring the `launch` method. 
-- `BaseStepOperatorFlavor`: Base class for step operator flavors, defining properties like `name`, `type`, and `config_class`. +The `BaseStepOperator` is the abstract class for running pipeline steps in a separate environment. It provides a basic interface: ```python from abc import ABC, abstractmethod @@ -2307,67 +2347,48 @@ class BaseStepOperatorConfig(StackComponentConfig): """Base config for step operators.""" class BaseStepOperator(StackComponent, ABC): - """Base class for all ZenML step operators.""" + """Base class for ZenML step operators.""" @abstractmethod def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None: - """Execute a step.""" - -class BaseStepOperatorFlavor(Flavor): - """Base class for all ZenML step operator flavors.""" - - @property - @abstractmethod - def name(self) -> str: - """Flavor name.""" - - @property - def type(self) -> StackComponentType: - return StackComponentType.STEP_OPERATOR - - @property - def config_class(self) -> Type[BaseStepOperatorConfig]: - return BaseStepOperatorConfig - - @property - @abstractmethod - def implementation_class(self) -> Type[BaseStepOperator]: - """Implementation class for this flavor.""" + """Executes a step synchronously.""" ``` -#### Steps to Create a Custom Step Operator -1. **Subclass `BaseStepOperator`**: Implement the `launch` method to set up the execution environment and run the entrypoint command. - - Ensure required `pip` dependencies are installed. - - Make source code available in the execution environment. - +#### Creating a Custom Step Operator +To build a custom flavor: + +1. **Subclass `BaseStepOperator`**: Implement the `launch` method to prepare the execution environment and run the entrypoint command. 2. **Handle Resources**: Manage resources defined in `info.config.resource_settings`. +3. **Create Configuration Class**: Inherit from `BaseStepOperatorConfig` to add custom parameters. +4. **Combine Implementation and Configuration**: Inherit from `BaseStepOperatorFlavor`, providing a name for the flavor. -3. **Configuration Class**: Create a class inheriting from `BaseStepOperatorConfig` for custom parameters. +Register the flavor using: -4. **Flavor Class**: Inherit from `BaseStepOperatorFlavor`, providing a name for the flavor. +```shell +zenml step-operator flavor register <path.to.MyStepOperatorFlavor> +``` -5. **Register the Flavor**: Use the CLI to register the flavor: - ```shell - zenml step-operator flavor register <path.to.MyStepOperatorFlavor> - ``` - Example: - ```shell - zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor - ``` +Example registration: + +```shell +zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor +``` #### Important Notes - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. -- After registration, list available flavors: - ```shell - zenml step-operator flavor list - ``` +- After registration, list available flavors with: + +```shell +zenml step-operator flavor list +``` -#### Additional Considerations -- The `CustomStepOperatorFlavor` is used during flavor creation, while `CustomStepOperatorConfig` validates user input during registration. -- The actual `CustomStepOperator` is utilized when the component is in use, allowing separation of configuration and implementation. +#### Workflow Interaction +- **CustomStepOperatorFlavor** is used during flavor creation. +- **CustomStepOperatorConfig** validates user input during registration. 
+- **CustomStepOperator** is utilized when the component is in use, allowing separation of configuration and implementation. -#### Enabling GPU Support -For GPU execution, follow the [GPU training instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for acceleration. +#### GPU Support +To run steps on GPU, follow the [GPU training instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for acceleration. ================================================== @@ -2376,72 +2397,54 @@ For GPU execution, follow the [GPU training instructions](../../how-to/pipeline- ### Executing Individual Steps on Spark #### Overview -The `spark` integration provides two step operators for executing tasks on Spark: -- **`SparkStepOperator`**: Base class for Spark-related step operators. -- **`KubernetesSparkStepOperator`**: Launches ZenML steps as Spark applications on Kubernetes. +The `spark` integration offers two step operators: +- **SparkStepOperator**: Base class for Spark-related step operators. +- **KubernetesSparkStepOperator**: Launches ZenML steps as Spark applications on Kubernetes. -#### `SparkStepOperator` Configuration -The configuration class includes: -```python -class SparkStepOperatorConfig(BaseStepOperatorConfig): - master: str # Master URL for the cluster (supports Kubernetes) - deploy_mode: str = "cluster" # 'cluster' (default) or 'client' - submit_kwargs: Optional[Dict[str, Any]] = None # Additional parameters -``` +#### SparkStepOperator -#### Implementation -The `SparkStepOperator` class includes methods for configuring Spark: -- `_resource_configuration`: Maps ZenML resource settings to Spark. -- `_backend_configuration`: Configures Spark for cluster managers (YARN, Mesos, Kubernetes). -- `_io_configuration`: Configures input/output sources. -- `_additional_configuration`: Appends user-defined parameters. -- `_launch_spark_job`: Executes the Spark job using `spark-submit`. +**Configuration Parameters:** +- `master`: URL for the Spark cluster (supports Kubernetes, Mesos, YARN). +- `deploy_mode`: 'cluster' (default) or 'client' to determine driver node location. +- `submit_kwargs`: JSON string of additional parameters for Spark. -#### Important Notes -- `_io_configuration` is effective only with `S3ArtifactStore` initially; other stores may need extra configuration via `submit_args`. +**Key Methods:** +1. `_resource_configuration`: Configures Spark resource settings. +2. `_backend_configuration`: Configures Spark for specific cluster managers. +3. `_io_configuration`: Configures input/output sources; critical for additional filesystem packages. +4. `_additional_configuration`: Appends user-defined parameters. +5. `_launch_spark_job`: Executes a Spark job using `spark-submit`. -#### `KubernetesSparkStepOperator` -This operator extends `SparkStepOperator` and includes: -```python -class KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig): - namespace: Optional[str] = None # Namespace for pods - service_account: Optional[str] = None # Service account for Spark components -``` +**Note**: `_io_configuration` is effective with `S3ArtifactStore` and may require additional parameters for other stores. -The `_backend_configuration` method is tailored for Kubernetes, building and pushing Docker images for Spark. +#### KubernetesSparkStepOperator -#### Use Cases -Utilize the Spark step operator for: -- Large datasets. +**Configuration Parameters:** +- `namespace`: Kubernetes namespace for driver and executor pods. 
+- `service_account`: Service account for Spark components. + +**Key Method:** +- `_backend_configuration`: Configures Spark for Kubernetes, builds and pushes Docker images. + +#### Usage Scenarios +Use the Spark step operator for: +- Large data processing. - Steps benefiting from distributed computing. #### Deployment Steps -1. **Remote ZenML Server**: Refer to the deployment guide. -2. **Kubernetes Cluster**: Set up using various cloud providers or custom infrastructure. For AWS, follow the Spark EKS Setup Guide. +1. **Remote ZenML Server**: Follow the [deployment guide](../../getting-started/deploying-zenml/README.md). +2. **Kubernetes Cluster**: Set up using cloud providers or custom infrastructure. For AWS, refer to the [Spark EKS Setup Guide](spark-kubernetes.md#spark-eks-setup-guide). -#### Spark EKS Setup Guide -1. Create an EKS cluster role and node role. -2. Attach `AmazonRDSFullAccess` and `AmazonS3FullAccess` policies. -3. Create the cluster and note the name and API server endpoint: - ```bash - EKS_CLUSTER_NAME=<EKS_CLUSTER_NAME> - EKS_API_SERVER_ENDPOINT=<API_SERVER_ENDPOINT> - ``` -4. Add a node group with recommended instance type `t3a.xlarge`. +**EKS Setup Steps:** +- Create an EKS cluster and node role. +- Attach `AmazonRDSFullAccess` and `AmazonS3FullAccess` policies. +- Note the cluster name and API server endpoint. -#### Docker Image for Spark -Choose a base image for Spark driver and executor pods. Use the `docker-image-tool` to build a custom image: -```bash -cd $SPARK_HOME -./bin/docker-image-tool.sh -t <SPARK_IMAGE_TAG> -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile -u 0 build -``` -For M1 Macs, use: -```bash -./bin/docker-image-tool.sh -X -t <SPARK_IMAGE_TAG> -p kubernetes/dockerfiles/spark/bindings/python/Dockerfile -u 0 build -``` +**Docker Image for Spark:** +- Use base images from [Spark’s Docker Hub](https://hub.docker.com/r/apache/spark-py/tags) or build your own using the `docker-image-tool`. -#### RBAC Configuration -Create `rbac.yaml` for Kubernetes resources: +**RBAC Configuration:** +Create `rbac.yaml` for Spark access: ```yaml apiVersion: v1 kind: Namespace @@ -2468,18 +2471,19 @@ roleRef: name: edit apiGroup: rbac.authorization.k8s.io ``` -Execute: +Run: ```bash aws eks --region=$REGION update-kubeconfig --name=$EKS_CLUSTER_NAME kubectl create -f rbac.yaml ``` -#### Using `KubernetesSparkStepOperator` -1. Install the ZenML `spark` integration: +#### Using KubernetesSparkStepOperator +1. Install ZenML `spark` integration: ```bash zenml integration install spark ``` -2. Register the step operator: +2. Ensure Docker and a remote artifact store are set up. +3. Register the step operator: ```bash zenml step-operator register spark_step_operator \ --flavor=spark-kubernetes \ @@ -2487,7 +2491,8 @@ kubectl create -f rbac.yaml --namespace=<SPARK_KUBERNETES_NAMESPACE> \ --service_account=<SPARK_KUBERNETES_SERVICE_ACCOUNT> ``` -3. Register the stack: + +4. Register the stack: ```bash zenml stack register spark_stack \ -o default \ @@ -2498,70 +2503,74 @@ kubectl create -f rbac.yaml --set ``` -#### Executing Steps -Use the step operator in your pipeline: -```python -@step(step_operator=<STEP_OPERATOR_NAME>) -def step_on_spark(...) -> ...: - ... -``` -To dynamically use the active stack's step operator: -```python -from zenml.client import Client -step_operator = Client().active_stack.step_operator -@step(step_operator=step_operator.name) -def step_on_spark(...) -> ...: - ... -``` +5. 
Define a step using the operator: + ```python + from zenml import step + + @step(step_operator=<STEP_OPERATOR_NAME>) + def step_on_spark(...) -> ...: + ... + ``` + +6. Verify Spark driver pod creation with: + ```bash + kubectl get pods -n $KUBERNETES_NAMESPACE + ``` #### Additional Configuration -For more settings, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-spark/#zenml.integrations.spark.flavors.spark_step_operator_flavor.SparkStepOperatorSettings). +For further configuration, pass `SparkStepOperatorSettings` when defining or running your pipeline. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-spark/#zenml.integrations.spark.flavors.spark_step_operator_flavor.SparkStepOperatorSettings) for available attributes. ================================================== === File: docs/book/component-guide/step-operators/sagemaker.md === -### Summary of Executing Individual Steps in SageMaker +### Amazon SageMaker Step Operator Overview -**Overview**: Amazon SageMaker provides specialized compute instances for training jobs, and ZenML's SageMaker step operator allows submission of individual steps to these instances. +Amazon SageMaker provides specialized compute instances for training jobs and a UI for managing models and logs. ZenML's SageMaker step operator allows submission of individual steps to run on SageMaker compute instances. #### When to Use -Use the SageMaker step operator if: -- Your pipeline steps require additional computing resources not available in your orchestrator. -- You have access to SageMaker. For other cloud providers, refer to Vertex or AzureML step operators. +- When pipeline steps require computing resources not provided by your orchestrator. +- When you have access to SageMaker. #### Deployment Requirements -1. Create an IAM role in the AWS console with `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. -2. Install the ZenML AWS integration: +1. **IAM Role**: Create a role in the IAM console with at least `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. [Setup Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-create-execution-role). +2. **ZenML AWS Integration**: Install with: ```shell zenml integration install aws ``` -3. Ensure Docker is installed and running. -4. Set up an AWS container registry and a remote artifact store for reading/writing artifacts. -5. Choose an instance type for executing steps. Refer to the [available instance types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). -6. (Optional) Create an experiment to group SageMaker runs. +3. **Docker**: Must be installed and running. +4. **AWS Container Registry**: Required for your stack. [Setup Guide](../container-registries/aws.md#how-to-deploy-it). +5. **Remote Artifact Store**: Needed for reading/writing step artifacts. Refer to the specific artifact store documentation for setup. +6. **Instance Type**: Choose an instance type for executing steps. [Available Types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). +7. **(Optional) Experiment**: Group SageMaker runs. [Creation Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-create.html). #### Authentication Methods -**1. Service Connector (Recommended)** -- Register a service connector with the necessary permissions. +1. 
**Service Connector (Recommended)**:
   - Register a service connector and connect it to the SageMaker step operator:
   ```shell
   zenml service-connector register <CONNECTOR_NAME> --type aws -i
-   zenml step-operator register <STEP_OPERATOR_NAME> --flavor=sagemaker --role=<SAGEMAKER_ROLE> --instance_type=<INSTANCE_TYPE>
+   zenml step-operator register <STEP_OPERATOR_NAME> \
+       --flavor=sagemaker \
+       --role=<SAGEMAKER_ROLE> \
+       --instance_type=<INSTANCE_TYPE>
   zenml step-operator connect <STEP_OPERATOR_NAME> --connector <CONNECTOR_NAME>
   zenml stack register <STACK_NAME> -s <STEP_OPERATOR_NAME> ... --set
   ```

-**2. Implicit Authentication**
-- For local orchestrators, ZenML uses the `default` AWS profile.
-- For remote orchestrators, ensure they can authenticate to AWS and assume the IAM role specified.
+2. **Implicit Authentication**:
+   - For local orchestrators, ZenML uses the `default` profile in your AWS configuration.
+   - For remote orchestrators, ensure it can authenticate to AWS and assume the specified IAM role.
   ```shell
-   zenml step-operator register <NAME> --flavor=sagemaker --role=<SAGEMAKER_ROLE> --instance_type=<INSTANCE_TYPE>
+   zenml step-operator register <NAME> \
+       --flavor=sagemaker \
+       --role=<SAGEMAKER_ROLE> \
+       --instance_type=<INSTANCE_TYPE>
   zenml stack register <STACK_NAME> -s <STEP_OPERATOR_NAME> ... --set
   python run.py  # Authenticates with `default` profile
   ```

-#### Executing Steps
+#### Using the Step Operator
To execute steps in SageMaker, specify the step operator in the `@step` decorator:
```python
from zenml import step

@@ -2571,11 +2580,13 @@ def trainer(...) -> ...:
    """Train a model."""
```

+ZenML builds a Docker image `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` for running steps in SageMaker.
+
#### Additional Configuration
-- Use `SagemakerStepOperatorSettings` for further configuration. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for attributes.
-- For GPU usage, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA.
+For further configuration, pass `SagemakerStepOperatorSettings` when defining or running your pipeline. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for attributes and [configuration files](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for more details.

-ZenML builds a Docker image `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>` for running steps in SageMaker. For customization details, refer to the ZenML documentation on Docker builds.
+#### Enabling CUDA for GPU
+To run steps on GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration.

==================================================

=== File: docs/book/component-guide/step-operators/modal.md ===

### Modal Step Operator Overview

-**Modal** is a cloud infrastructure platform optimized for fast execution, particularly in building Docker images and provisioning hardware. The **ZenML Modal step operator** enables submission of individual steps to Modal compute instances.
+**Modal** is a cloud infrastructure platform optimized for fast execution of code, particularly for building Docker images and provisioning hardware. The **ZenML Modal step operator** allows users to submit individual steps to Modal compute instances. #### When to Use -Use the Modal step operator when you need: - Fast execution for resource-intensive steps (CPU, GPU, memory). -- Precise hardware specifications for each step. -- Access to Modal. +- Specify exact hardware requirements for each step. +- Access to Modal is required. #### Deployment Steps -1. **Sign Up**: Create a Modal account [here](https://modal.com/signup). -2. **Install Modal CLI**: - ```shell - pip install modal - modal setup - ``` +1. **Sign Up**: Create a Modal account. +2. **Install CLI**: Run `pip install modal` or `zenml integration install modal`. +3. **Authenticate**: Execute `modal setup` in the terminal. #### Usage Requirements - Install ZenML's Modal integration: - ```shell - zenml integration install modal - ``` + ```shell + zenml integration install modal + ``` - Ensure Docker is installed and running. -- Set up a cloud artifact store and a cloud container registry compatible with ZenML. +- Set up a cloud artifact store and a cloud container registry supported by ZenML. #### Registering the Step Operator -Register the Modal step operator: +Register the Modal step operator and update your stack: ```shell zenml step-operator register <NAME> --flavor=modal zenml stack update -s <NAME> ... ``` #### Executing Steps -To execute a step in Modal, use the `@step` decorator: +Use the registered step operator in the `@step` decorator: ```python from zenml import step @@ -2623,7 +2630,7 @@ from zenml import step def trainer(...) -> ...: """Train a model.""" ``` -ZenML will build a Docker image containing your code for execution. +ZenML builds a Docker image with your code for execution in Modal. #### Additional Configuration Specify hardware requirements using `ResourceSettings`: @@ -2644,56 +2651,64 @@ resource_settings = ResourceSettings(cpu=2, memory="32GB") def my_modal_step(): ... ``` -- The `cpu` parameter in `ResourceSettings` accepts a single integer, indicating a soft minimum limit. -- Example cost for 2 CPUs and 32GB memory is approximately $1.03 per hour. +- The `cpu` parameter accepts a single integer, indicating a soft minimum limit. +- Example cost for 2 CPUs and 32GB memory is approximately $1.03/hour. -This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu) and for more settings details, see the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-modal/#zenml.integrations.modal.flavors.modal_step_operator_flavor.ModalStepOperatorSettings). +This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types and additional settings, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu) and [ZenML SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-modal/#zenml.integrations.modal.flavors.modal_step_operator_flavor.ModalStepOperatorSettings). -**Note**: Region and cloud provider settings are available only for Modal Enterprise and Team plan customers. It’s recommended to use looser settings to prevent execution failures. Modal provides detailed error messages for troubleshooting. 
For more on region selection, visit the [Modal docs](https://modal.com/docs/guide/region-selection). +#### Important Notes +- Region and cloud provider settings are available for Modal Enterprise and Team plan customers. +- Use looser settings to prevent execution failures; Modal provides detailed error messages for troubleshooting. +- For more on region selection, see the [Modal docs](https://modal.com/docs/guide/region-selection). ================================================== === File: docs/book/component-guide/step-operators/azureml.md === -### AzureML Step Operator Documentation Summary +### AzureML Step Operator Overview -**Overview**: AzureML provides specialized compute instances for training jobs and a UI for model management. ZenML's AzureML step operator allows submission of individual pipeline steps to AzureML compute instances. +AzureML provides compute instances for training jobs and a UI for model management. ZenML's AzureML step operator allows submission of individual steps to AzureML compute instances. -**When to Use**: -- If your pipeline steps require compute resources not available from your orchestrator. -- If you have access to AzureML. +#### When to Use +- Use the AzureML step operator if: + - Your pipeline steps require additional computing resources (CPU, GPU, memory). + - You have access to AzureML. -**Deployment Steps**: -1. Create an Azure Machine Learning workspace, including a container registry and storage account. -2. (Optional) Create a compute instance or cluster in AzureML. -3. (Optional) Create a Service Principal for authentication if using a service connector. +#### Deployment Steps +1. **Create AzureML Workspace**: Set up a workspace with an Azure container registry and storage account. +2. **(Optional)** Create a compute instance or cluster in AzureML Studio. +3. **(Optional)** Create a Service Principal for authentication if using a service connector. -**Usage Requirements**: +#### Usage Requirements - Install ZenML Azure integration: - ```shell - zenml integration install azure - ``` -- Install and run Docker. + ```shell + zenml integration install azure + ``` +- Ensure Docker is installed and running. - Set up an Azure container registry and artifact store. - Have an AzureML workspace and optional compute cluster. -**Authentication Methods**: +#### Authentication Methods 1. **Service Connector** (Recommended): - - Register a service connector and connect it to the AzureML step operator. - - Example commands: - ```shell - zenml service-connector register <CONNECTOR_NAME> --type azure -i - zenml step-operator register <STEP_OPERATOR_NAME> --flavor=azureml --subscription_id=<AZURE_SUBSCRIPTION_ID> --resource_group=<AZURE_RESOURCE_GROUP> --workspace_name=<AZURE_WORKSPACE_NAME> - zenml step-operator connect <STEP_OPERATOR_NAME> --connector <CONNECTOR_NAME> - zenml stack register <STACK_NAME> -s <STEP_OPERATOR_NAME> ... --set - ``` + - Register a service connector with Azure permissions. + - Register the step operator: + ```shell + zenml service-connector register <CONNECTOR_NAME> --type azure -i + zenml step-operator register <STEP_OPERATOR_NAME> \ + --flavor=azureml \ + --subscription_id=<AZURE_SUBSCRIPTION_ID> \ + --resource_group=<AZURE_RESOURCE_GROUP> \ + --workspace_name=<AZURE_WORKSPACE_NAME> + zenml step-operator connect <STEP_OPERATOR_NAME> --connector <CONNECTOR_NAME> + zenml stack register <STACK_NAME> -s <STEP_OPERATOR_NAME> ... --set + ``` 2. 
**Implicit Authentication**: - - For local orchestrators, ZenML uses the Azure CLI for authentication. + - For local orchestrators, ZenML uses Azure CLI for authentication. - For remote orchestrators, ensure they can authenticate to Azure. -**Step Execution**: -To execute steps in AzureML, use the `@step` decorator: +#### Executing Steps +To execute a step in AzureML, specify the step operator in the `@step` decorator: ```python from zenml import step @@ -2703,13 +2718,13 @@ def trainer(...) -> ...: ``` ZenML builds a Docker image for execution. -**Configuration**: -Use `AzureMLStepOperatorSettings` to configure compute resources. Modes include: -1. **Serverless Compute**: Default mode. -2. **Compute Instance**: Requires `compute_name` and can specify `compute_size`. -3. **Compute Cluster**: Similar to compute instance but for clusters. +#### Additional Configuration +Use `AzureMLStepOperatorSettings` to configure compute resources: +- **Serverless**: Default mode. +- **Compute Instance**: Requires `compute_name`. +- **Compute Cluster**: Requires `compute_name`. -Example configuration: +Example configuration for a compute instance: ```python from zenml.integrations.azure.flavors import AzureMLStepOperatorSettings @@ -2725,53 +2740,53 @@ def my_azureml_step(): ... ``` -**CUDA for GPU**: For GPU usage, follow specific instructions to enable CUDA for full acceleration. +#### GPU Configuration +To enable CUDA for GPU usage, follow specific instructions to customize settings for full acceleration. -For further details, refer to the [AzureMLStepOperatorSettings SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.flavors.azureml_step_operator_flavor.AzureMLStepOperatorSettings). +For more details, refer to the [AzureMLStepOperatorSettings SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.flavors.azureml_step_operator_flavor.AzureMLStepOperatorSettings). ================================================== === File: docs/book/component-guide/step-operators/kubernetes.md === -# Kubernetes Step Operator +### Kubernetes Step Operator Overview -ZenML's Kubernetes step operator allows submission of individual pipeline steps to Kubernetes pods. +ZenML's Kubernetes step operator allows submission of individual steps to run on Kubernetes pods. -## When to Use -- Use when pipeline steps require additional computing resources (CPU, GPU, memory) not provided by the orchestrator. -- Requires access to a Kubernetes cluster. +#### When to Use +- If pipeline steps require computing resources (CPU, GPU, memory) not provided by your orchestrator. +- If you have access to a Kubernetes cluster. -## Deployment Requirements -- A Kubernetes cluster (deployment methods vary; refer to the cloud guide). -- Install ZenML Kubernetes integration: +#### Deployment Requirements +- A Kubernetes cluster (refer to the cloud guide for deployment options). +- ZenML `kubernetes` integration installed: ```shell zenml integration install kubernetes ``` - Either Docker installed or a remote image builder in your stack. - A remote artifact store for reading/writing step artifacts. -**Recommendation:** Set up a Service Connector for connecting the Kubernetes step operator to the cluster, especially for managed cloud providers (AWS, GCP, Azure). +**Recommendation:** Set up a Service Connector for connecting the Kubernetes step operator to the cluster, especially for cloud-managed clusters (AWS, GCP, Azure). -## Usage -1. 
**Register the Step Operator:** +#### Usage +1. **Registering the Step Operator:** - Using a Service Connector: ```shell zenml step-operator register <NAME> --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml step-operator connect <NAME> --connector <CONNECTOR_NAME> ``` - - Using `kubectl`: + - Using local `kubectl` client: ```shell zenml step-operator register <NAME> --flavor=kubernetes --kubernetes_context=<KUBERNETES_CONTEXT> ``` -2. **Update Active Stack:** +2. **Updating the Active Stack:** ```shell zenml stack update -s <NAME> ``` -3. **Define Steps:** - Use the registered step operator in your pipeline: +3. **Defining Steps:** ```python from zenml import step @@ -2780,35 +2795,32 @@ ZenML's Kubernetes step operator allows submission of individual pipeline steps """Train a model.""" ``` -**Note:** ZenML builds Docker images for running steps in Kubernetes. For customization, refer to the Docker builds documentation. +#### Interacting with Pods +For debugging, you can interact with Kubernetes pods via `kubectl`. Pods are labeled with: +- `run`: ZenML run name. +- `pipeline`: ZenML pipeline name. -## Interacting with Pods -You can interact with Kubernetes pods via `kubectl` using labels: +Example to delete pods related to a specific pipeline: ```shell kubectl delete pod -n zenml -l pipeline=kubernetes_example_pipeline ``` -## Additional Configuration -Use `KubernetesStepOperatorSettings` for further customization: -- `pod_settings`: Configure node selectors, labels, affinity, tolerations, image pull secrets. -- `service_account_name`: Specify the service account for Kubernetes Pods. +#### Additional Configuration +You can configure the Kubernetes step operator using `KubernetesStepOperatorSettings`: +- **pod_settings:** Node selectors, labels, affinity, tolerations, image pull secrets. +- **service_account_name:** Service account for Kubernetes Pods. -Example: +Example configuration: ```python from zenml.integrations.kubernetes.flavors import KubernetesStepOperatorSettings kubernetes_settings = KubernetesStepOperatorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, - "affinity": {...}, - "tolerations": [...], - "resources": {...}, - "annotations": {...}, - "volumes": [...], - "volume_mounts": [...], - "host_ipc": True, - "image_pull_secrets": ["regcred"], - "labels": {"app": "ml-pipeline"} + "resources": { + "requests": {"cpu": "2", "memory": "4Gi"}, + "limits": {"cpu": "4", "memory": "8Gi"}, + }, }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" @@ -2819,64 +2831,73 @@ def my_kubernetes_step(): ... ``` -For a complete list of attributes and settings, refer to the SDK documentation. +For a full list of attributes and further details, refer to the SDK documentation. -## Enabling CUDA for GPU -To run steps on a GPU, follow specific instructions to enable CUDA for full acceleration. +#### Enabling CUDA for GPU +To run steps on GPU, follow the instructions to enable CUDA for full acceleration. ================================================== === File: docs/book/component-guide/step-operators/vertex.md === -### Summary of Executing Individual Steps in Vertex AI +### Summary of Executing Steps in Vertex AI with ZenML -**Overview**: Google Cloud's Vertex AI provides specialized compute instances for training jobs, along with a UI for model management. ZenML's Vertex AI step operator allows submission of individual steps to these compute instances. 
+**Overview**: Google Cloud's Vertex AI provides specialized compute instances for training jobs and a UI for model management. ZenML's Vertex AI step operator allows submission of individual steps to Vertex AI compute instances. -### When to Use -Utilize the Vertex step operator if: -- Your pipeline steps require compute resources beyond your orchestrator's capabilities. -- You have access to Vertex AI (for other cloud providers, consider SageMaker or AzureML). +#### When to Use +- Use the Vertex step operator if: + - Your pipeline steps require additional computing resources not provided by your orchestrator. + - You have access to Vertex AI (for other cloud providers, see SageMaker or AzureML step operators). -### Deployment Steps -1. **Enable Vertex AI** via the Google Cloud Console. -2. **Create a Service Account** with permissions for Vertex AI jobs (`roles/aiplatform.admin`) and container registry (`roles/storage.admin`). +#### Deployment Steps +1. **Enable Vertex AI**: [Enable here](https://console.cloud.google.com/vertex-ai). +2. **Create Service Account**: Grant permissions for Vertex AI jobs (`roles/aiplatform.admin`) and container registry (`roles/storage.admin`). -### Usage Requirements +#### Usage Requirements - Install ZenML GCP integration: ```shell zenml integration install gcp ``` - Ensure Docker is installed and running. -- Enable Vertex AI and obtain a service account file. +- Enable Vertex AI and have a service account file. - Set up a GCR container registry. - (Optional) Specify a machine type (default: `n1-standard-4`). -- Configure a remote artifact store for read/write access. +- Configure a remote artifact store for reading/writing step artifacts. -### Authentication Options +#### Authentication Methods 1. **Using `gcloud` CLI**: ```shell gcloud auth login - zenml step-operator register <STEP_OPERATOR_NAME> --flavor=vertex --project=<GCP_PROJECT> --region=<REGION> + zenml step-operator register <STEP_OPERATOR_NAME> \ + --flavor=vertex \ + --project=<GCP_PROJECT> \ + --region=<REGION> ``` -2. **Using a Service Account Key File**: +2. **Using Service Account Key File**: ```shell - zenml step-operator register <STEP_OPERATOR_NAME> --flavor=vertex --project=<GCP_PROJECT> --region=<REGION> --service_account_path=<SERVICE_ACCOUNT_PATH> + zenml step-operator register <STEP_OPERATOR_NAME> \ + --flavor=vertex \ + --project=<GCP_PROJECT> \ + --region=<REGION> \ + --service_account_path=<SERVICE_ACCOUNT_PATH> ``` -3. **Using a GCP Service Connector** (recommended): +3. **Using GCP Service Connector** (recommended): ```shell zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@<SERVICE_ACCOUNT_PATH> - zenml step-operator register <STEP_OPERATOR_NAME> --flavor=vertex --region=<REGION> + zenml step-operator register <STEP_OPERATOR_NAME> \ + --flavor=vertex \ + --region=<REGION> zenml step-operator connect <STEP_OPERATOR_NAME> --connector <CONNECTOR_NAME> ``` -### Registering the Step Operator -Add the step operator to your active stack: +#### Registering the Step Operator +Add the step operator to the active stack: ```shell zenml stack update -s <NAME> ``` -### Using the Step Operator -Specify the step operator in the `@step` decorator: +#### Defining Steps +Use the registered step operator in your pipeline: ```python from zenml import step @@ -2884,18 +2905,23 @@ from zenml import step def trainer(...) -> ...: """Train a model.""" ``` -ZenML builds a Docker image for execution. 
+
+ZenML builds a Docker image for running steps in Vertex AI.

-### Additional Configuration
-You can specify service account, network, and reserved IP ranges:
+#### Additional Configuration
+Specify service account, network, and reserved IP ranges:

```shell
-zenml step-operator register <STEP_OPERATOR_NAME> --flavor=vertex --project=<GCP_PROJECT> --region=<REGION> --service_account=<SERVICE_ACCOUNT> --network=<NETWORK> --reserved_ip_ranges=<RESERVED_IP_RANGES>
+zenml step-operator register <STEP_OPERATOR_NAME> \
+    --flavor=vertex \
+    --project=<GCP_PROJECT> \
+    --region=<REGION> \
+    --service_account=<SERVICE_ACCOUNT> \
+    --network=<NETWORK> \
+    --reserved_ip_ranges=<RESERVED_IP_RANGES>
```

-### VertexStepOperatorSettings
+#### Using VertexStepOperatorSettings
Customize settings for the step operator:

```python
from zenml import step
from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings

@step(step_operator=<STEP_OPERATOR_NAME>, settings={"step_operator": VertexStepOperatorSettings(
@@ -2909,14 +2935,14 @@ def trainer(...) -> ...:
    """Train a model."""
```

-### Enabling CUDA for GPU
+#### Enabling GPU Support
Follow specific instructions to enable CUDA for GPU acceleration.

-### Using Persistent Resources
-To speed up development with Vertex AI:
-1. Create a persistent resource via GCP UI.
-2. Ensure the step operator is configured with the correct service account.
-3. Use the persistent resource in your code:
+#### Using Persistent Resources
+To speed up development:
+1. Create a persistent resource in GCP.
+2. Ensure the step operator is configured with a service account that has access to the resource.
+3. Configure the step to use the persistent resource:

```python
@step(step_operator=<STEP_OPERATOR_NAME>, settings={"step_operator": VertexStepOperatorSettings(
    persistent_resource_id="my-persistent-resource",
@@ -2927,7 +2953,7 @@ To speed up development with Vertex AI:
def trainer(...) -> ...:
    """Train a model."""
```
-**Note**: Persistent resources incur costs even when idle; monitor usage accordingly.
+**Note**: Persistent resources incur costs even when idle; monitor usage and set idle timeouts accordingly.

==================================================

=== File: docs/book/component-guide/experiment-trackers/custom.md ===

### Develop a Custom Experiment Tracker

#### Overview
-To create a custom experiment tracker in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for the Experiment Tracker is under development, and extensions are not currently recommended. You can use existing flavors or implement your own, but be prepared for potential refactoring once the base abstraction is released.
+To create a custom experiment tracker in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for Experiment Trackers is currently in development, and extending them is not recommended until its release. You can use existing flavors or implement your own, but be prepared for potential refactoring later.

#### Steps to Build a Custom Experiment Tracker
-1. **Create a Class**: Inherit from `BaseExperimentTracker` and implement the required abstract methods.
-2. 
**Configuration Class**: If needed, create a class inheriting from `BaseExperimentTrackerConfig` to define configuration parameters. -3. **Combine Implementation and Configuration**: Inherit from `BaseExperimentTrackerFlavor`. +1. **Create a Class**: Inherit from `BaseExperimentTracker` and implement the abstract methods. +2. **Configuration Class**: If needed, inherit from `BaseExperimentTrackerConfig` to add configuration parameters. +3. **Combine Classes**: Inherit from `BaseExperimentTrackerFlavor` to integrate both implementation and configuration. #### Registering Your Flavor -Use the CLI to register your custom flavor with dot notation: +Register your custom flavor via the CLI using dot notation: ```shell zenml experiment-tracker flavor register <path.to.MyExperimentTrackerFlavor> @@ -2957,48 +2983,48 @@ zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFl ``` #### Best Practices -- Initialize ZenML at the root of your repository using `zenml init` to ensure proper resolution of the flavor class. -- After registration, verify the flavor is listed: +- Initialize ZenML at the root of your repository with `zenml init` to ensure proper flavor resolution. +- After registration, verify your flavor is available: ```shell zenml experiment-tracker flavor list ``` #### Important Notes -- The **CustomExperimentTrackerFlavor** is used during flavor creation via CLI. -- The **CustomExperimentTrackerConfig** is utilized during stack component registration to validate user-provided values. -- The **CustomExperimentTracker** is activated when the component is in use, allowing separation of configuration from implementation. +- The `CustomExperimentTrackerFlavor` is used during flavor creation. +- The `CustomExperimentTrackerConfig` is utilized for validating values during stack component registration. +- The `CustomExperimentTracker` is engaged when the component is in use, allowing separation of configuration from implementation. This design enables registration even if major dependencies are not installed locally. -This design enables registration of flavors and components even if major dependencies are not installed locally, as long as the flavor and config classes are implemented in a different module. +This concise guide provides the essential steps and considerations for developing a custom experiment tracker in ZenML. ================================================== === File: docs/book/component-guide/experiment-trackers/vertexai.md === -# Vertex AI Experiment Tracker Summary +# Vertex AI Experiment Tracker Overview -The Vertex AI Experiment Tracker is a component of the ZenML integration for Google Cloud's Vertex AI, designed for logging and visualizing machine learning experiments. It utilizes the Vertex AI tracking service to manage pipeline step data such as models, parameters, and metrics. +The Vertex AI Experiment Tracker, part of the ZenML integration, utilizes the Vertex AI tracking service to log and visualize pipeline step information (models, parameters, metrics). ## Use Cases -- Ideal for iterative ML experimentation and transitioning to production-oriented workflows. -- Recommended for users already familiar with Vertex AI or those building ML workflows within the Google Cloud ecosystem. -- Not suitable for users unfamiliar with Vertex AI or those using other cloud providers. +- Ideal for iterative ML experimentation and automated pipeline result tracking. 
+- Best for users already utilizing Vertex AI for experiment tracking or those building ML workflows in the Google Cloud ecosystem. +- Consider alternative Experiment Tracker flavors if unfamiliar with Vertex AI or using other cloud providers. ## Configuration -To configure the Vertex AI Experiment Tracker, install the GCP ZenML integration: +To use the Vertex AI Experiment Tracker, install the GCP ZenML integration: ```shell zenml integration install gcp -y ``` ### Configuration Options -Key configuration options include: -- `project`: GCP project name (inferred if not provided). -- `location`: GCP location (defaults to us-central1). -- `staging_bucket`: GCS bucket for staging artifacts (format: gs://...). -- `service_account_path`: Path to the service account JSON file for authentication. +Key options for registration: +- `project`: GCP project name (inferred if `None`). +- `location`: GCP location (defaults to `us-central1`). +- `staging_bucket`: Default staging bucket (format: `gs://...`). +- `service_account_path`: Path to service account JSON for authentication. -Register the tracker as follows: +Register the tracker: ```shell zenml experiment-tracker register vertex_experiment_tracker \ @@ -3011,10 +3037,11 @@ zenml stack register custom_stack -e vertex_experiment_tracker ... --set ``` ### Authentication Methods -1. **Implicit Authentication**: Quick local setup using `gcloud auth login`, but not suitable for production. -2. **GCP Service Connector (recommended)**: Provides better security and configuration. Register with: +1. **Implicit Authentication**: Quick local setup using `gcloud auth login`. Not suitable for production. + +2. **GCP Service Connector (Recommended)**: Provides auto-configuration and best security practices. Register using: -```shell +```sh zenml service-connector register --type gcp -i ``` @@ -3030,7 +3057,7 @@ zenml experiment-tracker register <EXPERIMENT_TRACKER_NAME> \ zenml experiment-tracker connect <EXPERIMENT_TRACKER_NAME> --connector <CONNECTOR_NAME> ``` -3. **GCP Credentials**: Use a service account key stored in a ZenML secret for authentication: +3. **GCP Credentials**: Use a service account key stored in a ZenML Secret for authentication. ```shell zenml experiment-tracker register <EXPERIMENT_TRACKER_NAME> \ @@ -3042,112 +3069,104 @@ zenml experiment-tracker register <EXPERIMENT_TRACKER_NAME> \ ``` ## Usage -To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator. Use Vertex AI's logging capabilities as shown in the examples below. +To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator. Use Vertex AI's logging methods as shown in the examples below. 
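Note that the two examples that follow rely on optional extras of the `google-cloud-aiplatform` package (autologging support for Example 1, TensorBoard log uploads for Example 2); if your environment does not already have them, install them first:

```bash
pip install google-cloud-aiplatform[autologging]   # needed for Example 1
pip install google-cloud-aiplatform[tensorboard]   # needed for Example 2
```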
### Example 1: Logging Metrics -Install the necessary library: - -```bash -pip install google-cloud-aiplatform[autologging] -``` ```python from google.cloud import aiplatform class VertexAICallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): - metrics = {key: value for key, value in (logs or {}).items() if isinstance(value, (int, float))} + metrics = {k: v for k, v in (logs or {}).items() if isinstance(v, (int, float))} aiplatform.log_time_series_metrics(metrics=metrics, step=epoch) @step(experiment_tracker="<VERTEXAI_TRACKER_STACK_COMPONENT_NAME>") def train_model(config, x_train, y_train, x_val, y_val): aiplatform.autolog() - model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[VertexAICallback()]) + model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=config.epochs, callbacks=[VertexAICallback()]) aiplatform.log_metrics(...) aiplatform.log_params(...) ``` ### Example 2: Uploading TensorBoard Logs -Install the library: - -```bash -pip install google-cloud-aiplatform[tensorboard] -``` ```python @step(experiment_tracker="<VERTEXAI_TRACKER_STACK_COMPONENT_NAME>") def train_model(config, gcs_path, x_train, y_train, x_val, y_val): tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=gcs_path) aiplatform.start_upload_tb_log(tensorboard_experiment_name="experiment_name", logdir=gcs_path) - model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[tensorboard_callback]) + model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=config.epochs, callbacks=[tensorboard_callback]) aiplatform.end_upload_tb_log() aiplatform.log_metrics(...) aiplatform.log_params(...) ``` -### Experiment Tracker UI -Access the Vertex AI experiment URL via the metadata of the step: +### Dynamic Tracker Usage +Instead of hardcoding the tracker name, use the ZenML Client: ```python from zenml.client import Client -client = Client() -last_run = client.get_pipeline("<PIPELINE_NAME>").last_run -tracking_url = last_run.steps.get("<STEP_NAME>").run_metadata["experiment_tracker_url"].value +experiment_tracker = Client().active_stack.experiment_tracker + +@step(experiment_tracker=experiment_tracker.name) +def tf_trainer(...): + ... +``` + +### Accessing Experiment Tracker UI +Retrieve the URL for the Vertex AI experiment linked to a ZenML run: + +```python +tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` ### Additional Configuration -For advanced settings, use `VertexExperimentTrackerSettings` to specify an experiment name or TensorBoard instance: +You can specify additional settings using `VertexExperimentTrackerSettings` to define an experiment name or select a TensorBoard instance: ```python from zenml.integrations.gcp.flavors.vertex_experiment_tracker_flavor import VertexExperimentTrackerSettings -vertexai_settings = VertexExperimentTrackerSettings(experiment="<YOUR_EXPERIMENT_NAME>") +vertexai_settings = VertexExperimentTrackerSettings( + experiment="<YOUR_EXPERIMENT_NAME>", + experiment_tensorboard="TENSORBOARD_RESOURCE_NAME" +) + @step(experiment_tracker="<VERTEXAI_TRACKER_STACK_COMPONENT_NAME>", settings={"experiment_tracker": vertexai_settings}) def step_one(data): ... ``` -For more details, refer to the ZenML documentation on runtime configuration. +For more details, refer to the ZenML documentation on configuration files. 
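The UI snippet above leaves `trainer_step` undefined. A minimal sketch of fetching it through the ZenML Client, with `<PIPELINE_NAME>` and `<STEP_NAME>` as placeholders for your own names:

```python
from zenml.client import Client

client = Client()
# "<PIPELINE_NAME>" and "<STEP_NAME>" are placeholders for your own pipeline/step.
trainer_step = client.get_pipeline("<PIPELINE_NAME>").last_run.steps["<STEP_NAME>"]
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)
```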
================================================== === File: docs/book/component-guide/experiment-trackers/neptune.md === ### Neptune Experiment Tracker Overview +The Neptune Experiment Tracker, integrated with ZenML, utilizes [neptune.ai](https://neptune.ai/product/experiment-tracking) for logging and visualizing pipeline information (models, parameters, metrics). -The Neptune Experiment Tracker integrates with [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from ZenML pipeline steps, such as models, parameters, and metrics. It is useful for tracking and visualizing experiment results during ML experimentation and can also serve as a model registry for production-ready models. - -### Use Cases -You should use the Neptune Experiment Tracker if: -- You are already using neptune.ai for tracking experiment results and want to integrate it with ZenML. -- You prefer a visually interactive way to navigate results from ZenML pipeline runs. -- You want to share logged artifacts and metrics with your team or stakeholders. - -Consider other [Experiment Tracker flavors](./experiment-trackers.md#experiment-tracker-flavors) if you are unfamiliar with neptune.ai. +#### Use Cases +- Ideal for users already familiar with neptune.ai who want to integrate it into MLOps workflows. +- Provides a visually interactive way to navigate results from ZenML pipeline runs. +- Facilitates sharing of logged artifacts and metrics with teams or stakeholders. -### Deployment +#### Deployment To deploy the Neptune Experiment Tracker, install the integration: ```shell zenml integration install neptune -y ``` -Configure it with the required credentials: -- **`api_token`**: Your Neptune account API token (can be stored in environment variables). -- **`project`**: The project name in the format "workspace-name/project-name" (also can be retrieved from environment variables). - -#### Authentication Methods -1. **ZenML Secret (Recommended)**: - Create a ZenML secret to securely store credentials: - +**Authentication Methods:** +1. **ZenML Secret (Recommended):** + - Store credentials securely using ZenML secrets. ```shell zenml secret create neptune_secret --api_token=<API_TOKEN> ``` - - Register the experiment tracker: - + - Register the tracker: ```shell zenml experiment-tracker register neptune_experiment_tracker \ --flavor=neptune \ @@ -3156,39 +3175,46 @@ Configure it with the required credentials: zenml stack register neptune_stack -e neptune_experiment_tracker ... --set ``` -2. **Basic Authentication (Not Recommended for Production)**: - Directly configure credentials in the stack: - +2. **Basic Authentication (Not Recommended for Production):** ```shell zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \ --project=<project_name> --api_token=<token> zenml stack register neptune_stack -e neptune_experiment_tracker ... 
--set
   ```

-### Usage
-To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and fetch the Neptune run object:
+#### Usage
+To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and fetch the Neptune run object:

```python
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run
from zenml import step
+from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from zenml.client import Client

-experiment_tracker = Client().active_stack.experiment_tracker
-
@step(experiment_tracker="neptune_experiment_tracker")
def train_model() -> SVC:
    iris = load_iris()
-    model = SVC(kernel="rbf", C=1.0)
-    model.fit(iris.data, iris.target)
-
+    X_train, _, y_train, _ = train_test_split(iris.data, iris.target, test_size=0.2)
+    params = {"kernel": "rbf", "C": 1.0}
+    model = SVC(**params).fit(X_train, y_train)
+
    neptune_run = get_neptune_run()
-    neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0}
-
+    neptune_run["parameters"] = params
    return model
```

+**Dynamic Tracker Reference:**
+Instead of hardcoding, use the Client to reference the active stack's experiment tracker:
+
+```python
+experiment_tracker = Client().active_stack.experiment_tracker
+
+@step(experiment_tracker=experiment_tracker.name)
+def tf_trainer(...):
+    ...
+```
+
#### Logging Metadata
Use `get_step_context` to log ZenML metadata:

@@ -3197,30 +3223,26 @@
+from neptune.utils import stringify_unsupported  # neptune helper for values it cannot log natively
+
def my_step():
    neptune_run = get_neptune_run()
    context = get_step_context()
-
-    neptune_run["pipeline_metadata"] = context.pipeline_run.get_metadata().dict()
-    neptune_run[f"step_metadata/{context.step_name}"] = context.step_run.get_metadata().dict()
+    neptune_run["pipeline_metadata"] = stringify_unsupported(context.pipeline_run.get_metadata().dict())
+    ...
```

#### Adding Tags
Use `NeptuneExperimentTrackerSettings` to add tags:

```python
from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings

neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"})

@step(experiment_tracker="<NEPTUNE_TRACKER_STACK_COMPONENT_NAME>", settings={"experiment_tracker": neptune_settings})
def my_step(...):
-    neptune_run = get_neptune_run()
    ...
```

-### Neptune UI
-Neptune provides a web-based UI to inspect tracked experiments. Each pipeline run is logged as a separate experiment in Neptune, accessible via the console or the dashboard.
+#### Neptune UI
+Access the Neptune UI to view tracked experiments. Each pipeline run is logged as a separate experiment, with metadata accessible through the dashboard.
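To jump from a running step straight to its Neptune experiment, note that the handle returned by `get_neptune_run` is a regular neptune run object; a small sketch, assuming neptune's standard `get_url()` helper:

```python
from zenml import step
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run

@step(experiment_tracker="neptune_experiment_tracker")
def my_step() -> None:
    neptune_run = get_neptune_run()
    # get_url() comes from the neptune client, not ZenML; it returns the
    # dashboard URL of the experiment backing this step.
    print(neptune_run.get_url())
```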
### Full Code Example -Here’s a complete example of using the Neptune integration with ZenML: +Here’s a complete example integrating Neptune with ZenML: ```python from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run @@ -3230,29 +3252,22 @@ from sklearn.model_selection import train_test_split from sklearn.svm import SVC from sklearn.metrics import accuracy_score -experiment_tracker = Client().active_stack.experiment_tracker - -@step(experiment_tracker=experiment_tracker.name) +@step(experiment_tracker=Client().active_stack.experiment_tracker.name) def train_model() -> SVC: iris = load_iris() X_train, _, y_train, _ = train_test_split(iris.data, iris.target, test_size=0.2) - model = SVC(kernel="rbf", C=1.0) - model.fit(X_train, y_train) - + model = SVC(kernel="rbf", C=1.0).fit(X_train, y_train) neptune_run = get_neptune_run() neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0} - return model -@step(experiment_tracker=experiment_tracker.name) +@step(experiment_tracker=Client().active_stack.experiment_tracker.name) def evaluate_model(model: SVC): iris = load_iris() _, X_test, _, y_test = train_test_split(iris.data, iris.target, test_size=0.2) accuracy = accuracy_score(y_test, model.predict(X_test)) - neptune_run = get_neptune_run() neptune_run["metrics/accuracy"] = accuracy - return accuracy @pipeline @@ -3265,149 +3280,127 @@ if __name__ == "__main__": ``` ### Further Reading -For more details, refer to [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/). +Refer to [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/) for more details on using this integration. ================================================== === File: docs/book/component-guide/experiment-trackers/mlflow.md === -### MLflow Experiment Tracker Overview +### MLflow Experiment Tracker Documentation Summary -The MLflow Experiment Tracker, integrated with ZenML, utilizes the MLflow tracking service to log and visualize pipeline step information (models, parameters, metrics). +**Overview**: The MLflow Experiment Tracker integrates with ZenML to log and visualize experiment data (models, parameters, metrics) using the MLflow tracking service. #### Use Cases -- **Continuity**: For users already employing MLflow for tracking and transitioning to MLOps with ZenML. -- **Visualization**: For interactive navigation of results from ZenML pipeline runs. -- **Collaboration**: For teams with a shared MLflow Tracking service. - -If unfamiliar with MLflow, consider other experiment tracker flavors. +- Ideal for users already utilizing MLflow for tracking and transitioning to MLOps with ZenML. +- Provides a visual interface for navigating results from ZenML pipeline runs. +- Suitable for teams with an existing MLflow Tracking service. #### Configuration -To use the MLflow Experiment Tracker, install the integration: - -```shell -zenml integration install mlflow -y -``` - -##### Deployment Scenarios -1. **Localhost**: Requires a local Artifact Store; not suitable for collaborative settings. +1. **Installation**: ```shell - zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow - zenml stack register custom_stack -e mlflow_experiment_tracker ... --set + zenml integration install mlflow -y ``` -2. **Remote Tracking**: Requires a deployed MLflow Tracking Server with authentication parameters. - - Recommended MLflow version: 2.2.1 or higher due to security vulnerabilities. +2. **Deployment Scenarios**: + - **Localhost**: Requires a local Artifact Store. 
Not suitable for collaboration. + ```shell + zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow + zenml stack register custom_stack -e mlflow_experiment_tracker ... --set + ``` + - **Remote Tracking Server**: Requires authentication parameters. + - **Databricks**: Connects to a managed MLflow Tracking server; authentication required. -3. **Databricks**: Uses Databricks-managed MLflow Tracking server; requires specific authentication parameters. +3. **Authentication Methods**: + - **Basic Authentication** (not recommended for production): + ```shell + zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \ + --tracking_uri=<URI> --tracking_token=<token> + ``` + - **ZenML Secret (Recommended)**: + ```shell + zenml secret create mlflow_secret --username=<USERNAME> --password=<PASSWORD> + zenml experiment-tracker register mlflow --flavor=mlflow \ + --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ... + ``` -##### Authentication Methods -- **Basic Authentication**: Directly configure credentials (not recommended for production). - - ```shell - zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow \ - --tracking_uri=<URI> --tracking_token=<token> +#### Usage +- Enable the experiment tracker in a ZenML step using the `@step` decorator: + ```python + import mlflow + + @step(experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>") + def tf_trainer(x_train, y_train): + mlflow.tensorflow.autolog() + mlflow.log_param(...) + mlflow.log_metric(...) + mlflow.log_artifact(...) + return model + ``` +- To dynamically use the active stack's experiment tracker: + ```python + from zenml.client import Client + experiment_tracker = Client().active_stack.experiment_tracker ``` -- **ZenML Secret (Recommended)**: Store credentials securely using ZenML secrets. - - ```shell - zenml secret create mlflow_secret --username=<USERNAME> --password=<PASSWORD> - zenml experiment-tracker register mlflow --flavor=mlflow \ - --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ... +#### MLflow UI +- Access the MLflow UI for detailed experiment tracking: + ```python + tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value + ``` +- Start local MLflow UI: + ```bash + mlflow ui --backend-store-uri <TRACKING_URL> ``` -#### Usage -To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and use MLflow's logging features: +#### Additional Configuration +- Use `MLFlowExperimentTrackerSettings` for nested runs or additional tags: + ```python + from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings -```python -import mlflow + mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) + + @step(experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>", settings={"experiment_tracker": mlflow_settings}) + def step_one(data): + ... + ``` -@step(experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>") -def tf_trainer(x_train: np.ndarray, y_train: np.ndarray) -> tf.keras.Model: - mlflow.tensorflow.autolog() - mlflow.log_param(...) - mlflow.log_metric(...) - mlflow.log_artifact(...) - return model -``` +For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.experiment_trackers.mlflow_experiment_tracker). 
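Logged runs can also be inspected programmatically with the standard MLflow client; a sketch, under the assumption that the experiment carries your pipeline's name (adjust to whatever naming your setup actually produces):

```python
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("<TRACKING_URL>")  # same URI the tracker was registered with

client = MlflowClient()
# The experiment name used here is an assumption; look it up in the UI if unsure.
experiment = client.get_experiment_by_name("<PIPELINE_NAME>")
for run in client.search_runs(experiment_ids=[experiment.experiment_id], max_results=5):
    print(run.info.run_id, run.data.metrics)
```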
-For dynamic tracker usage: +================================================== -```python -from zenml.client import Client +=== File: docs/book/component-guide/experiment-trackers/experiment-trackers.md === -experiment_tracker = Client().active_stack.experiment_tracker +### Experiment Trackers in ZenML -@step(experiment_tracker=experiment_tracker.name) -def tf_trainer(...): - ... -``` +**Overview**: Experiment trackers log detailed information about ML experiments, including models, datasets, and metrics, allowing for visualization and comparison of runs. In ZenML, each pipeline run is treated as an experiment, with results stored through Experiment Tracker components. -#### MLflow UI -Access the MLflow UI to view tracked experiments. Obtain the experiment URL from the step metadata: +**Key Points**: +- **Integration**: Experiment Trackers are optional stack components that enhance the usability of ZenML by providing a visual interface for logged data, complementing the mandatory Artifact Store. +- **Usage**: Use an Experiment Tracker when you need visual features for tracking experiments. They are designed for ease of use, offering interactive UIs. -```python -tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value -print(tracking_url) -``` +**Architecture**: Experiment Trackers fit into the ZenML stack, as illustrated in the architecture diagram. -To start the local MLflow UI: +**Available Flavors**: +| Experiment Tracker | Flavor | Integration | Notes | +|--------------------|--------|-------------|-------| +| [Comet](comet.md) | `comet`| `comet` | Comet tracking capabilities | +| [MLflow](mlflow.md)| `mlflow`| `mlflow` | MLflow tracking capabilities | +| [Neptune](neptune.md)| `neptune`| `neptune` | Neptune tracking capabilities | +| [Weights & Biases](wandb.md)| `wandb`| `wandb` | Weights & Biases tracking capabilities | +| [Custom Implementation](custom.md)| _custom_| | _custom_ | Custom tracking options | -```bash -mlflow ui --backend-store-uri <TRACKING_URL> -``` - -#### Additional Configuration -For advanced settings, use `MLFlowExperimentTrackerSettings` to create nested runs or add tags: - -```python -from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings - -mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) - -@step(experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>", settings={"experiment_tracker": mlflow_settings}) -def step_one(data: np.ndarray) -> np.ndarray: - ... -``` - -For detailed attributes and configuration options, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor.MLFlowExperimentTrackerSettings). - -================================================== - -=== File: docs/book/component-guide/experiment-trackers/experiment-trackers.md === - -### Experiment Trackers in ZenML - -**Overview**: Experiment Trackers enable tracking of ML experiments by logging detailed information about models, datasets, metrics, and parameters. In ZenML, each pipeline run is treated as an experiment, with results stored through Experiment Tracker components, linking runs to experiments. - -**Key Points**: -- **Integration**: Experiment Trackers are optional stack components that must be registered in your ZenML stack. ZenML also provides versioning and tracking for artifacts via the Artifact Store. 
-- **Usability**: While ZenML captures artifact information programmatically, Experiment Trackers offer user-friendly UIs for browsing and visualizing logged data, making them ideal for enhancing ZenML's capabilities. - -**Architecture**: Experiment Trackers fit into the ZenML stack, allowing integration with various tracking tools. - -**Available Flavors**: -| Tracker | Flavor | Integration | Notes | -|---------|--------|-------------|-------| -| [Comet](comet.md) | `comet` | `comet` | Adds Comet tracking capabilities | -| [MLflow](mlflow.md) | `mlflow` | `mlflow` | Adds MLflow tracking capabilities | -| [Neptune](neptune.md) | `neptune` | `neptune` | Adds Neptune tracking capabilities | -| [Weights & Biases](wandb.md) | `wandb` | `wandb` | Adds Weights & Biases tracking capabilities | -| [Custom Implementation](custom.md) | _custom_ | | _custom_ | For custom tracking solutions | - -**Command to List Flavors**: -```shell -zenml experiment-tracker flavor list +**Command to List Flavors**: +```shell +zenml experiment-tracker flavor list ``` **Usage Steps**: 1. Configure and add an Experiment Tracker to your ZenML stack. -2. Enable the tracker for specific pipeline steps using decorators. -3. Log information (models, metrics, data) within the steps as you would in standalone tools. +2. Enable the tracker for specific pipeline steps using a decorator. +3. Log information (models, metrics, etc.) explicitly within your steps. 4. Access the Experiment Tracker UI to visualize logged information. -**Accessing Experiment Tracker UI**: +**Code Snippet to Access Tracker UI**: ```python from zenml.client import Client @@ -3416,20 +3409,22 @@ step = pipeline_run.steps["<STEP_NAME>"] experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value ``` -**Note**: Experiment trackers automatically mark runs as failed if the corresponding ZenML pipeline step fails. For detailed usage, refer to the documentation of the specific Experiment Tracker flavor in your stack. +**Note**: If a ZenML pipeline step fails, the corresponding run in the Experiment Tracker will be marked as failed automatically. For detailed usage of specific Experiment Tracker flavors, refer to the respective documentation. ================================================== === File: docs/book/component-guide/experiment-trackers/wandb.md === -### Weights & Biases Integration with ZenML +### Weights & Biases Experiment Tracker Overview -**Overview**: The Weights & Biases (W&B) Experiment Tracker integrates with ZenML to log and visualize pipeline step information (models, parameters, metrics) using the W&B platform. +The Weights & Biases (W&B) Experiment Tracker is a component of the ZenML integration that logs and visualizes pipeline step information (models, parameters, metrics) using the W&B platform. It is ideal for tracking ML experiments and visualizing results during both experimentation and production phases. #### Use Cases -- Ideal for users already familiar with W&B wanting to incorporate MLOps practices in ZenML. -- Provides an interactive way to navigate results from ZenML pipeline runs. -- Facilitates sharing of logged artifacts and metrics with teams or stakeholders. +- Continuing to track results with W&B while adopting MLOps practices in ZenML. +- Seeking an interactive way to navigate results from ZenML pipeline runs. +- Sharing logged artifacts and metrics with teams or stakeholders. + +If unfamiliar with W&B, consider using another experiment tracking tool. 
#### Deployment To deploy the W&B Experiment Tracker, install the integration: @@ -3438,40 +3433,34 @@ To deploy the W&B Experiment Tracker, install the integration: zenml integration install wandb -y ``` -**Authentication**: Configure credentials for W&B using either basic authentication (not recommended for production) or ZenML secrets (recommended). - -**Basic Authentication**: -```shell -zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \ - --entity=<entity> --project_name=<project_name> --api_key=<key> -zenml stack register custom_stack -e wandb_experiment_tracker ... --set -``` +**Authentication Methods:** +1. **Basic Authentication** (not recommended for production): + ```shell + zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \ + --entity=<entity> --project_name=<project_name> --api_key=<key> + zenml stack register custom_stack -e wandb_experiment_tracker ... --set + ``` -**ZenML Secret**: -Create a secret for secure storage: -```shell -zenml secret create wandb_secret \ - --entity=<ENTITY> \ - --project_name=<PROJECT_NAME> \ - --api_key=<API_KEY> -``` -Register the tracker: -```shell -zenml experiment-tracker register wandb_tracker \ - --flavor=wandb \ - --entity={{wandb_secret.entity}} \ - --project_name={{wandb_secret.project_name}} \ - --api_key={{wandb_secret.api_key}} -``` +2. **ZenML Secret (Recommended)**: + Create a secret for secure storage: + ```shell + zenml secret create wandb_secret --entity=<ENTITY> --project_name=<PROJECT_NAME> --api_key=<API_KEY> + ``` + Register the tracker using the secret: + ```shell + zenml experiment-tracker register wandb_tracker --flavor=wandb \ + --entity={{wandb_secret.entity}} --project_name={{wandb_secret.project_name}} \ + --api_key={{wandb_secret.api_key}} + ``` #### Usage -To log information from a ZenML pipeline step, use the `@step` decorator with W&B logging capabilities: +To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and use W&B logging: ```python import wandb from wandb.integration.keras import WandbCallback -@step(experiment_tracker="<WANDB_TRACKER_NAME>") +@step(experiment_tracker="<WANDB_TRACKER_STACK_COMPONENT_NAME>") def tf_trainer(...): model.fit(..., callbacks=[WandbCallback(log_evaluation=True)]) wandb.log({"<METRIC_NAME>": metric}) @@ -3480,7 +3469,6 @@ def tf_trainer(...): **Dynamic Tracker Reference**: ```python from zenml.client import Client - experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) @@ -3489,24 +3477,29 @@ def tf_trainer(...): ``` #### W&B UI -Each ZenML step using W&B creates a separate experiment run, viewable in the W&B UI. Access the tracking URL via: +Each ZenML step using W&B creates a separate experiment run, accessible via the W&B UI. 
The tracking URL can be retrieved from the step's metadata:
+
```python
-tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
+from zenml.client import Client
+
+client = Client()
+last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
+tracking_url = last_run.get_step("<STEP_NAME>").run_metadata["experiment_tracker_url"].value
print(tracking_url)
```

#### Additional Configuration
-You can customize the W&B experiment tracker with `WandbExperimentTrackerSettings` for additional tags or settings:
+To customize the W&B experiment tracker, pass `WandbExperimentTrackerSettings`:
+
```python
+from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings
+
wandb_settings = WandbExperimentTrackerSettings(tags=["some_tag"])

-@step(experiment_tracker="<WANDB_TRACKER_NAME>", settings={"experiment_tracker": wandb_settings})
+@step(experiment_tracker="<WANDB_TRACKER_STACK_COMPONENT_NAME>", settings={"experiment_tracker": wandb_settings})
def my_step(...):
    ...
```

#### Full Code Example
-An end-to-end example of using the W&B integration in ZenML:
+Here’s a concise example of a ZenML pipeline using W&B:

```python
from zenml import pipeline, step
@@ -3526,7 +3519,7 @@ def prepare_data():
@step(experiment_tracker=experiment_tracker.name)
def train_model(train_dataset, eval_dataset):
    model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
-    training_args = TrainingArguments(output_dir="./results", num_train_epochs=3, per_device_train_batch_size=16)
+    training_args = TrainingArguments(output_dir="./results", num_train_epochs=3, report_to=["wandb"])
    trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    wandb.log({"final_evaluation": trainer.evaluate()})

@@ -3541,30 +3534,31 @@ if __name__ == "__main__":
    fine_tuning_pipeline.with_options(settings={"experiment_tracker": wandb_settings})()
```

-For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.experiment_trackers.wandb_experiment_tracker).
+For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.experiment_trackers.wandb_experiment_tracker).

==================================================

=== File: docs/book/component-guide/experiment-trackers/comet.md ===

-### Comet Experiment Tracker Overview
+### Comet Experiment Tracker with ZenML

-The Comet Experiment Tracker, integrated with ZenML, allows logging and visualizing pipeline step information (models, parameters, metrics) using the Comet platform.
+The Comet Experiment Tracker integrates with ZenML to log and visualize pipeline step information (models, parameters, metrics) using the Comet platform.

#### Use Cases
-- **Continuity**: Ideal for users already using Comet for ML experiments transitioning to MLOps with ZenML.
+- **Continuity**: Ideal for users already utilizing Comet for tracking results during ML experimentation.
- **Visualization**: Offers an interactive way to navigate results from ZenML pipeline runs.
-- **Collaboration**: Facilitates sharing artifacts and metrics with teams or stakeholders.
+- **Collaboration**: Facilitates sharing of logged artifacts and metrics with teams or stakeholders.

#### Deployment
-1. 
**Install Comet Integration**: - ```bash - zenml integration install comet -y - ``` +To deploy the Comet Experiment Tracker, install the integration: + +```bash +zenml integration install comet -y +``` -2. **Authentication**: Configure credentials for Comet using either ZenML secrets (recommended) or basic authentication. +##### Authentication Methods +1. **ZenML Secret (Recommended)**: Store credentials securely. - **ZenML Secret (Recommended)**: ```bash zenml secret create comet_secret \ --workspace=<WORKSPACE> \ @@ -3572,79 +3566,76 @@ The Comet Experiment Tracker, integrated with ZenML, allows logging and visualiz --api_key=<API_KEY> ``` - **Basic Authentication (Not recommended for production)**: - ```bash - zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ - --workspace=<workspace> --project_name=<project_name> --api_key=<key> - ``` + Configure the tracker: -3. **Register Tracker and Stack**: ```bash zenml experiment-tracker register comet_tracker \ --flavor=comet \ --workspace={{comet_secret.workspace}} \ --project_name={{comet_secret.project_name}} \ --api_key={{comet_secret.api_key}} + ``` - zenml stack register custom_stack -e comet_experiment_tracker ... --set +2. **Basic Authentication**: Directly set credentials (not recommended for production). + + ```bash + zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ + --workspace=<workspace> --project_name=<project_name> --api_key=<key> ``` #### Usage -To log information from a ZenML pipeline step: -1. Enable the experiment tracker with the `@step` decorator. -2. Use Comet logging methods: - ```python - from zenml.client import Client +To log information from a pipeline step, enable the experiment tracker with the `@step` decorator: - experiment_tracker = Client().active_stack.experiment_tracker +```python +from zenml.client import Client - @step(experiment_tracker=experiment_tracker.name) - def my_step(): - experiment_tracker.log_metrics({"my_metric": 42}) - experiment_tracker.log_params({"my_param": "hello"}) - experiment_tracker.experiment.log_model(...) - ``` +experiment_tracker = Client().active_stack.experiment_tracker + +@step(experiment_tracker=experiment_tracker.name) +def my_step(): + experiment_tracker.log_metrics({"my_metric": 42}) + experiment_tracker.log_params({"my_param": "hello"}) +``` #### Comet UI -Each ZenML step using Comet creates a separate experiment viewable in the Comet UI. The experiment URL can be accessed via: +Each ZenML step using Comet creates a separate experiment. 
Access the experiment URL via step metadata: + ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value +print(tracking_url) ``` #### Full Code Example +Here's a concise example of a pipeline using the Comet Experiment Tracker: + ```python -from comet_ml.integration.sklearn import log_model -import numpy as np -from sklearn.datasets import load_iris -from sklearn.model_selection import train_test_split -from sklearn.preprocessing import StandardScaler -from sklearn.svm import SVC -from sklearn.metrics import accuracy_score from zenml import pipeline, step from zenml.client import Client +from zenml.integrations.comet.experiment_trackers import CometExperimentTracker +from comet_ml.integration.sklearn import log_model +from sklearn import datasets, model_selection, preprocessing, svm, metrics experiment_tracker = Client().active_stack.experiment_tracker @step def load_data(): - iris = load_iris() - return iris.data, iris.target + return datasets.load_iris(return_X_y=True) @step def preprocess_data(X, y): - X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) - scaler = StandardScaler() - return scaler.fit_transform(X_train), scaler.transform(X_test), y_train, y_test + return model_selection.train_test_split(X, y, test_size=0.2) @step(experiment_tracker=experiment_tracker.name) def train_model(X_train, y_train): - model = SVC().fit(X_train, y_train) + model = svm.SVC() + model.fit(X_train, y_train) log_model(experiment=experiment_tracker.experiment, model_name="SVC", model=model) return model @step(experiment_tracker=experiment_tracker.name) def evaluate_model(model, X_test, y_test): - accuracy = accuracy_score(y_test, model.predict(X_test)) + y_pred = model.predict(X_test) + accuracy = metrics.accuracy_score(y_test, y_pred) experiment_tracker.log_metrics({"accuracy": accuracy}) return accuracy @@ -3656,62 +3647,68 @@ def iris_classification_pipeline(): evaluate_model(model, X_test, y_test) if __name__ == "__main__": - iris_classification_pipeline()() + iris_classification_pipeline() ``` #### Additional Configuration -To add tags or customize settings: -```python -from zenml.integrations.comet.flavors.comet_experiment_tracker_flavor import CometExperimentTrackerSettings - -comet_settings = CometExperimentTrackerSettings(tags=["iris_classification"]) +You can pass `CometExperimentTrackerSettings` for additional tags: +```python +comet_settings = CometExperimentTrackerSettings(tags=["example_tag"]) @step(experiment_tracker="<COMET_TRACKER_NAME>", settings={"experiment_tracker": comet_settings}) def my_step(): ... ``` -Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings) for more attributes and configuration options. +For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings). ================================================== === File: docs/book/component-guide/annotators/annotators.md === -### Annotators in ZenML - -**Overview**: Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They support iterative annotation processes, integrating labeling into the ML lifecycle. +### Summary of ZenML Annotators Documentation -**Key Use Cases**: -1. 
**Initial Labeling**: Start with unlabeled or poorly labeled data to bootstrap model training. This helps define labeling standards. -2. **Ongoing Data**: Regularly label incoming data to maintain model accuracy and detect data drift. -3. **Inference Samples**: Label data from model predictions for comparison and potential retraining. -4. **Ad Hoc Interventions**: Address labeling issues or class imbalances through targeted annotation. +**Overview:** +Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They can be launched via CLI commands for dataset configuration and task statistics. -**Core Features**: -- Seamless integration of labels in training. -- Versioning of annotation data. -- Conversion between custom formats. -- Generation of UI config files for annotation interfaces. +**Importance of Data Annotation:** +Data annotation is crucial in MLOps, often overlooked. ZenML aims to enhance iterative annotation workflows, integrating labelers into the ML process. -**Available Annotators**: -- **ArgillaAnnotator**: Connects ZenML with Argilla. -- **LabelStudioAnnotator**: Connects ZenML with Label Studio. -- **PigeonAnnotator**: Limited to Jupyter notebooks for image/text classification. -- **ProdigyAnnotator**: Connects ZenML with Prodigy. -- **Custom Implementation**: Allows for user-defined annotator extensions. +**When to Annotate:** +1. **At the Start:** Begin labeling data to bootstrap models, clarifying rules and standards. +2. **As New Data Arrives:** Regularly check and label incoming data, considering automation for data drift detection. +3. **Inference Samples:** Label data from model predictions to compare with actual labels, aiding in model retraining. +4. **Ad Hoc Interventions:** Address bad labels or class imbalances through targeted annotation. -**Command to List Annotators**: +**Core Features:** +- Seamless integration of labels in training steps. +- Versioning of annotation data. +- Conversion of annotation data to/from custom formats. +- Generation of UI config files for annotation tools. + +**Available Annotators:** +ZenML supports various annotators through integrations: +| Annotator | Flavor | Integration | Notes | +|--------------------------|----------------|------------------|--------------------------------------------| +| [ArgillaAnnotator](argilla.md) | `argilla` | `argilla` | Connect ZenML with Argilla | +| [LabelStudioAnnotator](label-studio.md) | `label_studio` | `label_studio` | Connect ZenML with Label Studio | +| [PigeonAnnotator](pigeon.md) | `pigeon` | `pigeon` | Notebook only; image and text classification | +| [ProdigyAnnotator](prodigy.md) | `prodigy` | `prodigy` | Connect ZenML with [Prodigy](https://prodi.gy/) | +| [Custom Implementation](custom.md) | _custom_ | | Extend the annotator abstraction | + +**Command to List Annotator Flavors:** ```shell zenml annotator flavor list ``` -**Usage**: The annotator implementation is primarily based on Label Studio. For details on usage, refer to the [Label Studio documentation](label-studio.md#how-do-you-use-it). +**Usage:** +The annotator implementation is primarily based on Label Studio. For detailed usage, refer to the [Label Studio documentation](label-studio.md#how-do-you-use-it). Note that Pigeon has limited functionality and is Jupyter notebook specific. -**Terminology**: -- ZenML uses "Dataset" for groups of annotations (Label Studio calls it "Project"). -- Individual annotation units are termed "tasks" in ZenML, aligning with Label Studio. 
+**Naming Conventions:** +- ZenML uses 'Dataset' for grouped annotations, aligning with its terminology rather than 'Project' used by Label Studio. +- The term 'tasks' is used for the combination of an annotation and its source data, consistent with ZenML and Label Studio. -This summary captures the essential information about the annotators in ZenML, their use cases, features, available tools, and terminology. +This summary captures the essential points of the ZenML Annotators documentation, ensuring clarity on usage, features, and available tools. ================================================== @@ -3719,13 +3716,11 @@ This summary captures the essential information about the annotators in ZenML, t ### Develop a Custom Annotator -**Overview**: Custom annotators in ZenML allow for data annotation within your stack and pipelines. Familiarity with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) is recommended for a foundational understanding of component flavors. +Before developing a custom annotator, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on ZenML's component flavor concepts. -**Functionality**: Annotators can be launched via CLI commands to configure datasets and retrieve statistics on labeled tasks. +**Annotators** are stack components that facilitate data annotation within ZenML stacks and pipelines. You can use the CLI to launch annotations, configure datasets, and retrieve statistics on labeled tasks. -**Important Note**: The base abstraction for annotators is currently under development, limiting the ability to extend them. Users should refer to the list of available feature stores for immediate use. - - +**Note:** The base abstraction for annotators is currently under development and cannot be extended at this time. For immediate use, refer to the list of available feature stores. ================================================== @@ -3734,37 +3729,37 @@ This summary captures the essential information about the annotators in ZenML, t ### Summary of Argilla Documentation **Argilla Overview** -Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets for machine learning projects. It facilitates faster data curation through human and machine feedback, supporting the entire MLOps cycle from data labeling to model monitoring. Its unique focus on human-in-the-loop approaches differentiates it from competitors. +Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets, facilitating robust language model development through efficient data curation with human and machine feedback. It supports the entire MLOps cycle, from data labeling to model monitoring, with a focus on human-in-the-loop approaches. **Use Cases** -Argilla is beneficial for labeling textual data within ML workflows. It can be integrated into a ZenML stack, supporting annotation at various stages. +Argilla is ideal for labeling textual data within ML workflows. It can be integrated into a ZenML stack, supporting both local (Docker-based) and deployed instances, including deployment as a Hugging Face Space. 
-**Deployment** -To deploy Argilla, install the ZenML integration: +**Deployment Instructions** +To deploy Argilla in ZenML, install the integration: ```shell zenml integration install argilla ``` -You can register the annotator with an API key directly or as a secret for security. For the secret approach: +You can register the annotator with an API key directly or as a secret for security. For secret registration: ```shell zenml secret create argilla_secrets --api_key="<your_argilla_api_key>" ``` -Then register the annotator: +Then, register the annotator: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --port=6900 ``` -For a deployed instance, specify the instance URL without a trailing slash and include headers if using a private Hugging Face Spaces instance: +For a deployed instance, specify the URL without a trailing `/` and include headers if using a private Hugging Face Space: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --headers='{"Authorization": "Bearer {[your_hugging_face_token]}"}' ``` -Add components to a stack and set it as active: +Add components to a stack: ```shell zenml stack copy default annotation @@ -3772,23 +3767,23 @@ zenml stack update annotation -an <YOUR_ARGILLA_ANNOTATOR> zenml stack set annotation ``` -Verify with: +Verify the setup with: ```shell zenml annotator dataset list ``` -**Usage** +**Using the Annotator** Access data and annotations via the CLI: - List datasets: `zenml annotator dataset list` - Annotate a dataset: `zenml annotator dataset annotate <dataset_name>` **Argilla Annotator Component** -The Argilla annotator extends the `BaseAnnotator` class, requiring methods for dataset registration and retrieval. Key functionalities include dataset registration, annotation export, and starting the annotator daemon process. +The Argilla annotator extends the `BaseAnnotator` class, implementing core methods for dataset registration, annotation export, and starting the annotator daemon process. **Argilla Annotator SDK** -For Python access, obtain the client object and use methods as follows: +To use the SDK in Python: ```python from zenml.client import Client @@ -3812,33 +3807,39 @@ For more details, refer to the [Argilla documentation](https://docs.argilla.io/e === File: docs/book/component-guide/annotators/pigeon.md === -### Pigeon Annotation Tool +### Pigeon: Data Annotation Tool -**Overview**: Pigeon is an open-source annotation tool for quick data labeling within Jupyter notebooks, supporting: +**Overview** +Pigeon is a lightweight, open-source annotation tool for labeling data within Jupyter notebooks. It supports: - Text Classification - Image Classification - Text Captioning -**Use Cases**: Ideal for small to medium datasets, Pigeon is useful for: -- Quick labeling tasks -- Iterative labeling in exploratory ML phases +**Use Cases** +Pigeon is ideal for: +- Labeling small to medium datasets in ML workflows +- Quick labeling tasks without a full annotation platform +- Iterative labeling during exploratory ML phases - Collaborative labeling in Jupyter notebooks -**Deployment Steps**: -1. Install the ZenML Pigeon integration: +**Deployment Steps** +1. **Install Pigeon Integration**: ```shell zenml integration install pigeon ``` -2. Register the Pigeon annotator with ZenML: +2. 
**Register the Annotator**: ```shell zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir" ``` -3. Update your ZenML stack to include the Pigeon annotator: + (The `output_dir` is relative to the repository or notebook root.) +3. **Update Your Stack**: ```shell zenml stack update <YOUR_STACK_NAME> --annotator pigeon ``` -**Usage**: +**Usage** +After registration, access the Pigeon annotator in your Jupyter notebook: + - **Text Classification**: ```python from zenml.client import Client @@ -3863,61 +3864,59 @@ For more details, refer to the [Argilla documentation](https://docs.argilla.io/e ) ``` -**Annotation Management**: -- List datasets: `zenml annotator dataset list` -- Delete dataset: `zenml annotator dataset delete <dataset_name>` -- Dataset statistics: `zenml annotator dataset stats <dataset_name>` +**Annotation Management** +Use the following commands to manage datasets: +- `zenml annotator dataset list` - List datasets +- `zenml annotator dataset delete <dataset_name>` - Delete a dataset +- `zenml annotator dataset stats <dataset_name>` - Get dataset statistics -**Output**: Annotations are saved as JSON files in the specified output directory, with filenames as dataset names. +**Output** +Annotations are saved as JSON files in the specified output directory, with each file named after its dataset. -**Acknowledgements**: Pigeon was developed by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon), licensed under the Apache License. It has been updated for compatibility with recent `ipywidgets` versions. +**Acknowledgements** +Pigeon was created by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License and has been updated for compatibility with recent `ipywidgets` versions. ================================================== === File: docs/book/component-guide/annotators/label-studio.md === -### Label Studio Integration with ZenML +### Summary of Label Studio Documentation -**Overview**: -Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types including: -- **Computer Vision**: Image classification, object detection, semantic segmentation -- **Audio & Speech**: Classification, speaker diarization, emotion recognition, transcription -- **Text/NLP**: Classification, NER, question answering, sentiment analysis -- **Time Series**: Classification, segmentation, event recognition -- **Multi-Modal**: Dialogue processing, OCR, time series with reference +**Label Studio Overview** +Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types, including: +- **Computer Vision**: image classification, object detection, semantic segmentation +- **Audio & Speech**: classification, speaker diarization, emotion recognition, audio transcription +- **Text/NLP**: classification, NER, question answering, sentiment analysis +- **Time Series**: classification, segmentation, event recognition +- **Multi-Modal/Domain**: dialogue processing, OCR, time series with reference -**Use Case**: -Integrate Label Studio into your ML workflow for data labeling. 
It requires a cloud artifact store (AWS S3, GCP/GCS, Azure Blob Storage) and does not support purely local stacks. - -### Deployment Steps +**Use Case** +Integrate Label Studio into your ML workflow for data labeling. It requires cloud artifact stores (AWS S3, GCP/GCS, Azure Blob Storage); local stacks are not supported. -1. **Install Integration**: +**Deployment Steps** +1. Install the Label Studio integration: ```shell zenml integration install label_studio ``` - -2. **Set Up Label Studio**: - Clone the repository and start Label Studio: +2. Obtain your Label Studio API key from a local instance: ```shell git clone https://github.com/HumanSignal/label-studio.git cd label-studio docker-compose up -d ``` + Access it at [http://localhost:8080/](http://localhost:8080/) and retrieve the API key from your account settings. -3. **Obtain API Key**: - Access the web interface at [http://localhost:8080/](http://localhost:8080/), log in, and retrieve your API key from [http://localhost:8080/user/account](http://localhost:8080/user/account). - -4. **Register API Key**: +3. Register the API key: ```shell zenml secret create label_studio_secrets --api_key="<your_label_studio_api_key>" ``` - -5. **Register Annotator**: +4. Register the annotator: ```shell zenml annotator register label_studio --flavor label_studio --authentication_secret="label_studio_secrets" --port=8080 ``` + For deployed instances, include the instance URL. -6. **Configure Stack**: +5. Create and set your stack: ```shell zenml stack copy default annotation zenml stack update annotation -a <YOUR_CLOUD_ARTIFACT_STORE> @@ -3925,67 +3924,79 @@ Integrate Label Studio into your ML workflow for data labeling. It requires a cl zenml stack set annotation ``` -### Usage - -- **List Datasets**: - ```shell - zenml annotator dataset list - ``` - -- **Annotate Dataset**: - ```shell - zenml annotator dataset annotate <dataset_name> - ``` - -### Key Components - -- **Label Studio Annotator**: Inherits from `BaseAnnotator`, includes methods for dataset registration, exporting annotations, and starting the annotator daemon. +**Usage** +Use the CLI for data access and annotation: +- List datasets: `zenml annotator dataset list` +- Annotate a dataset: `zenml annotator dataset annotate <dataset_name>` +**Core Components** +- **Label Studio Annotator**: Inherits from `BaseAnnotator`, includes methods for dataset registration and annotation export. - **Standard Steps**: - - `LabelStudioDatasetRegistrationConfig`: Config for registering datasets. - - `LabelStudioDatasetSyncConfig`: Config for syncing datasets. - - `get_or_create_dataset`: Registers or retrieves a dataset. - - `get_labeled_data`: Fetches labeled data in Label Studio format. - - `sync_new_data_to_label_studio`: Syncs annotations with the cloud artifact store. + - `LabelStudioDatasetRegistrationConfig`: Config for dataset registration. + - `LabelStudioDatasetSyncConfig`: Config for syncing new data. + - `get_or_create_dataset`: Registers or retrieves a dataset. + - `get_labeled_data`: Fetches labeled data in Label Studio format. + - `sync_new_data_to_label_studio`: Ensures data synchronization with the cloud artifact store. -- **Helper Functions**: Generate label config strings for object detection, image classification, and OCR. See the `label_config_generators` module for details. +**Helper Functions** +ZenML provides functions to create 'label config' strings for custom annotation interfaces in object detection, image classification, and OCR. 
Refer to the `label_config_generators` module for implementation details. -This integration allows for efficient data annotation workflows, ensuring that datasets remain synchronized with ongoing annotations. +This integration allows for efficient data annotation workflows, enhancing ML model training and validation processes. ================================================== === File: docs/book/component-guide/annotators/prodigy.md === -### Prodigy Overview -Prodigy is a paid annotation tool designed for creating training and evaluation data for machine learning models. It aids in data inspection, cleaning, error analysis, and developing rule-based systems. Prodigy provides a web application optimized for efficient annotation and includes a Python library with pre-built workflows and customizable components. +### Prodigy Documentation Summary -### Usage Context -Prodigy is beneficial when labeling data in your ML workflow. It can be integrated as an optional annotator component in your ZenML stack. +**Prodigy Overview** +Prodigy is a paid annotation tool designed for creating training and evaluation data for machine learning models. It allows users to inspect, clean data, perform error analysis, and develop rule-based systems. The tool features a web application optimized for efficient annotation and offers a Python library with pre-built workflows and customizable scripts. -### Deployment Steps -1. **Install Prodigy**: Requires a license. Follow the [Prodigy installation guide](https://prodi.gy/docs/install). Ensure `urllib3<2` is installed. -2. **Register Prodigy with ZenML**: +**Usage Context** +Prodigy is beneficial when labeling data as part of a machine learning workflow, making it a suitable addition to the ZenML stack. + +**Deployment Instructions** +To deploy Prodigy with ZenML, follow these steps: + +1. **Export Requirements**: ```shell zenml integration export-requirements --output-file prodigy-requirements.txt prodigy + ``` + +2. **Install Prodigy**: Requires a license. Refer to the [Prodigy installation guide](https://prodi.gy/docs/install). Ensure `urllib3<2` is installed. + +3. **Register the Annotator**: + ```shell zenml annotator register prodigy --flavor prodigy ``` - Optionally, specify a custom config path. -3. **Update ZenML Stack**: +4. **Create and Set Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -an prodigy zenml stack set annotation ``` -### Using Prodigy -Prodigy does not require pre-starting the annotator. Use it as per the [Prodigy documentation](https://prodi.gy). Access your data and annotations through ZenML CLI commands. For annotating a dataset: -```shell -zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" -``` +5. **Verify Setup**: + Run `zenml annotator dataset list` to confirm the annotator is ready. + +**Using Prodigy** +Prodigy does not require pre-starting the annotator. Use it as per the [Prodigy documentation](https://prodi.gy). Access data and annotations via the ZenML CLI: + +- **List Datasets**: + ```shell + zenml annotator dataset list + ``` + +- **Annotate Dataset**: + ```shell + zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" + ``` + +This command launches the Prodigy interface for the specified dataset and labels. 
-### Importing Annotations -To import annotations into a ZenML step: +**Integration with ZenML** +To import annotations within a ZenML step: ```python from typing import List, Dict, Any from zenml import step @@ -3998,8 +4009,10 @@ def import_annotations() -> List[Dict[str, Any]]: return annotations ``` -### Prodigy Annotator Component -The Prodigy annotator component extends the `BaseAnnotator` class, implementing core methods for dataset registration and annotation export. It incorporates additional methods specific to Prodigy for enhanced functionality. +For cloud environments, export annotations manually and reference them in ZenML as needed. + +**Prodigy Annotator Component** +The Prodigy annotator component extends the `BaseAnnotator` class, requiring core methods for dataset registration and annotation export. It includes additional methods specific to Prodigy for enhanced functionality. ================================================== @@ -4007,34 +4020,38 @@ The Prodigy annotator component extends the `BaseAnnotator` class, implementing ### Local Image Builder Overview -The Local Image Builder is a built-in feature of ZenML that utilizes the local Docker installation on your machine to build container images. It leverages the official Docker Python library, which accesses authentication credentials from the default location: `$HOME/.docker/config.json`. To specify a different directory for Docker configuration, set the `DOCKER_CONFIG` environment variable: +The Local Image Builder is a built-in feature of ZenML that utilizes the local Docker installation on your machine to create container images. It employs the official Docker Python library for building and pushing images, which accesses authentication credentials from the default config location: `$HOME/.docker/config.json`. To use a different directory for Docker configuration, set the `DOCKER_CONFIG` environment variable: ```shell export DOCKER_CONFIG=/path/to/config_dir ``` -Ensure the specified directory contains a `config.json` file. +The specified directory must contain a `config.json` file. ### When to Use Use the Local Image Builder if: -- You can install and run Docker on your client machine. -- You want to utilize remote components requiring containerization without complex infrastructure setup. +- You can install and run Docker on your machine. +- You want to use remote components requiring containerization without complex infrastructure setup. + +### Deployment + +The Local Image Builder is included with ZenML and requires no additional setup. -### Deployment and Usage +### Usage -The Local Image Builder is included with ZenML and requires no additional setup. To use it, ensure: -- Docker is installed and running. -- The Docker client is authenticated to push to your chosen container registry. +Requirements: +- Docker installed and running. +- Docker client authenticated to push to the desired container registry. -To register the image builder and create a new stack, use: +To register the image builder and create a new stack: ```shell zenml image-builder register <NAME> --flavor=local zenml stack register <STACK_NAME> -i <NAME> ... --set ``` -For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). 
+For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). ================================================== @@ -4042,11 +4059,11 @@ For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenm ### Develop a Custom Image Builder -To create a custom image builder in ZenML, start by understanding the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +#### Overview +To create a custom image builder in ZenML, start by reviewing the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction - -The `BaseImageBuilder` is the abstract base class for building Docker images. It provides a basic interface: +The `BaseImageBuilder` is an abstract class for building Docker images. It provides a basic interface: ```python from abc import ABC, abstractmethod @@ -4073,42 +4090,35 @@ class BaseImageBuilder(StackComponent, ABC): ``` #### Steps to Create a Custom Image Builder +1. **Subclass `BaseImageBuilder`**: Implement the `build` method to create a Docker image using the provided context. Handle optional image pushing to a container registry. + +2. **Configuration Class**: If needed, subclass `BaseImageBuilderConfig` to add configuration parameters. -1. **Subclass `BaseImageBuilder`**: Implement the `build` method to create a Docker image using the provided context. If a container registry is specified, push the image there. - -2. **Configuration Class**: If needed, create a class inheriting from `BaseImageBuilderConfig` to add configuration parameters. - -3. **Flavor Class**: Combine the implementation and configuration by inheriting from `BaseImageBuilderFlavor`, ensuring to provide a `name` for the flavor. +3. **Combine Implementation and Configuration**: Inherit from `BaseImageBuilderFlavor`, providing a name for the flavor. 4. **Register the Flavor**: Use the CLI to register your flavor: - ```shell zenml image-builder flavor register <path.to.MyImageBuilderFlavor> ``` - Example: - ```shell zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor ``` -**Note**: Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - -5. **List Available Flavors**: - +#### Important Notes +- Ensure ZenML is initialized at the root of your repository to avoid resolution issues. +- After registration, list available flavors: ```shell zenml image-builder flavor list ``` -#### Important Considerations - -- The `CustomImageBuilderFlavor` is used during flavor creation via CLI. -- The `CustomImageBuilderConfig` is utilized for validating user input during registration. -- The `CustomImageBuilder` is engaged when the component is in use, allowing separation of flavor configuration from implementation. +#### Workflow Integration +- The `CustomImageBuilderFlavor` is used during flavor creation. +- The `CustomImageBuilderConfig` is validated during stack component registration. +- The `CustomImageBuilder` is utilized when the component is executed, allowing for separation of configuration and implementation. 
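+To make the steps above concrete, here is a minimal, hypothetical sketch of the three classes working together. The names (`MyImageBuilder`, `MyImageBuilderConfig`, `MyImageBuilderFlavor`, the `build_host` option) are illustrative assumptions, and the `build` signature is modeled on the base interface shown above rather than copied from any shipped flavor:
+
+```python
+from typing import TYPE_CHECKING, Any, Dict, Optional, Type
+
+from zenml.image_builders import (
+    BaseImageBuilder,
+    BaseImageBuilderConfig,
+    BaseImageBuilderFlavor,
+)
+
+if TYPE_CHECKING:
+    from zenml.container_registries import BaseContainerRegistry
+    from zenml.image_builders import BuildContext
+
+
+class MyImageBuilderConfig(BaseImageBuilderConfig):
+    """Illustrative config parameters for the flavor (step 2)."""
+
+    build_host: str = "localhost"  # assumed option, for demonstration only
+
+
+class MyImageBuilder(BaseImageBuilder):
+    """Implements the `build` method (step 1)."""
+
+    @property
+    def is_building_locally(self) -> bool:
+        # Assumed: this hypothetical builder runs on the client machine.
+        return True
+
+    def build(
+        self,
+        image_name: str,
+        build_context: "BuildContext",
+        docker_build_options: Dict[str, Any],
+        container_registry: Optional["BaseContainerRegistry"] = None,
+    ) -> str:
+        # Build the image from `build_context` here, then push it if a
+        # container registry is part of the stack.
+        if container_registry:
+            return container_registry.push_image(image_name)
+        return image_name
+
+
+class MyImageBuilderFlavor(BaseImageBuilderFlavor):
+    """Ties config and implementation together under a name (step 3)."""
+
+    @property
+    def name(self) -> str:
+        return "my_flavor"
+
+    @property
+    def config_class(self) -> Type[MyImageBuilderConfig]:
+        return MyImageBuilderConfig
+
+    @property
+    def implementation_class(self) -> Type[MyImageBuilder]:
+        return MyImageBuilder
+```
+
+Registering this flavor with `zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor`, as shown above, would then make `my_flavor` available when registering image builders.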
#### Custom Build Context - -If a different build context is required, subclass `BuildContext` and override the `build_context_class` property in your image builder implementation to specify your custom context. +To use a custom build context, subclass `BuildContext` and override the `build_context_class` property in your image builder implementation. ================================================== @@ -4116,33 +4126,32 @@ If a different build context is required, subclass `BuildContext` and override t ### Google Cloud Image Builder Overview -The Google Cloud Image Builder is part of the ZenML `gcp` integration that utilizes [Google Cloud Build](https://cloud.google.com/build) to create container images. +The Google Cloud Image Builder is a component of the ZenML `gcp` integration that utilizes [Google Cloud Build](https://cloud.google.com/build) for building container images. -#### When to Use +### When to Use Use the Google Cloud Image Builder if: - You cannot install or use [Docker](https://www.docker.com) locally. - You are already using Google Cloud Platform (GCP). -- Your stack integrates with other GCP components like the [GCS Artifact Store](../artifact-stores/gcp.md) or [Vertex Orchestrator](../orchestrators/vertex.md). +- Your stack includes other GCP components like the [GCS Artifact Store](../artifact-stores/gcp.md) or the [Vertex Orchestrator](../orchestrators/vertex.md). -#### Deployment +### Deployment To deploy the Google Cloud Image Builder, enable the necessary Google Cloud Build APIs in your GCP project. -#### Usage Requirements +### Usage Requirements +To use the Google Cloud Image Builder: 1. Install the ZenML `gcp` integration: ```shell zenml integration install gcp ``` 2. Set up a [GCP Artifact Store](../artifact-stores/gcp.md) for build context. -3. Configure a [GCP container registry](../container-registries/gcp.md) for the built image. -4. Optionally, specify the GCP project ID and service account with required permissions. If omitted, they will be inferred from the environment. - -You can customize: -- The Docker image used for building (default: `'gcr.io/cloud-builders/docker'`). -- The network for the build container. -- The build timeout. - -#### Registering the Image Builder -To register and use the image builder in your active stack: +3. Set up a [GCP container registry](../container-registries/gcp.md) for the built image. +4. Optionally specify: + - GCP project ID and service account with necessary permissions. + - Custom Docker image for builds (default: `'gcr.io/cloud-builders/docker'`). + - Network and build timeout settings. + +### Registering the Image Builder +Register the image builder and use it in your active stack: ```shell zenml image-builder register <IMAGE_BUILDER_NAME> \ --flavor=gcp \ @@ -4153,37 +4162,41 @@ zenml image-builder register <IMAGE_BUILDER_NAME> \ zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set ``` -#### Authentication Methods +### Authentication Methods Authentication is required to use the GCP Image Builder. Options include: -1. **Implicit Authentication**: Uses local GCP credentials. Requires Google Cloud CLI setup. - - **Note**: Not portable across environments. - -2. **GCP Service Connector (Recommended)**: Provides better security and reusability. Register using: - ```shell +- **Local Authentication**: Quick setup using local GCP CLI credentials. Requires the Google Cloud CLI installed. +- **GCP Service Connector (recommended)**: Provides better security and reusability across stack components. 
Register using: + ```sh zenml service-connector register --type gcp -i ``` - For auto-configuration: - ```shell - zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --resource-name <GCS_BUCKET_NAME> --auto-configure - ``` +### Connecting the Image Builder +After setting up authentication, connect the GCP Image Builder: +```shell +zenml image-builder connect <IMAGE_BUILDER_NAME> -i +``` +For non-interactive connection: +```shell +zenml image-builder connect <IMAGE_BUILDER_NAME> --connector <CONNECTOR_ID> +``` -3. **GCP Service Account Key**: Create a service account, grant permissions, and register: - ```shell - zenml image-builder register <IMAGE_BUILDER_NAME> \ - --flavor=gcp \ - --project=<GCP_PROJECT_ID> \ - --service_account_path=<PATH_TO_SERVICE_ACCOUNT_KEY> \ - --cloud_builder_image=<BUILDER_IMAGE_NAME> \ - --network=<DOCKER_NETWORK> \ - --build_timeout=<BUILD_TIMEOUT_IN_SECONDS> +### Using GCP Credentials +You can also use a service account key for authentication: +```shell +zenml image-builder register <IMAGE_BUILDER_NAME> \ + --flavor=gcp \ + --project=<GCP_PROJECT_ID> \ + --service_account_path=<PATH_TO_SERVICE_ACCOUNT_KEY> \ + --cloud_builder_image=<BUILDER_IMAGE_NAME> \ + --network=<DOCKER_NETWORK> \ + --build_timeout=<BUILD_TIMEOUT_IN_SECONDS> - zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set - ``` +zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set +``` -#### Caveats -Google Cloud Build uses a default network (`cloudbuild`) that provides Application Default Credentials (ADC) for GCP service access. The image builder uses this network by default, which is useful for accessing private dependencies in GCP Artifact Registry. To install private dependencies, use a custom base image with `keyrings.google-artifactregistry-auth`: +### Caveats +Google Cloud Build uses a default network (`cloudbuild`) that provides Application Default Credentials (ADC). If you need to install private dependencies, use a custom base image with the `keyrings.google-artifactregistry-auth` package: ```dockerfile FROM zenmldocker/zenml:latest @@ -4191,7 +4204,8 @@ RUN pip install keyrings.google-artifactregistry-auth ``` **Note**: Specify the ZenML version in the base image tag for consistency. -This summary retains critical technical details and instructions while eliminating redundancy for clarity. +### Summary +The Google Cloud Image Builder is a powerful tool for building container images in GCP, especially useful for users unable to run Docker locally. Proper authentication and configuration are crucial for effective use. ================================================== @@ -4199,24 +4213,27 @@ This summary retains critical technical details and instructions while eliminati ### Kaniko Image Builder Overview -The Kaniko image builder, part of the ZenML `kaniko` integration, utilizes [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images in Kubernetes environments. +The Kaniko image builder, part of the ZenML `kaniko` integration, utilizes [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images. -#### When to Use Kaniko -- If Docker cannot be installed or used on your client machine. -- If you are familiar with Kubernetes. +#### When to Use +Use the Kaniko image builder if: +- You cannot install or use [Docker](https://www.docker.com) on your client machine. +- You are familiar with Kubernetes. 
#### Deployment Requirements
+To deploy the Kaniko image builder, you need:
- A deployed Kubernetes cluster.
-- ZenML `kaniko` integration installed:
+- The ZenML `kaniko` integration installed:
  ```shell
  zenml integration install kaniko
  ```
-- `kubectl` installed.
+- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) installed.
- A remote container registry as part of your stack.
-- Optionally, a remote artifact store if you want to store the build context there.
+- Optionally, configure the build context to be stored in the artifact store by setting `store_context_in_artifact_store=True` and ensuring a remote artifact store is part of your stack.
+- Optionally, adjust the pod running timeout with `pod_running_timeout`.

#### Registering the Image Builder
-To register and use the Kaniko image builder:
+Register and use the Kaniko image builder in your active stack:
```shell
zenml image-builder register <NAME> \
    --flavor=kaniko \
@@ -4229,10 +4246,12 @@ zenml stack register <STACK_NAME> -i <NAME> ... --set

#### Authentication
The Kaniko build pod must authenticate to:
- Push to the container registry.
-- Pull from a private parent image registry.
+- Pull from a private parent image registry if applicable.
- Read from the artifact store if configured.

-**AWS Configuration:**
+For common scenarios where the Kubernetes cluster and container registry are on the same cloud provider, follow these guidelines:
+
+**AWS:**
- Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to your EKS node IAM role.
- Register the image builder with required environment variables:
  ```shell
  zenml image-builder register <NAME> \
    --flavor=kaniko \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]'
  ```

-**GCP Configuration:**
+**GCP:**
- Enable workload identity and create necessary service accounts.
-- Register the image builder with namespace and service account:
+- Register the image builder with the correct namespace and service account:
  ```shell
  zenml image-builder register <NAME> \
    --flavor=kaniko \
@@ -4253,12 +4272,12 @@ The Kaniko build pod must authenticate to:
    --service_account_name=<KUBERNETES_SERVICE_ACCOUNT_NAME>
  ```

-**Azure Configuration:**
+**Azure:**
- Create a Kubernetes `configmap` for Docker config:
  ```shell
  kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }'
  ```
-- Register the image builder to mount the `configmap`:
+- Register the image builder to mount the configmap:
  ```shell
  zenml image-builder register <NAME> \
    --flavor=kaniko \
@@ -4268,7 +4287,7 @@ The Kaniko build pod must authenticate to:
  ```

#### Additional Parameters
-To pass additional parameters to the Kaniko build, use the `executor_args` attribute:
+You can pass additional parameters to the Kaniko build using the `executor_args` attribute:
```shell
zenml image-builder register <NAME> \
    --flavor=kaniko \
@@ -4279,9 +4298,12 @@ zenml image-builder register <NAME> \

**Common Flags:**
- `--cache`: Whether to cache layers (default: `true`); pass `--cache=false` to disable caching.
- `--cache-dir`: Directory for cached layers (default: `/cache`).
+- `--cache-repo`: Repository for cached layers.
+- `--cache-ttl`: Cache expiration time (default: `24h`).
- `--cleanup`: Whether to clean up the working directory after the build (default: `true`); pass `--cleanup=false` to disable.
+- `--compressed-caching`: Whether to compress cached layers (default: `true`); set `--compressed-caching=false` to reduce memory usage on large builds. 
-For a complete list of flags, refer to the [Kaniko additional flags documentation](https://github.com/GoogleContainerTools/kaniko#additional-flags). +For more details, refer to the [Kaniko additional flags](https://github.com/GoogleContainerTools/kaniko#additional-flags) and the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kaniko/#zenml.integrations.kaniko.image_builders.kaniko_image_builder.KanikoImageBuilder). ================================================== @@ -4289,48 +4311,46 @@ For a complete list of flags, refer to the [Kaniko additional flags documentatio ### AWS Image Builder with ZenML -**Overview**: The AWS Image Builder is part of the ZenML `aws` integration, utilizing [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images. +**Overview**: The AWS Image Builder is a component of the ZenML `aws` integration that utilizes AWS CodeBuild to create container images. #### When to Use -- If Docker cannot be installed on your machine. +- If Docker cannot be installed on your client machine. - If you are already using AWS. -- If your stack includes AWS components like the [S3 Artifact Store](../artifact-stores/s3.md) or [SageMaker Orchestrator](../orchestrators/sagemaker.md). +- If your stack consists mainly of AWS components (e.g., S3 Artifact Store, SageMaker Orchestrator). #### Deployment -For quick deployment, use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). +For a quick setup, use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). #### Usage Requirements -1. Install the ZenML `aws` integration: +1. Install ZenML AWS integration: ```shell zenml integration install aws ``` -2. Set up an [S3 Artifact Store](../artifact-stores/s3.md) for build context. -3. Optionally, configure an [AWS container registry](../container-registries/aws.md) for image storage. -4. Create an [AWS CodeBuild project](https://aws.amazon.com/codebuild) in the desired AWS region. Key configurations include: - - **Source Type**: `Amazon S3` - - **Bucket**: Same as the S3 Artifact Store - - **Environment Type**: `Linux Container` - - **Environment Image**: `bentolor/docker-dind-awscli` - - **Privileged Mode**: `false` +2. Set up an S3 Artifact Store for build context. +3. Optionally, configure an AWS container registry for built images. +4. Create an AWS CodeBuild project in the desired AWS region. 
+
+**Example CodeBuild Configuration**:
+- **Source Type**: Amazon S3
+- **Bucket**: Same as S3 Artifact Store
+- **Environment Type**: Linux Container
+- **Environment Image**: bentolor/docker-dind-awscli
+- **Privileged Mode**: false

**Service Role Permissions**:
-Ensure the CodeBuild project’s Service Role has permissions for S3 and ECR (if applicable):
+Ensure the CodeBuild service role has permissions to access the S3 bucket and ECR registry:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage", "ecr:DescribeImages", "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer", "ecr:InitiateLayerUpload", "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload", "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:<REGION>:<ACCOUNT_ID>:repository/<REPOSITORY_NAME>"
    },
    {
@@ -4342,49 +4362,51 @@ Ensure the CodeBuild project’s Service Role has permissions for S3 and ECR (if
}
```

-#### Registering the Image Builder
-To register the image builder:
-```shell
-zenml image-builder register <IMAGE_BUILDER_NAME> \
-    --flavor=aws \
-    --code_build_project=<CODEBUILD_PROJECT_NAME>
-```
-To register and activate a stack:
-```shell
-zenml stack register <STACK_NAME> -i <IMAGE_BUILDER_NAME> ... --set
-```
-
#### Authentication Methods
-Authentication is essential for using the AWS Image Builder. Recommended methods:
-1. **AWS Service Connector**: Best for security and reusability.
+- **Local Authentication**: Quick setup using local AWS CLI credentials, but not portable.
+- **AWS Service Connector** (recommended): Provides better security and can be registered using:
  ```shell
  zenml service-connector register --type aws -i
  ```
-2. **Implicit Authentication**: Quick setup using local AWS CLI credentials, but not portable across environments.

-**Service Connector Permissions**:
-Ensure permissions for CodeBuild API:
+**Example Command for Auto-Configuration**:
+```shell
+zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type aws-generic --auto-configure
+```
+
+**Permissions for CodeBuild**:
+Ensure the entity associated with AWS credentials has permissions to access CodeBuild:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["codebuild:StartBuild", "codebuild:BatchGetBuilds"],
      "Resource": "arn:aws:codebuild:<REGION>:<ACCOUNT_ID>:project/<CODEBUILD_PROJECT_NAME>"
    }
  ]
}
```

-#### Customizing AWS CodeBuild Builds
+#### Registering the Image Builder
+To register the AWS Image Builder:
+```shell
+zenml image-builder register <IMAGE_BUILDER_NAME> \
+  --flavor=aws \
+  --code_build_project=<CODEBUILD_PROJECT_NAME> \
+  --connector <CONNECTOR_ID>
+```
+
+#### Customizing CodeBuild Builds
You can customize the image builder with:
- `build_image`: Default is `bentolor/docker-dind-awscli`.
- `compute_type`: Default is `BUILD_GENERAL1_SMALL`.
- `custom_env_vars`: Custom environment variables.
-- `implicit_container_registry_auth`: Use implicit (default) or explicit authentication.
+- `implicit_container_registry_auth`: Use implicit (default) or explicit authentication for container registry.

-For more details on setting up and using the AWS Image Builder, refer to the relevant sections in the ZenML documentation. 
+### Conclusion +The AWS Image Builder in ZenML allows for efficient container image creation using AWS CodeBuild, with various options for authentication and customization to fit your deployment needs. ================================================== @@ -4392,160 +4414,147 @@ For more details on setting up and using the AWS Image Builder, refer to the rel ### Image Builders in ZenML -**Overview**: The image builder is crucial for building container images necessary for executing machine-learning pipelines in remote environments. +**Overview**: The image builder is crucial for building container images in remote MLOps environments, enabling execution of machine-learning pipelines. -**When to Use**: An image builder is required when components of your MLOps stack need to create container images, particularly for ZenML's remote orchestrators, step operators, and model deployers. +**When to Use**: Required when components of the ZenML stack (like orchestrators, step operators, and model deployers) need to create Docker images. -**Image Builder Flavors**: ZenML offers several image builders: +**Image Builder Flavors**: +ZenML provides several image builders: -| Image Builder | Flavor | Integration | Notes | -|-------------------------|----------|-------------|----------------------------------------------| -| [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. | -| [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes using Kaniko. | -| [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build for Docker images. | -| [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS Code Build for Docker images. | +| Image Builder | Flavor | Integration | Notes | +|-----------------------|----------|-------------|----------------------------------------| +| [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. | +| [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. | +| [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build for images. | +| [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS Code Build for images. | | [Custom Implementation](custom.md) | _custom_ | | Allows for custom image builder implementations. | -To view available image builder flavors, use the command: +To view available image builder flavors, use: ```shell zenml image-builder flavor list ``` -**Usage**: You do not need to interact directly with the image builder in your code. If the desired image builder is part of your active ZenML stack, it will be automatically utilized by any component requiring container image creation. +**Usage**: Direct interaction with the image builder is not required. The active ZenML stack automatically utilizes the appropriate image builder for any component that needs to build container images. ================================================== === File: docs/book/component-guide/artifact-stores/azure.md === -### Azure Blob Storage Artifact Store +### Azure Blob Storage for ZenML Artifacts -The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store artifacts. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. +**Overview**: The Azure Artifact Store, part of the ZenML integration, utilizes Azure Blob Storage to store ZenML artifacts. It is ideal for scenarios requiring shared access, remote components, or scaling beyond local storage. 
-#### When to Use Azure Artifact Store -- **Collaboration**: Share pipeline results with team members or stakeholders. -- **Remote Components**: Integrate with orchestrators like Kubeflow or Kubernetes. -- **Storage Needs**: Overcome local storage limitations. -- **Scalability**: Handle production-level demands. +**Use Cases**: +- Sharing pipeline results with team members. +- Integrating with remote components (e.g., Kubeflow, Kubernetes). +- Handling storage limitations of local machines. +- Supporting production-grade MLOps. -#### Deployment Steps -1. **Install the Azure Integration**: +**Deployment**: +1. Install the Azure integration: ```shell zenml integration install azure -y ``` - -2. **Register the Azure Artifact Store**: - - The root path URI must point to an Azure Blob Storage container in the format `az://container-name` or `abfs://container-name`. +2. Register the Azure Artifact Store with the root path URI pointing to an Azure Blob Storage container: ```shell zenml artifact-store register az_store -f azure --path=az://container-name - zenml stack register custom_stack -a az_store ... --set ``` - -#### Authentication Methods -- **Implicit Authentication**: Quick local setup using environment variables for Azure credentials. -- **Azure Service Connector**: Recommended for better security and integration with multiple Azure resources. - -**Implicit Authentication Setup**: -- Set environment variables for account name and key/token. - -**Azure Service Connector Setup**: -1. **Register the Service Connector**: +3. Set up a stack with the new artifact store: ```shell - zenml service-connector register --type azure -i + zenml stack register custom_stack -a az_store ... --set ``` -2. **Connect to Blob Storage**: + +**Authentication**: +- **Implicit Authentication**: Quick local setup using environment variables for Azure credentials (e.g., account key, connection string). +- **Azure Service Connector** (recommended): For better security and integration with other Azure components: ```shell zenml service-connector register <CONNECTOR_NAME> --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type blob-container --resource-id <BLOB_CONTAINER_NAME> ``` -3. **Connect the Artifact Store**: +**Connecting the Artifact Store**: +1. Register the Azure Artifact Store: + ```shell + zenml artifact-store register <AZURE_STORE_NAME> -f azure --path='az://your-container' + ``` +2. Connect it to the Azure Service Connector: ```shell zenml artifact-store connect <AZURE_STORE_NAME> -i ``` -#### ZenML Secret Management -You can create a ZenML Secret to store Azure credentials: +**Using ZenML Secrets**: Store Azure credentials securely in ZenML secrets: ```shell zenml secret create az_secret --account_name='<YOUR_AZURE_ACCOUNT_NAME>' --account_key='<YOUR_AZURE_ACCOUNT_KEY>' ``` -Then register the artifact store with the secret: +Register the artifact store with the secret: ```shell zenml artifact-store register az_store -f azure --path='az://your-container' --authentication_secret=az_secret ``` -#### Usage -The Azure Artifact Store functions like any other ZenML artifact store, with artifacts stored in Azure Blob Storage. +**Usage**: The Azure Artifact Store functions like any other ZenML Artifact Store, allowing seamless integration into your pipelines. 
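+For illustration, a minimal sketch of what that looks like in practice, assuming a stack with the Azure Artifact Store is active (the step and pipeline names here are made up):
+
+```python
+from zenml import pipeline, step
+
+
+@step
+def make_greeting() -> str:
+    # The returned value is serialized by its materializer and written
+    # under the `az://` root path of the active stack's artifact store.
+    return "hello from azure blob storage"
+
+
+@pipeline
+def greeting_pipeline():
+    make_greeting()
+
+
+if __name__ == "__main__":
+    greeting_pipeline()
+```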
-For detailed documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). +For detailed SDK documentation, refer to [ZenML SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). ================================================== === File: docs/book/component-guide/artifact-stores/s3.md === -### Summary: Storing Artifacts in an AWS S3 Bucket with ZenML +### Summary of AWS S3 Artifact Store Documentation -#### Overview -The S3 Artifact Store integrates with ZenML to utilize AWS S3 or compatible services (e.g., MinIO, Ceph RGW) for artifact storage. It is ideal for projects requiring shared access, remote components, or scalable storage solutions. +**Overview** +The S3 Artifact Store is a ZenML integration that utilizes AWS S3 or compatible services (e.g., MinIO, Ceph RGW) for artifact storage. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. -#### When to Use S3 Artifact Store -- Sharing pipeline results with team members. -- Integrating with remote orchestration tools (e.g., Kubeflow). -- Handling large storage needs beyond local capabilities. -- Running production-grade MLOps pipelines. +**Use Cases** +Consider using the S3 Artifact Store if: +- You need to share pipeline results. +- Your components run remotely (e.g., on Kubernetes). +- Local storage is insufficient. +- You require scalable storage for production pipelines. -#### Deployment Steps -1. **Install S3 Integration:** +**Deployment Steps** +1. **Install S3 Integration**: ```shell zenml integration install s3 -y ``` -2. **Register S3 Artifact Store:** - - Mandatory parameter: `--path=s3://bucket-name`. +2. **Register S3 Artifact Store**: + - The root path URI must be in the format `s3://bucket-name`. + - Example registration: ```shell zenml artifact-store register s3_store -f s3 --path=s3://bucket-name - ``` - -3. **Set Stack with Artifact Store:** - ```shell zenml stack register custom_stack -a s3_store ... --set ``` -#### Authentication Methods -- **Implicit Authentication:** Quick local setup using AWS CLI credentials. Not recommended for remote components due to access issues. -- **AWS Service Connector (Recommended):** Provides secure, fine-grained access control. Register using: - ```shell - zenml service-connector register --type aws -i - ``` - - For a specific S3 bucket: - ```shell - zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type s3-bucket --resource-name <S3_BUCKET_NAME> --auto-configure - ``` +3. **Authentication**: + - **Implicit Authentication**: Quick setup using local AWS CLI credentials. Requires AWS CLI installation. + - **AWS Service Connector** (recommended): Provides better security and access control. + - Register using: + ```sh + zenml service-connector register --type aws -i + ``` + - Connect to S3 bucket: + ```sh + zenml artifact-store connect <S3_STORE_NAME> -i + ``` -4. **Connect S3 Artifact Store to AWS Service Connector:** +4. **Using ZenML Secrets**: Store AWS access keys in ZenML Secrets for better security. 
```shell
   zenml secret create s3_secret --aws_access_key_id='<YOUR_S3_ACCESS_KEY_ID>' --aws_secret_access_key='<YOUR_S3_SECRET_KEY>'
   zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret
   ```

-#### Advanced Configuration
-Customize connections with parameters passed to the S3Fs library:
-- `client_kwargs`: For `endpoint_url`, `region_name`.
-- `config_kwargs`: Advanced botocore client settings.
-- `s3_additional_kwargs`: S3 API parameters (e.g., `ServerSideEncryption`).
+**Advanced Configuration**
+You can customize the S3 Artifact Store with advanced options passed to the S3Fs library:
+- `client_kwargs`: Arguments passed to the botocore client, such as `endpoint_url` and `region_name`.
+- `config_kwargs`: Advanced botocore client configuration options.
+- `s3_additional_kwargs`: Extra parameters passed to S3 API calls, such as `ServerSideEncryption`.

-Example:
+Example of advanced registration:
```shell
zenml artifact-store register minio_store -f s3 --path='s3://minio_bucket' --authentication_secret=s3_secret --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}'
```

-#### Usage
-The S3 Artifact Store functions similarly to other Artifact Store flavors in ZenML. For detailed usage, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store).
+**Usage**
+Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store).

==================================================

@@ -4553,19 +4562,19 @@ The S3 Artifact Store functions similarly to other Artifact Store flavors in Zen

### Local Artifact Store

-The Local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) that stores artifacts on your local filesystem.
+The Local Artifact Store is a built-in ZenML Artifact Store that utilizes a folder on your local filesystem for artifact storage.

#### Use Cases
-- Ideal for getting started with ZenML without needing additional resources or managed object-store services.
-- Suitable for evaluation or experimental phases where sharing artifacts is not required.
+- Ideal for getting started with ZenML without needing additional resources or managed object-store services (e.g., Amazon S3, Google Cloud Storage).
+- Suitable for evaluation or experimental phases where sharing artifacts is unnecessary.

-**Warning**: Not for production use; artifacts cannot be shared across teams or accessed from other machines. Artifact visualizations are unavailable when using a local Artifact Store in a cloud-deployed ZenML instance. It lacks features like high availability, scalability, and backup.
+**Warning**: Not for production use. The local filesystem is not shareable across teams, and artifacts cannot be accessed from other machines. Artifact visualizations are unavailable when using a local Artifact Store with a cloud-deployed ZenML instance. 
It lacks production-grade features like high-availability, scalability, and backup. #### Compatibility -- Only local Orchestrators (e.g., local, Kubeflow, Kubernetes) and local Model Deployers (e.g., MLflow) can be used with the Local Artifact Store. +- Only local Orchestrators (e.g., local, Kubeflow, Kubernetes) and local Model Deployers (e.g., MLflow) can be used with a local Artifact Store. - Step Operators cannot be used as they require remote environments. -Transitioning to a team or production setting allows for easy replacement of the Local Artifact Store with other flavors without code changes. +Transitioning to a team or production setting can be done by replacing the local Artifact Store with a more suitable flavor without code changes. #### Deployment The default stack in ZenML includes a local Artifact Store: @@ -4585,12 +4594,12 @@ zenml artifact-store register custom_local --flavor local zenml stack register custom_stack -o default -a custom_local --set ``` -**Warning**: The `path` parameter can be customized during registration, but it's recommended to use the default path to avoid issues with local stack components. +**Warning**: The local Artifact Store accepts a `path` parameter during registration, but using the default path is recommended to avoid issues with local stack components. -For detailed implementation and configuration, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). +For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). #### Usage -Using the Local Artifact Store is similar to using any other Artifact Store flavor. +Using the local Artifact Store is similar to using any other Artifact Store flavor. ================================================== @@ -4599,61 +4608,60 @@ Using the Local Artifact Store is similar to using any other Artifact Store flav ### Summary: Developing a Custom Artifact Store in ZenML #### Overview -ZenML provides built-in Artifact Store implementations for local and cloud storage (AWS, GCP, Azure). However, users can create custom Artifact Store implementations to support different object storage services. +ZenML provides built-in artifact store implementations for local and cloud storage (AWS, GCP, Azure). For custom storage solutions, users can create a custom Artifact Store implementation. #### Base Abstraction -The `BaseArtifactStore` class is central to ZenML's artifact storage. Key components include: - -1. **Configuration Parameter**: - - `path`: Defines the root path of the artifact store. - -2. **Supported Schemes**: - - `SUPPORTED_SCHEMES`: A class variable that specifies the file path schemes for the implementation (e.g., `{"abfs://", "az://"}` for Azure). +The `BaseArtifactStore` class serves as the foundation for all artifact stores in ZenML. Key points include: -3. **Abstract Methods**: - - Implementations must define methods such as `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, and `walk`. +1. **Configuration**: The `path` parameter specifies the root path for the artifact store. +2. **Supported Schemes**: The `SUPPORTED_SCHEMES` variable must be defined in subclasses to indicate supported file path schemes (e.g., Azure: `{"abfs://", "az://"}`). +3. 
**Abstract Methods**: Subclasses must implement the following abstract methods:
   - `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`.

#### Example Implementation
```python
from zenml.enums import StackComponentType
from zenml.stack import Flavor, StackComponent, StackComponentConfig
+from abc import abstractmethod
+from typing import Any, Callable, ClassVar, Iterable, List, Optional, Set, Tuple, Type, Union
+
+PathType = Union[bytes, str]

class BaseArtifactStoreConfig(StackComponentConfig):
    path: str
    SUPPORTED_SCHEMES: ClassVar[Set[str]]

class BaseArtifactStore(StackComponent):
    @abstractmethod
-    def open(self, name: PathType, mode: str = "r") -> Any: pass
+    def open(self, name: PathType, mode: str = "r") -> Any: ...
    @abstractmethod
-    def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: pass
+    def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ...
    @abstractmethod
-    def exists(self, path: PathType) -> bool: pass
+    def exists(self, path: PathType) -> bool: ...
    @abstractmethod
-    def glob(self, pattern: PathType) -> List[PathType]: pass
+    def glob(self, pattern: PathType) -> List[PathType]: ...
    @abstractmethod
-    def isdir(self, path: PathType) -> bool: pass
+    def isdir(self, path: PathType) -> bool: ...
    @abstractmethod
-    def listdir(self, path: PathType) -> List[PathType]: pass
+    def listdir(self, path: PathType) -> List[PathType]: ...
    @abstractmethod
-    def makedirs(self, path: PathType) -> None: pass
+    def makedirs(self, path: PathType) -> None: ...
    @abstractmethod
-    def mkdir(self, path: PathType) -> None: pass
+    def mkdir(self, path: PathType) -> None: ...
    @abstractmethod
-    def remove(self, path: PathType) -> None: pass
+    def remove(self, path: PathType) -> None: ...
    @abstractmethod
-    def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: pass
+    def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ...
    @abstractmethod
-    def rmtree(self, path: PathType) -> None: pass
+    def rmtree(self, path: PathType) -> None: ...
    @abstractmethod
-    def stat(self, path: PathType) -> Any: pass
+    def stat(self, path: PathType) -> Any: ...
    @abstractmethod
-    def walk(self, top: PathType, topdown: bool = True, onerror: Optional[Callable[..., None]] = None) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]: pass
+    def walk(self, top: PathType, topdown: bool = True, onerror: Optional[Callable[..., None]] = None) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]: ...

class BaseArtifactStoreFlavor(Flavor):
    @property
    @abstractmethod
-    def name(self) -> Type["BaseArtifactStore"]: pass
+    def name(self) -> str: ...
    @property
    def type(self) -> StackComponentType:
        return StackComponentType.ARTIFACT_STORE
@@ -4662,19 +4670,19 @@ class BaseArtifactStoreFlavor(Flavor):
        return BaseArtifactStoreConfig
    @property
    @abstractmethod
-    def implementation_class(self) -> Type["BaseArtifactStore"]: pass
+    def implementation_class(self) -> Type["BaseArtifactStore"]: ...
```

#### Integration with ZenML
-- When an Artifact Store is instantiated and added to a stack, it creates a filesystem for ZenML pipelines, allowing methods like `fileio.open(...)` to utilize the defined `open(...)` method.
+Once an artifact store is created and added to a stack, it integrates with the `zenml.io.fileio` module, allowing methods like `fileio.open(...)` to utilize the custom implementation.

#### Steps to Create a Custom Artifact Store
-1. 
**Inherit from `BaseArtifactStore`** and implement required methods. -2. **Inherit from `BaseArtifactStoreConfig`** and define `SUPPORTED_SCHEMES`. -3. **Combine both** by inheriting from `BaseArtifactStoreFlavor`. +1. Inherit from `BaseArtifactStore` and implement the abstract methods. +2. Inherit from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`. +3. Inherit from `BaseArtifactStoreFlavor` to combine both classes. -#### Registration -Register the custom flavor using: +#### Registering the Custom Flavor +Use the CLI to register the custom artifact store flavor: ```shell zenml artifact-store flavor register <path.to.MyArtifactStoreFlavor> ``` @@ -4683,17 +4691,12 @@ Example: zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor ``` -#### Important Notes -- Ensure ZenML is initialized at the root of your repository for proper resolution. -- After registration, list available flavors with: -```shell -zenml artifact-store flavor list -``` - -#### Artifact Visualization -Custom Artifact Stores must support authentication for visualizations, either through embedded credentials or secret references. Ensure necessary dependencies are installed in the deployment environment. +#### Important Considerations +- Ensure ZenML is initialized at the root of your repository. +- The custom artifact store must support authentication for visualizations and be deployed with necessary dependencies. -For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). +#### Additional Resources +For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore) and guidelines on deploying ZenML with custom Docker images. ================================================== @@ -4701,65 +4704,51 @@ For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/late ### Google Cloud Storage (GCS) Artifact Store -The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage (GCS) to store artifacts. It's suitable for projects requiring shared storage, remote components, or production-grade MLOps. +The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage (GCS) for storing artifacts. It is ideal for projects requiring shared storage, remote components, or production-grade MLOps. -#### When to Use GCS Artifact Store -- **Team Collaboration**: Share pipeline results with team members or stakeholders. -- **Remote Components**: Integrate with orchestrators like Kubeflow or Kubernetes. -- **Storage Limitations**: Overcome local storage constraints. -- **Scalability**: Handle large-scale pipeline demands. +#### Use Cases +Consider using GCS when: +- You need to share pipeline results. +- Components run remotely (e.g., on Kubernetes). +- Local storage is insufficient. +- You require scalable storage for production pipelines. -#### Deployment Steps -1. **Install GCP Integration**: - ```shell - zenml integration install gcp -y - ``` +#### Deployment +To deploy the GCS Artifact Store, install the GCP integration: +```shell +zenml integration install gcp -y +``` +Register the GCS Artifact Store with the required GCS bucket URI: +```shell +zenml artifact-store register gs_store -f gcp --path=gs://bucket-name +zenml stack register custom_stack -a gs_store ... --set +``` -2. 
**Register GCS Artifact Store**:
-   - **URI Format**: Use `gs://bucket-name`.
-   - **Registration Command**:
-     ```shell
-     zenml artifact-store register gs_store -f gcp --path=gs://bucket-name
-     zenml stack register custom_stack -a gs_store ... --set
-     ```

+#### Authentication
+Authentication is necessary to use the GCS Artifact Store. Options include:

-#### Authentication Methods
-- **Implicit Authentication**: Quick local setup using Google Cloud CLI. Requires local credentials but may limit functionality with remote components.
-- **GCP Service Connector (Recommended)**: Provides better security and configuration. Register with:
+1. **Implicit Authentication**: Quick local setup using existing Google Cloud CLI credentials. Requires the CLI to be installed.
+   - Note: Limited functionality with remote ZenML servers.
+
+2. **GCP Service Connector (Recommended)**: Provides better security and configuration. Register a service connector with:
   ```shell
   zenml service-connector register --type gcp -i
   ```
-   - For a specific GCS bucket:
+   Or for a specific GCS bucket:
   ```shell
   zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcs-bucket --resource-name <GCS_BUCKET_NAME> --auto-configure
   ```

-#### Connecting GCS Artifact Store
-After setting up the Service Connector:
-```shell
-zenml artifact-store register <GCS_STORE_NAME> -f gcp --path='gs://your-bucket'
-zenml artifact-store connect <GCS_STORE_NAME> -i
-```
-For non-interactive connection:
+#### GCP Credentials
+Alternatively, use a GCP Service Account Key stored in a ZenML Secret for authentication:
```shell
-zenml artifact-store connect <GCS_STORE_NAME> --connector <CONNECTOR_ID>
+zenml secret create gcp_secret --token=@path/to/service_account_key.json
+zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret
+zenml stack register custom_stack -a gcs_store ... --set
```

-#### Using GCP Credentials
-You can use a GCP Service Account Key stored in a ZenML Secret for authentication:
-1. **Create Secret**:
-   ```shell
-   zenml secret create gcp_secret --token=@path/to/service_account_key.json
-   ```
-2. **Register GCS Artifact Store**:
-   ```shell
-   zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret
-   zenml stack register custom_stack -a gs_store ... --set
-   ```
-
#### Usage
-Using the GCS Artifact Store is similar to other Artifact Store flavors. For detailed information, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store).
+Using the GCS Artifact Store is similar to other Artifact Store flavors. For detailed usage instructions, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store).

==================================================

=== File: docs/book/component-guide/artifact-stores/artifact-stores.md ===

# Artifact Stores

## Overview
-The Artifact Store is a crucial component in the MLOps stack, serving as a persistence layer for artifacts (datasets, models) generated by machine learning pipelines. ZenML automatically serializes and saves these artifacts, enabling features like caching, provenance tracking, and pipeline reproducibility. 
+The Artifact Store is a key component of the MLOps stack, serving as a persistence layer for artifacts (datasets, models) produced or ingested by machine learning pipelines. ZenML automatically serializes and saves these artifacts, enabling features like caching, provenance tracking, and pipeline reproducibility. -### Key Points -- Not all pipeline step outputs are stored in the Artifact Store; storage is determined by the associated **Materializer**. -- Custom **Materializers** can be created to store artifacts in different mediums (e.g., external model registries). -- You can extend the Artifact Store abstraction to implement a custom storage backend. +## Key Points +- Not all pipeline step outputs are stored in the Artifact Store; storage behavior is determined by the specific **Materializer** implementation. +- Custom Materializers can be created to store artifacts in different mediums (e.g., external model registries, data lakes). +- The Artifact Store can also be utilized by other components, such as the **Great Expectations Data Validator**. -## Usage -The Artifact Store is mandatory in ZenML stacks and must be configured for all pipeline runs. +## Configuration +The Artifact Store is mandatory in ZenML stacks and must be configured for all stacks. ### Artifact Store Flavors -ZenML provides several built-in and integration-based Artifact Stores: +ZenML supports various Artifact Store flavors: | Artifact Store | Flavor | Integration | URI Schema(s) | Notes | |----------------|--------|-------------|----------------|-------| -| Local | `local`| _built-in_ | None | Default, local filesystem storage. | -| Amazon S3 | `s3` | `s3` | `s3://` | AWS S3 backend. | -| Google Cloud Storage | `gcp` | `gcp` | `gs://` | GCP backend. | -| Azure | `azure`| `azure` | `abfs://`, `az://` | Azure Blob Storage backend. | -| Custom | _custom_| | _custom_ | Custom implementation. | +| Local | `local`| _built-in_ | None | Default store for local filesystem. | +| Amazon S3 | `s3` | `s3` | `s3://` | Uses AWS S3 as a backend. | +| Google Cloud Storage | `gcp` | `gcp` | `gs://` | Uses GCP as a backend. | +| Azure | `azure`| `azure` | `abfs://`, `az://` | Uses Azure Blob Storage. | +| Custom | _custom_| | _custom_ | Extend Artifact Store abstraction. | -To list available Artifact Store flavors: +To list available flavors: ```shell zenml artifact-store flavor list ``` -### Registering an Artifact Store -Each Artifact Store requires a `path` attribute for registration: +### Registration +Each Artifact Store requires a `path` attribute pointing to the root storage location. Example for S3: ```shell zenml artifact-store register s3_store -f s3 --path s3://my_bucket ``` -## Interacting with the Artifact Store -While developing pipelines, direct interaction with the Artifact Store is often unnecessary. Higher-level APIs are available for: -- Automatically saving pipeline artifacts. -- Retrieving artifacts post-pipeline execution. +## Usage +Typically, users interact with higher-level APIs for storing and retrieving artifacts. However, direct interaction with the Artifact Store API is necessary for: +- Implementing custom Materializers. +- Storing custom objects. -### Low-Level Artifact Store API -All Artifact Stores implement a unified IO API resembling a file system. Access is facilitated through: -- `zenml.io.fileio`: Low-level utilities for object manipulation (e.g., `open`, `copy`, `remove`). -- `zenml.utils.io_utils`: Higher-level utilities for object transfer between the Artifact Store and local filesystem. 
+### Artifact Store API
+All ZenML Artifact Stores implement a standard IO API, allowing file-like operations. Access is facilitated through:
+- `zenml.io.fileio`: Low-level utilities for manipulating artifacts (e.g., `open`, `copy`, `remove`).
+- `zenml.utils.io_utils`: Higher-level utilities for transferring objects between the Artifact Store and local filesystem.

-When using the API, always use relative URIs to avoid unsupported protocols. Retrieve the root path using the `Repository` singleton:
+When using the API, always use URIs relative to the Artifact Store root path. Example to write an artifact:
```python
import os
from zenml.client import Client
from zenml.io import fileio

root_path = Client().active_stack.artifact_store.path
-artifact_contents = "example artifact"
-artifact_path = os.path.join(root_path, "artifacts", "examples")
-artifact_uri = os.path.join(artifact_path, "test.txt")
-fileio.makedirs(artifact_path)
+artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt")
+fileio.makedirs(os.path.dirname(artifact_uri))
with fileio.open(artifact_uri, "w") as f:
-    f.write(artifact_contents)
+    f.write("example artifact")
```

### Example Operations
1. **Writing Data**:
```python
import os
from zenml.utils import io_utils
from zenml.client import Client
+from zenml.io import fileio

root_path = Client().active_stack.artifact_store.path
-artifact_path = os.path.join(root_path, "artifacts", "examples")
-artifact_uri = os.path.join(artifact_path, "test.txt")
-fileio.makedirs(artifact_path)
+artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt")
+fileio.makedirs(os.path.dirname(artifact_uri))
io_utils.write_file_contents_as_string(artifact_uri, "example artifact")
```

2. **Reading Data**:
```python
import os
from zenml.utils import io_utils
from zenml.client import Client

root_path = Client().active_stack.artifact_store.path
artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt")
-artifact_contents = io_utils.read_file_contents_as_string(artifact_uri)
+contents = io_utils.read_file_contents_as_string(artifact_uri)
```

3. **Using Temporary Files**:
```python
import os
import tempfile
-import external_lib
from zenml.client import Client
+from zenml.io import fileio

root_path = Client().active_stack.artifact_store.path
-artifact_path = os.path.join(root_path, "artifacts", "examples")
-artifact_uri = os.path.join(artifact_path, "test.json")
-fileio.makedirs(artifact_path)
+artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json")
+fileio.makedirs(os.path.dirname(artifact_uri))

-with tempfile.NamedTemporaryFile(mode="w", suffix=".json") as f:
-    external_lib.external_object.save_to_file(f.name)
+with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=True) as f:
+    # Save to the temporary file with any external library, then copy it in
    fileio.copy(f.name, artifact_uri)
```

-This summary retains the essential technical details and code examples necessary for understanding and utilizing the Artifact Store in ZenML.
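Beyond direct reads and writes, the same `fileio` API is what custom Materializers typically build on. As a minimal sketch of that pattern (the `MyObj` type and the `data.txt` filename are illustrative placeholders, not part of ZenML), a materializer for a simple custom type could look like this:

```python
import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyObj:
    """Illustrative custom type wrapping a single string."""

    def __init__(self, name: str):
        self.name = name


class MyObjMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        # Read the serialized payload back from this artifact's URI.
        with fileio.open(os.path.join(self.uri, "data.txt"), "r") as f:
            return MyObj(name=f.read())

    def save(self, my_obj: MyObj) -> None:
        # Persist the object under this artifact's URI in the active Artifact Store.
        with fileio.open(os.path.join(self.uri, "data.txt"), "w") as f:
            f.write(my_obj.name)
```

Steps that return a `MyObj` would then pick up this materializer via its `ASSOCIATED_TYPES`, with the actual storage backend determined by whichever Artifact Store is in the active stack.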
================================================== @@ -4874,104 +4856,111 @@ This summary retains the essential technical details and code examples necessary ### Developing a Custom Data Validator in ZenML -To create a custom data validator in ZenML, follow these steps: +Before implementing a custom data validator, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge. + +**Note:** The base abstraction for Data Validators is under development. While you can create a custom Data Validator, be aware that future updates may require refactoring. -1. **Class Inheritance**: Create a class that inherits from the `BaseDataValidator` class and override necessary abstract methods based on the library/service you want to integrate. -2. **Configuration Class**: If configuration is needed, create a class inheriting from `BaseDataValidatorConfig`. -3. **Flavor Integration**: Combine both classes by inheriting from `BaseDataValidatorFlavor`. -4. **Standard Steps (Optional)**: Provide standard steps for easy integration into pipelines. +ZenML provides built-in [Data Validator implementations](./data-validators.md#data-validator-flavors) that integrate various data logging and validation libraries. To create a custom Data Validator: -**Registration**: Register your custom data validator flavor using the CLI with the following command, ensuring to use dot notation for the flavor class: +1. Inherit from the `BaseDataValidator` class and override necessary abstract methods based on your chosen library/service. +2. If configuration is needed, create a class inheriting from `BaseDataValidatorConfig`. +3. Combine these classes by inheriting from `BaseDataValidatorFlavor`. +4. Optionally, include standard steps for easy integration into pipelines. + +To register your custom flavor, use the CLI command: ```shell zenml data-validator flavor register <path.to.MyDataValidatorFlavor> ``` -For example, if your flavor class is in `flavors/my_flavor.py`: +For example, if your flavor class is `MyDataValidatorFlavor` in `flavors/my_flavor.py`, register it as follows: ```shell zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor ``` -**Best Practices**: Initialize ZenML at the root of your repository using `zenml init` to avoid resolution issues. After registration, confirm the flavor is available with: +**Important:** Ensure ZenML is initialized at the root of your repository to avoid resolution issues. After registration, confirm the flavor is available: ```shell zenml data-validator flavor list ``` -**Important Notes**: -- The `CustomDataValidatorFlavor` is used during flavor creation. -- The `CustomDataValidatorConfig` is utilized when registering/updating a stack component, validating user-provided values. -- The `CustomDataValidator` is engaged when the component is in use, allowing for separation of flavor configuration and implementation. +### Key Points on Base Abstractions: +- **CustomDataValidatorFlavor** is used during flavor creation. +- **CustomDataValidatorConfig** is utilized when registering/updating a stack component, validating user input. +- **CustomDataValidator** is invoked when the component is in use, allowing separation of flavor configuration from implementation. -This modular approach enables registration and component usage without needing all dependencies installed locally. 
+This design enables registration of flavors and components even if their dependencies are not installed locally, provided the flavor and config are implemented in separate modules. ================================================== === File: docs/book/component-guide/data-validators/great-expectations.md === -### Great Expectations Integration with ZenML +### Great Expectations with ZenML -**Overview**: Great Expectations (GE) is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to run GE data validation within their pipelines, automatically generating documentation and enabling automated corrective actions. +**Overview** +Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to run data profiling and quality tests on `pandas.DataFrame` datasets, with results automatically documented for visualization and evaluation. -#### Key Features: -- **Data Profiling**: Automatically generates validation rules (Expectations) from dataset properties. -- **Data Quality**: Validates datasets against predefined or inferred Expectations. -- **Data Docs**: Maintains human-readable documentation of validation rules and results. +**Use Cases** +Utilize Great Expectations when you need: +- **Data Profiling**: Automatically generate validation rules (Expectations) from dataset properties. +- **Data Quality**: Validate datasets against predefined or inferred rules. +- **Data Docs**: Create human-readable documentation of validation rules and results. -#### Deployment: -To use the Great Expectations Data Validator in ZenML, install the integration: +**Deployment** +To deploy the Great Expectations Data Validator in ZenML, install the integration: ```shell zenml integration install great_expectations -y ``` -**Registration**: -1. Register the data validator: - ```shell - zenml data-validator register ge_data_validator --flavor=great_expectations - ``` -2. Set up a stack: - ```shell - zenml stack register custom_stack -dv ge_data_validator ... --set - ``` +Register the data validator and stack: -#### Configuration Options: -1. **ZenML Managed Configuration**: ZenML initializes and manages GE configuration, storing Expectation Suites and Validation Results in the Artifact Store. -2. **Using Existing Configuration**: Point to your existing `great_expectations.yaml`: - ```shell - zenml data-validator register ge_data_validator --flavor=great_expectations --context_root_dir=/path/to/my/great_expectations - ``` -3. **Migrating Configuration**: Migrate your existing GE configuration to ZenML: - ```shell - zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml - ``` +```shell +zenml data-validator register ge_data_validator --flavor=great_expectations +zenml stack register custom_stack -dv ge_data_validator ... --set +``` + +**Configuration Options** +1. **ZenML Managed Configuration**: ZenML initializes and manages the Great Expectations configuration, storing Expectation Suites and Validation Results in the Artifact Store. +2. **Use Existing Configuration**: Point to an existing `great_expectations.yaml` file to reuse configurations. +3. **Migrate Configuration**: Load existing configurations into ZenML for use with both local and remote orchestrators. + +**Advanced Configuration** +- `configure_zenml_stores`: Automatically update Great Expectations to use ZenML's Artifact Store. 
+- `configure_local_docs`: Set up a local Data Docs site for visualization. -**Advanced Configuration**: -- `configure_zenml_stores`: Automatically updates GE configuration to use ZenML's Artifact Store. -- `configure_local_docs`: Sets up a local Data Docs site for visualization. +**Usage in Pipelines** +Great Expectations is integrated into ZenML with two main steps: +1. **Data Profiler Step**: Generates an Expectation Suite from a `pandas.DataFrame`. -#### Usage in Pipelines: -1. **Data Profiler Step**: Automatically generates an Expectation Suite from a `pandas.DataFrame`. ```python from zenml.integrations.great_expectations.steps import great_expectations_profiler_step ge_profiler_step = great_expectations_profiler_step.with_options( - parameters={"expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df"} + parameters={ + "expectation_suite_name": "steel_plates_suite", + "data_asset_name": "steel_plates_train_df", + } ) ``` -2. **Data Validator Step**: Validates a `pandas.DataFrame` against an existing Expectation Suite. +2. **Data Validator Step**: Validates a dataset against an existing Expectation Suite. + ```python from zenml.integrations.great_expectations.steps import great_expectations_validator_step ge_validator_step = great_expectations_validator_step.with_options( - parameters={"expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df"} + parameters={ + "expectation_suite_name": "steel_plates_suite", + "data_asset_name": "steel_plates_train_df", + } ) ``` -#### Direct Interaction with Great Expectations: -You can directly use the GE library within ZenML steps, ensuring to use the Data Context managed by ZenML: +**Direct Library Use** +You can directly use Great Expectations in custom steps, ensuring to utilize the Data Context managed by ZenML: + ```python import great_expectations as ge from zenml.integrations.great_expectations.data_validators import GreatExpectationsDataValidator @@ -4986,8 +4975,9 @@ def create_custom_expectation_suite() -> ExpectationSuite: return suite ``` -#### Visualization: +**Visualization** Results can be visualized in the ZenML dashboard or within Jupyter notebooks using: + ```python from zenml.client import Client @@ -4998,7 +4988,7 @@ def visualize_results(pipeline_name: str, step_name: str) -> None: validation_step.visualize() ``` -This integration streamlines data validation and documentation processes, enhancing data quality management in pipelines. +This summary encapsulates the essential technical details and steps for using Great Expectations with ZenML, ensuring clarity and conciseness while maintaining critical information. ================================================== @@ -5006,47 +4996,43 @@ This integration streamlines data validation and documentation processes, enhanc ### Summary of Deepchecks Integration with ZenML -**Overview**: The Deepchecks integration with ZenML allows for data integrity, drift, and model performance testing within machine learning pipelines. It supports both tabular and computer vision data formats. +**Overview**: The Deepchecks Data Validator in ZenML utilizes the Deepchecks library to perform data integrity, drift, and model performance tests on datasets and models in ZenML pipelines. Results can trigger automated actions or provide visual insights. -#### Key Features: -- **Data Integrity Checks**: Identify issues like missing values and conflicting labels. 
-- **Data Drift Checks**: Compare target and reference datasets to detect data skew. -- **Model Performance Checks**: Evaluate model performance using metrics like confusion matrix. -- **Multi-Model Performance Reports**: Summarize performance scores for multiple models. +#### Use Cases +Deepchecks is suitable for: +- **Data Integrity Checks**: Identify issues like missing values and conflicting labels in datasets. +- **Data Drift Checks**: Compare target datasets against reference datasets to detect feature and label drift. +- **Model Performance Checks**: Evaluate model performance using metrics like confusion matrices. +- **Multi-Model Performance Reports**: Summarize performance across multiple models. -#### Supported Formats: -- **Tabular Data**: `pandas.DataFrame`, models as `sklearn.base.ClassifierMixin`. -- **Computer Vision Data**: `torch.utils.data.dataloader.DataLoader`, models as `torch.nn.Module`. +**Supported Formats**: +- **Tabular Data**: `pandas.DataFrame`, models of type `sklearn.base.ClassifierMixin`. +- **Computer Vision Data**: `torch.utils.data.DataLoader`, models of type `torch.nn.Module`. -#### Installation: +#### Deployment To install the Deepchecks integration: ```shell zenml integration install deepchecks -y ``` - -#### Registering the Data Validator: +To register and set up the Deepchecks Data Validator: ```shell -# Register the Deepchecks data validator zenml data-validator register deepchecks_data_validator --flavor=deepchecks - -# Register and set a stack with the new data validator zenml stack register custom_stack -dv deepchecks_data_validator ... --set ``` -#### Usage: -Deepchecks validation checks are categorized based on input parameters: +#### Usage +Deepchecks validation checks are categorized into four types based on input requirements: 1. **Data Integrity Checks**: Single dataset input. 2. **Data Drift Checks**: Two datasets (target and reference). -3. **Model Validation Checks**: Single dataset and model input. -4. **Model Drift Checks**: Two datasets and a model input. +3. **Model Validation Checks**: Single dataset and a model. +4. **Model Drift Checks**: Two datasets and a model. -#### Standard Steps: -- `deepchecks_data_integrity_check_step`: For data integrity tests. -- `deepchecks_data_drift_check_step`: For data drift tests. -- `deepchecks_model_validation_check_step`: For model performance tests. -- `deepchecks_model_drift_check_step`: For model comparison tests. +You can use Deepchecks in ZenML by: +- Instantiating standard Deepchecks steps. +- Calling validation methods in custom steps. +- Using the Deepchecks library directly. -#### Example of Data Integrity Check Step: +**Example of Data Integrity Check Step**: ```python from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step @@ -5058,34 +5044,40 @@ data_validator = deepchecks_data_integrity_check_step.with_options( def data_validation_pipeline(): df_train, df_test = data_loader() data_validator(dataset=df_train) - -data_validation_pipeline() -``` - -#### Customizing Checks: -You can specify a custom list of checks and additional parameters: -```python -deepchecks_data_integrity_check_step( - check_list=[ - DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES, - DeepchecksDataIntegrityCheck.TABULAR_DATA_DUPLICATES, - ], - dataset=... 
-)
-```

-#### Docker Configuration for Remote Orchestrators:
-To run Deepchecks with remote orchestrators, create a Dockerfile:
+#### Remote Orchestrators
+For remote orchestrators (e.g., Kubeflow), extend the Docker image to include required binaries for `opencv2`:
```shell
ARG ZENML_VERSION=0.20.0
FROM zenmldocker/zenml:${ZENML_VERSION} AS base
+RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
+```
+
+#### Standard Steps
+ZenML provides four standard steps for Deepchecks:
+- `deepchecks_data_integrity_check_step`
+- `deepchecks_data_drift_check_step`
+- `deepchecks_model_validation_check_step`
+- `deepchecks_model_drift_check_step`

-RUN apt-get update
-RUN apt-get install ffmpeg libsm6 libxext6 -y
+**Example of Custom Data Integrity Check**:
+```python
+import pandas as pd
+from deepchecks.core.suite import SuiteResult
+
+from zenml import step
+from zenml.integrations.deepchecks.data_validators import DeepchecksDataValidator
+from zenml.integrations.deepchecks.validation_checks import DeepchecksDataIntegrityCheck
+
+@step
+def data_integrity_check(dataset: pd.DataFrame) -> SuiteResult:
+    data_validator = DeepchecksDataValidator.get_active_data_validator()
+    suite = data_validator.data_validation(
+        dataset=dataset,
+        check_list=[
+            DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION,
+            DeepchecksDataIntegrityCheck.TABULAR_STRING_LENGTH_OUT_OF_BOUNDS,
+        ],
+    )
+    return suite
```

-#### Visualizing Results:
-Results can be visualized in the ZenML dashboard or Jupyter notebooks:
+#### Visualizing Results
+Results can be visualized in the ZenML dashboard or Jupyter notebooks using:
```python
from zenml.client import Client

def visualize_results(pipeline_name: str, step_name: str) -> None:
    pipeline = Client().get_pipeline(pipeline=pipeline_name)
    last_run = pipeline.last_run
    step = last_run.steps[step_name]
    step.visualize()
```

-This integration provides a robust framework for validating data and models in machine learning workflows, ensuring data integrity and model performance.
+This integration provides a robust framework for validating data and models within ZenML pipelines, ensuring data integrity and model performance through automated checks and visualizations.

==================================================

=== File: docs/book/component-guide/data-validators/data-validators.md ===

# Data Validators

-Data Validators are essential tools in machine learning (ML) that ensure data quality and monitor model performance throughout the ML project lifecycle. They help in data profiling, integrity testing, and drift detection at various stages: data ingestion, model training, evaluation, and inference. Data profiles and performance evaluations can be visualized to identify and address issues.
+Data Validators are essential tools in machine learning (ML) for ensuring data quality and monitoring model performance throughout the ML project lifecycle. They provide features for data profiling, integrity testing, and drift detection at various stages, including data ingestion, model training, and inference.

## Key Concepts
-- Data Validators are optional components in ZenML stacks.
-- They generate versioned data profiles and quality reports stored in the Artifact Store for later retrieval and visualization.
-
-## When to Use Data Validators
-Consider using Data Validators in the following scenarios:
-- Early stages to log data quality and model performance.
-- Pipelines that regularly ingest new data for integrity checks.
-- Continuous training pipelines to compare new data and model performance.
-- Batch inference or online inference pipelines to analyze data drift and detect discrepancies between training and serving data. 
+- **Data Validators**: Optional components in ZenML stacks that generate data profiles and quality reports, stored in the [Artifact Store](../artifact-stores/artifact-stores.md). +- **Use Cases**: + - Logging data quality and model performance during development. + - Conducting regular integrity checks for pipelines that ingest new data. + - Comparing new training data and model performance in continuous training pipelines. + - Analyzing data drift in batch inference or online inference scenarios. ## Data Validator Flavors -The following Data Validators are available in ZenML, each with specific features and applicable data/model types: +ZenML supports various Data Validators, each with unique features: -| Data Validator | Validation Features | Data Types | Model Types | Notes | Flavor/Integration | -|------------------------|------------------------------------------------------|----------------------------------------|--------------------------------------|-------------------------------------------------|------------------------| -| [Deepchecks](deepchecks.md) | data quality, drift, model performance | tabular: `pandas.DataFrame`, CV: `torch.utils.data.dataloader.DataLoader` | tabular: `sklearn.base.ClassifierMixin`, CV: `torch.nn.Module` | Integrate Deepchecks tests into pipelines | `deepchecks` | -| [Evidently](evidently.md) | data quality, drift, model performance | tabular: `pandas.DataFrame` | N/A | Generate data quality and drift reports | `evidently` | -| [Great Expectations](great-expectations.md) | data profiling, quality | tabular: `pandas.DataFrame` | N/A | Perform data testing and profiling | `great_expectations` | -| [Whylogs/WhyLabs](whylogs.md) | data drift | tabular: `pandas.DataFrame` | N/A | Generate data profiles and upload to WhyLabs | `whylogs` | +| Data Validator | Validation Features | Data Types | Model Types | Notes | Flavor/Integration | +|----------------------|--------------------------------------------------|--------------------------------------------------|------------------------------------------------|------------------------------------------------|--------------------------| +| [Deepchecks](deepchecks.md) | data quality, drift, model performance | tabular: `pandas.DataFrame`, CV: `torch.utils.data.dataloader.DataLoader` | tabular: `sklearn.base.ClassifierMixin`, CV: `torch.nn.Module` | Integrate Deepchecks tests into pipelines | `deepchecks` | +| [Evidently](evidently.md) | data quality, drift, model performance | tabular: `pandas.DataFrame` | N/A | Generate reports and visualizations | `evidently` | +| [Great Expectations](great-expectations.md) | data profiling, quality | tabular: `pandas.DataFrame` | N/A | Data testing and profiling | `great_expectations` | +| [Whylogs/WhyLabs](whylogs.md) | data drift | tabular: `pandas.DataFrame` | N/A | Generate profiles and upload to WhyLabs | `whylogs` | To view available Data Validator flavors, use: ```shell zenml data-validator flavor list ``` -## How to Use Data Validators -1. Configure and add a Data Validator to your ZenML stack. -2. Integrations include built-in validation steps for pipelines, or you can use libraries directly in custom steps to return results as artifacts. -3. Access data validation artifacts in subsequent steps or fetch them later for processing or visualization. +## Usage +1. **Configure and Add**: Integrate a Data Validator into your ZenML stack. +2. 
**Add Validation Steps**: Use built-in validation steps in your pipelines or implement them in custom steps, returning results as artifacts. +3. **Access Artifacts**: Retrieve validation artifacts in subsequent steps or [fetch them later](../../how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md) for processing or visualization. -Refer to the specific Data Validator flavor documentation for detailed usage instructions. +Refer to the specific [Data Validator flavor documentation](data-validators.md#data-validator-flavors) for detailed usage instructions. ================================================== @@ -5143,177 +5132,178 @@ Refer to the specific Data Validator flavor documentation for detailed usage ins ### Summary of Evidently Data Validator Documentation -**Overview:** -The Evidently Data Validator, integrated with ZenML, utilizes the Evidently library for data quality, drift, and model performance analysis. It generates reports and performs checks to facilitate automated corrective actions in machine learning pipelines. +**Overview**: The Evidently Data Validator, integrated with ZenML, utilizes the Evidently library to assess data quality, detect data and model drift, and evaluate model performance. It generates reports that can trigger automated corrective actions or provide interactive visualizations. -**Use Cases:** -Evidently is suitable for monitoring and debugging ML models by analyzing datasets. Key features include: -- **Data Quality Reports:** Analyze feature statistics and compare datasets. -- **Data Drift Reports:** Detect changes in feature distributions between datasets. -- **Target Drift Reports:** Identify changes in target functions or model predictions. -- **Performance Reports:** Evaluate model performance against datasets. +**Use Cases**: Evidently is suitable for monitoring machine learning models by analyzing datasets through profiling and validation reports. It supports various tasks, including: +- **Data Quality**: Analyzes feature statistics and behavior, allowing comparisons between datasets. +- **Data Drift**: Detects changes in feature distributions between datasets. +- **Target Drift**: Identifies changes in target functions or model predictions. +- **Model Performance**: Evaluates model performance against historical data or alternative models. -**Deployment:** -To use the Evidently Data Validator, install the integration: +**Deployment**: +To install the Evidently integration: ```shell zenml integration install evidently -y ``` -Register the data validator in your stack: +To register the Evidently Data Validator: ```shell zenml data-validator register evidently_data_validator --flavor=evidently zenml stack register custom_stack -dv evidently_data_validator ... --set ``` -**Usage:** -Evidently profiling functions accept `pandas.DataFrame` or CSV datasets and output a `Report` object. For profiling, datasets must include target and prediction columns, requiring column mappings. - -**Generating Reports:** -You can generate reports in ZenML pipelines using: -1. **Standard Evidently Report Step:** Recommended for ease of use. -2. **Custom Step Implementation:** Offers more flexibility. -3. **Direct Evidently Library Usage:** Full control over features. 
- -**Example of Evidently Report Step:** -```python -from zenml.integrations.evidently.steps import evidently_report_step - -text_data_report = evidently_report_step.with_options( - parameters=dict( - column_mapping=EvidentlyColumnMapping( - target="Rating", - numerical_features=["Age", "Positive_Feedback_Count"], - categorical_features=["Division_Name", "Department_Name", "Class_Name"], - text_features=["Review_Text", "Title"], - ), - metrics=[ - EvidentlyMetricConfig.metric("DataQualityPreset"), - EvidentlyMetricConfig.metric("TextOverviewPreset", column_name="Review_Text"), - ], - download_nltk_data=True, - ), -) -``` - -**Data Validation:** -Evidently also supports automated data validation tests. Similar to profiling, you can use: -1. **Standard Evidently Test Step:** Easiest method. -2. **Custom Step Implementation:** More flexibility. -3. **Direct Library Usage:** Complete freedom. +**Usage**: +1. **Data Profiling**: Use `evidently_report_step` to generate reports from `pandas.DataFrame` datasets. Requires target and prediction columns for certain analyses. + Example configuration: + ```python + from zenml.integrations.evidently.steps import evidently_report_step + + text_data_report = evidently_report_step.with_options( + parameters=dict( + column_mapping=EvidentlyColumnMapping( + target="Rating", + numerical_features=["Age", "Positive_Feedback_Count"], + categorical_features=["Division_Name", "Department_Name", "Class_Name"], + text_features=["Review_Text", "Title"], + ), + metrics=[ + EvidentlyMetricConfig.metric("DataQualityPreset"), + EvidentlyMetricConfig.metric("TextOverviewPreset", column_name="Review_Text"), + ], + download_nltk_data=True, + ), + ) + ``` -**Example of Evidently Test Step:** -```python -from zenml.integrations.evidently.steps import evidently_test_step +2. **Data Validation**: Similar to profiling, use `evidently_test_step` for running validation tests. + Example configuration: + ```python + from zenml.integrations.evidently.steps import evidently_test_step + + text_data_test = evidently_test_step.with_options( + parameters=dict( + column_mapping=EvidentlyColumnMapping( + target="Rating", + numerical_features=["Age", "Positive_Feedback_Count"], + categorical_features=["Division_Name", "Department_Name", "Class_Name"], + text_features=["Review_Text", "Title"], + ), + tests=[ + EvidentlyTestConfig.test("DataQualityTestPreset"), + ], + download_nltk_data=True, + ), + ) + ``` -text_data_test = evidently_test_step.with_options( - parameters=dict( - column_mapping=EvidentlyColumnMapping( - target="Rating", - numerical_features=["Age", "Positive_Feedback_Count"], - categorical_features=["Division_Name", "Department_Name", "Class_Name"], - text_features=["Review_Text", "Title"], - ), - tests=[ - EvidentlyTestConfig.test("DataQualityTestPreset"), - EvidentlyTestConfig.test_generator("TestColumnRegExp", columns=["Review_Text", "Title"], reg_exp=r"[A-Z][A-Za-z0-9 ]*"), - ], - download_nltk_data=True, - ), -) -``` +3. **Direct Use of Evidently**: You can also call Evidently directly in custom steps. 
+ Example: + ```python + from evidently.report import Report + from evidently.pipeline.column_mapping import ColumnMapping -**Direct Library Usage Example:** -```python -from evidently.report import Report -from evidently.pipeline.column_mapping import ColumnMapping + report = Report(metrics=[metric_preset.DataQualityPreset()]) + report.run(current_data=dataset, reference_data=dataset) + ``` -report = Report(metrics=[metric_preset.DataQualityPreset()]) -report.run(current_data=dataset, reference_data=dataset) -``` +**Visualization**: Reports can be visualized in the ZenML dashboard or Jupyter notebooks using the `artifact.visualize()` method. -**Visualizing Reports:** -Reports can be visualized in the ZenML dashboard or Jupyter notebooks using the `artifact.visualize()` method. +**Key Points**: +- Evidently supports both regression and classification tasks. +- Requires minimal configuration for standard reports and tests. +- Allows for custom implementations for advanced use cases. +- Provides JSON and HTML outputs for reports and test results. -**Conclusion:** -Evidently provides a comprehensive solution for monitoring data quality and model performance in machine learning workflows, with flexible integration options within ZenML. For detailed configurations and metric options, refer to the [official Evidently documentation](https://docs.evidentlyai.com/). +For detailed configurations and options, refer to the [Evidently documentation](https://docs.evidentlyai.com/reference/all-metrics) and [ZenML SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-evidently/#zenml.integrations.evidently.steps.evidently_report.evidently_report_step). ================================================== === File: docs/book/component-guide/data-validators/whylogs.md === -### Summary of Whylogs/WhyLabs Profiling Documentation +### Summary of Whylogs/WhyLabs Profiling with ZenML Integration -**Overview**: -The whylogs/WhyLabs Data Validator, integrated with ZenML, utilizes the open-source library [whylogs](https://whylabs.ai/whylogs) to generate and track data profiles, which are statistical summaries of data. These profiles can facilitate automated corrective actions and provide interactive visualizations. +**Overview**: The whylogs/WhyLabs Data Validator in ZenML leverages the open-source library [whylogs](https://whylabs.ai/whylogs) to generate data profiles, which are statistical summaries of data. These profiles can be used for automated corrective actions and visual analysis. -**Use Cases**: -Utilize whylogs for: +**Use Cases**: - **Data Quality**: Validate model input data quality. -- **Data Drift**: Detect shifts in model input features. -- **Model Drift**: Identify training-serving skew and model performance degradation. - -**Deployment**: -To deploy the whylogs Data Validator, install the ZenML integration: - -```shell -zenml integration install whylogs -y -``` - -Register the Data Validator without configuration if not connecting to WhyLabs: +- **Data Drift**: Detect changes in model input features. +- **Model Drift**: Identify training-serving skew and performance degradation. -```shell -zenml data-validator register whylogs_data_validator --flavor=whylogs -zenml stack register custom_stack -dv whylogs_data_validator ... 
--set
-```
-
-For WhyLabs integration, create a ZenML Secret for authentication:
-
-```shell
-zenml secret create whylabs_secret \
-    --whylabs_default_org_id=<YOUR-WHYLOGS-ORGANIZATION-ID> \
-    --whylabs_api_key=<YOUR-WHYLOGS-API-KEY>
-
-zenml data-validator register whylogs_data_validator --flavor=whylogs \
-    --authentication_secret=whylabs_secret
-```
-
-Enable logging for custom pipeline steps by setting `upload_to_whylabs=True`.
+**Deployment**:
+1. Install the integration:
+   ```shell
+   zenml integration install whylogs -y
+   ```
+2. Register the Data Validator:
+   ```shell
+   zenml data-validator register whylogs_data_validator --flavor=whylogs
+   zenml stack register custom_stack -dv whylogs_data_validator ... --set
+   ```

-**Usage**:
-Whylogs profiling functions accept a `pandas.DataFrame` and produce a `DatasetProfileView`. There are three usage methods:
-1. **Standard Step**: Use `WhylogsProfilerStep` for ease of use.
-2. **Custom Step**: Call validation methods from the Data Validator for flexibility.
-3. **Direct Library Use**: Leverage whylogs directly for complete control.
+**WhyLabs Integration**:
+- Create a ZenML Secret for WhyLabs authentication:
+  ```shell
+  zenml secret create whylabs_secret \
+      --whylabs_default_org_id=<YOUR-WHYLOGS-ORGANIZATION-ID> \
+      --whylabs_api_key=<YOUR-WHYLOGS-API-KEY>
+  ```
+- Register the Data Validator with WhyLabs:
+  ```shell
+  zenml data-validator register whylogs_data_validator --flavor=whylogs \
+      --authentication_secret=whylabs_secret
+  ```

-**Example of Standard Step**:
+**Pipeline Integration**:
+- Enable WhyLabs logging in custom steps:
+  ```python
+  @step(
+      settings={
+          "data_validator": WhylogsDataValidatorSettings(
+              enable_whylabs=True, dataset_id="model-1"
+          )
+      }
+  )
+  def data_loader() -> Tuple[Annotated[pd.DataFrame, "data"], Annotated[DatasetProfileView, "profile"]]:
+      ...
+  ```

-```python
-from zenml.integrations.whylogs.steps import get_whylogs_profiler_step
+**Usage**:
+- Use `WhylogsProfilerStep` for standard profiling:
+  ```python
+  from zenml.integrations.whylogs.steps import get_whylogs_profiler_step

-train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2")
-```
+  train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2")
+  ```

-**Example of Custom Step**:
+- Example pipeline (assumes a `data_splitter` step that splits the loaded data into train and test sets):
+  ```python
+  @pipeline
+  def data_profiling_pipeline():
+      data, _ = data_loader()
+      train, test = data_splitter(data)
+      train_data_profiler(train)
+      test_data_profiler(test)

-```python
-@step(settings={"data_validator": WhylogsDataValidatorSettings(enable_whylabs=True, dataset_id="model-1")})
-def data_loader() -> Tuple[Annotated[pd.DataFrame, "data"], Annotated[DatasetProfileView, "profile"]]:
-    X, y = datasets.load_diabetes(return_X_y=True, as_frame=True)
-    df = pd.merge(X, y, left_index=True, right_index=True)
-    profile = why.log(pandas=df).profile().view()
-    return df, profile
-```
+  data_profiling_pipeline()
+  ```

-**Visualizing Profiles**:
-Visualizations can be accessed in the ZenML dashboard or via Jupyter notebooks using the `artifact.visualize()` method. 
An example function to visualize statistics is: +**Direct Whylogs Usage**: +- Use whylogs directly in custom steps: + ```python + @step(settings={"data_validator": whylogs_settings}) + def data_profiler(dataset: pd.DataFrame) -> DatasetProfileView: + results = why.log(dataset) + return results.profile().view() + ``` -```python -def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None) -> None: - pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") - whylogs_step = pipe.last_run.steps[step_name] - whylogs_step.visualize() -``` +**Visualization**: +- Visualize profiles in ZenML dashboard or Jupyter: + ```python + def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None) -> None: + pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") + whylogs_step = pipe.last_run.steps[step_name] + whylogs_step.visualize() + ``` -This documentation provides essential details for implementing and utilizing whylogs profiling within ZenML pipelines, ensuring effective data validation and visualization. +This documentation provides a comprehensive guide on how to implement and utilize whylogs profiling within ZenML, covering installation, configuration, and usage in data pipelines. For further details, refer to the official [whylogs documentation](https://whylogs.readthedocs.io/en/latest/index.html). ================================================== @@ -5321,17 +5311,17 @@ This documentation provides essential details for implementing and utilizing why ### Local Orchestrator -The local orchestrator is a built-in component of ZenML that allows you to run pipelines locally without additional setup. +The local orchestrator is a built-in component of ZenML that allows you to run pipelines locally without additional setup. -#### When to Use -- Ideal for beginners starting with ZenML. +#### When to Use It +- Ideal for beginners wanting to run pipelines without cloud infrastructure. - Useful for quickly experimenting and debugging new pipelines. -#### Deployment +#### How to Deploy It The local orchestrator is included with ZenML and requires no extra configuration. -#### Usage -To register and activate the local orchestrator in your stack: +#### How to Use It +To register and use the local orchestrator in your active stack: ```shell zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=local @@ -5344,19 +5334,19 @@ Run your ZenML pipeline with: python file_that_runs_a_zenml_pipeline.py ``` -For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). +For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/custom.md === -### Develop a Custom Orchestrator +### Custom Orchestrator Development in ZenML #### Overview -To create a custom orchestrator in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +To develop a custom orchestrator in ZenML, it is essential to understand the component flavor concepts. 
Refer to the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge. #### Base Implementation -ZenML's `BaseOrchestrator` abstracts orchestration details and provides a simplified interface: +ZenML's `BaseOrchestrator` abstracts orchestration tools, providing a simplified interface: ```python from abc import ABC, abstractmethod @@ -5372,17 +5362,17 @@ class BaseOrchestrator(StackComponent, ABC): @abstractmethod def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> Any: """Prepares and runs the pipeline or returns an intermediate representation.""" - + @abstractmethod def get_orchestrator_run_id(self) -> str: - """Returns the unique run ID for the active orchestrator run.""" + """Returns a unique run ID for the active orchestrator run.""" class BaseOrchestratorFlavor(Flavor): @property @abstractmethod def name(self): """Returns the name of the flavor.""" - + @property def type(self) -> StackComponentType: return StackComponentType.ORCHESTRATOR @@ -5397,35 +5387,29 @@ class BaseOrchestratorFlavor(Flavor): """Implementation class for this flavor.""" ``` -#### Building a Custom Orchestrator -To create a custom orchestrator flavor: - +#### Creating a Custom Orchestrator +To create a custom flavor: 1. Inherit from `BaseOrchestrator` and implement `prepare_or_run_pipeline(...)` and `get_orchestrator_run_id()`. -2. If needed, create a configuration class inheriting from `BaseOrchestratorConfig`. -3. Combine both by inheriting from `BaseOrchestratorFlavor` and define a `name`. - -Register the flavor via CLI: +2. Create a configuration class inheriting from `BaseOrchestratorConfig`. +3. Combine them by inheriting from `BaseOrchestratorFlavor`, providing a name. +Register the flavor using: ```shell zenml orchestrator flavor register <path.to.MyOrchestratorFlavor> ``` - -For example: - +Example: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` -Ensure ZenML is initialized at the root of your repository. - -#### Implementation Guide -1. **Create your orchestrator class:** Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` for container-based execution. -2. **Implement `prepare_or_run_pipeline(...)`:** Convert the pipeline to a format understood by your orchestration tool and run it. -3. **Implement `get_orchestrator_run_id()`:** Return a unique ID for each pipeline run. +#### Implementation Steps +1. **Create Orchestrator Class:** Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker. +2. **Implement `prepare_or_run_pipeline(...)`:** Convert the pipeline for your orchestration tool and run it. +3. **Implement `get_orchestrator_run_id()`:** Return a unique ID for each run, consistent across steps. -#### Optional Features -- **Scheduling:** Handle `deployment.schedule` if supported; otherwise, log a warning or raise an exception. -- **Resource Specification:** Manage resource settings from `step.config.resource_settings`. +Optional features include: +- Scheduling pipelines. +- Specifying hardware resources. #### Code Sample ```python @@ -5449,58 +5433,60 @@ class MyOrchestrator(ContainerizedOrchestrator): ... ``` -#### Enabling CUDA for GPU Hardware -To run steps on a GPU, follow the [instructions for enabling CUDA](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure proper acceleration. 
+#### Additional Notes +- Ensure CUDA is enabled for GPU usage by following [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md). +- For a complete example, refer to the [full end-to-end custom orchestrator guide](https://github.com/zenml-io/zenml-plugins/tree/main/how_to_custom_orchestrator). ================================================== === File: docs/book/component-guide/orchestrators/hyperai.md === -# HyperAI Orchestrator Overview +# HyperAI Orchestrator Summary -The HyperAI Orchestrator is a component of the HyperAI cloud compute platform that enables easy deployment of AI pipelines on HyperAI instances. It is specifically designed for remote ZenML deployments. +The **HyperAI Orchestrator** is a component of the HyperAI cloud compute platform that facilitates the deployment of AI pipelines on HyperAI instances. It is intended for use in remote ZenML deployment scenarios only. ## When to Use -- Managed solution for running pipelines. -- You are a HyperAI customer. +- For managed pipeline execution. +- If you are a HyperAI customer. ## Prerequisites -1. **HyperAI Instance**: Must be accessible via the internet and support SSH key-based access. -2. **Docker**: A recent version of Docker with Docker Compose must be installed. -3. **NVIDIA Driver**: Required on the HyperAI instance (if not pre-installed). -4. **NVIDIA Container Toolkit**: Must be installed and configured for GPU usage. If omitted, disable GPU access in the orchestrator configuration. +1. A running HyperAI instance with internet access and SSH key-based access. +2. A recent version of Docker, including Docker Compose. +3. An appropriate [NVIDIA Driver](https://www.nvidia.com/en-us/drivers/unix/) installed on the HyperAI instance. +4. The [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) installed and configured (optional for GPU use). ## Functionality -The orchestrator utilizes Docker Compose to construct and execute machine learning pipelines. It generates a Docker Compose file for each ZenML pipeline step, ensuring that steps only run if their upstream dependencies succeed. It can connect to a container registry for Docker image transfers. +- Utilizes Docker Compose to create and execute machine learning pipelines. +- Creates a Docker Compose file for each ZenML pipeline step, ensuring steps run only if upstream steps succeed. +- Can connect to the stack's container registry for Docker image transfers. -### Scheduled Pipelines -The orchestrator supports: -- **Cron Expressions**: For periodic runs (requires `crontab`). -- **Run Once**: For single scheduled runs at a specified time (requires `at`). +## Scheduled Pipelines +Supports: +- **Cron expressions** for periodic runs (requires `crontab`). +- **Scheduled runs** for one-time executions at specified times (requires `at`). ## Deployment Steps -1. **Configure HyperAI Service Connector**: Link it to the HyperAI orchestrator in ZenML using supported authentication methods. - - Example command for RSA-based key authentication: - ```shell - zenml service-connector register <SERVICE_CONNECTOR_NAME> --type=hyperai --auth-method=rsa-key --base64_ssh_key=<BASE64_SSH_KEY> --hostnames=<INSTANCE_1>,<INSTANCE_2> --username=<INSTANCE_USERNAME> - ``` +1. Configure a **HyperAI Service Connector** in ZenML with credentials for connecting to the HyperAI instance. +2. Register the orchestrator in a stack that includes a container registry and image builder. -2. 
**Register the Orchestrator**: - ```shell - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=hyperai - zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set - ``` +### Service Connector Registration +```shell +zenml service-connector register <SERVICE_CONNECTOR_NAME> --type=hyperai --auth-method=rsa-key --base64_ssh_key=<BASE64_SSH_KEY> --hostnames=<INSTANCE_1>,<INSTANCE_2>,..,<INSTANCE_N> --username=<INSTANCE_USERNAME> +``` -3. **Run a ZenML Pipeline**: - ```shell - python file_that_runs_a_zenml_pipeline.py - ``` +### Orchestrator Registration +```shell +zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=hyperai +zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set +``` -### Enabling CUDA for GPU -To utilize GPU acceleration, follow the instructions for enabling CUDA as detailed in the relevant documentation. This requires additional settings customization. +### Running a Pipeline +```shell +python file_that_runs_a_zenml_pipeline.py +``` -This summary provides the essential information needed to understand and use the HyperAI Orchestrator effectively. +## GPU Support +For GPU execution, follow the instructions to enable CUDA for full acceleration. ================================================== @@ -5509,45 +5495,46 @@ This summary provides the essential information needed to understand and use the # Orchestrators in ZenML ## Overview -The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps are executed only when all required inputs are available. +The orchestrator is a key component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps are executed only when all their required inputs are available. -## Key Points -- **Mandatory Component**: The orchestrator is required in all ZenML stacks to store artifacts from pipeline runs. -- **Docker Integration**: Many remote orchestrators build Docker images to transport and execute pipeline code. +**Note:** ZenML's remote orchestrators often build [Docker](https://www.docker.com/) images for transporting and executing pipeline code. Refer to the [Docker guide](../../how-to/customize-docker-builds/README.md) for more details. + +## When to Use +The orchestrator is mandatory in the ZenML stack, storing all artifacts from pipeline runs and must be configured in all stacks. ## Orchestrator Flavors -ZenML provides several orchestrator flavors, including: - -| Orchestrator | Flavor | Integration | Notes | -|-------------------------------|-----------------|---------------|-------------------------------------| -| LocalOrchestrator | `local` | _built-in_ | Runs pipelines locally. | -| LocalDockerOrchestrator | `local_docker` | _built-in_ | Runs pipelines locally using Docker.| -| KubernetesOrchestrator | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes. | -| KubeflowOrchestrator | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | -| VertexOrchestrator | `vertex` | `gcp` | Runs pipelines in Vertex AI. | -| SagemakerOrchestrator | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | -| AzureMLOrchestrator | `azureml` | `azure` | Runs pipelines in AzureML. | -| TektonOrchestrator | `tekton` | `tekton` | Runs pipelines using Tekton. | -| AirflowOrchestrator | `airflow` | `airflow` | Runs pipelines using Airflow. | -| SkypilotAWSOrchestrator | `vm_aws` | `skypilot[aws]`| Runs pipelines in AWS VMs using SkyPilot. 
|
-| SkypilotGCPOrchestrator | `vm_gcp` | `skypilot[gcp]`| Runs pipelines in GCP VMs using SkyPilot. |
-| SkypilotAzureOrchestrator | `vm_azure` | `skypilot[azure]`| Runs pipelines in Azure VMs using SkyPilot. |
-| HyperAIOrchestrator | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. |
-| Custom Implementation | _custom_ | | Extend the orchestrator abstraction. |
+ZenML includes a default `local` orchestrator and supports additional orchestrators through integrations:
+
+| Orchestrator                     | Flavor         | Integration      | Notes                             |
+|----------------------------------|----------------|------------------|-----------------------------------|
+| [LocalOrchestrator](local.md)    | `local`        | _built-in_       | Runs pipelines locally.           |
+| [LocalDockerOrchestrator](local-docker.md) | `local_docker` | _built-in_ | Runs pipelines locally using Docker. |
+| [KubernetesOrchestrator](kubernetes.md) | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes. |
+| [KubeflowOrchestrator](kubeflow.md) | `kubeflow`  | `kubeflow`       | Runs pipelines using Kubeflow. |
+| [VertexOrchestrator](vertex.md)  | `vertex`       | `gcp`            | Runs pipelines in Vertex AI. |
+| [SagemakerOrchestrator](sagemaker.md) | `sagemaker` | `aws`          | Runs pipelines in Sagemaker. |
+| [AzureMLOrchestrator](azureml.md) | `azureml`     | `azure`          | Runs pipelines in AzureML. |
+| [TektonOrchestrator](tekton.md)  | `tekton`       | `tekton`         | Runs pipelines using Tekton. |
+| [AirflowOrchestrator](airflow.md) | `airflow`     | `airflow`        | Runs pipelines using Airflow. |
+| [SkypilotAWSOrchestrator](skypilot-vm.md) | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. |
+| [SkypilotGCPOrchestrator](skypilot-vm.md) | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. |
+| [SkypilotAzureOrchestrator](skypilot-vm.md) | `vm_azure` | `skypilot[azure]` | Runs pipelines in Azure VMs using SkyPilot. |
+| [HyperAIOrchestrator](hyperai.md) | `hyperai`     | `hyperai`        | Runs pipelines in HyperAI instances. |
+| [Custom Implementation](custom.md) | _custom_     |                  | Extend the orchestrator abstraction. |

To view available orchestrator flavors, use:
```shell
zenml orchestrator flavor list
```

-## Usage
-You don't need to interact directly with the orchestrator in your code. Simply ensure the desired orchestrator is part of your active ZenML stack, and run your pipeline with:
+## How to Use
+You don't need to interact directly with the ZenML orchestrator in your code. As long as it's part of your active ZenML stack, run a pipeline with:
```shell
python file_that_runs_a_zenml_pipeline.py
```

-### Inspecting Runs
-To get the URL for the orchestrator UI of a specific pipeline run:
+### Inspecting Runs in the Orchestrator UI
+If your orchestrator has a UI (e.g., Kubeflow, Airflow), retrieve the URL for a specific pipeline run:
```python
from zenml.client import Client

@@ -5555,8 +5542,8 @@
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value
```

-### Specifying Resources
-Specify hardware requirements for pipeline steps as needed. Refer to the documentation for details on runtime configuration and step operators.
+### Specifying Per-Step Resources
+To execute steps on specific hardware, specify resources as detailed [here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). If your orchestrator does not support per-step resources, consider using [step operators](../step-operators/step-operators.md). 
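+
+As a minimal sketch (assuming the `ResourceSettings` import path below; the step name and values are illustrative), per-step resources can be requested directly on a step:
+
+```python
+from zenml import step
+from zenml.config import ResourceSettings
+
+# Only this step requests extra hardware; other steps keep the stack defaults.
+@step(settings={"resources": ResourceSettings(cpu_count=4, memory="8GB", gpu_count=1)})
+def train_model() -> None:
+    ...
+```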
================================================== @@ -5564,29 +5551,31 @@ Specify hardware requirements for pipeline steps as needed. Refer to the documen ### Local Docker Orchestrator -The local Docker orchestrator in ZenML allows you to run pipelines locally in isolated Docker environments. +The Local Docker Orchestrator is a built-in orchestrator in ZenML that runs pipelines locally using Docker. #### When to Use -- For local execution of pipeline steps in isolated environments. -- For debugging pipeline issues without incurring costs from remote infrastructure. +- For running pipeline steps in isolated local environments. +- For debugging pipeline issues without incurring costs for remote infrastructure. #### Deployment -Ensure Docker is installed and running. To deploy the local Docker orchestrator: +Ensure Docker is installed and running. + +#### Usage +To register and activate the local Docker orchestrator in your stack: ```shell zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=local_docker zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` -#### Usage -Run any ZenML pipeline with the orchestrator: +Run a ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### Additional Configuration -You can customize the local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes. For `run_args`, consult the [Docker Python SDK documentation](https://docker-py.readthedocs.io/en/stable/containers.html). +You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for available attributes and [this page](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for specifying settings. Example of specifying CPU count (Windows only): @@ -5608,7 +5597,7 @@ def simple_pipeline(): ``` #### Enabling CUDA for GPU -To run steps on a GPU, follow the instructions on enabling CUDA for full acceleration. Refer to the relevant documentation for detailed steps. +To run steps on a GPU, follow the instructions [here](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for GPU acceleration. ================================================== @@ -5616,36 +5605,30 @@ To run steps on a GPU, follow the instructions on enabling CUDA for full acceler ### SkyPilot VM Orchestrator Overview -The SkyPilot VM Orchestrator, integrated with ZenML, enables provisioning and management of virtual machines (VMs) across various cloud providers supported by the SkyPilot framework. It simplifies running machine learning workloads in the cloud, providing cost efficiency, high GPU availability, and managed execution. It is recommended for users needing GPU access without managing complex cloud infrastructure. - -**Warning:** This component is intended for remote ZenML deployments only; using it locally may cause unexpected behavior. - -### When to Use +The **SkyPilot VM Orchestrator** is a ZenML integration for provisioning and managing virtual machines (VMs) across supported cloud providers via the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). 
It simplifies running machine learning workloads in the cloud, offering cost efficiency, high GPU availability, and managed execution. This component is intended for remote ZenML deployments only. +#### When to Use Use the SkyPilot VM Orchestrator if: -- You want to maximize cost savings with spot VMs and auto-selection of the cheapest options. +- You want to leverage cost savings with spot VMs and auto-select the cheapest options. - You require high GPU availability across multiple cloud zones/regions. -- You prefer not to maintain Kubernetes or pay for managed solutions. +- You prefer not to maintain Kubernetes or pay for managed services like Sagemaker. -### Functionality - -The orchestrator utilizes the SkyPilot framework for VM provisioning and scaling, supporting both on-demand and managed spot VMs. It includes: +#### Functionality +The orchestrator utilizes the SkyPilot framework to provision and scale VMs automatically, supporting both on-demand and managed spot VMs. It includes: - An optimizer for selecting the cheapest VM options. -- An autostop feature to clean up idle clusters and reduce costs. +- An autostop feature to clean up idle clusters, reducing costs. -**Note:** The orchestrator does not support scheduling pipeline runs. All ZenML pipeline runs execute in Docker containers within the provisioned VMs, requiring configuration for GPU support (e.g., `docker_run_args=["--gpus=all"]`). +**Note:** It does not support scheduling pipeline runs. -### Deployment - -To deploy the SkyPilot VM Orchestrator: -- Ensure you have permissions to provision VMs on your chosen cloud provider. -- Configure the orchestrator using service connectors. +#### Deployment +To deploy, ensure you have: +- Permissions to provision VMs on your chosen cloud provider. +- A configured SkyPilot orchestrator using [service connectors](../../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md). **Supported Cloud Platforms:** AWS, GCP, Azure. -### Installation - -To use the SkyPilot VM Orchestrator, install the appropriate SkyPilot integration for your cloud provider: +#### Installation +Install the SkyPilot integration for your cloud provider: ```shell # AWS @@ -5661,65 +5644,60 @@ pip install "zenml[connectors-azure]" zenml integration install azure skypilot_azure ``` -### Configuration for Cloud Providers - -**AWS Configuration:** -1. Install the integration and configure the AWS Service Connector. -2. Register the orchestrator and connect it to the service connector. - -```shell -zenml service-connector register aws-skypilot-vm --type aws --region=us-east-1 --auto-configure -zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_aws -zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-skypilot-vm -``` - -**GCP Configuration:** -1. Install the integration and configure the GCP Service Connector. -2. Register the orchestrator and connect it to the service connector. - -```shell -zenml service-connector register gcp-skypilot-vm -t gcp --auth-method user-account --auto-configure -zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_gcp -zenml orchestrator connect <ORCHESTRATOR_NAME> --connector gcp-skypilot-vm -``` - -**Azure Configuration:** -1. Install the integration and configure the Azure Service Connector. -2. Register the orchestrator and connect it to the service connector. 
-
-```shell
-zenml service-connector register azure-skypilot-vm -t azure --auth-method access-token --auto-configure
-zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_azure
-zenml orchestrator connect <ORCHESTRATOR_NAME> --connector azure-skypilot-vm
-```
+#### Configuration for Cloud Providers

-**Lambda Labs Configuration:**
-1. Install the integration and register the API key as a secret.
-2. Register the orchestrator.
+**AWS:**
+1. Install integration.
+2. Configure AWS Service Connector with required permissions.
+3. Register the orchestrator:
+   ```shell
+   zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_aws
+   zenml orchestrator connect <ORCHESTRATOR_NAME> --connector aws-skypilot-vm
+   ```

-```shell
-zenml integration install skypilot_lambda
-zenml secret create lambda_api_key --scope user --api_key=<VALUE_1>
-zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_lambda --api_key={{lambda_api_key.api_key}}
-```
+**GCP:**
+1. Install integration.
+2. Authenticate with GCP.
+3. Register the orchestrator:
+   ```shell
+   zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_gcp
+   zenml orchestrator connect <ORCHESTRATOR_NAME> --connector gcp-skypilot-vm
+   ```

-**Kubernetes Configuration:**
-1. Install the integration and configure the Kubernetes Service Connector.
-2. Register the orchestrator.
+**Azure:**
+1. Install integration.
+2. Register the orchestrator:
+   ```shell
+   zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_azure
+   zenml orchestrator connect <ORCHESTRATOR_NAME> --connector azure-skypilot-vm
+   ```

-```shell
-zenml service-connector register kubernetes-skypilot --type kubernetes -i
-zenml orchestrator register <ORCHESTRATOR_NAME> --flavor sky_kubernetes
-zenml orchestrator connect <ORCHESTRATOR_NAME> --connector kubernetes-skypilot
-```
+**Lambda Labs:**
+1. Install integration:
+   ```shell
+   zenml integration install skypilot_lambda
+   ```
+2. Register the orchestrator with an API key:
+   ```shell
+   zenml secret create lambda_api_key --scope user --api_key=<VALUE_1>
+   zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_lambda --api_key={{lambda_api_key.api_key}}
+   ```

-### Additional Configuration
+**Kubernetes:**
+1. Install integration:
+   ```shell
+   zenml integration install skypilot_kubernetes
+   ```
+2. Register the orchestrator:
+   ```shell
+   zenml orchestrator register <ORCHESTRATOR_NAME> --flavor sky_kubernetes
+   ```
+
+#### Additional Configuration
You can configure various attributes for the orchestrator, including:
-- `instance_type`, `cpus`, `memory`, `accelerators`, `use_spot`, `region`, `zone`, `image_id`, `disk_size`, `disk_tier`, `cluster_name`, `idle_minutes_to_autostop`, `down`, `stream_logs`, and `docker_run_args`.
+- `instance_type`, `cpus`, `memory`, `accelerators`, `region`, `zone`, `image_id`, `disk_size`, `cluster_name`, `idle_minutes_to_autostop`, and `docker_run_args`.

**Example for AWS:**
-
```python
from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings

@@ -5731,36 +5709,27 @@ skypilot_settings = SkypilotAWSOrchestratorSettings(
    region="us-west-1",
    cluster_name="my_cluster",
    idle_minutes_to_autostop=60,
-    down=True,
    docker_run_args=["--gpus=all"]
)

@pipeline(settings={"orchestrator": skypilot_settings})
def my_pipeline():
    pass
```

-### Configuring Step-Specific Resources
-
-You can specify resources for each pipeline step, allowing for tailored resource allocation. If no specific settings are provided, default orchestrator settings apply. 
- -**Disable Step-Based Settings:** - +#### Step-Specific Resources +You can configure resources for each pipeline step individually. If no specific settings are provided, the orchestrator defaults to the general settings. To disable step-specific settings: ```shell zenml orchestrator update <ORCHESTRATOR_NAME> --disable_step_based_settings=True ``` **Example for a Resource-Intensive Step:** - ```python @step(settings={"orchestrator": high_resource_settings}) def my_resource_intensive_step(): + # Step implementation pass ``` -### Important Notes -- Use the `settings` parameter to target the orchestrator flavor, not the component name. -- For a complete list of attributes, refer to the SDK documentation. +For more details, consult the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-skypilot/#zenml.integrations.skypilot.flavors.skypilot_orchestrator_base_vm_flavor.SkypilotBaseOrchestratorSettings). ================================================== @@ -5768,7 +5737,7 @@ def my_resource_intensive_step(): ### AWS Sagemaker Orchestrator Overview -The **Sagemaker Orchestrator** integrates with [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) to facilitate serverless ML workflows on AWS. It is designed for production-ready, repeatable cloud orchestration with minimal setup. +**AWS Sagemaker Orchestrator** is a serverless ML workflow tool designed for running machine learning pipelines on AWS with minimal setup. It is intended for use in remote ZenML deployments and not recommended for local deployments. #### When to Use - You are using AWS. @@ -5776,21 +5745,21 @@ The **Sagemaker Orchestrator** integrates with [Sagemaker Pipelines](https://aws - You prefer a managed, serverless solution for running pipelines. #### Functionality -The ZenML Sagemaker orchestrator creates a SageMaker `PipelineStep` for each ZenML pipeline step, which can include Sagemaker Processing or Training jobs. +The ZenML Sagemaker orchestrator utilizes Sagemaker Pipelines to construct ML pipelines. Each ZenML pipeline step corresponds to a Sagemaker `PipelineStep`, which can include Sagemaker Processing or Training jobs. ### Deployment Requirements 1. Deploy ZenML to the cloud, ideally in the same region as Sagemaker. -2. Ensure connection to the remote ZenML server. -3. Enable relevant IAM permissions for your role. +2. Connect to the remote ZenML server. +3. Ensure necessary IAM permissions for your role. ### Usage Prerequisites -- Install ZenML `aws` and `s3` integrations: +- Install ZenML AWS and S3 integrations: ```shell zenml integration install aws s3 ``` -- Install and run Docker. -- Configure a remote artifact store and container registry. -- Assign an IAM role with `AmazonSageMakerFullAccess` and `sagemaker.amazonaws.com` as a Principal Service. +- Install Docker. +- Set up a remote artifact store and container registry. +- Assign an IAM role with `AmazonSageMakerFullAccess` policy. ### Authentication Methods 1. **Service Connector** (recommended): @@ -5810,75 +5779,49 @@ The ZenML Sagemaker orchestrator creates a SageMaker `PipelineStep` for each Zen 3. **Implicit Authentication**: ```shell zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=sagemaker --execution_role=<YOUR_IAM_ROLE_ARN> - python run.py # Authenticates with `default` profile + python run.py # Uses default AWS profile ``` ### Running Pipelines -To execute a ZenML pipeline: +To run a ZenML pipeline: ```shell python run.py ``` -Expect a delay of 5-15 minutes for the pipeline to start running. 
+Output indicates the status of the pipeline run. ### Sagemaker UI -Access the Sagemaker Pipelines UI via Sagemaker Studio to view logs and details of your pipeline runs. +Access the Sagemaker Pipelines UI through Sagemaker Studio to view logs and details about pipeline runs. ### Debugging -If a pipeline fails before the first step, check the SageMaker UI for error messages and logs. +If a pipeline fails before starting, check the Sagemaker UI for error messages and logs. Use Amazon CloudWatch for detailed logs. ### Configuration Options -Additional configurations can be applied at the pipeline or step level using `SagemakerOrchestratorSettings`, including: -- `processor_args` for Processor settings. -- Instance types, environment variables, etc. - -Example: -```python -from zenml import step -from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import SagemakerOrchestratorSettings - -sagemaker_orchestrator_settings = SagemakerOrchestratorSettings( - instance_type="ml.m5.large", - environment={"MY_ENV_VAR": "my_value"} -) - -@step(settings={"orchestrator": sagemaker_orchestrator_settings}) -def my_step() -> None: - pass -``` - -### Using Warm Pools -Warm Pools can reduce startup time: -```python -sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) -``` +- **Pipeline/Step Level Configuration**: Customize settings using `SagemakerOrchestratorSettings`. +- **Warm Pools**: Keep instances warm to reduce startup time. + ```python + sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) + ``` ### S3 Data Access -Configure S3 data access for importing and exporting data: -**Importing Data:** -```python -sagemaker_orchestrator_settings = SagemakerOrchestratorSettings( - input_data_s3_mode="File", - input_data_s3_uri="s3://some-bucket-name/folder" -) -``` +Configure S3 data import/export in `SagemakerOrchestratorSettings`: +- **Import Data**: + ```python + sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(input_data_s3_uri="s3://bucket/folder") + ``` -**Exporting Data:** -```python -sagemaker_orchestrator_settings = SagemakerOrchestratorSettings( - output_data_s3_mode="EndOfJob", - output_data_s3_uri="s3://some-results-bucket-name/results" -) -``` +- **Export Data**: + ```python + sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(output_data_s3_uri="s3://bucket/results") + ``` -### Tagging Pipeline Executions -Add tags at the pipeline and step levels: +### Tagging +Add tags to pipeline executions and jobs for better resource management: ```python pipeline_settings = SagemakerOrchestratorSettings(pipeline_tags={"project": "my-ml-project"}) -step_settings = SagemakerOrchestratorSettings(tags={"step": "data-preprocessing"}) ``` ### Scheduling Pipelines -Configure schedules using cron expressions, fixed intervals, or one-time runs: +Schedule pipelines using cron expressions, fixed intervals, or specific times: ```python @pipeline def my_scheduled_pipeline(): @@ -5887,11 +5830,11 @@ def my_scheduled_pipeline(): my_scheduled_pipeline.with_options(schedule=Schedule(cron_expression="0/5 * * * ? *"))() ``` -### Required IAM Permissions -Ensure your IAM role has permissions for managing schedules and launching Sagemaker jobs. Configure trust relationships for the `scheduler_role` to allow EventBridge Scheduler service access. +### IAM Permissions for Scheduling +Ensure your IAM role has permissions for scheduling and managing Sagemaker jobs. 
Use `scheduler_role` for separate scheduling permissions. ### Conclusion -The Sagemaker orchestrator provides a powerful, serverless solution for managing ML workflows in AWS, with extensive configuration options and integration capabilities. +The Sagemaker orchestrator provides a robust solution for managing ML pipelines on AWS, with features for scheduling, tagging, and S3 data access, while ensuring efficient resource management and execution. ================================================== @@ -5899,64 +5842,90 @@ The Sagemaker orchestrator provides a powerful, serverless solution for managing ### Kubeflow Orchestrator Overview -The Kubeflow orchestrator is a ZenML integration that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) for running pipelines. It is intended for remote ZenML deployments and may cause issues in local setups. +The Kubeflow orchestrator is a component of the ZenML `kubeflow` integration that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to manage pipeline executions. It is designed for remote ZenML deployments and may not function correctly in local scenarios. -#### When to Use -- For a production-grade orchestrator. -- To track pipeline runs via a UI. -- If comfortable with Kubernetes setup and maintenance. -- If willing to deploy and maintain Kubeflow Pipelines. +### When to Use -#### Deployment Steps -1. **Kubernetes Cluster Setup**: Deploy Kubeflow Pipelines on a Kubernetes cluster (AWS, GCP, Azure, or any Kubernetes). -2. **Install Required Tools**: Ensure you have the respective CLI tools (`aws`, `gcloud`, `az`, `kubectl`) installed and configured. +Use the Kubeflow orchestrator if you need: +- A production-grade orchestrator. +- A UI for tracking pipeline runs. +- Familiarity with Kubernetes or willingness to manage a Kubernetes cluster. +- To deploy and maintain Kubeflow Pipelines. -**Example Commands**: -- **AWS**: - ```powershell - aws eks --region REGION update-kubeconfig --name CLUSTER_NAME - ``` -- **GCP**: - ```powershell - gcloud container clusters get-credentials CLUSTER_NAME - ``` -- **Azure**: - ```powershell - az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME - ``` +### Deployment Steps -#### Usage Requirements -- A Kubernetes cluster with Kubeflow Pipelines installed. -- A remote ZenML server accessible from the cluster. -- ZenML `kubeflow` integration installed: - ```shell - zenml integration install kubeflow - ``` +To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. Here are the steps for various cloud providers: + +#### AWS +1. Set up an [EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html). +2. Configure the AWS CLI and `kubectl`: + ```powershell + aws eks --region REGION update-kubeconfig --name CLUSTER_NAME + ``` +3. Install Kubeflow Pipelines. +4. (Optional) Set up an AWS Service Connector for secure access. + +#### GCP +1. Set up a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/quickstart). +2. Configure the Google Cloud CLI and `kubectl`: + ```powershell + gcloud container clusters get-credentials CLUSTER_NAME + ``` +3. Install Kubeflow Pipelines. +4. (Optional) Set up a GCP Service Connector. + +#### Azure +1. Set up an [AKS cluster](https://azure.microsoft.com/en-in/services/kubernetes-service/#documentation). +2. 
Configure the `az` CLI and `kubectl`: + ```powershell + az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME + ``` +3. Install Kubeflow Pipelines. +4. (Note: Change the default runtime to `k8sapi` if using containerd). + +#### Other Kubernetes +1. Set up a Kubernetes cluster. +2. Install `kubectl` and configure it. +3. Install Kubeflow Pipelines. +4. (Optional) Set up a Kubernetes Service Connector. + +### Usage Requirements + +To use the Kubeflow orchestrator, ensure: +- A Kubernetes cluster with Kubeflow Pipelines. +- A remote ZenML server. +- The ZenML `kubeflow` integration installed: + ```shell + zenml integration install kubeflow + ``` - Docker installed (unless using a remote Image Builder). -- Optional: `kubectl` installed for local context. +- (Optional) `kubectl` installed. + +### Registering the Orchestrator -#### Registering the Orchestrator 1. **With Service Connector**: - ```shell - zenml service-connector list-resources --resource-type kubernetes-cluster -e - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubeflow --connector <SERVICE_CONNECTOR_NAME> --resource-id <KUBERNETES_CLUSTER_NAME> - zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> -a <ARTIFACT_STORE_NAME> -c <CONTAINER_REGISTRY_NAME> - ``` + ```shell + zenml service-connector list-resources --resource-type kubernetes-cluster -e + zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubeflow --connector <SERVICE_CONNECTOR_NAME> --resource-id <KUBERNETES_CLUSTER_NAME> + zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> -a <ARTIFACT_STORE_NAME> -c <CONTAINER_REGISTRY_NAME> + ``` 2. **Without Service Connector**: - ```shell - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubeflow --kubernetes_context=<KUBERNETES_CONTEXT> - zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> -a <ARTIFACT_STORE_NAME> -c <CONTAINER_REGISTRY_NAME> - ``` + ```shell + zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubeflow --kubernetes_context=<KUBERNETES_CONTEXT> + zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> -a <ARTIFACT_STORE_NAME> -c <CONTAINER_REGISTRY_NAME> + ``` -#### Running a Pipeline -To execute a ZenML pipeline: +### Running Pipelines + +Execute a ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` -#### Accessing Kubeflow UI -Retrieve the Kubeflow UI URL for pipeline runs: +### Accessing Kubeflow UI + +To retrieve the Kubeflow UI URL for pipeline runs: ```python from zenml.client import Client @@ -5964,13 +5933,14 @@ pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` -#### Additional Configuration -Use `KubeflowOrchestratorSettings` for: +### Additional Configuration + +You can customize the Kubeflow orchestrator with `KubeflowOrchestratorSettings`: - `client_args`: KFP client arguments. -- `user_namespace`: Namespace for experiments and runs. -- `pod_settings`: Kubernetes Pod configurations. +- `user_namespace`: Namespace for experiments. +- `pod_settings`: Settings for Kubernetes Pods. 
-**Example**: +Example configuration: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings @@ -5984,36 +5954,32 @@ kubeflow_settings = KubeflowOrchestratorSettings( ) ``` -#### Multi-Tenancy Considerations -For multi-tenant deployments, set the `kubeflow_hostname` with `/pipeline` suffix when registering the orchestrator: +### Multi-Tenancy Considerations + +For multi-tenant deployments, specify the `kubeflow_hostname` ending with `/pipeline` when registering: ```shell zenml orchestrator register <NAME> --flavor=kubeflow --kubeflow_hostname=<KUBEFLOW_HOSTNAME> ``` -Use the following code to pass authentication credentials: +Use the appropriate settings for authentication: ```python kubeflow_settings = KubeflowOrchestratorSettings( - client_username=USERNAME, - client_password=PASSWORD, - user_namespace=NAMESPACE + client_username="{{kubeflow_secret.username}}", + client_password="{{kubeflow_secret.password}}", + user_namespace="namespace_name" ) ``` -#### Using Secrets -Store sensitive information as secrets: +### Secrets Management + +Create secrets for sensitive information: ```shell zenml secret create kubeflow_secret --username=admin --password=abc123 ``` -Access them in code: -```python -kubeflow_settings = KubeflowOrchestratorSettings( - client_username="{{kubeflow_secret.username}}", - client_password="{{kubeflow_secret.password}}", - user_namespace="namespace_name" -) -``` -For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator). +### Conclusion + +For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator). ================================================== @@ -6021,36 +5987,37 @@ For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integr # Lightning AI Orchestrator -The Lightning AI Orchestrator, integrated with ZenML, enables running pipelines on Lightning AI's infrastructure, utilizing its scalable compute resources. This component is designed for remote ZenML deployments only. +The Lightning AI Orchestrator, integrated with ZenML, enables the execution of machine learning pipelines on Lightning AI's scalable infrastructure. It is designed for remote ZenML deployments and not recommended for local use. ## When to Use -- Fast execution of pipelines on GPU instances. -- Existing use of Lightning AI for machine learning projects. -- Simplified deployment and scaling of ML workflows. -- Benefit from Lightning AI's optimizations for ML workloads. +- To run pipelines on GPU instances quickly. +- If you're already utilizing Lightning AI for machine learning projects. +- To leverage managed infrastructure for simplified deployment and scaling of ML workflows. +- To benefit from Lightning AI's optimizations for machine learning workloads. ## Deployment Requirements -1. A Lightning AI account and credentials. -2. No additional infrastructure deployment is needed; it uses Lightning AI's managed resources. +- A Lightning AI account with credentials. +- No additional infrastructure deployment is necessary as it uses Lightning AI's managed resources. ## Functionality -- Archives the ZenML repository and uploads it to Lightning AI. -- Creates a new studio in Lightning AI using `lightning-sdk`. 
-- Executes commands via `studio.run()` to prepare for the pipeline run. -- Supports both CPU and GPU machine types, specified in `LightningOrchestratorSettings`. -- Allows asynchronous pipeline execution and custom pre-run commands. +- The orchestrator archives the ZenML repository and uploads it to Lightning AI Studio. +- It creates a new studio and runs commands via `studio.run()` to prepare for pipeline execution. +- Supports async mode for background execution and status checking in ZenML Dashboard or Lightning AI Studio. +- Allows custom commands for environment setup and supports both CPU and GPU machine types. ## Setup Instructions -1. Install the Lightning integration: +1. Install the ZenML Lightning integration: ```shell zenml integration install lightning ``` -2. Set up a remote artifact store. -3. Obtain Lightning AI credentials: +2. Configure a remote artifact store in your stack. +3. Obtain the following Lightning AI credentials: - `LIGHTNING_USER_ID` - `LIGHTNING_API_KEY` - Optional: `LIGHTNING_USERNAME`, `LIGHTNING_TEAMSPACE`, `LIGHTNING_ORG` + Find these in your Lightning AI account under "Global Settings" > "Keys". + 4. Register the orchestrator: ```shell zenml orchestrator register lightning_orchestrator \ @@ -6091,7 +6058,7 @@ python file_that_runs_a_zenml_pipeline.py ``` ## Monitoring -Use the Lightning AI UI to monitor running applications. Retrieve the UI URL for a specific run: +Use the Lightning AI UI to monitor running applications. Retrieve the UI URL for a specific pipeline run: ```python from zenml.client import Client @@ -6100,17 +6067,19 @@ orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ## Additional Configuration -Customize the Lightning AI orchestrator using `LightningOrchestratorSettings`: +Customize the orchestrator settings further: ```python lightning_settings = LightningOrchestratorSettings( main_studio_name="my_studio", machine_type="gpu", # Specify GPU-enabled machine type + async_mode=True, + custom_commands=["pip install -r requirements.txt"] ) ``` -## References -- For available GPU types, consult [Lightning AI's documentation](https://lightning.ai/docs/overview/studios/change-gpus). -- Check the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-lightning/#zenml.integrations.lightning.flavors.lightning_orchestrator_flavor.LightningOrchestratorSettings) for a complete list of attributes. +You can specify settings at both the pipeline and step levels. + +For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-lightning/#zenml.integrations.lightning.flavors.lightning_orchestrator_flavor.LightningOrchestratorSettings) and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). Check Lightning AI's documentation for available GPU machine types. ================================================== @@ -6118,46 +6087,52 @@ lightning_settings = LightningOrchestratorSettings( # AzureML Orchestrator Summary -**Overview**: AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire ML lifecycle from data preparation to monitoring. +## Overview +AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire ML lifecycle, including data preparation, model development, deployment, and monitoring. 
-## When to Use AzureML Orchestrator -- You are using Azure. -- You need a production-grade orchestrator. -- You want a UI to track pipeline runs. -- You prefer a managed solution for running pipelines. +## When to Use +Use AzureML Orchestrator if you: +- Are already using Azure. +- Need a production-grade orchestrator. +- Want a UI for tracking pipeline runs. +- Prefer a managed solution for running pipelines. ## Functionality -The ZenML AzureML orchestrator uses the AzureML Python SDK v2 to build ML pipelines. Each ZenML step is converted into an AzureML CommandComponent. +The ZenML AzureML orchestrator utilizes the AzureML Python SDK v2 to create pipelines by generating AzureML `CommandComponent` for each ZenML step. -## Deployment Steps -1. Deploy ZenML to the cloud (preferably in the same region as AzureML). +## Deployment +To use the AzureML orchestrator: +1. Deploy ZenML to the cloud, ideally in the same region as AzureML. 2. Ensure you are connected to the remote ZenML server. -## Usage Requirements -- Install the ZenML Azure integration: - ```shell - zenml integration install azure - ``` -- Install Docker or have a remote image builder. -- Set up a remote artifact store and container registry. -- Create an Azure resource group with an AzureML workspace. - -### Authentication Methods -1. **Default Authentication**: Combines Azure hosting and local development credentials. -2. **Service Principal Authentication (recommended)**: Connects cloud components securely. Create a service principal and register a ZenML Azure Service Connector: - ```bash - zenml service-connector register <CONNECTOR_NAME> --type azure -i - zenml orchestrator connect <ORCHESTRATOR_NAME> -c <CONNECTOR_NAME> - ``` +## Installation +Install the Azure integration: +```shell +zenml integration install azure +``` +Ensure you have: +- Docker installed or a remote image builder. +- A remote artifact store. +- A remote container registry. +- An Azure resource group with an AzureML workspace. + +## Authentication +Two authentication methods: +1. **Default Authentication**: Simplifies the process using Azure credentials. +2. **Service Principal Authentication (recommended)**: Requires creating a service principal on Azure and registering a ZenML Azure Service Connector: +```bash +zenml service-connector register <CONNECTOR_NAME> --type azure -i +zenml orchestrator connect <ORCHESTRATOR_NAME> -c <CONNECTOR_NAME> +``` ## Docker Integration ZenML builds a Docker image for each pipeline run, named `<CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME>`. ## AzureML UI -AzureML Studio allows inspection, management, and debugging of pipelines. Double-click steps to view configurations and logs. +AzureML workspace includes a Machine Learning studio for managing and debugging pipelines. Double-clicking steps opens their configuration and execution logs. -## Orchestrator Settings -Use `AzureMLOrchestratorSettings` to configure compute resources. Three modes are supported: +## Settings +The `AzureMLOrchestratorSettings` class configures pipeline execution resources. It supports three modes: 1. **Serverless Compute (Default)**: ```python @@ -6188,18 +6163,17 @@ Use `AzureMLOrchestratorSettings` to configure compute resources. Three modes ar ``` ## Scheduling Pipelines -Pipelines can be scheduled using JobSchedules with cron expressions or intervals: +Pipelines can be scheduled using `JobSchedules` with cron expressions or intervals: ```python @pipeline def my_pipeline(): ... 
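+# Assumes `Schedule` is already imported, e.g.:
+# from zenml.config.schedule import Schedule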
my_pipeline = my_pipeline.with_options( - schedule=Schedule(cron_expression="*/5 * * * *") + schedule=Schedule(cron_expression="*/5 * * * *") ) -my_pipeline() ``` -**Note**: ZenML schedules runs but users must manage the lifecycle of the schedule via Azure UI. +Scheduled runs can be managed through the Azure UI, as ZenML only initiates the schedule. For more details on compute sizes, refer to the [AzureML documentation](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2#supported-vm-series-and-sizes). @@ -6209,9 +6183,9 @@ For more details on compute sizes, refer to the [AzureML documentation](https:// ### Kubernetes Orchestrator Overview -The ZenML `kubernetes` integration allows you to orchestrate and scale ML pipelines on a Kubernetes cluster without writing Kubernetes code. It operates similarly to Kubeflow, running each pipeline step in separate Kubernetes pods, but uses a master pod for orchestration, making it faster and simpler to deploy. +The ZenML Kubernetes integration allows you to orchestrate and scale ML pipelines on a Kubernetes cluster without writing Kubernetes code. It is a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, executing each pipeline step in separate Kubernetes pods, managed by a master pod that orchestrates execution via topological sort. This approach is simpler and faster than Kubeflow, as it eliminates the need for Kubeflow installation and maintenance. -**Warning**: This component is intended for remote ZenML deployments only; using it locally may cause issues. +**Warning:** This component is intended for remote ZenML deployments only; using it locally may cause unexpected behavior. ### When to Use the Kubernetes Orchestrator - For lightweight pipeline execution on Kubernetes. @@ -6219,63 +6193,70 @@ The ZenML `kubernetes` integration allows you to orchestrate and scale ML pipeli - If you want to avoid managed solutions like Vertex. ### Deployment Requirements -- A Kubernetes cluster (refer to the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for deployment options). -- A remote ZenML server connected to the cluster. +- A Kubernetes cluster (remote or cloud-based). +- A remote ZenML server connection. +- ZenML `kubernetes` integration installed: + ```shell + zenml integration install kubernetes + ``` +- Docker and `kubectl` installed. -### Usage Steps -1. **Install the ZenML Kubernetes Integration**: +### Using the Kubernetes Orchestrator +1. **With a Service Connector:** + - Register the orchestrator without needing local `kubectl`: ```shell - zenml integration install kubernetes + zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes + zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME> + zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` -2. **Prerequisites**: - - Docker and kubectl installed. - - A remote artifact store and container registry as part of your stack. - - Optional: Configure a Service Connector for better portability. - -3. **Register the Orchestrator**: - - **With Service Connector**: - ```shell - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes - zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME> - zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... 
--set - ``` - - - **Without Service Connector**: - ```shell - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubernetes --kubernetes_context=<KUBERNETES_CONTEXT> - zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set - ``` - -4. **Run a Pipeline**: +2. **Without a Service Connector:** + - Configure `kubectl` context and register the orchestrator: ```shell - python file_that_runs_a_zenml_pipeline.py + zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubernetes --kubernetes_context=<KUBERNETES_CONTEXT> + zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` -5. **Interact with Pods**: - Use labels for debugging: - ```shell - kubectl delete pod -n zenml -l pipeline=kubernetes_example_pipeline - ``` +### Running a Pipeline +To run a ZenML pipeline: +```shell +python file_that_runs_a_zenml_pipeline.py +``` +You can check pod logs and status with: +```shell +kubectl get pods -n zenml +``` + +### Pod Interaction +Pods are labeled for easier management: +- `run`: ZenML run name +- `pipeline`: ZenML pipeline name + +To delete pods related to a specific pipeline: +```shell +kubectl delete pod -n zenml -l pipeline=<PIPELINE_NAME> +``` ### Additional Configuration -- Default namespace: `zenml`. A service account `zenml-service-account` is created with edit RBAC role. -- Customizable attributes: +- Default namespace: `zenml` with a service account `zenml-service-account`. +- Custom settings can include: - `kubernetes_namespace`: Specify an existing namespace. - - `service_account_name`: Use an existing service account with appropriate RBAC roles. - - `pod_settings`: Configure node selectors, tolerations, resource requests/limits, annotations, volumes, and volume mounts. + - `service_account_name`: Use a specific service account with appropriate RBAC roles. -**Example Configuration**: +**Example of Custom Settings:** ```python from zenml.integrations.kubernetes.flavors.kubernetes_orchestrator_flavor import KubernetesOrchestratorSettings kubernetes_settings = KubernetesOrchestratorSettings( + kubernetes_namespace="ml-pipelines", + service_account_name="zenml-pipeline-runner", pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, - "resources": {"requests": {"cpu": "2", "memory": "4Gi"}}, - }, - kubernetes_namespace="ml-pipelines", - service_account_name="zenml-pipeline-runner" + "resources": { + "requests": {"cpu": "2", "memory": "4Gi"}, + "limits": {"cpu": "4", "memory": "8Gi"} + } + } ) @pipeline(settings={"orchestrator": kubernetes_settings}) @@ -6283,8 +6264,8 @@ def my_kubernetes_pipeline(): ... ``` -### Step-Level Settings -Override settings at the step level for specific configurations: +### Step-Level Configuration +You can override pipeline settings at the step level for specific configurations: ```python @step(settings={"orchestrator": k8s_settings}) def train_model(data: dict) -> None: @@ -6292,9 +6273,9 @@ def train_model(data: dict) -> None: ``` ### GPU Configuration -To run steps on GPU-backed hardware, follow the specific instructions to enable CUDA for full acceleration. +To run steps on GPU, follow specific instructions to enable CUDA and customize settings accordingly. -For further details and a full list of configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). 
+For further details on attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). ================================================== @@ -6302,45 +6283,38 @@ For further details and a full list of configurable attributes, refer to the [SD # Databricks Orchestrator Overview -The Databricks Orchestrator, part of the ZenML integration, enables running ML pipelines on Databricks, leveraging its distributed computing capabilities and optimized environment for big data processing. - -## When to Use -Use the Databricks orchestrator if: -- You are using Databricks for data and ML workloads. -- You want to utilize Databricks' distributed computing for ML pipelines. -- You seek a managed solution that integrates with Databricks services. +The Databricks orchestrator, part of the ZenML integration, allows users to run ML pipelines on Databricks, leveraging its distributed computing capabilities. It is suitable for users already utilizing Databricks for data and ML workloads, seeking a managed solution with optimized performance. ## Prerequisites -- An active Databricks workspace (AWS, Azure, GCP). -- A Databricks account or service account with permissions to create and run jobs. +- Active Databricks workspace (AWS, Azure, or GCP). +- Databricks account or service account with permissions to create and run jobs. ## How It Works -1. ZenML creates a Python wheel package containing your pipeline code and dependencies. -2. The wheel package is uploaded to Databricks. -3. ZenML uses the Databricks SDK to define a job that includes pipeline steps and cluster settings (e.g., Spark version, worker count). -4. The job executes the pipeline, ensuring steps run in the correct order. -5. ZenML retrieves logs and job status for monitoring. +1. **Wheel Packages**: ZenML creates a Python wheel package containing the necessary code and dependencies for the pipeline. +2. **Job Definition**: ZenML uses the Databricks SDK to create a job definition that includes pipeline steps and cluster settings (Spark version, number of workers, node type). +3. **Execution**: The job retrieves the wheel package from Databricks and executes the pipeline, ensuring steps run in the correct order. +4. **Monitoring**: ZenML retrieves job logs and status for progress tracking. -## Usage Steps -1. Install the Databricks integration: +## Usage +1. **Install Integration**: ```shell zenml integration install databricks ``` -2. Register the orchestrator: +2. **Register Orchestrator**: ```shell zenml orchestrator register databricks_orchestrator --flavor=databricks --host="https://xxxxx.x.azuredatabricks.net" --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` -3. Add the orchestrator to your stack: +3. **Add to Stack**: ```shell zenml stack register databricks_stack -o databricks_orchestrator ... --set ``` -4. Run your ZenML pipeline: +4. **Run Pipeline**: ```shell python run.py ``` ## Databricks UI -Access pipeline run details and logs via the Databricks UI. Retrieve the UI URL in Python: +Access detailed logs and pipeline run information via the Databricks UI. 
Retrieve the UI URL in Python: ```python from zenml.client import Client @@ -6349,7 +6323,7 @@ orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ## Scheduling Pipelines -Schedule pipelines using Databricks' native scheduling capability: +Use Databricks' native scheduling capability: ```python from zenml.config.schedule import Schedule @@ -6357,10 +6331,11 @@ pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` -- Only `cron_expression` is supported; Java Timezone IDs are required. +- Only `cron_expression` is supported. +- Requires Java Timezone IDs in the `cron_expression`. ## Additional Configuration -Customize the Databricks orchestrator with `DatabricksOrchestratorSettings`: +Customize the orchestrator settings: ```python from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import DatabricksOrchestratorSettings @@ -6380,7 +6355,7 @@ def my_pipeline(): ``` ## GPU Support -To enable GPU support, use a GPU-enabled Spark version and node type: +To enable GPU support, modify `spark_version` and `node_type_id`: ```python databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-gpu-ml-scala2.12", @@ -6388,9 +6363,9 @@ databricks_settings = DatabricksOrchestratorSettings( autoscale=(1, 2), ) ``` -Follow additional instructions to enable CUDA for full GPU acceleration. +Follow additional instructions to enable CUDA for GPU acceleration. -For a complete list of configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.orchestrators.databricks_orchestrator.DatabricksOrchestrator). +For comprehensive details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.flavors.databricks_orchestrator_flavor.DatabricksOrchestratorSettings) and the [configuration guide](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). ================================================== @@ -6399,71 +6374,81 @@ For a complete list of configurable attributes, refer to the [SDK Docs](https:// # Google Cloud Vertex AI Orchestrator Summary ## Overview -Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP), designed for running production-ready, repeatable cloud orchestrators with minimal setup. It is intended for use within a remote ZenML deployment. +Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP), designed for running production-ready, repeatable workflows with minimal setup. It is intended for use within a remote ZenML deployment. ## When to Use Use the Vertex orchestrator if: - You are using GCP. -- You need a production-grade orchestrator with UI tracking. -- You prefer a managed, serverless solution for pipelines. +- You need a proven production-grade orchestrator. +- You want a UI for tracking pipeline runs. +- You prefer a managed, serverless solution. -## Deployment Requirements -1. Deploy ZenML to the cloud, preferably in the same GCP project as the Vertex infrastructure. -2. Connect to the remote ZenML server. +## Deployment +To deploy the Vertex AI orchestrator: +1. Deploy ZenML to the cloud, ideally in the same GCP project as the Vertex infrastructure. +2. Ensure connection to the remote ZenML server. 3. Enable relevant Vertex APIs in your GCP project. 
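+
+For example (assuming the `gcloud` CLI is installed and authenticated for your project), the Vertex AI API can be enabled with:
+
+```shell
+# Enable the Vertex AI API for the project that will run the pipelines.
+gcloud services enable aiplatform.googleapis.com --project=<PROJECT_ID>
+```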
-## Usage Prerequisites +## Requirements +To use the Vertex orchestrator: - Install ZenML `gcp` integration: - ```shell - zenml integration install gcp - ``` + ```shell + zenml integration install gcp + ``` - Install and run Docker. - Set up a remote artifact store and container registry. - Obtain GCP credentials with appropriate permissions. ### GCP Credentials and Permissions You need a GCP user account or service accounts with permissions for: -- Creating Vertex AI jobs (e.g., `Vertex AI User` role). +- Creating jobs in Vertex Pipelines (e.g., `Vertex AI User` role). - Running Vertex AI pipelines (e.g., `Vertex AI Service Agent` role). -- (Optional) `Storage Object Creator Role` for writing to the artifact store. + +You can authenticate using: +1. `gcloud` CLI. +2. Service account key file. +3. Recommended: GCP Service Connector. -### Configuration Use-Cases -1. **Local `gcloud` CLI with User Account**: - - Authenticate using `gcloud auth login`. - - Register orchestrator: - ```shell - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=vertex --project=<PROJECT_ID> --location=<GCP_LOCATION> --synchronous=true - ``` +## Configuration Use-Cases +1. **Local `gcloud` CLI with User Account**: + ```shell + zenml orchestrator register <ORCHESTRATOR_NAME> \ + --flavor=vertex \ + --project=<PROJECT_ID> \ + --location=<GCP_LOCATION> \ + --synchronous=true + ``` -2. **GCP Service Connector with Single Service Account**: - - Create a service account with necessary permissions and a key file. - - Register the service connector and orchestrator: - ```shell - zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic - - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=vertex --location=<GCP_LOCATION> --synchronous=true --workload_service_account=<SERVICE_ACCOUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com - - zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME> - ``` +2. **GCP Service Connector with Single Service Account**: + ```shell + zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic + + zenml orchestrator register <ORCHESTRATOR_NAME> \ + --flavor=vertex \ + --location=<GCP_LOCATION> \ + --synchronous=true \ + --workload_service_account=<SERVICE_ACCOUNT_NAME>@<PROJECT_NAME>.iam.gserviceaccount.com -3. **GCP Service Connector with Different Service Accounts**: - - Use multiple service accounts for different permissions. - - Register the service connector and orchestrator similarly as above. + zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME> + ``` -### Configuring the Stack -To use the orchestrator in your active stack: +3. **GCP Service Connector with Different Service Accounts**: + Similar to the single service account setup but uses multiple accounts for least privilege. + +## Configuring the Stack +Register and activate a stack with the orchestrator: ```shell zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` ## Running Pipelines -To run a ZenML pipeline: +Run any ZenML pipeline using the Vertex orchestrator: ```shell python file_that_runs_a_zenml_pipeline.py ``` -### Vertex UI -Access pipeline run details via the Vertex UI. 
Retrieve the URL in Python: +## Vertex UI +Access the Vertex UI for pipeline run details: ```python from zenml.client import Client @@ -6471,8 +6456,8 @@ pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` -### Scheduling Pipelines -Schedule pipelines using the `Schedule` class: +## Scheduling Pipelines +Schedule pipelines using: ```python from datetime import datetime, timedelta from zenml import pipeline @@ -6487,9 +6472,8 @@ first_pipeline = first_pipeline.with_options( ) first_pipeline() ``` -**Note**: Only `cron_expression`, `start_time`, and `end_time` are supported. -### Additional Configuration +## Additional Configuration Configure labels and resource settings: ```python from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings @@ -6507,21 +6491,22 @@ vertex_settings = VertexOrchestratorSettings( resource_settings = ResourceSettings(gpu_count=1) ``` -### Enabling CUDA for GPU -Follow specific instructions to enable CUDA for GPU steps. +## Enabling CUDA for GPU +Follow specific instructions to enable CUDA for GPU acceleration. -For more details on configuration and available attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.orchestrators.vertex_orchestrator.VertexOrchestrator). +For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.orchestrators.vertex_orchestrator.VertexOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/tekton.md === -# Tekton Orchestrator Summary +# Tekton Orchestrator -## Overview -Tekton is an open-source framework for CI/CD systems, enabling developers to build, test, and deploy applications across various environments. It is designed for use with a remote ZenML deployment. +**Tekton** is an open-source framework for CI/CD systems, enabling developers to build, test, and deploy applications across cloud and on-premise environments. -### When to Use +**Warning**: This component is intended for use with a remote ZenML deployment. Local deployments may cause unexpected behavior. + +## When to Use Tekton Use the Tekton orchestrator if: - You need a production-grade orchestrator. - You want a UI to track pipeline runs. @@ -6529,88 +6514,98 @@ Use the Tekton orchestrator if: - You can deploy and maintain Tekton Pipelines on your cluster. ## Deployment Steps -1. **Set Up Kubernetes Cluster**: Choose your cloud provider (AWS, GCP, Azure) and set up a Kubernetes cluster. -2. **Install Tekton Pipelines**: Follow the specific instructions for your cloud provider to install Tekton. - -### AWS -- Ensure you have an EKS cluster and AWS CLI set up. -- Configure `kubectl`: - ```powershell - aws eks --region REGION update-kubeconfig --name CLUSTER_NAME - ``` -- Install Tekton Pipelines. - -### GCP -- Ensure you have a GKE cluster and Google Cloud CLI set up. -- Configure `kubectl`: - ```powershell - gcloud container clusters get-credentials CLUSTER_NAME - ``` -- Install Tekton Pipelines. - -### Azure -- Ensure you have an AKS cluster and Azure CLI set up. -- Configure `kubectl`: - ```powershell - az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME - ``` -- Install Tekton Pipelines. +1. **Set up a Kubernetes cluster** and deploy Tekton Pipelines. +2. 
Follow the specific instructions for your cloud provider: + + ### AWS + - Set up a remote ZenML server. + - Create an EKS cluster. + - Install AWS CLI and configure `kubectl`: + ```powershell + aws eks --region REGION update-kubeconfig --name CLUSTER_NAME + ``` + - Install Tekton Pipelines. + + ### GCP + - Set up a remote ZenML server. + - Create a GKE cluster. + - Install Google Cloud CLI and configure `kubectl`: + ```powershell + gcloud container clusters get-credentials CLUSTER_NAME + ``` + - Install Tekton Pipelines. + + ### Azure + - Set up a remote ZenML server. + - Create an AKS cluster. + - Install Azure CLI and configure `kubectl`: + ```powershell + az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME + ``` + - Install Tekton Pipelines. -**Note**: Tekton Pipelines must be version >=0.38.3. +**Note**: Ensure Tekton Pipelines version is >=0.38.3. -## Usage -1. **Install ZenML Tekton Integration**: - ```shell - zenml integration install tekton -y - ``` -2. **Requirements**: - - Docker installed. - - Remote artifact store and container registry configured. - - Optional: `kubectl` installed. +## Using Tekton +1. Install the ZenML `tekton` integration: + ```shell + zenml integration install tekton -y + ``` +2. Ensure Docker is installed and running. +3. Deploy Tekton pipelines on a remote cluster. +4. Identify your Kubernetes context using: + ```shell + kubectl config get-contexts + ``` +5. Set up a remote artifact store and container registry as part of your stack. ### Registering the Orchestrator 1. **With Service Connector**: - ```shell - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor tekton - zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME> - zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set - ``` + ```shell + zenml orchestrator register <ORCHESTRATOR_NAME> --flavor tekton + zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <CONNECTOR_NAME> + zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set + ``` 2. **Without Service Connector**: - ```shell - zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=tekton --kubernetes_context=<KUBERNETES_CONTEXT> - zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set - ``` + ```shell + zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=tekton --kubernetes_context=<KUBERNETES_CONTEXT> + zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set + ``` ### Running a Pipeline -Run your ZenML pipeline: +Run any ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` -## Tekton UI -Access the Tekton UI for pipeline run details: +### Tekton UI +Access the Tekton UI for pipeline details and logs: ```bash kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' ``` -## Additional Configuration -Use `TektonOrchestratorSettings` to configure node selectors, affinity, and tolerations: +### Additional Configuration +Configure `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: ```python from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings +from kubernetes.client.models import V1Toleration tekton_settings = TektonOrchestratorSettings( pod_settings={ "affinity": {...}, - "tolerations": [...] 
+ "tolerations": [V1Toleration(...)] } ) ``` -Specify resource settings for pipeline or step: +Specify resource requirements: ```python resource_settings = ResourceSettings(cpu_count=8, memory="16GB") +``` +Apply settings at the pipeline or step level: +```python @pipeline(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_pipeline(): ... @@ -6620,37 +6615,36 @@ def my_step(): ... ``` -## GPU Configuration -To run steps on GPU, follow specific instructions to enable CUDA for hardware acceleration. +### Enabling CUDA for GPU +To run steps on a GPU, follow the instructions to enable CUDA for full acceleration. -For more details on configuration and attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/). +For further details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/#zenml.integrations.tekton.orchestrators.tekton_orchestrator.TektonOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/airflow.md === -### Airflow Orchestrator for ZenML Pipelines +# Airflow Orchestrator for ZenML Pipelines -ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. Each ZenML step runs in a separate Docker container managed by Airflow. +ZenML pipelines can be executed as [Airflow](https://airflow.apache.org/) DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. Each ZenML step runs in a separate Docker container managed by Airflow. -#### When to Use Airflow Orchestrator +## When to Use Airflow Orchestrator - Proven production-grade orchestrator. -- Already using Airflow. -- Need local pipeline execution. -- Willing to maintain Airflow. +- Existing use of Airflow. +- Local pipeline execution. +- Willingness to deploy and maintain Airflow. -#### Deployment Options -- **Local Deployment**: No additional setup required. -- **Remote Deployment**: Requires a remote ZenML deployment. - - Use ZenML GCP Terraform module for Google Cloud Composer. - - Managed services: Google Cloud Composer, Amazon MWAA, Astronomer. - - Manual deployment: Refer to [Airflow docs](https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html). +## Deployment Options +- **Local**: No additional setup required. +- **Remote**: Requires a remote ZenML deployment. + - Use [ZenML GCP Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md) or managed services like [Google Cloud Composer](https://cloud.google.com/composer), [Amazon MWAA](https://aws.amazon.com/managed-workflows-for-apache-airflow/), or [Astronomer](https://www.astronomer.io/). + - Manual deployment is also an option; refer to [Airflow docs](https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html). -**Required Python Packages for Remote Deployment**: +### Required Python Packages for Remote Deployment - `pydantic~=2.7.1` -- `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes` (based on the operator used). +- `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes` -#### Usage Steps +## Usage Steps 1. 
Install ZenML Airflow integration: ```shell zenml integration install airflow @@ -6662,36 +6656,38 @@ ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestrat zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` -**Local Setup**: +### Local Setup - Create a virtual environment: ```bash python -m venv airflow_server_environment source airflow_server_environment/bin/activate pip install "apache-airflow==2.4.0" "apache-airflow-providers-docker<3.8.0" "pydantic~=2.7.1" ``` -- Set environment variables for Airflow configuration: - - `AIRFLOW_HOME`: Default `~/airflow`. - - `AIRFLOW__CORE__DAGS_FOLDER`: Default `<AIRFLOW_HOME>/dags`. - - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default 30 seconds. +- Set environment variables: + - `AIRFLOW_HOME`: Default `~/airflow`. + - `AIRFLOW__CORE__DAGS_FOLDER`: Default `<AIRFLOW_HOME>/dags`. + - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default 30 seconds. -**Start Local Airflow Server**: +**MacOS Users**: Set `no_proxy` to avoid crashes: ```bash -airflow standalone +export no_proxy=* ``` -Access the UI at [http://0.0.0.0:8080](http://0.0.0.0:8080). -**Run ZenML Pipeline**: -```shell -python file_that_runs_a_zenml_pipeline.py -``` -Copy the generated `.zip` file to the Airflow DAGs directory. +- Start the local Airflow server: + ```bash + airflow standalone + ``` +- Run the ZenML pipeline: + ```shell + python file_that_runs_a_zenml_pipeline.py + ``` -#### Remote Deployment Considerations -- Requires a remote ZenML server and a remote artifact store. -- Running `pipeline.run()` creates a `.zip` file, which must be placed in the Airflow DAGs directory. +### Remote Setup +- Requires a remote ZenML server, deployed Airflow server, remote artifact store, and remote container registry. +- Running `pipeline.run()` creates a `.zip` file representing the pipeline, which must be placed in the Airflow DAGs directory. -#### Scheduling Pipelines -Schedule pipeline runs in Airflow: +## Scheduling Pipelines +Schedule pipeline runs in the past: ```python from datetime import datetime, timedelta from zenml.pipelines import Schedule @@ -6707,119 +6703,113 @@ scheduled_pipeline = fashion_mnist_pipeline.with_options( scheduled_pipeline() ``` -#### Airflow UI -Access the Airflow UI at [http://localhost:8080](http://localhost:8080). Default username is `admin`, and the password can be found in `<AIRFLOW_HOME>/standalone_admin_password.txt`. +## Airflow UI +Access the UI at [http://localhost:8080](http://localhost:8080). Default credentials: username `admin`, password found in `<AIRFLOW_HOME>/standalone_admin_password.txt`. -#### Additional Configuration -- Use `AirflowOrchestratorSettings` for further configuration. -- For GPU usage, follow specific instructions to enable CUDA. +## Additional Configuration +Pass `AirflowOrchestratorSettings` for further customization. For GPU support, follow [CUDA instructions](../../how-to/pipeline-development/training-with-gpus/README.md). -#### Airflow Operators -- **DockerOperator**: Runs Docker images on the same machine. -- **KubernetesPodOperator**: Runs Docker images in Kubernetes pods. +## Using Different Airflow Operators +Supported operators: +- `DockerOperator`: Runs on the same machine. +- `KubernetesPodOperator`: Runs in a Kubernetes pod. 
Specify the operator:
```python
+from zenml import pipeline, step
 from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings

-airflow_settings = AirflowOrchestratorSettings(
-    operator="docker",  # or "kubernetes_pod"
-    operator_args={}
-)
+airflow_settings = AirflowOrchestratorSettings(operator="docker")  # or "kubernetes_pod"

@step(settings={"orchestrator": airflow_settings})
-def my_step(...):
+def my_step() -> None:
+    ...
+
+@pipeline(settings={"orchestrator": airflow_settings})
+def my_pipeline():
+    my_step()
```

-#### Custom Operators and DAG Generator
-To use custom operators, specify the operator path in `AirflowOrchestratorSettings`. For custom DAG generation, provide a custom DAG generator file referencing the necessary classes.
+For custom operators, specify the operator class path in `AirflowOrchestratorSettings`.
+
+## Custom DAG Generator
+ZenML creates a Zip archive with a JSON config and a Python DAG generator. For custom behavior, provide a custom DAG generator file referencing the original classes.

-For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator).
+For a complete list of attributes and settings, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator).

==================================================

=== File: docs/book/how-to/debug-and-solve-issues.md ===

-# Debugging ZenML Issues: A Quick Guide
+# Debugging ZenML Issues

-This document provides guidance for troubleshooting common issues with ZenML, including when to seek help and how to effectively communicate your problem.
+This guide provides best practices for debugging common issues with ZenML and obtaining help.

-## When to Get Help
-Before asking for assistance, follow this checklist:
-- Search Slack for answers.
-- Check [GitHub issues](https://github.com/zenml-io/zenml/issues).
-- Use the [ZenML documentation](https://docs.zenml.io) search bar.
+### When to Seek Help
+Before asking for assistance, check the following:
+- Search Slack, GitHub issues, and the ZenML documentation.
 - Review the [common errors](debug-and-solve-issues.md#most-common-errors) section.
 - Analyze [additional logs](debug-and-solve-issues.md#41-additional-logs) and [client/server logs](debug-and-solve-issues.md#client-and-server-logs).

 If you still need help, post your question on [Slack](https://zenml.io/slack).

-## How to Post on Slack
-When posting on Slack, include the following information:
-
-### 1. System Information
-Run the command below and share the output:
-```shell
-zenml info -a -s
-```
-For package-specific issues, use:
-```shell
-zenml info -p <package_name>
-```
+### How to Post on Slack
+Provide the following information for effective troubleshooting:

-### 2. What Happened?
-Briefly describe:
-- Your goal.
-- Expected outcome.
-- Actual outcome.
+1. **System Information**: Run the command below and include the output:
+   ```shell
+   zenml info -a -s
+   ```
+   For specific package issues, use:
+   ```shell
+   zenml info -p <package_name>
+   ```

-### 3. How to Reproduce the Error
-Provide step-by-step instructions to reproduce the error, either in text or video format.
+2. **What Happened**: Describe your goal, expected outcome, and actual result.

-### 4. Relevant Log Output
-Attach relevant logs and error tracebacks. 
If lengthy, use services like [Pastebin](https://pastebin.com/) or [GitHub Gist](https://gist.github.com/). Include outputs from: -- `zenml status` -- `zenml stack describe` +3. **Reproduce the Error**: Provide step-by-step instructions to replicate the issue. -For additional logs, adjust the logging verbosity: -```shell -export ZENML_LOGGING_VERBOSITY=DEBUG -``` +4. **Relevant Log Output**: Attach relevant logs and error tracebacks. Include outputs from: + ```shell + zenml status + zenml stack describe + ``` + For additional logs, toggle the `ZENML_LOGGING_VERBOSITY` environment variable: + ```shell + export ZENML_LOGGING_VERBOSITY=DEBUG + ``` -## Client and Server Logs -For server-related issues, view server logs with: +### Client and Server Logs +To view server logs, run: ```shell zenml logs ``` -## Most Common Errors - -### 1. Error Initializing Rest Store -Occurs as: -```bash -RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237': ... -``` -Solution: Re-run `zenml login --local` after each machine restart. +### Common Errors +1. **Error initializing rest store**: + ```bash + RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237'... + ``` + Solution: Re-run `zenml login --local` after machine restarts. -### 2. Column 'step_configuration' Cannot Be Null -Error: -```bash -sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null") -``` -Solution: Ensure step configurations are within the character limit. +2. **Column 'step_configuration' cannot be null**: + ```bash + sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null") + ``` + Solution: Ensure step configurations do not exceed 65K characters. -### 3. 'NoneType' Object Has No Attribute 'Name' -Error snippet: -```shell -AttributeError: 'NoneType' object has no attribute 'name' -``` -Solution: Register an experiment tracker: -```shell -zenml experiment-tracker register mlflow_tracker --flavor=mlflow -zenml stack update -e mlflow_tracker -``` +3. **'NoneType' object has no attribute 'name'**: + ```shell + AttributeError: 'NoneType' object has no attribute 'name' + ``` + Solution: Register an experiment tracker: + ```shell + zenml experiment-tracker register mlflow_tracker --flavor=mlflow + ``` + Update your stack: + ```shell + zenml stack update -e mlflow_tracker + ``` -This guide aims to streamline the debugging process for ZenML users, ensuring efficient communication and resolution of issues. +This guide aims to streamline the debugging process and enhance your troubleshooting experience with ZenML. ================================================== @@ -6827,63 +6817,68 @@ This guide aims to streamline the debugging process for ZenML users, ensuring ef # Pipeline Development in ZenML -This section outlines the essential components and processes involved in developing pipelines using ZenML. - -## Key Components - -1. **Pipelines**: A pipeline is a sequence of steps that define the workflow for data processing and model training. +This section outlines the key components and processes involved in developing pipelines using ZenML. -2. **Steps**: Each step in a pipeline represents a specific operation, such as data ingestion, preprocessing, model training, or evaluation. - -3. **Artifacts**: Artifacts are outputs generated by steps, which can be used as inputs for subsequent steps. +## Key Concepts -4. 
**Parameters**: Parameters allow customization of step behavior, enabling dynamic configurations during pipeline execution. 
+- **Pipelines**: A sequence of steps that define a workflow for data processing and model training.
+- **Steps**: Individual tasks within a pipeline, which can include data ingestion, preprocessing, model training, and evaluation.
 
 ## Pipeline Creation
 
-To create a pipeline, define the steps and their dependencies. Here’s a simplified example:
+1. **Define Steps**: Create functions for each step in the pipeline.
+   ```python
+   from typing import Any
+
+   import pandas as pd
+
+   from zenml import step
 
-```python
-from zenml.pipelines import pipeline
-from zenml.steps import step
+   @step
+   def data_ingestion() -> pd.DataFrame:
+       # Code to ingest data
+       pass
 
-@step
-def data_ingestion():
-    # Code for data ingestion
-    return data
+   @step
+   def data_preprocessing(data: pd.DataFrame) -> pd.DataFrame:
+       # Code to preprocess data
+       pass
 
-@step
-def data_preprocessing(data):
-    # Code for preprocessing
-    return processed_data
+   @step
+   def model_training(data: pd.DataFrame) -> Any:
+       # Code to train a model; the return type depends on your ML framework
+       pass
+   ```
 
-@pipeline
-def my_pipeline():
-    data = data_ingestion()
-    processed_data = data_preprocessing(data)
-```
+2. **Assemble Pipeline**: Combine steps into a pipeline.
+   ```python
+   from zenml import pipeline
 
-## Running Pipelines
+   @pipeline
+   def training_pipeline():
+       data = data_ingestion()
+       processed_data = data_preprocessing(data)
+       model = model_training(processed_data)
+   ```
 
-Pipelines can be executed locally or in a cloud environment. Use the ZenML CLI or SDK to trigger pipeline runs.
+## Execution
 
-## Monitoring and Logging
+- **Run Pipeline**: Execute the pipeline by calling the pipeline function directly:
+   ```python
+   training_pipeline()
+   ```
 
-ZenML provides built-in monitoring and logging features to track pipeline execution and performance metrics.
+## Best Practices
 
-## Conclusion
+- Modularize steps for reusability.
+- Use version control for pipelines.
+- Monitor and log pipeline executions for debugging.
 
-ZenML streamlines the pipeline development process, allowing for efficient orchestration of machine learning workflows. Key aspects include defining steps, managing artifacts, and configuring parameters for flexibility.
+This concise overview provides the essential information needed for developing and executing pipelines in ZenML, ensuring clarity and focus on critical elements.

==================================================

=== File: docs/book/how-to/pipeline-development/develop-locally/README.md ===

-### Develop Locally
-
-This section outlines best practices for developing pipelines locally, enabling faster iteration and reduced costs. Developers typically work with smaller datasets or synthetic data. ZenML facilitates this local development approach, allowing users to push and execute pipelines on more powerful remote hardware when needed.
+# Develop Locally

- 
+This section outlines best practices for developing pipelines locally, allowing for faster iteration and cost-effective testing. It is common to work with a smaller subset of data or synthetic data. ZenML supports local development, guiding users to eventually push and run pipelines on more powerful remote hardware. 
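+
+As a sketch of working with a smaller subset of data locally (the step name, file path, and `sample_fraction` parameter are illustrative, not part of the ZenML API):
+
+```python
+import pandas as pd
+
+from zenml import step
+
+@step
+def load_data(sample_fraction: float = 1.0) -> pd.DataFrame:
+    df = pd.read_csv("data/dataset.csv")  # hypothetical dataset path
+    if sample_fraction < 1.0:
+        # During local development, iterate on a small random subset.
+        df = df.sample(frac=sample_fraction, random_state=42)
+    return df
+```
+
+A local run might pass `sample_fraction=0.05`, while the production configuration keeps the default of `1.0`.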
================================================== @@ -6892,161 +6887,151 @@ This section outlines best practices for developing pipelines locally, enabling ### Summary of ZenML Documentation on Keeping Pipeline Runs Clean #### Overview -This documentation provides guidance on maintaining a clean development environment for ZenML pipelines, focusing on avoiding clutter in the dashboard and server during development. +This documentation provides guidance on maintaining a clean development environment for ZenML pipeline runs, helping to avoid clutter on shared servers and dashboards. + +#### Key Methods to Keep Runs Clean + +1. **Run Locally**: + To avoid cluttering a shared server, disconnect and run a local server: + ```bash + zenml login --local + ``` + Reconnect to the remote server with: + ```bash + zenml login <remote-url> + ``` -#### Key Options for Clean Development +2. **Unlisted Runs**: + Create pipeline runs without associating them explicitly to a pipeline using: + ```python + pipeline_instance.run(unlisted=True) + ``` + Unlisted runs won’t appear on the pipeline's dashboard but are visible in the pipeline run section. -1. **Run Locally**: - - To avoid cluttering a shared server, disconnect from the remote server and run a local server: +3. **Deleting Pipeline Runs**: + - To delete a specific run: ```bash - zenml login --local + zenml pipeline runs delete <PIPELINE_RUN_NAME_OR_ID> ``` - - Reconnect to the remote server using `zenml login <remote-url>` when needed. - -2. **Pipeline Runs**: - - **Unlisted Runs**: Create runs without associating them with a pipeline: + - To delete all runs from the last 24 hours: ```python - pipeline_instance.run(unlisted=True) - ``` - - Unlisted runs won't appear on the pipeline's dashboard page, keeping the history focused. - - - **Deleting Pipeline Runs**: - - To delete a specific run: - ```bash - zenml pipeline runs delete <PIPELINE_RUN_NAME_OR_ID> - ``` - - To delete all runs from the last 24 hours: - ```python - #!/usr/bin/env python3 - import datetime - from zenml.client import Client - - def delete_recent_pipeline_runs(): - zc = Client() - time_filter = (datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") - recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") - for run in recent_runs: - zc.delete_pipeline_run(run.id) - print(f"Deleted {len(recent_runs)} pipeline runs.") - - if __name__ == "__main__": - delete_recent_pipeline_runs() - ``` - -3. **Deleting Pipelines**: - - Remove unnecessary pipelines: - ```bash - zenml pipeline delete <PIPELINE_ID_OR_NAME> + #!/usr/bin/env python3 + import datetime + from zenml.client import Client + + def delete_recent_pipeline_runs(): + zc = Client() + time_filter = (datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") + recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") + for run in recent_runs: + zc.delete_pipeline_run(run.id) + + if __name__ == "__main__": + delete_recent_pipeline_runs() ``` -4. **Unique Pipeline Names**: - - Assign unique names to runs for differentiation: - ```python - training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") - training_pipeline() - ``` +4. **Deleting Pipelines**: + To delete an entire pipeline: + ```bash + zenml pipeline delete <PIPELINE_ID_OR_NAME> + ``` -5. **Models**: - - Models must be registered to run pipelines. To delete a model: +5. 
**Unique Pipeline Names**: + Assign unique names to pipeline runs for differentiation: + ```python + training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") + training_pipeline() + ``` + +6. **Model Management**: + - To delete a model: ```bash zenml model delete <MODEL_NAME> ``` -6. **Artifacts**: - - Prune unreferenced artifacts: +7. **Artifact Management**: + - To prune unreferenced artifacts: ```bash zenml artifact prune ``` - - Control deletion behavior with `--only-artifact` and `--only-metadata` flags. + - Use flags `--only-artifact` and `--only-metadata` to control deletion behavior. -7. **Cleaning Environment**: - - Use `zenml clean` to delete all local pipelines, runs, and artifacts: - ```bash - zenml clean --local - ``` - - Note: This command does not affect server data. +8. **Cleaning Environment**: + Use the command to clean local data: + ```bash + zenml clean + ``` + The `--local` flag deletes local files related to the active stack. -By utilizing these strategies, users can maintain an organized and efficient pipeline dashboard, focusing on relevant runs for their projects. +By following these methods, users can maintain an organized pipeline dashboard, focusing on relevant runs for their projects. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md === -### Summary: Creating Pipeline Variants in ZenML - -When developing ZenML pipelines, it's useful to create different variants for local development and production. This allows for rapid iteration during development while maintaining a robust setup for production. Variants can be created using: - -1. **Configuration Files** -2. **Code Implementation** -3. **Environment Variables** - -#### 1. Using Configuration Files -ZenML supports pipeline configurations via YAML files. For example, a development configuration might look like this: - -```yaml -enable_cache: False -parameters: - dataset_name: "small_dataset" -steps: - load_data: - enable_cache: False -``` - -To apply this configuration, use: - -```python -from zenml import step, pipeline - -@step -def load_data(dataset_name: str) -> dict: - ... - -@pipeline -def ml_pipeline(dataset_name: str): - load_data(dataset_name) - -if __name__ == "__main__": - ml_pipeline.with_options(config_path="path/to/config.yaml")() -``` +### Summary: Creating Pipeline Variants for Local Development and Production in ZenML -You can maintain separate files for development (`config_dev.yaml`) and production (`config_prod.yaml`). +When developing ZenML pipelines, it's useful to create different variants for local development and production. This allows for rapid iteration during development while ensuring a robust setup for production. Variants can be implemented in three ways: -#### 2. Implementing Variants in Code -You can also define variants directly in your code: +1. **Using Configuration Files** + - ZenML supports YAML configuration files for pipeline and step settings. Example for a development variant: + ```yaml + enable_cache: False + parameters: + dataset_name: "small_dataset" + steps: + load_data: + enable_cache: False + ``` + - To apply this configuration: + ```python + from zenml import step, pipeline -```python -import os -from zenml import step, pipeline + @step + def load_data(dataset_name: str) -> dict: + ... -@step -def load_data(dataset_name: str) -> dict: - ... 
+ @pipeline + def ml_pipeline(dataset_name: str): + load_data(dataset_name) -@pipeline -def ml_pipeline(is_dev: bool = False): - dataset = "small_dataset" if is_dev else "full_dataset" - load_data(dataset) + if __name__ == "__main__": + ml_pipeline.with_options(config_path="path/to/config.yaml")() + ``` -if __name__ == "__main__": - is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" - ml_pipeline(is_dev=is_dev) -``` +2. **Implementing Variants in Code** + - You can directly implement variants in your code: + ```python + import os + from zenml import step, pipeline -This method uses a boolean flag to switch between environments. + @step + def load_data(dataset_name: str) -> dict: + ... -#### 3. Using Environment Variables -Environment variables can determine which configuration to use: + @pipeline + def ml_pipeline(is_dev: bool = False): + dataset = "small_dataset" if is_dev else "full_dataset" + load_data(dataset) -```python -import os + if __name__ == "__main__": + is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" + ml_pipeline(is_dev=is_dev) + ``` -config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" -ml_pipeline.with_options(config_path=config_path)() -``` +3. **Using Environment Variables** + - Environment variables can dictate which variant to run: + ```python + import os -Run your pipeline with: -- `ZENML_ENVIRONMENT=dev python run.py` -- `ZENML_ENVIRONMENT=prod python run.py` + config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" + ml_pipeline.with_options(config_path=config_path)() + ``` + - Run the pipeline with: + ``` + ZENML_ENVIRONMENT=dev python run.py + ZENML_ENVIRONMENT=prod python run.py + ``` ### Development Variant Considerations For a development variant, optimize for faster iteration: @@ -7055,8 +7040,7 @@ For a development variant, optimize for faster iteration: - Reduce training epochs and batch size - Use a smaller base model -Example configuration: - +Example configuration for development: ```yaml parameters: dataset_path: "data/small_dataset.csv" @@ -7064,9 +7048,7 @@ epochs: 1 batch_size: 16 stack: local_stack ``` - Or in code: - ```python @pipeline def ml_pipeline(is_dev: bool = False): @@ -7078,13 +7060,13 @@ def ml_pipeline(is_dev: bool = False): train_model(epochs=epochs, batch_size=batch_size) ``` -By creating these variants, you can efficiently test and debug your code locally while ensuring a full-scale configuration for production. This enhances your development workflow without compromising production integrity. +By creating these variants, you can efficiently test and debug locally while maintaining a comprehensive configuration for production. This approach enhances your development workflow and allows for effective iteration without compromising production integrity. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/retrieve-used-configuration-of-a-run.md === -To extract the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step. +To extract the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step within it. ### Key Steps: 1. Load the pipeline run using its name. 
@@ -7097,31 +7079,35 @@ from zenml.client import Client pipeline_run = Client().get_pipeline_run(<PIPELINE_RUN_NAME>) # General configuration pipeline_run.config -# Step-specific configuration +# Configuration for a specific step pipeline_run.steps[<STEP_NAME>].config ``` -This allows you to retrieve the configurations effectively. +This allows you to retrieve the relevant configuration details for analysis or debugging. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/README.md === -ZenML allows for easy configuration and execution of pipelines using YAML configuration files. These files enable runtime adjustments for parameters, caching behavior, and stack component configurations. Key topics include: +ZenML allows for easy configuration and execution of pipelines using YAML files. These files enable runtime configuration of parameters, caching behavior, and stack components. Key topics include: -- **What can be configured**: Details on configurable elements in ZenML. -- **Configuration hierarchy**: Structure and precedence of configuration settings. -- **Autogenerate a template YAML file**: Instructions for creating a default configuration template. +- **What can be configured**: Details on configurable elements. +- **Configuration hierarchy**: Structure of configuration settings. +- **Autogenerate a template YAML file**: Instructions for generating a template. -For more information, refer to the respective sections linked in the documentation. +For more information, refer to the linked sections: +- [What can be configured](what-can-be-configured.md) +- [Configuration hierarchy](configuration-hierarchy.md) +- [Autogenerate a template YAML file](autogenerate-a-template-yaml-file.md) ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md === -### Summary of Documentation on Autogenerating a YAML Configuration Template +### Summary of Documentation -To create a configuration file for your pipeline, you can use the `.write_run_configuration_template()` method, which generates a YAML file with all options commented out for customization. +#### Autogenerate a Template YAML File +To create a YAML configuration template for your pipeline, use the `.write_run_configuration_template()` method. This generates a YAML file with all options commented out, allowing you to select relevant settings. #### Code Example ```python @@ -7135,34 +7121,24 @@ def simple_ml_pipeline(parameter: int): simple_ml_pipeline.write_run_configuration_template(path="<Insert_path_here>") ``` -#### Generated YAML Configuration Template +#### Example of Generated YAML Configuration Template The generated YAML template includes various sections, such as: -- **Pipeline Settings**: - - `build`: Pipeline build configuration - - `enable_artifact_metadata`, `enable_artifact_visualization`, `enable_cache`, etc.: Optional boolean flags - - `model`: Model metadata (name, description, tags, etc.) - - `parameters`: Optional parameters for the pipeline - - `run_name`: Optional name for the run - - **Schedule**: Configuration for scheduling runs - -- **Settings**: - - **Docker**: Configuration for Docker settings (apt packages, environment variables, etc.) 
- - **Resources**: Resource allocation (CPU, GPU, memory) - -- **Steps**: - - Each step (e.g., `load_data`, `train_model`) includes: - - Metadata options - - Model configuration - - Output specifications - - Parameters - - Docker settings - - Resource allocation +- **build**: Pipeline build configuration. +- **enable_artifact_metadata**: Optional metadata settings. +- **model**: Model details including `name`, `version`, and `tags`. +- **parameters**: Optional parameters for the pipeline. +- **schedule**: Scheduling options like `cron_expression` and `interval_second`. +- **settings**: Docker settings including `apt_packages`, `environment`, and `resources` (CPU, GPU, memory). +- **steps**: Configuration for each step (e.g., `load_data`, `train_model`), including: + - **enable_cache**: Caching options. + - **model**: Model specifications. + - **settings**: Docker settings for each step. #### Additional Configuration You can specify a stack when generating the template using: ```python -simple_ml_pipeline.write_run_configuration_template(stack=<Insert_stack_here>) +...write_run_configuration_template(stack=<Insert_stack_here>) ``` This allows for tailored configurations based on the desired stack. @@ -7171,20 +7147,22 @@ This allows for tailored configurations based on the desired stack. === File: docs/book/how-to/pipeline-development/use-configuration-files/runtime-configuration.md === -### Summary: Using Settings for Runtime Configuration in ZenML +### Summary of ZenML Settings Configuration -**Overview**: Settings in ZenML configure runtime aspects of stack components and pipelines, including resource requirements, containerization processes, and specific configurations for stack components. +**Overview**: ZenML uses `Settings` to configure runtime parameters for stack components and pipelines, centralizing configuration through `BaseSettings`. + +**Key Configuration Areas**: +- **Resource Requirements**: Define resources needed for pipeline steps. +- **Containerization**: Specify requirements for Docker images. +- **Component-Specific Configurations**: Pass parameters like experiment names at runtime. -**Key Concepts**: -- **BaseSettings**: Central concept for managing settings, interchangeable with the term `settings`. - **Types of Settings**: -1. **General Settings**: Applicable across all ZenML pipelines. - - Examples: - - `DockerSettings`: Configures Docker settings. - - `ResourceSettings`: Specifies resource settings. - -2. **Stack-Component-Specific Settings**: Tailored for specific stack components, identified by keys like `<COMPONENT_CATEGORY>` or `<COMPONENT_CATEGORY>.<COMPONENT_FLAVOR>`. Settings for inactive components are ignored. +1. **General Settings**: Applicable to all ZenML pipelines. + - Examples: + - [`DockerSettings`](../../../how-to/customize-docker-builds/README.md) + - [`ResourceSettings`](../../../how-to/pipeline-development/training-with-gpus/README.md) + +2. **Stack-Component-Specific Settings**: Used for runtime configurations of specific stack components, identified by keys like `<COMPONENT_CATEGORY>` or `<COMPONENT_CATEGORY>.<COMPONENT_FLAVOR>`. - Examples: - `SkypilotAWSOrchestratorSettings` - `KubeflowOrchestratorSettings` @@ -7195,13 +7173,13 @@ This allows for tailored configurations based on the desired stack. - `VertexStepOperatorSettings` - `AzureMLStepOperatorSettings` -**Registration-Time vs. Real-Time Settings**: -- **Registration-Time**: Static configurations set during component registration (e.g., `tracking_url` for MLflow). 
-- **Real-Time**: Dynamic configurations that can change per pipeline run (e.g., `experiment_name`). +**Registration vs. Runtime Settings**: +- **Registration-Time Configuration**: Static and fixed for all runs (e.g., `tracking_url` for MLflow). +- **Runtime Settings**: Dynamic and can change per pipeline run (e.g., `experiment_name`). -**Default Values**: Default settings can be specified during component registration and can be overridden at runtime. +**Default Values**: Default settings can be established during stack component registration, which can be overridden at runtime. -**Key Specification**: When defining stack-component-specific settings, use the correct key format. If only the category is specified, ZenML applies settings to the corresponding component flavor in the stack. +**Key Specification**: When defining stack-component-specific settings, use the appropriate key format. If only the category is specified, settings apply to any flavor of that component in the stack. **Code Examples**: @@ -7227,7 +7205,7 @@ steps: instance_type: m7g.medium ``` -This documentation provides a concise overview of how to effectively utilize settings for configuring runtime behavior in ZenML pipelines. +This concise overview captures the essential details of configuring settings in ZenML, ensuring that critical information is retained for effective understanding and application. ================================================== @@ -7237,11 +7215,11 @@ This documentation provides a concise overview of how to effectively utilize set In ZenML, configurations can be set at both the pipeline and step levels, with specific rules governing their precedence: -- Code configurations override YAML file configurations. -- Step-level configurations take precedence over pipeline-level configurations. +- Code configurations take precedence over YAML file configurations. +- Step-level configurations override pipeline-level configurations. - For attributes, dictionaries are merged. -### Example Code +#### Example Code ```python from zenml import pipeline, step @@ -7259,7 +7237,7 @@ def train_model(data: dict) -> None: def simple_ml_pipeline(parameter: int): ... -# Configuration results +# Merged configurations train_model.configuration.settings["resources"] # -> cpu_count: 2, gpu_count: 1, memory: "2GB" @@ -7267,9 +7245,7 @@ simple_ml_pipeline.configuration.settings["resources"] # -> cpu_count: 2, memory: "1GB" ``` -### Key Points -- Step configurations can override pipeline configurations. -- Resource settings can be defined at both levels, with merging behavior for dictionaries. +In this example, the `train_model` step configuration overrides the `simple_ml_pipeline` settings for GPU count and memory while retaining the CPU count from the pipeline. ================================================== @@ -7277,21 +7253,24 @@ simple_ml_pipeline.configuration.settings["resources"] ### Configuration Files in ZenML -**Overview**: Configuration can be specified in a YAML file or directly in code, but using a YAML file is recommended for better separation of configuration and code. +**Best Practice:** Use YAML files for configuration to separate config from code, although configurations can also be specified directly in code. -**Usage**: To apply a configuration file to a pipeline, use the `with_options(config_path=<PATH_TO_CONFIG>)` pattern. +**Applying Configuration:** +Use the `with_options(config_path=<PATH_TO_CONFIG>)` pattern to apply configurations to a pipeline. 
-**Example YAML Configuration**: +**Example YAML Configuration:** ```yaml enable_cache: False + parameters: dataset_name: "best_dataset" + steps: load_data: enable_cache: False ``` -**Example Python Code**: +**Example Python Code:** ```python from zenml import step, pipeline @@ -7302,24 +7281,25 @@ def load_data(dataset_name: str) -> dict: @pipeline def simple_ml_pipeline(dataset_name: str): load_data(dataset_name) - + if __name__ == "__main__": simple_ml_pipeline.with_options(config_path=<INSERT_PATH_TO_CONFIG_YAML>)() ``` -**Functionality**: This setup runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to `best_dataset`. +**Functionality:** The above code runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to `best_dataset`. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md === -### Configuration Overview +# Configuration Overview -This documentation outlines the configuration options available in a YAML file for a ZenML pipeline. Below is a concise summary of the key components and their functionalities. +This documentation outlines the configuration of a YAML file for a ZenML pipeline, highlighting key components and parameters. For a complete list of possible keys, refer to the full template [here](./autogenerate-a-template-yaml-file.md). + +## Key Configuration Elements -#### Sample YAML Configuration ```yaml -build: dcd6fafb-c200-4e85-8328-428bef98d804 +build: dcd6fafb-c200-4e85-8328-428bef98d804 # Docker image ID enable_artifact_metadata: True enable_artifact_visualization: False @@ -7389,38 +7369,73 @@ steps: instance_type: m7g.medium ``` -### Key Configuration Elements +## Configuration Details + +### `enable_XXX` Parameters +- **`enable_artifact_metadata`**: Associates metadata with artifacts. +- **`enable_artifact_visualization`**: Attaches visualizations of artifacts. +- **`enable_cache`**: Utilizes caching. +- **`enable_step_logs`**: Enables tracking of step logs. -- **Build ID**: Specifies the Docker image to use. -- **Enable Flags**: Boolean flags to control various behaviors: - - `enable_artifact_metadata`: Attach metadata to artifacts. - - `enable_artifact_visualization`: Attach visualizations of artifacts. - - `enable_cache`: Use caching. - - `enable_step_logs`: Enable step logs. +### `build` ID +Specifies the UUID of the Docker image to use, skipping image building for remote orchestrators. -- **Model Configuration**: Defines the ZenML model with attributes like name, version, description, and tags. +### Configuring the `model` +Defines the ZenML model for the pipeline: + +```yaml +model: + name: "ModelName" + version: "production" + description: An example model + tags: ["classifier"] +``` -- **Parameters**: JSON-serializable parameters for the pipeline and individual steps. +### Pipeline and Step `parameters` +Parameters can be defined at both the pipeline and step levels: -- **Run Name**: Unique identifier for the run; should be dynamic to avoid conflicts. +```yaml +parameters: + gamma: 0.01 -- **Schedule**: Cron expression for scheduling runs. +steps: + trainer: + parameters: + gamma: 0.001 +``` -- **Settings**: - - **Docker Settings**: Configuration for Docker, including packages and environment variables. - - **Resource Settings**: Defines CPU, GPU, and memory allocations. 
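+
+In this example, the step-level value (`gamma: 0.001`) takes precedence over the pipeline-level value (`gamma: 0.01`) for the `trainer` step, consistent with the configuration hierarchy described earlier.
+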
+### Setting the `run_name` +Specify a unique run name to avoid conflicts: -- **Step Configuration**: Each step can have specific configurations, including: - - `experiment_tracker`: Tracker for experiments. - - `step_operator`: Operator for executing the step. - - `outputs`: Configuration for output artifacts. +```yaml +run_name: <INSERT_RUN_NAME_HERE> +``` -### Important Notes -- The `parameters` defined at the step level take precedence over those at the pipeline level. -- Ensure unique `run_name` values to prevent execution conflicts. -- Resource settings may not be applicable for all stack components; refer to specific orchestrator documentation for compatibility. +### Stack Component Runtime Settings +Settings for Docker and resource configurations can be specified under `settings`: -This summary captures the essential configurations and their purposes, enabling effective use and understanding of the ZenML YAML configuration file. +```yaml +settings: + docker: + requirements: + - pandas + resources: + cpu_count: 2 + gpu_count: 1 + memory: "4Gb" +``` + +### Step-specific Configuration +Certain configurations apply only at the step level, such as: + +- **`experiment_tracker`**: Name of the experiment tracker. +- **`step_operator`**: Name of the step operator. +- **`outputs`**: Configuration for output artifacts. + +### Hooks +Specify the source for failure and success hooks in the pipeline. + +This summary provides a concise overview of the YAML configuration structure and key parameters for ZenML pipelines, ensuring critical information is preserved for effective understanding and implementation. ================================================== @@ -7428,27 +7443,30 @@ This summary captures the essential configurations and their purposes, enabling ### Summary: Running Remote Pipelines from Jupyter Notebooks with ZenML -ZenML allows the definition and execution of steps and pipelines within Jupyter Notebooks, running them remotely. The process involves extracting code from notebook cells and executing it as Python modules in Docker containers. +ZenML allows users to define and execute steps and pipelines in Jupyter notebooks remotely. The code from notebook cells is extracted and run as Python modules within Docker containers. -#### Key Points: -- **Execution Environment**: Notebook cells must adhere to specific conditions for remote execution. -- **Documentation Links**: - - [Limitations of defining steps in notebook cells](limitations-of-defining-steps-in-notebook-cells.md) - - [Run a single step from a notebook](run-a-single-step-from-a-notebook.md) +**Key Points:** +- **Execution Environment:** Steps defined in notebooks are executed remotely in Docker containers. +- **Requirements:** Notebook cells must adhere to specific conditions for proper execution. +- **Documentation Links:** + - [Limitations of Defining Steps in Notebook Cells](limitations-of-defining-steps-in-notebook-cells.md) + - [Run a Single Step from a Notebook](run-a-single-step-from-a-notebook.md) -This setup facilitates the integration of Jupyter Notebooks with ZenML for streamlined remote pipeline management. +For more details, refer to the linked documentation. 
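+
+As an illustration, a minimal sketch of a self-contained notebook cell (the step name and logic are illustrative):
+
+```python
+# All imports the step needs live in this same cell — no Jupyter magics,
+# no references to other cells.
+from zenml import step
+
+@step
+def count_words(text: str) -> int:
+    # Trivial logic standing in for a real step.
+    return len(text.split())
+```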
================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md === -# Limitations of Defining Steps in Notebook Cells +# Limitations for ZenML Steps in Notebook Cells To run ZenML steps defined in notebook cells remotely (using a remote orchestrator or step operator), the following conditions must be met: -- The cell can only contain Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed. -- The cell **must not** call code from other notebook cells. However, importing functions or classes from Python files is permitted. -- The cell **must not** rely on imports from previous cells; it must perform all necessary imports itself, including ZenML imports like `from zenml import step`. +- The cell must contain only Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed. +- The cell must not call code from other notebook cells. However, importing functions or classes from Python files is permitted. +- The cell must handle all necessary imports independently, including ZenML imports (e.g., `from zenml import step`). + +These restrictions ensure compatibility with remote execution environments. ================================================== @@ -7456,9 +7474,9 @@ To run ZenML steps defined in notebook cells remotely (using a remote orchestrat ### Running a Single Step from a Notebook -To execute a single step remotely from a notebook, call the step like a standard Python function. ZenML will create a pipeline with this step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining remote steps in notebooks. +To execute a single step remotely from a notebook, call the step like a normal Python function. ZenML will create a pipeline with just that step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) for remote steps in notebooks. -#### Code Example +#### Example Code ```python from zenml import step @@ -7472,7 +7490,10 @@ def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, -) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: +) -> Tuple[ + Annotated[ClassifierMixin, "trained_model"], + Annotated[float, "training_acc"], +]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) @@ -7480,14 +7501,15 @@ def svc_trainer( print(f"Train accuracy: {train_acc}") return model, train_acc -X_train = pd.DataFrame(...) # Define your training data -y_train = pd.Series(...) # Define your training labels +# Prepare training data +X_train = pd.DataFrame(...) +y_train = pd.Series(...) # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` -This code defines a step to train a Support Vector Classifier (SVC) and runs it directly, leveraging ZenML's pipeline capabilities. +This code snippet demonstrates how to define and call a step for training an SVC classifier using ZenML. ================================================== @@ -7495,30 +7517,32 @@ This code defines a step to train a Support Vector Classifier (SVC) and runs it # Configure Python Environments -ZenML deployments involve multiple environments for managing dependencies and configurations. 
+ZenML deployments involve multiple environments for managing dependencies and configurations. Here's a concise overview: ## Environments Overview -- **Client Environment**: Where ZenML pipelines are compiled (e.g., `run.py` script). Types include: +- **Client Environment (Runner Environment)**: Where ZenML pipelines are compiled (e.g., in a `run.py` script). Types include: - Local development - CI runner in production - ZenML Pro runner - Runner image orchestrated by the ZenML server ### Key Steps in Client Environment: -1. Compile pipeline representation with `@pipeline`. +1. Compile pipeline via `@pipeline` function. 2. Create/trigger pipeline and step build environments if running remotely. 3. Trigger a run in the orchestrator. **Note**: The `@pipeline` function is called only in the client environment, focusing on compile time logic. ## ZenML Server Environment -The ZenML server environment is a FastAPI application managing pipelines and metadata, including the ZenML Dashboard. Dependencies should be installed during ZenML deployment, especially for custom integrations. +A FastAPI application managing pipelines and metadata, including the ZenML Dashboard. Install dependencies during deployment if using custom integrations. Refer to [server environment configuration](./configure-the-server-environment.md) for more details. ## Execution Environments -When running locally, the client, server, and execution environments are the same. For remote pipelines, ZenML transfers code to a remote orchestrator by building Docker images known as execution environments. This process starts with a base image containing ZenML and Python, then adds pipeline dependencies. Refer to the [containerize your pipeline](../../../how-to/customize-docker-builds/README.md) guide for managing Docker configurations. +When running locally, the client, server, and execution environments are the same. For remote pipelines, ZenML transfers code to the remote orchestrator by creating Docker images (execution environments) starting from a [base image](https://hub.docker.com/r/zenmldocker/zenml) with ZenML and Python, then adding dependencies. Follow the [containerize your pipeline](../../../how-to/customize-docker-builds/README.md) guide for Docker image configuration. ## Image Builder Environment -Execution environments are typically created locally using the local Docker client. However, ZenML provides image builders, a specialized stack component, to build and push Docker images in a different image builder environment. If no image builder is configured, ZenML defaults to the local image builder to ensure consistency. +Execution environments are typically created locally using the Docker client, requiring installation and permissions. ZenML provides [image builders](../../../component-guide/image-builders/image-builders.md) for building and pushing Docker images in a specialized environment. If no image builder is configured, ZenML defaults to the [local image builder](../../../component-guide/image-builders/local.md) for consistency. + +This summary captures the essential technical details for managing ZenML environments and their configurations. ================================================== @@ -7526,14 +7550,14 @@ Execution environments are typically created locally using the local Docker clie ### Handling Dependency Conflicts in ZenML -This documentation addresses common issues with conflicting dependencies when using ZenML alongside other libraries. 
ZenML is designed to be stack- and integration-agnostic, which may lead to dependency conflicts. +This documentation addresses common issues with conflicting dependencies when using ZenML alongside other libraries. ZenML is designed to be stack- and integration-agnostic, allowing flexibility in pipeline execution, but this can lead to dependency conflicts. #### Installing Dependencies -ZenML allows installation of integration-specific dependencies using the command: +ZenML facilitates the installation of integration-specific dependencies using the command: ```bash zenml integration install ... ``` -To verify that all ZenML requirements are met after installing additional dependencies, run: +After installing additional dependencies, verify that ZenML's requirements are met by running: ```bash zenml integration list ``` @@ -7542,26 +7566,26 @@ Look for a green tick symbol next to your desired integrations. #### Suggestions for Resolving Dependency Conflicts 1. **Use `pip-compile` for Reproducibility**: - Use `pip-compile` from the [pip-tools package](https://pip-tools.readthedocs.io/) to create a static `requirements.txt`. For `uv` users, consider using `uv pip compile`. Refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management) for practical examples. + Utilize `pip-compile` from the [pip-tools package](https://pip-tools.readthedocs.io/) to create a static `requirements.txt` file for consistent environments. For users of [`uv`](https://github.com/astral-sh/uv), consider using `uv pip compile`. -2. **Use `pip check`**: - Run `pip check` to identify compatibility issues among your environment's dependencies. +2. **Run `pip check`**: + Execute `pip check` to identify compatibility issues among your environment's dependencies. This will list any conflicts that may affect your use case. -3. **Known Dependency Issues**: - ZenML has strict dependency requirements. For instance, it requires `click~=8.0.3` for its CLI, and using a higher version may lead to unexpected behaviors. +3. **Known Issues**: + ZenML has strict dependency requirements for some integrations. For example, it requires `click~=8.0.3` for its CLI, and using a higher version may lead to unexpected behaviors. -#### Manual Installation of Dependencies -You can bypass ZenML's integration installation and manually install dependencies, though this is not recommended. The command `zenml integration install ...` performs a `pip install` for the integration's dependencies. +#### Manual Dependency Installation +You can bypass ZenML's integration installation by manually installing dependencies, though this is not recommended. The command `zenml integration install ...` effectively runs a `pip install ...` for the specified integration. -To manually install dependencies: +To manually install dependencies, export the requirements with: ```bash -# Export requirements to a file +# Export to a file zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME -# Print requirements to console +# Print to console zenml integration export-requirements INTEGRATION_NAME ``` -Modify the exported requirements as needed. If using a remote orchestrator, update the `DockerSettings` object with the new dependency versions to ensure proper functionality. +You can modify these requirements as needed. If using a remote orchestrator, update the `DockerSettings` object with the new dependency versions to ensure proper functionality. 
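+
+As a sketch of that last point, reusing the `integration-requirements.txt` file exported above:
+
+```python
+from zenml import pipeline
+from zenml.config import DockerSettings
+
+# Point the Docker build at the pinned requirements file.
+docker_settings = DockerSettings(requirements="integration-requirements.txt")
+
+@pipeline(settings={"docker": docker_settings})
+def my_pipeline():
+    ...
+```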
==================================================

@@ -7569,7 +7593,7 @@ Modify the exported requirements as needed. If using a remote orchestrator, upda

### Configure the Server Environment

-The ZenML server environment is configured using environment variables, which must be set prior to deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md).
+The ZenML server environment is set up using environment variables, which must be configured before deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md).

==================================================

@@ -7579,39 +7603,35 @@ The ZenML server environment is configured using environment variables, which mu

To run a pipeline with a different configuration, use the `pipeline.with_options` method. There are two primary ways to configure options:

-1. Explicitly set options:
+1. Explicitly configure options:
   ```python
   my_pipeline.with_options(steps={"trainer": {"parameters": {"param1": 1}}})
   ```

-2. Pass a YAML configuration file:
+2. Pass a YAML file:
   ```python
   my_pipeline.with_options(config_path="path_to_yaml_file")
   ```

-For more details on these options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/README.md).
+For more details, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/README.md).

-**Exception:** When triggering a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. More information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).
-
-### Additional Resource
-- [Using Config Files](../../pipeline-development/use-configuration-files/README.md)
+**Exception:** To trigger a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. Additional information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline).

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/retry-steps.md ===

-### ZenML Step Retry Configuration
+### Step Retry Configuration in ZenML

-ZenML provides a built-in retry mechanism to automatically retry steps upon failure, useful for handling intermittent issues. This is particularly beneficial when running steps on GPU-backed hardware with potential resource constraints.
+ZenML offers a built-in retry mechanism to automatically retry steps upon failure, useful for handling intermittent issues, especially in GPU-backed environments where resources may be temporarily unavailable.

-#### Retry Parameters
-You can configure the following parameters for step retries:
-- **max_retries:** Maximum retry attempts on failure.
+#### Configuration Parameters:
+- **max_retries:** Maximum retry attempts for a failed step.
 - **delay:** Initial delay (in seconds) before the first retry.
 - **backoff:** Multiplier for the delay after each retry.
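
To make the interaction of these parameters concrete: with, say, `max_retries=3`, `delay=10`, and `backoff=2`, a persistently failing step would be retried after waits of roughly 10, 20, and 40 seconds before the failure is surfaced.
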
-#### Using the @step Decorator -Specify the retry configuration directly in your step definition: +#### Using the @step Decorator: +You can define the retry configuration directly in your step: ```python from zenml.config.retry_config import StepRetryConfig @@ -7627,10 +7647,10 @@ def my_step() -> None: raise Exception("This is a test exception") ``` -#### Important Note -Infinite retries are not supported. Setting `max_retries` to a high value or omitting it will still enforce an internal limit to prevent infinite loops. Choose a reasonable `max_retries` based on your use case and expected transient failures. +#### Important Note: +Infinite retries are not supported. ZenML enforces an internal maximum to prevent infinite loops. It is advisable to set a reasonable `max_retries` based on your use case. -#### Related Documentation +#### Related Documentation: - [Failure/Success Hooks](use-failure-success-hooks.md) - [Configure Pipelines](../../pipeline-development/use-configuration-files/how-to-use-config.md) @@ -7640,10 +7660,10 @@ Infinite retries are not supported. Setting `max_retries` to a high value or omi ### Summary of ZenML Pipeline Documentation -**Overview**: Building pipelines in ZenML is straightforward using the `@step` and `@pipeline` decorators. - -#### Example Code +**Overview:** +Building pipelines in ZenML involves using the `@step` and `@pipeline` decorators. +**Code Example:** ```python from zenml import pipeline, step @@ -7660,19 +7680,19 @@ def train_model(data: dict) -> None: @pipeline def simple_ml_pipeline(): - train_model(load_data()) -``` - -To execute the pipeline, call: + dataset = load_data() + train_model(dataset) -```python simple_ml_pipeline() ``` -#### Logging and Dashboard -When executed, the pipeline run is logged to the ZenML dashboard, which requires a ZenML server (local or remote). The dashboard displays the Directed Acyclic Graph (DAG) and associated metadata. +**Execution:** +Calling `simple_ml_pipeline()` runs the pipeline, which logs its execution to the ZenML dashboard. A ZenML server must be running to access the dashboard. -#### Advanced Features +**Dashboard Features:** +The dashboard displays the Directed Acyclic Graph (DAG) and associated metadata. + +**Advanced Features:** - Configure pipeline/step parameters - Name and annotate step outputs - Control caching behavior @@ -7681,18 +7701,18 @@ When executed, the pipeline run is logged to the ZenML dashboard, which requires - Use failure/success hooks - Hyperparameter tuning - Attach and fetch metadata within steps and during pipeline composition -- Enable/disable log storage +- Enable or disable log storage - Access secrets in a step -For detailed instructions on these features, refer to the linked documentation sections. +For further details, refer to the linked documentation for each advanced feature. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md === -### Summary of ZenML Pipeline Composition +### Summary: Reusing Steps Between Pipelines in ZenML -ZenML allows for the composition of pipelines to reuse steps and reduce code duplication. This is achieved by defining separate functions for common functionalities. +ZenML allows for the composition of pipelines to avoid code duplication by extracting common functionalities into separate functions. This is achieved by calling one pipeline from within another. 
#### Example Code:

@@ -7712,12 +7732,10 @@ def training_pipeline():
     evaluation_step(model=model, data=test_data)
 ```

-#### Key Points:
-- The `data_loading_pipeline` is invoked within the `training_pipeline`, effectively integrating its steps into the latter.
-- Only the parent pipeline (`training_pipeline`) will be visible in the dashboard.
-- For triggering a pipeline from another, refer to the advanced usage documentation.
+In this example, `data_loading_pipeline` is invoked within `training_pipeline`, effectively integrating its steps. Only the parent pipeline will be visible in the dashboard. For triggering one pipeline from another, refer to the advanced usage documentation.

-For further details on orchestrators, see the [orchestrators guide](../../../component-guide/orchestrators/orchestrators.md).
+#### Additional Resources:
+- Learn more about orchestrators in the [orchestrators guide](../../../component-guide/orchestrators/orchestrators.md).

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/using-a-custom-step-invocation-id.md ===

### Summary of Custom Step Invocation ID in ZenML

-When invoking a ZenML step within a pipeline, each invocation is assigned a unique **invocation ID**. This ID can be used to define the execution order of steps or to fetch information about the invocation post-execution.
+When invoking a ZenML step in a pipeline, a unique **invocation ID** is assigned. This ID can be used to define the execution order of steps or to fetch information about the invocation post-execution.

 #### Key Points:
-- The first invocation of a step uses the step name as the invocation ID (e.g., `my_step`).
-- Subsequent invocations append a suffix (e.g., `my_step_2`, `my_step_3`) to ensure uniqueness.
-- A custom invocation ID can be specified by passing a unique ID when calling the step.
+- The first invocation of a step uses the step's name as the ID (e.g., `my_step`).
+- Subsequent invocations append a suffix (`_2`, `_3`, etc.) to ensure uniqueness (e.g., `my_step_2`).
+- A custom invocation ID can be specified by passing an `id` parameter, which must be unique within the pipeline.

 #### Example Code:
 ```python
@@ -7747,48 +7765,38 @@ def example_pipeline():
     my_step(id="my_custom_invocation_id")  # Custom ID
 ```

-This allows for flexible management of step invocations within ZenML pipelines.
+Custom invocation IDs make it easy to manage step invocations and to reference them after execution within ZenML pipelines.

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/get-past-pipeline-step-runs.md ===

-To retrieve past pipeline or step runs, use the `get_pipeline` method with the `last_run` property or access runs by index.
+To retrieve past pipeline or step runs in ZenML, use the `get_pipeline` method with the `last_run` property or index into the runs. Here's a concise example:

-### Example Code:
 ```python
 from zenml.client import Client

 client = Client()
-
 # Retrieve a pipeline by its name
 p = client.get_pipeline("mlflow_train_deploy_pipeline")
-
 # Get the latest run of this pipeline
 latest_run = p.last_run
-
 # Access the first run by index
 first_run = p[0]
 ```

-This code snippet demonstrates how to obtain the latest and first runs of a specified pipeline.
+This code demonstrates how to access the latest and the first run of a specified pipeline.
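
Building on the example above, the fetched run objects can also be drilled into for step outputs. A brief sketch, assuming the pipeline has a step named `trainer` with a single output (the step name is hypothetical):

```python
# Load the single output of the (hypothetical) "trainer" step
# from the latest run into memory.
trained_model = latest_run.steps["trainer"].output.load()
```
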
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md === -### Summary of Parameterization in ZenML Pipelines and Steps - -#### Overview -Steps and pipelines in ZenML can be parameterized similarly to Python functions. Inputs to a step can be either **artifacts** (outputs from other steps) or **parameters** (explicitly provided values). - -#### Step Parameters -- **Artifacts**: Outputs from previous steps, used to share data. -- **Parameters**: Explicit values provided during step invocation, independent of other steps. +### Summary of Parameterization in ZenML Pipelines -**Note**: Only JSON-serializable values (via Pydantic) can be passed as parameters. For non-JSON-serializable objects (e.g., NumPy arrays), use [External Artifacts](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline). +**Parameterization Overview** +Steps and pipelines in ZenML can be parameterized similarly to Python functions. Parameters can be either **artifacts** (outputs from other steps) or **explicit values** (parameters). Only JSON-serializable values can be passed as parameters; for non-serializable objects, use **External Artifacts**. -#### Example Code +**Code Example for Steps and Pipelines** ```python from zenml import step, pipeline @@ -7802,11 +7810,12 @@ def my_pipeline(): my_step(input_1=int_artifact, input_2=42) ``` -#### YAML Configuration -Parameters can be defined in a YAML configuration file, allowing for easy updates without modifying the code. +**Using YAML Configuration Files** +Parameters can also be defined in YAML files, allowing for easy updates without modifying the code. -**Example YAML Configuration**: +**YAML Configuration Example** ```yaml +# config.yaml parameters: environment: production @@ -7816,7 +7825,7 @@ steps: input_2: 42 ``` -**Example Code Using YAML**: +**Python Code with YAML** ```python from zenml import step, pipeline @@ -7828,15 +7837,16 @@ def my_step(input_1: int, input_2: int) -> None: def my_pipeline(environment: str): ... -if __name__ == "__main__": +if __name__=="__main__": my_pipeline.with_options(config_path="config.yaml")() ``` -#### Handling Conflicts -Conflicts may arise if parameters are defined in both the YAML file and the code. The system will notify you of such conflicts. +**Conflict Handling** +Conflicts may arise if parameters are defined in both the YAML file and the code. ZenML will notify you of such conflicts. -**Example of Conflict**: +**Example of Conflict** ```yaml +# config.yaml parameters: some_param: 24 @@ -7846,16 +7856,26 @@ steps: input_2: 42 ``` ```python +# run.py +from zenml import step, pipeline + +@step +def my_step(input_1: int, input_2: int) -> None: + pass + @pipeline def my_pipeline(some_param: int): my_step(input_1=42, input_2=43) + +if __name__=="__main__": + my_pipeline(23) ``` -#### Caching Behavior +**Caching Behavior** - **Parameters**: A step is cached only if all parameter values match previous executions. -- **Artifacts**: A step is cached only if all input artifacts match previous executions. If upstream steps are not cached, the step will execute every time. +- **Artifacts**: A step is cached only if all input artifacts match previous executions. If upstream steps are not cached, the step will always execute. 
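
Circling back to the note on JSON-serializability at the start of this section, here is a minimal sketch of the External Artifacts workaround (illustrative only; the step and pipeline names are made up):

```python
import numpy as np
from zenml import ExternalArtifact, pipeline, step

@step
def print_data(data: np.ndarray) -> None:
    print(data)

@pipeline
def printing_pipeline():
    # NumPy arrays are not JSON-serializable, so they cannot be passed
    # as plain parameters; wrapping them as an external artifact works.
    data = ExternalArtifact(value=np.array([1, 2, 3]))
    print_data(data=data)

if __name__ == "__main__":
    printing_pipeline()
```
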
-### Related Topics +### Related Documentation - [Use configuration files to set parameters](use-pipeline-step-parameters.md) - [How caching works and how to control it](control-caching-behavior.md) @@ -7865,14 +7885,16 @@ def my_pipeline(some_param: int): ### Step Output Typing and Annotation in ZenML +**Overview**: Step outputs are stored in an artifact store. Annotating and naming them enhances clarity. + #### Type Annotations -- ZenML steps can function without type annotations, but adding them provides benefits: - - **Type validation**: Ensures correct input types from upstream steps. - - **Better serialization**: Type annotations allow ZenML to select the appropriate materializer for outputs. If built-in materializers are inadequate, custom materializers can be created. +- Functions can operate as ZenML steps without type annotations, but adding them provides: + - **Type Validation**: Ensures correct input types from upstream steps. + - **Better Serialization**: Type annotations allow ZenML to select the most suitable materializer for outputs. Custom materializers can be created if built-in options are inadequate. -**Warning**: The built-in `CloudpickleMaterializer` can serialize any object but is not production-ready due to compatibility issues across Python versions and potential security risks. +**Warning**: The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions and potential security risks. -#### Code Examples +#### Example Code ```python from typing import Tuple from zenml import step @@ -7886,15 +7908,19 @@ def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b ``` -To enforce type annotations, set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True`. ZenML will raise exceptions for missing annotations. +To enforce type annotations, set the environment variable `ZENML_ENFORCE_TYPE_ANNOTATIONS` to `True`. #### Tuple vs Multiple Outputs - ZenML distinguishes between single output artifacts of type `Tuple` and multiple outputs based on the return statement: - A tuple literal (e.g., `return (1, 2)`) indicates multiple outputs. - - Other cases are treated as a single output of type `Tuple`. + - Other cases are treated as single output of type `Tuple`. -**Examples**: +#### Example Code for Outputs ```python +from zenml import step +from typing_extensions import Annotated +from typing import Tuple + @step def my_step() -> Tuple[int, int]: return (0, 1) @@ -7914,31 +7940,23 @@ def my_step() -> Tuple[int, int]: #### Step Output Names - Default output names are `output` for single outputs and `output_0, output_1, ...` for multiple outputs. -- Custom names can be assigned using the `Annotated` type annotation. +- Custom names can be set using `Annotated`: -**Example**: ```python -from typing_extensions import Annotated -from typing import Tuple -from zenml import step - @step def square_root(number: int) -> Annotated[float, "custom_output_name"]: return number ** 0.5 @step -def divide(a: int, b: int) -> Tuple[ - Annotated[int, "quotient"], - Annotated[int, "remainder"] -]: +def divide(a: int, b: int) -> Tuple[Annotated[int, "quotient"], Annotated[int, "remainder"]]: return a // b, a % b ``` -If no custom names are provided, artifacts will be named `{pipeline_name}::{step_name}::output` or `{pipeline_name}::{step_name}::output_{i}`. +If no custom names are provided, artifacts are named as `{pipeline_name}::{step_name}::output`. 
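
As a quick illustration of these naming rules, assuming a pipeline `my_pipeline` with a step `my_step` that returned a single unannotated output (both names hypothetical), the artifact can later be looked up by its fully qualified default name:

```python
from zenml.client import Client

# Fetch the default-named output artifact and load its value.
artifact_version = Client().get_artifact_version("my_pipeline::my_step::output")
value = artifact_version.load()
```
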
### Additional Resources -- For more on output annotation: [Return Multiple Outputs from a Step](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md) -- For custom data types: [Handle Custom Data Types](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) +- For more on output annotation, see [return-multiple-outputs-from-a-step.md](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md). +- For custom data types, refer to [handle-custom-data-types.md](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md). ================================================== @@ -7957,14 +7975,14 @@ Not all orchestrators support scheduling. The following orchestrators do support - **SagemakerOrchestrator**: ✅ - **VertexOrchestrator**: ✅ -Orchestrators without scheduling support: +Orchestrators that do not support scheduling: - **LocalOrchestrator**: ⛔️ - **LocalDockerOrchestrator**: ⛔️ -- **Skypilot (AWS, Azure, GCP, Lambda)**: ⛔️ +- **SkypilotAWS/ Azure/ GCP/ Lambda Orchestrators**: ⛔️ - **TektonOrchestrator**: ⛔️ #### Setting a Schedule -You can set a schedule for a pipeline using either cron expressions or human-readable notations. Here’s a concise example: +To set a schedule for a pipeline, you can use either cron expressions or human-readable notations. Here’s a concise example: ```python from zenml.config.schedule import Schedule @@ -7975,9 +7993,9 @@ from datetime import datetime def my_pipeline(...): ... -# Schedule using cron expression +# Using cron expression schedule = Schedule(cron_expression="5 14 * * 3") -# or using human-readable notation +# or human-readable notation schedule = Schedule(start_time=datetime.now(), interval_second=1800) my_pipeline = my_pipeline.with_options(schedule=schedule) @@ -7987,25 +8005,22 @@ my_pipeline() For more scheduling options, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.schedule.Schedule). #### Pausing/Stopping a Schedule -The method for pausing or stopping a scheduled pipeline varies by orchestrator. For instance, with Kubeflow, you can use its UI. Users must consult their orchestrator's documentation for specific steps. +The method to pause or stop a scheduled run depends on the orchestrator. For instance, in Kubeflow, you can use its UI. Users should consult their orchestrator's documentation for specific instructions. -**Important Note**: ZenML schedules the run, but users are responsible for managing the lifecycle of the schedule. Running a pipeline with a schedule multiple times creates separate scheduled pipelines with unique names. +**Important Note**: ZenML schedules the run, but users are responsible for managing the lifecycle of the schedule. Running a pipeline with a schedule multiple times creates separate scheduled pipelines with unique names. #### Additional Resources -For more information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). +- For more on orchestrators, refer to [orchestrators.md](../../../component-guide/orchestrators/orchestrators.md). 
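
As a sketch of what managing that schedule lifecycle can look like on the ZenML side, assuming a schedule named `my_schedule` was created earlier (the name and exact Client calls are illustrative):

```python
from zenml.client import Client

client = Client()
# Hypothetical schedule name; client.list_schedules() can help discover it.
schedule = client.get_schedule("my_schedule")
# Deletes only ZenML's record of the schedule; the orchestrator-side
# schedule must still be paused or stopped through the orchestrator itself.
client.delete_schedule(schedule.id)
```
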
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md === -### Summary: Running Steps in Parallel with Fan-in and Fan-out Patterns +### Summary: Running Steps in Parallel - Fan-in and Fan-out Patterns -**Fan-in and Fan-out Pattern Overview** -The fan-out/fan-in pattern is a pipeline architecture that allows a single step to split into multiple parallel operations (fan-out) and then consolidate the results back into a single step (fan-in). This approach is beneficial for parallel processing, distributed workloads, and data transformations. - -**Example Code** -The following code illustrates the fan-out/fan-in pattern using ZenML: +The fan-out/fan-in pattern is a pipeline architecture where a single step splits into multiple parallel operations (fan-out) and consolidates the results back into a single step (fan-in). This pattern is effective for parallel processing, distributed workloads, and data transformations. +#### Example Code ```python from zenml import step, get_step_context, pipeline from zenml.client import Client @@ -8022,11 +8037,9 @@ def process_step(input_data: str) -> str: def combine_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) - processed_results = {step_info.name: step_info.outputs[output_name][0].load() for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix)} - print(",".join([f"{k}: {v}" for k, v in processed_results.items()])) @pipeline(enable_cache=False) @@ -8038,17 +8051,21 @@ def fan_out_fan_in_pipeline(parallel_count: int) -> None: fan_out_fan_in_pipeline(parallel_count=8) ``` -**Key Points** -- **Fan-out**: Enables parallel processing, improving resource utilization. -- **Fan-in**: Aggregates results from parallel branches. -- **Use Cases**: Suitable for parallel data processing, distributed model training, ensemble methods, batch processing, and data validation. - -**Limitations** +#### Key Points +- **Fan-out** allows parallel processing, improving resource utilization. +- **Fan-in** consolidates results, useful for: + - Parallel data processing + - Distributed model training + - Ensemble methods + - Batch processing + - Data validation + - Hyperparameter tuning + +#### Limitations 1. Steps may run sequentially if the orchestrator does not support parallel execution. 2. The number of steps must be predetermined; dynamic step creation is not supported. -**Important Note** -When implementing the fan-in step, results from previous parallel steps must be queried using the ZenML Client, as direct passing of results is not possible. +Use the ZenML Client to query results from previous steps in the fan-in process instead of passing results directly. ================================================== @@ -8056,9 +8073,9 @@ When implementing the fan-in step, results from previous parallel steps must be # Accessing Secrets in ZenML Steps -ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. For configuration and creation of secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). +ZenML secrets are collections of **key-value pairs** securely stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. 
To learn about configuring and creating secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). -You can access secrets within your steps using the ZenML `Client` API, allowing you to use secrets for API queries without hard-coding access keys. +You can access secrets in your steps using the ZenML `Client` API, allowing you to utilize secrets for API queries without hard-coding access keys. ## Example Code ```python @@ -8101,7 +8118,7 @@ from zenml.client import Client Client().delete_pipeline(<PIPELINE_NAME>) ``` -**Note:** Deleting a pipeline does not remove associated runs or artifacts. For deleting multiple pipelines with the same prefix, you can use the following script: +**Note:** Deleting a pipeline does not remove its associated runs or artifacts. For bulk deletion, especially if pipelines share a prefix, use the following script: ```python from zenml.client import Client @@ -8110,17 +8127,14 @@ client = Client() pipelines_list = client.list_pipelines(name="startswith:test_pipeline", size=100) target_pipeline_ids = [p.id for p in pipelines_list.items] -print(f"Found {len(target_pipeline_ids)} pipelines to delete") -if input("Do you really want to delete these pipelines? (y/n): ").lower() == 'y': +confirmation = input("Do you really want to delete these pipelines? (y/n): ").lower() +if confirmation == 'y': for pid in target_pipeline_ids: client.delete_pipeline(pid) - print("Deletion complete") -else: - print("Deletion cancelled") ``` #### Delete a Pipeline Run -To delete a pipeline run, use the following commands: +To delete a pipeline run, use the CLI or the Python SDK: **CLI:** ```shell @@ -8132,15 +8146,17 @@ zenml pipeline runs delete <RUN_NAME_OR_ID> from zenml.client import Client Client().delete_pipeline_run(<RUN_NAME_OR_ID>) -``` +``` + +This documentation provides the necessary commands and code snippets for deleting pipelines and their runs, ensuring that users can manage their ZenML resources effectively. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md === -### Naming Pipeline Runs +### Summary of Pipeline Run Naming in ZenML -Pipeline runs are automatically named using the current date and time, as shown in the example: +Pipeline run names are automatically generated using the current date and time, as shown in the example: ```bash Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s. @@ -8155,11 +8171,8 @@ training_pipeline = training_pipeline.with_options( training_pipeline() ``` -**Key Points:** -- Run names must be unique. If running pipelines multiple times or on a schedule, compute the run name dynamically or use placeholders. -- Placeholders can be set in the `@pipeline` decorator or `pipeline.with_options` function. - -**Standard Placeholders:** +Run names must be unique. For multiple or scheduled runs, compute the name dynamically or use placeholders. Placeholders can be set in the `@pipeline` decorator or `pipeline.with_options()` method. 
Standard placeholders include: + - `{date}`: current date (e.g., `2024_11_27`) - `{time}`: current UTC time (e.g., `11_07_09_326492`) @@ -8176,13 +8189,15 @@ training_pipeline() === File: docs/book/how-to/pipeline-development/build-pipelines/use-failure-success-hooks.md === -### Summary of ZenML Hooks Documentation +### Summary of ZenML Failure and Success Hooks Documentation -**Overview:** -ZenML provides hooks to perform actions after step execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: `on_failure` (triggers on step failure) and `on_success` (triggers on step success). +#### Overview +Hooks in ZenML allow actions to be performed after step execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: +- **`on_failure`**: Triggered when a step fails. +- **`on_success`**: Triggered when a step succeeds. -**Defining Hooks:** -Hooks are defined as callback functions accessible within the pipeline repository. The `on_failure` hook can accept a `BaseException` argument to access the exception causing the failure. +#### Defining Hooks +Hooks are defined as callback functions and must be accessible within the repository. The `on_failure` hook can accept a `BaseException` argument to access the specific exception that caused the failure. **Example:** ```python @@ -8203,8 +8218,8 @@ def my_successful_step() -> int: return 1 ``` -**Pipeline-Level Hooks:** -Hooks can also be defined at the pipeline level, which apply to all steps unless overridden by step-level hooks. +#### Pipeline-Level Hooks +Hooks can also be defined at the pipeline level, which apply to all steps within the pipeline. Step-level hooks take precedence over pipeline-level hooks. **Example:** ```python @@ -8215,8 +8230,8 @@ def my_pipeline(...): ... ``` -**Accessing Step Information:** -Inside hooks, use `get_step_context()` to access the current pipeline run or step information. +#### Accessing Step Information in Hooks +You can use `get_step_context()` to access information about the current pipeline run or step within a hook. **Example:** ```python @@ -8225,34 +8240,33 @@ from zenml import get_step_context def on_failure(exception: BaseException): context = get_step_context() print(context.step_run.name) + print(type(exception)) + +@step(on_failure=on_failure) +def my_step(some_parameter: int = 1): + raise ValueError("My exception") ``` -**Using Alerter Component:** -Integrate the Alerter component in hooks to notify users about step outcomes. +#### Using the Alerter Component +You can integrate the Alerter component to send notifications on step success or failure. **Example:** ```python from zenml import get_step_context, Client -def on_failure(): - step_name = get_step_context().step_run.name - Client().active_stack.alerter.post(f"{step_name} just failed!") -``` - -**Standard Alerter Hooks:** -ZenML provides built-in hooks for alerter notifications. - -**Example:** -```python -from zenml.hooks import alerter_success_hook, alerter_failure_hook +def notify_on_failure() -> None: + step_context = get_step_context() + alerter = Client().active_stack.alerter + if alerter and step_context.pipeline_run.config.extra["notify_on_failure"]: + alerter.post(message="Step failed!") -@step(on_failure=alerter_failure_hook, on_success=alerter_success_hook) +@step(on_failure=notify_on_failure) def my_step(...): ... ``` -**OpenAI ChatGPT Failure Hook:** -This hook generates possible fixes for exceptions using OpenAI's API. 
Requires installation of the OpenAI integration and a valid API key stored in a ZenML secret. +#### OpenAI ChatGPT Failure Hook +This hook generates potential fixes for exceptions using OpenAI's API. Ensure you have the OpenAI integration installed and your API key stored in a ZenML secret. **Installation:** ```shell @@ -8269,7 +8283,7 @@ def my_step(...): ... ``` -This hook provides suggestions to help resolve issues in the code. For GPT-4 users, use `openai_gpt4_alerter_failure_hook` for enhanced suggestions. +This hook can provide suggestions to help fix issues in your code. If you have GPT-4 enabled, you can use `openai_gpt4_alerter_failure_hook` for enhanced capabilities. ================================================== @@ -8277,7 +8291,7 @@ This hook provides suggestions to help resolve issues in the code. For GPT-4 use # Running Individual Steps in ZenML -To execute a single step in your ZenML stack, call the step like a regular Python function. ZenML will create an unlisted pipeline for this step, which won't be linked to any existing pipeline but can be viewed in the "Runs" tab of the dashboard. +To execute an individual step in your ZenML stack, call the step like a regular Python function. ZenML will create an unlisted pipeline to run the step on the active stack, which can be viewed in the "Runs" tab of the dashboard. ## Example Code for Step Execution @@ -8310,15 +8324,18 @@ model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ## Running the Step Function Directly -To bypass ZenML and run the step function directly, use the `entrypoint(...)` method: +To run the step function without ZenML, use the `entrypoint(...)` method: ```python +X_train = pd.DataFrame(...) +y_train = pd.Series(...) + model, train_acc = svc_trainer.entrypoint(X_train=X_train, y_train=y_train) ``` -### Default Behavior Configuration +### Default Behavior -To set this direct execution as the default behavior, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. After this, calling `svc_trainer(...)` will execute the underlying function without using the ZenML stack. +To make direct function calls the default behavior, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will bypass the ZenML stack when calling the step. ================================================== @@ -8326,9 +8343,11 @@ To set this direct execution as the default behavior, set the environment variab ### Summary: Running Pipelines Asynchronously -By default, ZenML pipelines run synchronously, meaning the terminal displays logs in real-time during execution. To enable asynchronous execution, you can configure the orchestrator in two ways: +By default, pipelines run synchronously, meaning the terminal displays logs in real-time during execution. To run pipelines asynchronously, you can configure the orchestrator to set `synchronous=False`. This can be done either globally or at the pipeline configuration level. + +**Code Example for Asynchronous Pipeline:** -1. **Global Configuration**: Set `synchronous=False` in the orchestrator settings. +1. **Python Code:** ```python from zenml import pipeline @@ -8337,48 +8356,60 @@ By default, ZenML pipelines run synchronously, meaning the terminal displays log ... ``` -2. **YAML Configuration**: Modify the pipeline configuration in a YAML file. +2. 
**YAML Configuration:** ```yaml settings: orchestrator.<STACK_NAME>: synchronous: false ``` -For further details on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). +For more information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-caching-behavior.md === -### ZenML Caching Behavior Overview +### ZenML Caching Behavior By default, ZenML caches steps in pipelines when code and parameters remain unchanged. #### Caching Control -- **Step Level Caching**: - ```python - @step(enable_cache=True) # Caches data loading - def load_data(parameter: int) -> dict: - ... +- **Step Level Caching**: + - Use `@step(enable_cache=True)` to enable caching. + - Use `@step(enable_cache=False)` to disable caching, overriding pipeline settings. - @step(enable_cache=False) # Overrides pipeline caching - def train_model(data: dict) -> None: - ... +- **Pipeline Level Caching**: + - Use `@pipeline(enable_cache=True)` to enable caching for the entire pipeline. - @pipeline(enable_cache=True) # Pipeline level caching - def simple_ml_pipeline(parameter: int): - ... - ``` +```python +@step(enable_cache=True) +def load_data(parameter: int) -> dict: + ... + +@step(enable_cache=False) +def train_model(data: dict) -> None: + ... + +@pipeline(enable_cache=True) +def simple_ml_pipeline(parameter: int): + ... +``` -Caching occurs only when code and parameters are unchanged. You can modify caching behavior after initial setup: +#### Configuration Changes + +Caching behavior can be modified after initial setup: ```python my_step.configure(enable_cache=...) my_pipeline.configure(enable_cache=...) ``` -For YAML configuration options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). +#### Additional Resources + +For YAML configuration options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). + +**Note**: Caching occurs only when code and parameters are unchanged. ================================================== @@ -8398,7 +8429,7 @@ def example_pipeline(): step_3(step_1_output, step_2_output) ``` -To enforce specific execution order constraints, you can specify non-data dependencies using the `after` argument. For a single step, use `my_step(after="other_step")`. For multiple steps, pass a list: `my_step(after=["other_step", "other_step_2"])`. Refer to the [documentation](using-a-custom-step-invocation-id.md) for details on invocation IDs. +To enforce specific execution order constraints, you can use non-data dependencies by passing invocation IDs. For example, to run `my_step` after `other_step`, use: `my_step(after="other_step")`. For multiple upstream steps, pass a list: `my_step(after=["other_step", "other_step_2"])`. For more details on invocation IDs, refer to the [documentation here](using-a-custom-step-invocation-id.md). ```python from zenml import pipeline @@ -8410,132 +8441,118 @@ def example_pipeline(): step_3(step_1_output, step_2_output) ``` -In this example, `step_1` will only start after `step_2` has completed. +In this modified pipeline, `step_1` will only start after `step_2` has completed. 
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fetching-pipelines.md === -### Summary of Documentation on Inspecting Pipeline Runs and Outputs +### Summary: Inspecting a Finished Pipeline Run and Its Outputs #### Overview -This documentation describes how to inspect finished pipeline runs and their outputs in ZenML, focusing on accessing artifacts, metadata, and the lineage of pipeline runs. +This documentation outlines how to inspect completed pipeline runs and their outputs in ZenML, covering fetching pipelines, runs, steps, and artifacts. #### Pipeline Hierarchy -The structure of pipelines, runs, steps, and artifacts is represented as: -``` -pipelines -->|1:N| runs -runs -->|1:N| steps -steps -->|1:N| artifacts -``` - +- **Structure**: Pipelines have a 1-to-N relationship with runs, runs with steps, and steps with artifacts. + #### Fetching Pipelines -- **Get a Specific Pipeline:** +- **Get a Pipeline**: Use `Client.get_pipeline()` to fetch a specific pipeline. ```python from zenml.client import Client pipeline_model = Client().get_pipeline("first_pipeline") ``` - -- **List All Pipelines:** - - **Python:** +- **List Pipelines**: + - **Python**: ```python pipelines = Client().list_pipelines() ``` - - **CLI:** + - **CLI**: ```shell zenml pipeline list ``` -#### Working with Runs -- **Get All Runs of a Pipeline:** +#### Pipeline Runs +- **Get All Runs**: Access all runs of a pipeline via the `runs` property. ```python runs = pipeline_model.runs ``` - -- **Get the Last Run:** +- **Get Last Run**: Use `last_run` property or `runs[0]`. ```python last_run = pipeline_model.last_run # OR: pipeline_model.runs[0] ``` - -- **Execute Pipeline and Get Latest Run:** +- **Get Latest Run**: Execute the pipeline to get the latest run. ```python run = training_pipeline() ``` - -- **Fetch Specific Run:** +- **Get a Specific Run**: Use `Client.get_pipeline_run()` with the run ID. ```python pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") ``` #### Run Information -- **Status:** +- **Status**: Check the run status. ```python - status = run.status # Possible states: initialized, failed, completed, running, cached + status = run.status ``` - -- **Configuration:** +- **Configuration**: Access pipeline configurations. ```python pipeline_config = run.config - pipeline_settings = run.config.settings ``` - -- **Component-Specific Metadata:** +- **Component Metadata**: Access additional metadata via `run_metadata`. ```python run_metadata = run.run_metadata - orchestrator_url = run_metadata["orchestrator_url"].value ``` -#### Steps and Artifacts -- **Get Steps of a Run:** +#### Steps +- **Access Steps**: Use `steps` attribute to get all steps of a run. ```python steps = run.steps - step = run.steps["first_step"] ``` - -- **Accessing Output Artifacts:** +- **Step Information**: Access parameters, settings, and metadata. ```python - output = step.outputs["output_name"] # or step.output for single output - my_pytorch_model = output.load() + step_parameters = step.config.parameters ``` -- **Fetching Artifacts Directly:** +#### Artifacts +- **Access Outputs**: Use `outputs` to get output artifacts. + ```python + output = step.outputs["output_name"] + ``` +- **Fetch Artifacts**: Use `Client` to get artifacts directly. 
```python artifact = Client().get_artifact('iris_dataset') - output = artifact.versions['2022'] # Get specific version ``` #### Metadata and Visualizations -- **Access Metadata:** +- **Artifact Metadata**: Access metadata for artifacts. ```python output_metadata = output.run_metadata - storage_size_in_bytes = output_metadata["storage_size"].value ``` - -- **Visualizations:** +- **Visualizations**: Use `visualize()` for visualizations in Jupyter. ```python - output.visualize() # For Jupyter notebooks + output.visualize() ``` -#### Fetching Information During Run Execution -You can fetch information about previous runs while a pipeline is executing: -```python -from zenml import get_step_context -from zenml.client import Client +#### Fetching Information During Execution +- Fetch previous runs while a pipeline is executing using `get_step_context()`. + ```python + from zenml import get_step_context + from zenml.client import Client -@step -def my_step(): - current_run_name = get_step_context().pipeline_run.name - current_run = Client().get_pipeline_run(current_run_name) - previous_run = current_run.pipeline.runs[1] # Previous run -``` + @step + def my_step(): + current_run_name = get_step_context().pipeline_run.name + previous_run = Client().get_pipeline_run(current_run_name).pipeline.runs[1] + ``` #### Code Example -A complete example demonstrating how to load a trained model from a pipeline: +A complete script demonstrating the loading of a model from a pipeline: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split +from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @@ -8559,20 +8576,17 @@ if __name__ == "__main__": last_run = training_pipeline() model = last_run.steps["svc_trainer"].outputs["trained_model"].load() ``` - -This summary retains critical technical details while condensing the content for clarity and brevity. +This summary captures the essential technical details and code snippets necessary for understanding how to inspect pipeline runs and their outputs in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/reference-environment-variables-in-configurations.md === -### Summary of ZenML Environment Variable Configuration +### Reference Environment Variables in Configurations ZenML allows referencing environment variables in configurations using the syntax `${ENV_VARIABLE_NAME}`. -#### In-code Usage -You can reference an environment variable directly in your code as follows: - +#### In-Code Example ```python from zenml import step @@ -8581,16 +8595,14 @@ def my_step() -> None: ... ``` -#### Configuration File Usage -In a configuration file, you can reference environment variables like this: - +#### Configuration File Example ```yaml extra: value_from_environment: ${ENV_VAR} combined_value: prefix_${ENV_VAR}_suffix ``` -This feature enhances the flexibility of your configurations in both code and configuration files. +This feature enhances the flexibility of configurations in both code and configuration files. ================================================== @@ -8609,29 +8621,43 @@ You can specify tags for your pipeline runs in the following ways: 2. **Code**: - Using the `@pipeline` decorator: - ```python - @pipeline(tags=["tag_on_decorator"]) - def my_pipeline(): - ... 
- ``` + ```python + @pipeline(tags=["tag_on_decorator"]) + def my_pipeline(): + ... + ``` - Using the `with_options` method: - ```python - my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) - ``` + ```python + my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) + ``` -When you run the pipeline, tags from all specified locations will be merged and applied to the pipeline run. +When the pipeline is executed, tags from all specified locations will be merged and applied to the run. ================================================== -=== File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md === +=== File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md === + +### Summary of Hyperparameter Tuning with ZenML + +**Overview**: This documentation describes how to perform hyperparameter tuning using ZenML through a pipeline that implements a basic grid search for different learning rates. The process involves training models with various hyperparameters and selecting the best-performing model. + +**Key Components**: -### Hyperparameter Tuning with ZenML +1. **Train Step**: + - Function: `train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]` + - Purpose: Trains a model using the specified learning rate and returns the trained model. -ZenML enables hyperparameter tuning through a simple pipeline. The example below demonstrates a basic grid search for different learning rates using a `train_step`. After training, a `selection_step` identifies the best-performing hyperparameters using the fan-in, fan-out pipeline method. +2. **Selection Step**: + - Function: `selection_step(step_prefix: str, output_name: str) -> None` + - Purpose: Retrieves trained models based on their learning rates and evaluates them to determine the best model. It uses the ZenML Client to access outputs from previous steps. -#### Code Example +3. **Pipeline Definition**: + - Function: `my_pipeline(step_count: int) -> None` + - Purpose: Defines a pipeline that runs multiple training steps with different learning rates, followed by the selection step to find the optimal model. + - Example Usage: `my_pipeline(step_count=4)` +**Code Example**: ```python from typing import Annotated from sklearn.base import ClassifierMixin @@ -8642,14 +8668,14 @@ model_output_name = "my_model" @step def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: - return ... # Train a model with the specified learning rate. + return ... # Train model @step def selection_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) - trained_models_by_lr = {} + for step_name, step_info in run.steps.items(): if step_name.startswith(step_prefix): model = step_info.outputs[output_name][0].load() @@ -8657,7 +8683,7 @@ def selection_step(step_prefix: str, output_name: str) -> None: trained_models_by_lr[lr] = model for lr, model in trained_models_by_lr.items(): - ... # Evaluate models to determine the best one. + ... # Evaluate models @pipeline def my_pipeline(step_count: int) -> None: @@ -8671,20 +8697,21 @@ def my_pipeline(step_count: int) -> None: my_pipeline(step_count=4) ``` -#### Important Notes -- The current limitation is that a variable number of artifacts cannot be passed into a step programmatically; thus, the `selection_step` queries all artifacts from previous steps using the ZenML Client. 
-- For practical implementations, refer to the E2E example in the ZenML GitHub repository, specifically the `hp_tuning_single_search` and `hp_tuning_select_best_model` steps for tailored hyperparameter searches. +**Challenges**: Currently, a variable number of artifacts cannot be passed programmatically into a step, necessitating the use of the ZenML Client to query artifacts from previous steps. + +**Additional Resources**: For practical examples, refer to the E2E example in the ZenML GitHub repository, specifically the `hp_tuning_single_search` and `hp_tuning_select_best_model` steps, which provide templates for hyperparameter searches. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/README.md === -### Summary of GPU Resource Management in ZenML +# Summary of GPU Resource Management in ZenML -This documentation outlines how to configure ZenML to utilize GPU-backed hardware for machine learning pipelines, focusing on resource allocation and container settings. +## Overview +ZenML allows scaling machine learning pipelines to the cloud by utilizing GPU-backed hardware. This involves specifying resource requirements and ensuring proper container configurations. -#### 1. Specify Cloud Resources -To leverage powerful hardware or distribute tasks, use `ResourceSettings` to allocate resources for pipeline steps: +## Specifying Resource Requirements +To allocate resources for resource-intensive steps in a pipeline, use `ResourceSettings`: ```python from zenml.config import ResourceSettings @@ -8708,41 +8735,35 @@ def training_step(...) -> ...: # train a model ``` -Refer to each orchestrator's documentation for specific resource support. +Refer to orchestrator documentation for specific resource support. -#### 2. Ensure CUDA-Enabled Containers -To run GPU-accelerated steps, the container must have CUDA tools. Key configurations include: +### Container Configuration for GPU +1. **Use a CUDA-enabled Parent Image**: + Specify a CUDA-enabled image in `DockerSettings`: -- **Specify a CUDA-enabled parent image**: - -```python -from zenml import pipeline -from zenml.config import DockerSettings - -docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") - -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... -``` + ```python + from zenml import pipeline + from zenml.config import DockerSettings -- **Add ZenML as a pip requirement**: + docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") -```python -docker_settings = DockerSettings( - parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", - requirements=["zenml==0.39.1", "torchvision"] -) + @pipeline(settings={"docker": docker_settings}) + def my_pipeline(...): + ... + ``` -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... -``` +2. **Add ZenML as a Requirement**: + Ensure ZenML is included in the container requirements: -Ensure the chosen image is compatible with both local and remote environments. + ```python + docker_settings = DockerSettings( + parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", + requirements=["zenml==0.39.1", "torchvision"] + ) + ``` -#### 3. 
Reset CUDA Cache -To avoid GPU cache issues, consider resetting the CUDA cache between steps: +### Resetting CUDA Cache +To avoid GPU cache issues, reset the CUDA cache between steps: ```python import gc @@ -8758,23 +8779,23 @@ def training_step(...): # train a model ``` -#### 4. Multi-GPU Training -ZenML supports training across multiple GPUs on a single node. To do this effectively: - -- Create a script that handles training logic for parallel execution. -- Call this script from within the ZenML step, ensuring no multiple instances of ZenML are spawned. +### Multi-GPU Training +ZenML supports training across multiple GPUs on a single node. To implement this: +- Create a script/function for parallel training. +- Call this function from within the step, ensuring no multiple ZenML instances are spawned. -For assistance, users can connect via Slack. +For assistance, connect via [Slack](https://zenml.io/slack). -This summary captures the essential technical details for configuring GPU resources in ZenML, ensuring efficient execution of machine learning pipelines. +## Conclusion +Proper configuration of resources and containers is crucial for effective GPU utilization in ZenML pipelines. Follow the guidelines for resource allocation, container settings, and cache management to optimize performance. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md === -### Summary of Distributed Training with Hugging Face's Accelerate in ZenML +### Summary: Distributed Training with Hugging Face's Accelerate in ZenML -ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, allowing for the use of multiple GPUs or nodes. +ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, allowing users to leverage multiple GPUs or nodes effectively. #### Using 🤗 Accelerate in ZenML Steps To enable distributed execution in training steps, use the `run_with_accelerate` decorator: @@ -8793,56 +8814,52 @@ def training_pipeline(some_param: int, ...): training_step(some_param, ...) ``` -The decorator accepts arguments similar to the `accelerate launch` CLI command. For a complete list, refer to the [Accelerate CLI documentation](https://huggingface.co/docs/accelerate/en/package_reference/cli#accelerate-launch). +The decorator accepts arguments similar to the `accelerate launch` CLI command. For a full list of arguments, refer to the [Accelerate CLI documentation](https://huggingface.co/docs/accelerate/en/package_reference/cli#accelerate-launch). #### Configuration Key arguments for `run_with_accelerate` include: - `num_processes`: Number of processes for distributed training. - `cpu`: Force training on CPU. - `multi_gpu`: Enable distributed GPU training. -- `mixed_precision`: Set mixed precision mode ('no', 'fp16', or 'bf16'). +- `mixed_precision`: Set mixed precision mode ('no', 'fp16', 'bf16'). **Important Notes:** -1. Use the `@` syntax for decorators; do not call it as a function inside pipeline definitions. -2. Use keyword arguments for step calls. -3. Misuse raises a `RuntimeError` with usage guidance. - -For a full example, see the [llm-lora-finetuning project](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md). - -#### Container Configuration -To run steps with Accelerate, ensure your environment is set up correctly: - -1. 
**Specify a CUDA-enabled parent image** in `DockerSettings`: +1. Use `run_with_accelerate` directly on steps with the '@' syntax; it cannot be used as a function in pipeline definitions. +2. Use keyword arguments for calling steps. +3. Misuse raises a `RuntimeError` with guidance. -```python -from zenml.config import DockerSettings +For a comprehensive example, refer to the [llm-lora-finetuning](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md) project. -docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") +#### Container Configuration for Accelerate +To run steps with Accelerate, ensure your environment is properly configured: -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... -``` +1. **Specify a CUDA-enabled parent image in `DockerSettings`:** + ```python + from zenml.config import DockerSettings -2. **Add Accelerate as a pip requirement**: + docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") -```python -docker_settings = DockerSettings( - parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", - requirements=["accelerate", "torchvision"] -) + @pipeline(settings={"docker": docker_settings}) + def my_pipeline(...): + ... + ``` -@pipeline(settings={"docker": docker_settings}) -def my_pipeline(...): - ... -``` +2. **Add Accelerate as a pip requirement:** + ```python + docker_settings = DockerSettings( + parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", + requirements=["accelerate", "torchvision"] + ) -#### Multi-GPU Training -ZenML's Accelerate integration supports training with multiple GPUs on single or multiple nodes, enhancing performance for large datasets or complex models. Ensure your training step is wrapped with `run_with_accelerate` and configure the necessary arguments. + @pipeline(settings={"docker": docker_settings}) + def my_pipeline(...): + ... + ``` -For assistance with distributed training, connect with ZenML support via [Slack](https://zenml.io/slack). +#### Training Across Multiple GPUs +ZenML's Accelerate integration allows training with multiple GPUs on a single node or across nodes, which is beneficial for large datasets or complex models. Ensure your training step is wrapped with `run_with_accelerate`, configure the necessary arguments, and verify compatibility with distributed training. -By leveraging Accelerate in ZenML, you can efficiently scale training processes while maintaining pipeline structure. +For assistance, connect with ZenML support via [Slack](https://zenml.io/slack). By utilizing Accelerate, you can efficiently scale your training processes while maintaining the structure of ZenML pipelines. ================================================== @@ -8852,8 +8869,8 @@ By leveraging Accelerate in ZenML, you can efficiently scale training processes To use a private PyPI repository that requires authentication, follow these steps: -1. **Store Credentials Securely**: Use environment variables for sensitive information. -2. **Configure Package Managers**: Set up `pip` or `poetry` to utilize these credentials during package installation. +1. **Store Credentials Securely**: Use environment variables for credentials. +2. **Configure Package Managers**: Set up `pip` or `poetry` to utilize these credentials during package installations. 3. **Custom Docker Images**: Consider using Docker images pre-configured with authentication. 
#### Example Code for Authentication Setup
@@ -8881,7 +8898,7 @@ if __name__ == "__main__":
     my_pipeline()
 ```
 
-**Note**: Handle credentials with care and use secure methods for managing and distributing authentication information within your team.
+**Note**: Handle credentials with care and use secure methods for managing and sharing authentication information within your team.
 
 ==================================================
 
 === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md ===
 
 ### Summary: Using Docker Images to Run Your Pipeline
 
 #### Overview
-When running a pipeline with a remote orchestrator, ZenML dynamically generates a Dockerfile to build a Docker image. This process includes:
+When running a pipeline with a remote orchestrator, ZenML dynamically generates a Dockerfile and uses it to build a Docker image. This process includes:
 
 1. **Parent Image**: Starts from a ZenML-installed parent image, defaulting to the official ZenML image for the active Python environment. Custom parent images can be specified.
-2. **Pip Dependencies**: Automatically installs required dependencies based on integrations used in the stack. Additional requirements can be included as needed.
+2. **Pip Dependencies**: Automatically installs required dependencies based on stack integrations. Additional requirements can be included.
3. **Source Files**: Optionally copies source files into the Docker container for execution.
4. **Environment Variables**: Sets user-defined environment variables.

-For customization options, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings).
+For customization, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings).

#### Configuring Docker Settings
Docker settings can be configured in several ways:

-- **Pipeline Level**: Applies settings to all steps.
+- **Pipeline Level**:
  ```python
  from zenml.config import DockerSettings
  docker_settings = DockerSettings()
@@ -8912,14 +8929,16 @@ Docker settings can be configured in several ways:
       my_step()
 ```
 
-- **Step Level**: Allows for specialized Docker images for individual steps.
+- **Step Level**:
   ```python
+  docker_settings = DockerSettings()
+
   @step(settings={"docker": docker_settings})
   def my_step() -> None:
       pass
   ```
 
-- **YAML Configuration**: Use a YAML file to define settings.
+- **YAML Configuration**:
   ```yaml
   settings:
     docker:
@@ -8931,6 +8950,8 @@ Docker settings can be configured in several ways:
     ...
 ```
 
+Refer to the configuration hierarchy for precedence details.
+
 #### Specifying Docker Build Options
 To pass build options to the image builder:
 ```python
@@ -8941,15 +8962,15 @@ def my_pipeline(...):
     ...
 ```
 
-**Note**: On MacOS with ARM architecture, specify the target platform:
+**Note**: For macOS on ARM architecture, specify the target platform:
 ```python
 docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}})
 ```
 
-#### Custom Parent Images
-You can specify a custom parent image or Dockerfile. Ensure the image has Python, pip, and ZenML installed.
+#### Using a Custom Parent Image
+You can specify a custom parent image or Dockerfile. Ensure it has Python, pip, and ZenML installed.
 
-- **Using a Pre-built Parent Image**:
+- **Pre-built Parent Image**:
   ```python
   docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag")
 
@@ -8958,7 +8979,7 @@ You can specify a custom parent image or Dockerfile. 
Ensure the image has Python ... ``` -- **Skip Docker Builds**: +- **Skip Build**: ```python docker_settings = DockerSettings( parent_image="my_registry.io/image_name:tag", @@ -8970,47 +8991,51 @@ You can specify a custom parent image or Dockerfile. Ensure the image has Python ... ``` -**Warning**: Using pre-built images can lead to unintended behavior. Ensure code files are included correctly. For more details, refer to the guide on [using a prebuilt image](./use-a-prebuilt-image.md). +**Warning**: Using a pre-built image may lead to unintended behavior; ensure code files are included in the specified image. For more details, refer to the [prebuilt image documentation](./use-a-prebuilt-image.md). ================================================== === File: docs/book/how-to/customize-docker-builds/README.md === -### Using Docker Images to Run Your Pipeline +### Customize Docker Builds -ZenML executes pipeline steps sequentially in the active Python environment locally. For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in an isolated environment. This section covers how to customize the Docker build process. +ZenML runs pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, it builds Docker images to execute pipelines in isolated environments. This section outlines how to manage the dockerization process. **Key Points:** -- **Local Execution**: Runs in the active Python environment. -- **Remote Execution**: Utilizes Docker images for isolation. -- **Customization**: Users can control the Dockerization process. +- **Execution Environment:** Local vs. Docker images for remote execution. +- **Docker Usage:** Essential for isolated and defined environments in pipeline execution. -For more details on orchestrators and step operators, refer to the respective guides. +For more details, refer to the documentation on [orchestrators](../../user-guide/production-guide/cloud-orchestration.md) and [step operators](../../component-guide/step-operators/step-operators.md). ================================================== === File: docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md === -### ZenML Image Building Overview +### ZenML Image Build Process -ZenML determines the root directory for source files in the following order: -1. If `zenml init` has been run in the current or parent directory, that directory is used as the root. -2. If not, the parent directory of the executing Python file is used. +ZenML determines the root directory of your source files in the following order: +1. If `zenml init` has been executed in the current or parent directory, that directory is used as the root. +2. If not, the parent directory of the executing Python file is used as the source root. -### DockerSettings Attributes -You can control file handling in the Docker image using the following `DockerSettings` attributes: +For example, executing `python /path/to/file.py` sets the source root to `/path/to`. -- **`allow_download_from_code_repository`**: If `True`, files in a registered code repository with no local changes will be downloaded instead of included in the image. -- **`allow_download_from_artifact_store`**: If the previous option is `False`, and no suitable code repository exists, files will be archived and uploaded to the artifact store if this is `True`. -- **`allow_including_files_in_images`**: If both previous options are `False`, files will be included in the Docker image if this is `True`. 
Modifications to code files will require a new Docker image build. +#### DockerSettings Attributes +You can control how files in the root directory are handled using the following attributes in `DockerSettings`: -**Warning**: Setting all attributes to `False` is not recommended, as it may lead to unexpected behavior. You must ensure all files are correctly positioned in the Docker images used for pipeline execution. +- **`allow_download_from_code_repository`**: If `True`, files in a registered code repository without local changes will be downloaded instead of included in the image. + +- **`allow_download_from_artifact_store`**: If the previous option is `False`, and this is `True`, ZenML will archive and upload your code to the artifact store. + +- **`allow_including_files_in_images`**: If both previous options are `False`, and this is `True`, files will be included in the Docker image, necessitating a new image build for each code modification. -### File Management +> **Warning**: Setting all attributes to `False` is not recommended as it may lead to unexpected behavior. You must ensure files are correctly placed in Docker images for pipeline execution. + +#### File Management - **Excluding Files**: Use a `.gitignore` file to exclude files when downloading from a code repository. -- **Including Files**: Use a `.dockerignore` file to exclude files when including them in the Docker image. This can be done by: - - Creating a `.dockerignore` in the source root. - - Specifying a `.dockerignore` file explicitly: + +- **Including Files**: Use a `.dockerignore` file to exclude files when building the Docker image. You can either: + - Place a `.dockerignore` file in the source root. + - Specify a `.dockerignore` file explicitly: ```python docker_settings = DockerSettings(build_config={"dockerignore": "/path/to/.dockerignore"}) @@ -9020,19 +9045,19 @@ def my_pipeline(...): ... ``` -This setup helps manage which files are included or excluded during the Docker image build process in ZenML. +This setup ensures efficient management of files included in Docker images and their handling during the build process. ================================================== === File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md === -### Skip Building an Image for ZenML Pipeline Execution +### Skip Building an Image for ZenML Pipeline -ZenML allows you to skip the Docker image build process when running a pipeline on a remote Stack by using a prebuilt image. This can save time and costs, especially if your dependencies are large or your local system is slow. However, using a prebuilt image means you won't receive updates to your code or dependencies unless they are included in the image. +#### Overview +ZenML allows you to skip building a Docker image for your pipeline by using a prebuilt image. This can save time and costs, especially if your dependencies are large or your local system is slow. However, using a prebuilt image means you won't receive updates to your code or dependencies unless they are included in the image. #### Using Prebuilt Images - -To utilize a prebuilt image, configure the `DockerSettings` class by setting the `parent_image` and `skip_build` attributes: +To use a prebuilt image, configure the `DockerSettings` class as follows: ```python docker_settings = DockerSettings( @@ -9045,70 +9070,65 @@ def my_pipeline(...): ... ``` -Ensure that the specified image is available in a registry accessible by the orchestrator and other components. 
+Ensure the image is pushed to a registry accessible by your orchestrator. #### Requirements for the Parent Image - When using a prebuilt image, it must contain: +- All dependencies required for your pipeline. +- Any code files if no code repository is registered and `allow_download_from_artifact_store` is set to `False`. -1. **All Dependencies**: Ensure the image includes all necessary dependencies for your pipeline. -2. **Code Files**: If no code repository is registered and `allow_download_from_artifact_store` is `False`, include your code files in the image. +If the image was built by ZenML for the same stack, it can be reused directly. #### Stack and Integration Requirements +To ensure your image meets stack and integration requirements: -To build your image correctly, consider the following: - -- **Stack Requirements**: Retrieve the requirements of your active ZenML Stack: - - ```python - from zenml.client import Client - - stack_name = <YOUR_STACK> - Client().set_active_stack(stack_name) - active_stack = Client().active_stack - stack_requirements = active_stack.requirements() - ``` - -- **Integration Requirements**: Gather dependencies for required integrations: - - ```python - from zenml.integrations.registry import integration_registry - from zenml.integrations.constants import HUGGINGFACE, PYTORCH - import itertools - - required_integrations = [PYTORCH, HUGGINGFACE] - integration_requirements = set( - itertools.chain.from_iterable( - integration_registry.select_integration_requirements( - integration_name=integration, - target_os=OperatingSystemType.LINUX, - ) - for integration in required_integrations - ) - ) - ``` +1. **Stack Requirements**: + ```python + from zenml.client import Client -- **Project-Specific Requirements**: Install additional dependencies in your `Dockerfile`: + stack_name = <YOUR_STACK> + Client().set_active_stack(stack_name) + active_stack = Client().active_stack + stack_requirements = active_stack.requirements() + ``` - ```Dockerfile - RUN pip install <ANY_ARGS> -r FILE - ``` +2. **Integration Requirements**: + ```python + from zenml.integrations.registry import integration_registry + from zenml.integrations.constants import HUGGINGFACE, PYTORCH + import itertools + + required_integrations = [PYTORCH, HUGGINGFACE] + integration_requirements = set( + itertools.chain.from_iterable( + integration_registry.select_integration_requirements( + integration_name=integration, + target_os=OperatingSystemType.LINUX, + ) + for integration in required_integrations + ) + ) + ``` -- **System Packages**: Include any necessary `apt` packages: +#### Project-Specific and System Packages +- **Project-Specific Requirements**: + Include dependencies in your `Dockerfile`: + ```Dockerfile + RUN pip install <ANY_ARGS> -r FILE + ``` - ```Dockerfile - RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES - ``` +- **System Packages**: + Include necessary `apt` packages: + ```Dockerfile + RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES + ``` #### Code Files +- If a code repository is registered, ZenML will handle code files. +- If `allow_download_from_artifact_store` is `True`, ZenML will upload your code. +- If both options are disabled, include your code files in the image, preferably in the `/app` directory. -Your pipeline code must be accessible in the execution environment. Options include: - -- Registering a code repository to automatically fetch code. 
-- Setting `allow_download_from_artifact_store` to `True` to upload code to the artifact store.
-- Including code files directly in the image (not recommended).
-
-Ensure your code is in the `/app` directory and that Python, `pip`, and `zenml` are installed in your image.
+Ensure Python, `pip`, and `zenml` are installed in your image.
 
==================================================
 
=== File: docs/book/how-to/customize-docker-builds/use-your-own-docker-files.md ===
 
# Using Custom Docker Files in ZenML
 
-ZenML allows users to specify a custom Dockerfile, build context directory, and build options to dynamically create a parent Docker image for each pipeline execution. The build process is as follows:
-
-- **No Dockerfile Specified**: If requirements, environment variables, or file copying necessitate an image build, ZenML will create one. Otherwise, it uses the existing `parent_image`.
-
-- **Dockerfile Specified**: ZenML builds an image from the specified Dockerfile. If additional requirements necessitate another image, ZenML will build a second image; otherwise, it uses the first image for the pipeline.
+ZenML allows you to specify a custom Dockerfile, build context directory, and build options for dynamic parent image creation during pipeline execution. The build process is as follows:

-The `DockerSettings` object controls the installation order of requirements:
+- **No Dockerfile Specified**: If requirements or environment variables necessitate an image build, ZenML builds an image; otherwise, it uses the `parent_image`.
+- **Dockerfile Specified**: ZenML builds an image from the specified Dockerfile. If further requirements necessitate another image, ZenML builds a second image; otherwise, it uses the first image for the pipeline.

-1. Packages from the local Python environment.
+The `DockerSettings` object installs requirements in this order (each step optional):
+1. Local Python environment packages.
2. Packages from the `requirements` attribute.
3. Packages from `required_integrations` and stack requirements.

-**Note**: The intermediate image may also be used directly for executing pipeline steps based on Docker settings.
+**Note**: The intermediate image may also be used directly for executing pipeline steps.

### Example Code

```python
docker_settings = DockerSettings(
    dockerfile="/path/to/dockerfile",
@@ -9150,148 +9169,158 @@ def my_pipeline(...):

=== File: docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md ===

-### Summary: Reusing Builds in ZenML
+### Summary of ZenML Build Reuse Documentation

#### Overview
-This guide explains how to reuse builds in ZenML to enhance pipeline efficiency. A build encapsulates a pipeline and its stack, including Docker images and optionally the pipeline code.
+This guide explains how to reuse builds in ZenML to enhance pipeline efficiency. A build encapsulates a pipeline and its stack, including Docker images with requirements and optionally the pipeline code.

#### What is a Build?
-A build is a combination of a pipeline and the stack it runs on, containing necessary Docker images and configurations. To list builds for a specific pipeline, use:
+A build pairs a pipeline with the stack it runs on and bundles the required Docker images and configuration. 
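+A sketch of pinning a specific build when configuring a pipeline (assuming `with_options` accepts a `build` argument; the ID is a placeholder):
+
+```python
+# Pin an existing build, e.g. one found via `zenml pipeline builds list`
+my_pipeline = my_pipeline.with_options(build="<BUILD_ID>")
+```
+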
You can list builds using the CLI: ```bash zenml pipeline builds list --pipeline_id='startswith:ab53ca' ``` - To create a build manually: - ```bash zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance ``` #### Reusing Builds -ZenML automatically reuses existing builds if they match the current pipeline and stack. You can specify a build by passing its ID in the pipeline configuration. Note that reusing a build executes the code in the Docker image, not local changes. To include local changes, disconnect your code from the build by registering a code repository or using the artifact store. +ZenML automatically reuses existing builds if they match the pipeline and stack. You can specify a build ID in the pipeline configuration to force the use of a specific build. Note that reusing a build executes the code in the Docker image, not local changes. To incorporate local changes, disconnect your code from the build by registering a code repository or using the artifact store. -#### Artifact Store -Using the artifact store to upload code is the default behavior unless a code repository is detected and the `allow_download_from_artifact_store` flag is set to `False`. +#### Using the Artifact Store +ZenML can upload your code to the artifact store by default if no code repository is detected and the `allow_download_from_artifact_store` flag is not set to `False`. -#### Code Repositories -Connecting a code repository speeds up Docker builds by avoiding the need to rebuild images for every pipeline run. ZenML can build Docker images without source files and download them before execution. This approach is recommended for efficiency. To register a GitHub repository, install the integration: +#### Connecting Code Repositories +Registering a code repository speeds up Docker builds by allowing ZenML to build images without source files and download them before execution. This enables image reuse among team members. If you have a clean repository state and a connected git repository, ZenML will automatically reuse builds. +To install the necessary integration (e.g., GitHub): ```sh zenml integration install github ``` -#### Detecting Local Repositories -ZenML checks if the files used in a pipeline are tracked in registered repositories by computing the source root and verifying its inclusion in a local checkout. +#### Detecting Local Code Repository Checkouts +ZenML checks if the files used in a pipeline are tracked in registered repositories by computing the source root and verifying its inclusion in local checkouts. #### Tracking Code Versions -If a local checkout is detected, ZenML stores a reference to the current commit for reproducibility. This reference is only tracked if the local checkout is clean (no untracked or uncommitted files). To ignore untracked files, set the environment variable: - -```sh -export ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=True -``` +When a local code repository is detected, ZenML stores a reference to the current commit for the pipeline run. This reference is only tracked if the local checkout is clean. To ignore untracked files, set the `ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES` environment variable to `True`. #### Best Practices -- Ensure the local checkout is clean and the latest commit is pushed to avoid download failures. -- For options to enforce or disable file downloading, refer to the Docker settings documentation. +- Ensure the local checkout is clean and the latest commit is pushed to avoid file download failures in Docker. 
+- For options to enforce or disable file downloading, refer to the relevant documentation on Docker settings. -This guide provides essential practices for effectively reusing builds in ZenML, enhancing pipeline performance while ensuring code integrity. +This summary encapsulates the key points and technical details necessary for understanding build reuse in ZenML. ================================================== === File: docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md === -# Specifying Pip and Apt Dependencies +# Summary of Specifying Pip Dependencies and Apt Packages -**Note:** The configuration for pip and apt dependencies is applicable only in remote pipelines and ignored in local pipelines. +## Overview +The configuration for specifying pip and apt dependencies is applicable only in remote pipelines, not local ones. When using a remote orchestrator, a Dockerfile is generated at runtime to build the Docker image. -When a pipeline runs with a remote orchestrator, a Dockerfile is dynamically generated to build the Docker image. You can import `DockerSettings` using `from zenml.config import DockerSettings`. By default, ZenML installs all packages required by your active stack, but you can specify additional packages in several ways: +## Importing DockerSettings +Import `DockerSettings` using: +```python +from zenml.config import DockerSettings +``` -1. **Replicate Local Python Environment:** - ```python - docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") +## Default Behavior +ZenML installs all packages required by the active stack automatically. Additional packages can be specified in several ways: - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +### 1. Replicate Local Python Environment +To install all packages from the local environment: +```python +docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") -2. **Custom Command for Requirements:** - ```python - docker_settings = DockerSettings(replicate_local_python_environment=[ - "poetry", "export", "--extras=train", "--format=requirements.txt" - ]) +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +### 2. Custom Command for Requirements +Specify a custom command to output requirements: +```python +docker_settings = DockerSettings(replicate_local_python_environment=[ + "poetry", "export", "--extras=train", "--format=requirements.txt" +]) -3. **Specify Requirements in Code:** - ```python - docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +### 3. List of Requirements in Code +Define specific packages: +```python +docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) -4. **Specify a Requirements File:** - ```python - docker_settings = DockerSettings(requirements="/path/to/requirements.txt") +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +### 4. Requirements File +Specify a requirements file: +```python +docker_settings = DockerSettings(requirements="/path/to/requirements.txt") -5. 
**Specify ZenML Integrations:** - ```python - from zenml.integrations.constants import PYTORCH, EVIDENTLY +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` - docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY]) +### 5. ZenML Integrations +List required ZenML integrations: +```python +from zenml.integrations.constants import PYTORCH, EVIDENTLY - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY]) -6. **Specify Apt Packages:** - ```python - docker_settings = DockerSettings(apt_packages=["git"]) +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +### 6. List of Apt Packages +Specify apt packages: +```python +docker_settings = DockerSettings(apt_packages=["git"]) -7. **Prevent Automatic Installation of Stack Requirements:** - ```python - docker_settings = DockerSettings(install_stack_requirements=False) +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +### 7. Disable Automatic Stack Requirements Installation +Prevent automatic installation of stack requirements: +```python +docker_settings = DockerSettings(install_stack_requirements=False) -8. **Custom Docker Settings for Steps:** - ```python - docker_settings = DockerSettings(requirements=["tensorflow"]) +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` - @step(settings={"docker": docker_settings}) - def my_training_step(...): - ... - ``` +### 8. Custom Docker Settings for Steps +Specify custom settings for specific steps: +```python +docker_settings = DockerSettings(requirements=["tensorflow"]) -**Important:** You can combine these methods, but ensure there is no overlap in specified requirements. +@step(settings={"docker": docker_settings}) +def my_training_step(...): + ... +``` -**Installation Order:** -- Local Python environment packages -- Stack requirements (unless disabled) -- Required integrations -- Specified requirements +## Installation Order +ZenML installs requirements in the following order: +1. Local Python environment packages +2. Stack requirements (if not disabled) +3. Required integrations +4. Explicit requirements -**Additional Installer Arguments:** +## Additional Installer Arguments +Specify arguments for the package installer: ```python docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000}) @@ -9300,7 +9329,8 @@ def my_pipeline(...): ... ``` -**Experimental Feature:** Use `uv` for faster package resolution: +## Experimental: Using `uv` for Package Installation +To use `uv` for faster package installation: ```python docker_settings = DockerSettings(python_package_installer="uv") @@ -9308,7 +9338,9 @@ docker_settings = DockerSettings(python_package_installer="uv") def my_pipeline(...): ... ``` -**Caution:** `uv` is less stable than `pip`. If issues arise, revert to `pip`. For more details on using `uv` with PyTorch, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). +Note: `uv` is experimental and may cause installation errors; revert to `pip` if issues arise. + +For more details on `uv` with PyTorch, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). 
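+Finally, because these options can be combined (as long as the specified requirements don't overlap), a single `DockerSettings` can mix several of them. A sketch using only the attributes documented above (the `transformers` pin is illustrative):
+
+```python
+from zenml import pipeline
+from zenml.config import DockerSettings
+from zenml.integrations.constants import PYTORCH
+
+# Mix an integration, explicit requirements, apt packages, and installer args
+docker_settings = DockerSettings(
+    required_integrations=[PYTORCH],
+    requirements=["transformers==4.40.0"],  # illustrative version pin
+    apt_packages=["git"],
+    python_package_installer_args={"timeout": 1000},
+)
+
+@pipeline(settings={"docker": docker_settings})
+def my_pipeline():
+    ...
+```
+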
================================================== @@ -9316,12 +9348,9 @@ def my_pipeline(...): ### Summary of Docker Settings Customization in ZenML -In ZenML, you can customize Docker settings at the step level, allowing different steps in a pipeline to use distinct Docker images as needed. By default, all steps inherit the Docker image defined at the pipeline level. - -#### Customizing Docker Settings in Step Decorator - -You can specify a different Docker image for a step by using the `DockerSettings` in the step decorator: +You can customize Docker settings at the step level in a ZenML pipeline, allowing different steps to use distinct Docker images. By default, all steps inherit the Docker image defined at the pipeline level. To specify a different image for a step, use the `DockerSettings` in the step decorator or in a configuration file. +#### Step Decorator Example ```python from zenml import step from zenml.config import DockerSettings @@ -9337,10 +9366,7 @@ def training(...): ... ``` -#### Customizing Docker Settings in Configuration File - -Alternatively, you can define Docker settings in a configuration file: - +#### Configuration File Example ```yaml steps: training: @@ -9355,79 +9381,72 @@ steps: - numpy ``` -This allows you to manage dependencies and integrations effectively for each step. +This allows for tailored environments based on the specific needs of each step. ================================================== === File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md === -### Image Builder Definition in ZenML +### Summary: Defining the Image Builder in ZenML -ZenML executes pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, it builds Docker images for isolated execution. By default, execution environments are created using the local Docker client, which requires Docker installation and permissions. +ZenML executes pipeline steps sequentially in the local Python environment but builds Docker images for remote orchestrators or step operators to ensure an isolated environment. By default, it uses the local Docker client, which requires Docker installation and permissions. -ZenML provides **image builders**, a specialized stack component for building and pushing Docker images in a dedicated environment. If no image builder is configured in your stack, ZenML defaults to the **local image builder** to ensure consistency across builds. In this scenario, the image builder environment matches the client environment. +ZenML provides **image builders**, a stack component that allows users to build and push Docker images in a specialized environment. If no image builder is configured, ZenML defaults to the **local image builder** to maintain consistency across builds, using the same environment as the client. -You do not need to directly interact with the image builder in your code. As long as the desired image builder is part of your active ZenML stack, it will be automatically utilized by any component requiring container image builds. +Users do not need to interact directly with image builders in their code; as long as the desired image builder is included in the active ZenML stack, it will be automatically utilized by any component requiring container image builds. 
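+For illustration only (the component and stack names are hypothetical, and flag spellings may vary between ZenML versions), registering an image builder and attaching it to a stack could look like this:
+
+```bash
+# Register a local image builder and attach it to an existing stack; any
+# component that needs a container image built will then use it automatically.
+zenml image-builder register local_builder --flavor=local
+zenml stack update my_stack -i local_builder
+```
+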
================================================== === File: docs/book/how-to/manage-zenml-server/README.md === -# Manage Your ZenML Server +# Manage your ZenML Server -This section provides best practices for upgrading your ZenML server, tips for production use, and troubleshooting guidance. It includes recommended upgrade steps and migration guides for transitioning between specific versions. +This section provides best practices for upgrading your ZenML server, production usage tips, and troubleshooting guidance. It includes recommended upgrade steps and migration guides for transitioning between specific versions. -### Key Points: -- **Upgrading**: Follow the recommended steps for upgrading your ZenML server. -- **Production Use**: Tips for effectively using ZenML in a production environment. -- **Troubleshooting**: Guidance on resolving common issues. -- **Migration Guides**: Instructions for moving between certain versions of ZenML. - -For visual reference, an image of the ZenML Scarf is included. + ================================================== === File: docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md === -### Best Practices for Upgrading ZenML +# Best Practices for Upgrading ZenML -#### Upgrading Your Server +## Upgrading Your Server -1. **Data Backups** - - **Database Backup**: Create a backup of your MySQL database before upgrading. - - **Automated Backups**: Set up daily automated backups using services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. +### Data Backups +- **Database Backup**: Create a backup of your MySQL database before upgrading to allow rollback if necessary. +- **Automated Backups**: Set up daily automated backups using services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. -2. **Upgrade Strategies** - - **Staged Upgrade**: Use two ZenML server instances (old and new) for gradual migration. - - **Team Coordination**: Synchronize upgrade timing among teams to reduce disruption. - - **Separate ZenML Servers**: Consider dedicated servers for different teams to allow flexible upgrade schedules. +### Upgrade Strategies +- **Staged Upgrade**: Use two ZenML server instances (old and new) to migrate services incrementally. +- **Team Coordination**: Coordinate upgrade timing among teams sharing a ZenML server to minimize disruption. +- **Separate ZenML Servers**: For teams needing different upgrade schedules, consider using dedicated ZenML server instances. - > ZenML Pro supports multi-tenancy, enabling multiple servers for different teams. +### Minimizing Downtime +- **Upgrade Timing**: Schedule upgrades during low-activity periods. +- **Avoid Mid-Pipeline Upgrades**: Prevent interruptions to long-running pipelines during automated upgrades. -3. **Minimizing Downtime** - - **Upgrade Timing**: Schedule upgrades during low-activity periods. - - **Avoid Mid-Pipeline Upgrades**: Prevent interruptions to long-running pipelines during upgrades. +## Upgrading Your Code -#### Upgrading Your Code +### Testing and Compatibility +- **Local Testing**: Test locally after upgrading with `pip install zenml --upgrade` and run old pipelines to check compatibility. +- **End-to-End Testing**: Develop simple tests to ensure the new version works with your pipeline code. Refer to ZenML's [extensive test suite](https://github.com/zenml-io/zenml/tree/main/tests). +- **Artifact Compatibility**: Be cautious with pickle-based materializers. Load older artifacts to check compatibility using: -1. 
**Testing and Compatibility** - - **Local Testing**: Test locally after upgrading (`pip install zenml --upgrade`) and run old pipelines for compatibility. - - **End-to-End Testing**: Develop simple tests to ensure compatibility with your pipeline code. - - **Artifact Compatibility**: Be cautious with pickle-based materializers. Use version-agnostic methods for critical artifacts. Load older artifacts using: - ```python - from zenml.client import Client +```python +from zenml.client import Client - artifact = Client().get_artifact_version('YOUR_ARTIFACT_ID') - loaded_artifact = artifact.load() - ``` +artifact = Client().get_artifact_version('YOUR_ARTIFACT_ID') +loaded_artifact = artifact.load() +``` -2. **Dependency Management** - - **Python Version**: Ensure compatibility with the ZenML version. Refer to the [installation guide](../../getting-started/installation.md). - - **External Dependencies**: Check for compatibility issues with external dependencies in the [release notes](https://github.com/zenml-io/zenml/releases). +### Dependency Management +- **Python Version**: Ensure your Python version is compatible with the new ZenML version. Refer to the [installation guide](../../getting-started/installation.md). +- **External Dependencies**: Check for compatibility of external dependencies with the new ZenML version in the [release notes](https://github.com/zenml-io/zenml/releases). -3. **Handling API Changes** - - **Changelog Review**: Review the [changelog](https://github.com/zenml-io/zenml/releases) for breaking changes and new syntax. - - **Migration Scripts**: Utilize provided [migration scripts](migration-guide/migration-guide.md) for database schema changes. +### Handling API Changes +- **Changelog Review**: Review the [changelog](https://github.com/zenml-io/zenml/releases) for new syntax and breaking changes. +- **Migration Scripts**: Use available [migration scripts](migration-guide/migration-guide.md) for database schema changes. By following these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server. Adapt these guidelines to your specific environment and infrastructure. @@ -9437,91 +9456,81 @@ By following these best practices, you can minimize risks and ensure a smoother # Best Practices for Using ZenML Server in Production -This guide provides best practices for setting up a ZenML server in production environments, focusing on performance, scalability, and reliability. +## Overview +This guide provides best practices for deploying a ZenML server in production environments, focusing on performance, scalability, and reliability. + +### Key Recommendations -## Autoscaling Replicas +#### Autoscaling Replicas To handle larger and longer-running pipelines, enable autoscaling based on your deployment environment: -### Kubernetes with Helm -Use the following configuration in your Helm chart: -```yaml -autoscaling: - enabled: true - minReplicas: 1 - maxReplicas: 10 - targetCPUUtilizationPercentage: 80 -``` +- **Kubernetes with Helm**: + ```yaml + autoscaling: + enabled: true + minReplicas: 1 + maxReplicas: 10 + targetCPUUtilizationPercentage: 80 + ``` -### ECS (AWS) -1. Go to the ECS console and select your ZenML service. -2. Click "Update Service" and enable autoscaling in the "Service auto scaling - optional" section. +- **ECS**: Use the ECS console to enable autoscaling and set minimum/maximum tasks. -### Cloud Run (GCP) -1. Access the Cloud Run console and select your service. -2. 
Click "Edit & Deploy new Revision" and set minimum and maximum instances in the "Revision auto-scaling" section. +- **Cloud Run**: Set minimum instances to at least 1 for warm starts. -### Docker Compose -Scale your service using: -```bash -docker compose up --scale zenml-server=N -``` +- **Docker Compose**: + ```bash + docker compose up --scale zenml-server=N + ``` -## High Connection Pool Values -Increase the thread pool size for better performance: +#### High Connection Pool Values +Increase server thread usage by adjusting `zenml.threadPoolSize`: ```yaml zenml: threadPoolSize: 100 ``` -Set `ZENML_SERVER_THREAD_POOL_SIZE` for other deployments. Adjust `zenml.database.poolSize` and `zenml.database.maxOverflow` accordingly. +Ensure database settings (`zenml.database.poolSize` and `zenml.database.maxOverflow`) accommodate this increase. -## Scaling the Backing Database +#### Scaling the Backing Database Monitor and scale your database based on: - **CPU Utilization**: Scale if consistently above 50%. - **Freeable Memory**: Scale if below 100-200 MB. -## Setting Up Ingress/Load Balancer -Securely expose your ZenML server using an ingress/load balancer: +#### Setting Up Ingress/Load Balancer +Securely expose your ZenML server: -### Kubernetes with Helm -Enable ingress with: -```yaml -zenml: - ingress: - enabled: true - className: "nginx" -``` +- **Kubernetes with Helm**: + ```yaml + zenml: + ingress: + enabled: true + className: "nginx" + ``` -### ECS -Use Application Load Balancers for traffic routing. +- **ECS**: Use Application Load Balancers. -### Cloud Run -Utilize Cloud Load Balancing for service traffic. +- **Cloud Run**: Use Cloud Load Balancing. -### Docker Compose -Set up an NGINX reverse proxy for routing. +- **Docker Compose**: Set up NGINX as a reverse proxy. -## Monitoring -Ensure smooth operation with monitoring tools: +#### Monitoring +Use appropriate tools for monitoring based on your deployment: -### Kubernetes with Helm -Use Prometheus and Grafana: -```plaintext -sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) -``` +- **Kubernetes**: Set up Prometheus and Grafana. + ```plaintext + sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) + ``` -### ECS -Utilize CloudWatch for metrics on CPU and memory utilization. +- **ECS**: Utilize CloudWatch for metrics. -### Cloud Run -Use Cloud Monitoring for metrics on container performance. +- **Cloud Run**: Use Cloud Monitoring for resource metrics. -## Backups +#### Backups Implement a backup strategy to protect critical data: -- Automate backups with a retention period (e.g., 30 days). -- Periodically export data to external storage (e.g., S3, GCS). -- Perform manual backups before upgrades. +- Automated backups with a retention period (e.g., 30 days). +- Periodic data exports to external storage (e.g., S3, GCS). +- Manual backups before server upgrades. -These practices will help ensure that your ZenML server operates efficiently and reliably in a production environment. +By following these practices, you can ensure a robust and efficient deployment of your ZenML server in production. ================================================== @@ -9529,105 +9538,117 @@ These practices will help ensure that your ZenML server operates efficiently and ### ZenML Server Upgrade Guide -#### Overview -Upgrading your ZenML server varies based on the deployment method. Always refer to the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. 
Upgrade promptly after new versions are released to benefit from improvements and fixes. +This guide details how to upgrade your ZenML server based on the deployment method. Always refer to the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. + +#### General Recommendation +Upgrade your ZenML server promptly after a new version release to benefit from improvements and fixes. + +--- + +### Upgrade Methods -#### Upgrade Methods +#### 1. Docker -##### Docker -1. **Backup Data**: Ensure data is persisted (on persistent storage or external MySQL). Optionally, perform a backup. -2. **Delete Existing Container**: +To upgrade using Docker: +- **Ensure data persistence** (on persistent storage or external MySQL) and consider backing up before upgrading. + +**Steps:** +1. Find and stop the existing container: ```bash - docker ps # Find your container ID + docker ps docker stop <CONTAINER_ID> docker rm <CONTAINER_ID> ``` -3. **Deploy New Version**: +2. Deploy the new version of the `zenml-server` image: ```bash docker run -it -d -p 8080:8080 --name <CONTAINER_NAME> zenmldocker/zenml-server:<VERSION> ``` -##### Kubernetes with Helm -- **In-Place Upgrade** (no configuration changes): +--- + +#### 2. Kubernetes with Helm + +**Simple In-Place Upgrade:** +If no configuration changes are needed: +```bash +helm -n <namespace> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> --reuse-values +``` + +**Upgrade with Configuration Changes:** +1. Extract current configuration: + ```bash + helm -n <namespace> get values zenml-server > custom-values.yaml + ``` +2. Modify `custom-values.yaml` as needed. +3. Upgrade using the modified values: ```bash - helm -n <namespace> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> --reuse-values + helm -n <namespace> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> -f custom-values.yaml ``` -- **Upgrade with Configuration Changes**: - 1. Extract current configuration: - ```bash - helm -n <namespace> get values zenml-server > custom-values.yaml - ``` - 2. Modify `custom-values.yaml` as needed. - 3. Upgrade using modified values: - ```bash - helm -n <namespace> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> -f custom-values.yaml - ``` +> **Note:** Avoid changing the container image tag in the Helm chart unless necessary, as compatibility is not guaranteed. -**Note**: Avoid changing the container image tag in the Helm chart unless necessary, as each chart version is tested with the default image tag. +--- -#### Important Considerations -- **Downgrading**: Not supported and may cause unexpected behavior. -- **Python Client Version**: Should match the server version. +### Important Notes +- **Downgrading** to an older version is unsupported and may cause issues. +- Ensure the **Python client version** matches the server version for compatibility. ================================================== === File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md === -# Troubleshooting ZenML Deployment +# Troubleshooting Tips for ZenML Deployment ## Viewing Logs -Logs are essential for debugging ZenML deployment issues. The method to view logs varies based on your deployment type. +To debug issues in ZenML deployment, check the logs based on your deployment method: ### Kubernetes -1. Check running pods: +1. List running pods: ```bash kubectl -n <KUBERNETES_NAMESPACE> get pods ``` -2. 
If pods aren't running, get logs for all pods: +2. If pods aren't running, view logs for all pods: ```bash kubectl -n <KUBERNETES_NAMESPACE> logs -l app.kubernetes.io/name=zenml ``` -3. For specific container logs (either `zenml-db-init` or `zenml`): +3. For specific container logs (use `zenml-db-init` for failing pods in `Init` state): ```bash kubectl -n <KUBERNETES_NAMESPACE> logs -l app.kubernetes.io/name=zenml -c <CONTAINER_NAME> ``` - Use `--tail` to limit lines or `--follow` for real-time logs. + - Use `--tail` to limit lines or `--follow` for real-time logs. ### Docker -1. For ZenML server deployed with CLI: +- For `zenml login --local --docker`: ```shell zenml logs -f ``` -2. For `docker run`: +- For `docker run`: ```shell docker logs zenml -f ``` -3. For `docker compose`: +- For `docker compose`: ```shell docker compose -p zenml logs -f ``` ## Fixing Database Connection Problems -Common MySQL connection issues can be identified through `zenml-db-init` logs: - +Common MySQL connection issues: - **Access Denied**: Check username/password. -- **Can't Connect to MySQL**: Verify the host. +- **Can't Connect**: Verify host/IP. Test connection: ```bash mysql -h <HOST> -u <USER> -p ``` -For Kubernetes, use `kubectl port-forward` to connect to the database locally. +- For Kubernetes, use `kubectl port-forward` to connect locally. ## Fixing Database Initialization Problems -If migrating from a newer to an older ZenML version results in `Revision not found` errors, follow these steps: - +If migrating from a newer to an older ZenML version results in `Revision not found` errors: 1. Log in to MySQL: ```bash mysql -h <HOST> -u <NAME> -p ``` -2. Drop the existing database: +2. Drop the database: ```sql drop database <NAME>; ``` @@ -9635,190 +9656,158 @@ If migrating from a newer to an older ZenML version results in `Revision not fou ```sql create database <NAME>; ``` -4. Restart Kubernetes pods or Docker container to reinitialize the database. - -================================================== - -=== File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md === - -### Migration Guide from ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 - -**Warning:** Migrating to `0.30.0` results in non-reversible database changes; downgrading to `<=0.23.0` is not possible. If using an older version, follow the [0.20.0 Migration Guide](migration-zero-twenty.md) first to avoid database migration issues. - -**Key Changes:** -- ZenML 0.30.0 removes the `ml-pipelines-sdk` dependency. -- Pipeline runs and artifacts are now stored natively in the ZenML database. - -**Migration Steps:** -1. Install ZenML 0.30.0: - ```bash - pip install zenml==0.30.0 - zenml version # Should show 0.30.0 - ``` -2. Database migration occurs automatically upon executing any `zenml ...` CLI command after installation. +4. Restart Kubernetes pods or Docker container to reinitialize the database. ================================================== -=== File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md === - -# Migration Guide: ZenML 0.13.2 to 0.20.0 - -**Last Updated: 2023-07-24** - -ZenML 0.20.0 introduces significant architectural changes, some of which are not backward compatible. This guide outlines the migration process for existing ZenML stacks and pipelines. - -## Key Changes - -1. **Metadata Store**: ZenML now manages its own Metadata Store, eliminating the need for external stores. If using remote Metadata Stores, switch to a ZenML server deployment. - -2. 
**ZenML Dashboard**: A new dashboard is available with all ZenML deployments. - -3. **Profiles Removal**: ZenML Profiles are replaced by Projects. Existing Profiles must be manually migrated. - -4. **Decoupled Configuration**: Stack Component configuration is now separate from implementation. Custom components may require updates. +=== File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md === -5. **Collaborative Features**: The updated ZenML server allows sharing of stacks and components among users. +### Migration Guide: ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 -## Migration Steps +**Important Note:** Migrating to ZenML `0.30.0` involves non-reversible database changes; downgrading to `<=0.23.0` post-migration is not possible. If using an older version, first complete the [0.20.0 Migration Guide](migration-zero-twenty.md) to avoid database migration issues. -### 1. Update ZenML +**Key Changes:** +- ZenML `0.30.0` removes the `ml-pipelines-sdk` dependency, enabling native storage of pipeline runs and artifacts in the ZenML database. +- Database migration occurs automatically upon executing any `zenml ...` CLI command after installation. -To revert to version 0.13.2 if needed: +**Installation Example:** ```bash -pip install zenml==0.13.2 +pip install zenml==0.30.0 +zenml version # Should display 0.30.0 ``` -### 2. Migrate Pipeline Runs +================================================== -Use the `zenml pipeline runs migrate` command to transfer existing runs: -- Backup metadata stores before upgrading. -- Upgrade to ZenML 0.22.0, then migrate runs. +=== File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md === -**Local SQLite Migration**: -```bash -zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db -``` +### Migration Guide: ZenML 0.13.2 to 0.20.0 -**Other Stores**: -```bash -zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD -``` +**Last Updated:** 2023-07-24 -### 3. ZenML Server Deployment +ZenML 0.20.0 introduces significant architectural changes that are not backward compatible. This guide provides essential instructions for migrating existing ZenML stacks and pipelines. -- Deploy a ZenML server using: -```bash -zenml up -``` -- Connect to a server: -```bash -zenml connect -``` +#### Key Changes: +- **Metadata Store:** ZenML now manages its own Metadata Store, eliminating the need for external stores. Existing remote Metadata Stores must be replaced with a ZenML server deployment. +- **ZenML Dashboard:** A new dashboard is available for all ZenML deployments. +- **Profiles Removal:** ZenML Profiles are replaced by Projects. Existing Profiles must be manually migrated. +- **Decoupled Configuration:** Stack component configuration is now separate from implementation, requiring updates for custom components. +- **Collaborative Features:** The ZenML server allows sharing of stacks and components among users. -### 4. Migrate Profiles to Projects +#### Migration Steps: +1. **Backup Metadata:** Before upgrading, back up all existing metadata stores. +2. **Upgrade ZenML:** Use `pip install zenml==0.20.0`. +3. 
**Migrate Pipeline Runs:** + - For local SQLite: + ```bash + zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db + ``` + - For other stores (MySQL): + ```bash + zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD + ``` -1. Update ZenML to 0.20.0 (Profiles will be invalidated). -2. Connect to the ZenML server. -3. Use: -```bash -zenml profile list -zenml profile migrate /path/to/profile -``` +4. **Deploy ZenML Server:** + - Local: `zenml up` + - Cloud: `zenml deploy --aws` -### 5. Configuration Changes +5. **Migrate Profiles:** + - Update ZenML to 0.20.0. + - Connect to the ZenML server: `zenml connect`. + - Use: + ```bash + zenml profile list + zenml profile migrate /path/to/profile + ``` -- **Rename Classes**: - - `Repository` to `Client` - - `BaseStepConfig` to `BaseParameters` +#### Configuration Changes: +- **Class Renaming:** + - `Repository` → `Client` + - `BaseStepConfig` → `BaseParameters` -- **New Configuration Method**: - - Use `BaseSettings` for runtime configurations. +- **Configuration Method Changes:** + - Remove `@enable_xxx` decorators; use settings directly in the `@step` decorator. + - Replace `pipeline.with_config(...)` with `pipeline.run(config_path=...)`. -**Example**: +#### Example Migration: +**Old Decorator:** +```python +@step(experiment_tracker="mlflow_stack_comp_name") +``` +**New Format:** ```python @step( experiment_tracker="mlflow_stack_comp_name", - settings={"experiment_tracker.mlflow": {"experiment_name": "name", "nested": False}} + settings={"experiment_tracker.mlflow": {"experiment_name": "name"}} ) ``` -### 6. Shared Stacks and Components - -Stacks can be shared with: -```bash -zenml stack register mystack --share -``` - -### 7. Other Changes - -- **PipelineSpec**: Pipelines are uniquely identified post-execution. -- **Post-execution Workflow**: Use new methods for fetching pipelines and runs: -```python -from zenml.post_execution import get_pipelines, get_pipeline -``` - -## Future Changes - -Expect further changes, including potential moves of the secrets manager out of the stack and deprecation of `StepContext`. +#### Important Notes: +- The ZenML Dashboard currently displays only the `default` project. +- The Metadata Store is now integrated into ZenML, and previous implementations are deprecated. +- Ensure that the ZenML server is deployed in proximity to the pipelines for optimal performance. -## Reporting Bugs +#### Future Changes: +- Potential removal of the secrets manager from the stack. +- Deprecation of `StepContext`. -For issues or feature requests, engage with the ZenML community on [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). +For further assistance, engage with the ZenML community on Slack or report issues on GitHub. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md === -### ZenML Migration Guide +# ZenML Migration Guide -Migrations are required for ZenML releases with breaking changes, specifically for minor version increments (e.g., `0.X` to `0.Y`) and major version increments (e.g., `0.1.X` to `0.2.X`). +## Overview +Migrations are required for ZenML releases with breaking changes, specifically for minor version increments (e.g., `0.X` to `0.Y`) and major version changes (e.g., `0.1.X` to `0.2.X`). 
-#### Release Type Examples +## Release Type Examples - **No Breaking Changes**: `0.40.2` to `0.40.3` (no migration needed) - **Minor Breaking Changes**: `0.40.3` to `0.41.0` (migration required) -- **Major Breaking Changes**: `0.39.1` to `0.40.0` (significant code changes) +- **Major Breaking Changes**: `0.39.1` to `0.40.0` (significant code changes required) -#### Major Migration Guides -Follow these sequential guides for major version migrations: +## Major Migration Guides +Follow these guides sequentially for major version migrations: - [0.13.2 → 0.20.0](migration-zero-twenty.md) - [0.23.0 → 0.30.0](migration-zero-thirty.md) - [0.39.1 → 0.41.0](migration-zero-forty.md) - [0.58.2 → 0.60.0](migration-zero-sixty.md) -#### Release Notes -For minor breaking changes, refer to the official [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for details on changes introduced. +## Release Notes +For minor breaking changes, refer to the [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for details on changes introduced. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md === -### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2 Edition) +### Migration from ZenML 0.58.2 to 0.60.0 (Pydantic 2 Edition) -**Overview** -ZenML has upgraded to Pydantic v2, introducing critical updates and stricter validation. While user experience remains largely unchanged, validation errors may arise due to these changes. For issues, contact us on [GitHub](https://github.com/zenml-io/zenml) or [Slack](https://zenml.io/slack-invite). +#### Overview +ZenML has upgraded to Pydantic v2, introducing critical updates and stricter validation processes. Users may experience unexpected behavior or validation errors due to these changes. -**Dependency Changes** -- **SQLModel**: Upgraded from `0.0.8` to `0.0.18` for compatibility with Pydantic v2. -- **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy may need to migrate their code; refer to [SQLAlchemy migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). +#### Key Dependency Changes +- **SQLModel**: Upgraded from `0.0.8` to `0.0.18` to ensure compatibility with Pydantic v2. +- **SQLAlchemy**: Upgraded from v1 to v2; users of SQLAlchemy may need to migrate their code. Refer to [SQLAlchemy migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). -**Pydantic v2 Features** -Pydantic v2 introduces performance improvements and new features in model design, configuration, validation, and serialization. For a complete list of changes, see the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/). +#### Pydantic v2 Features +- Enhanced performance through Rust integration. +- New features in model design, configuration, validation, and serialization. For detailed changes, see the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/). -**Integration Changes** +#### Integration Updates - **Airflow**: Removed dependencies due to incompatibility with SQLAlchemy v1. Use ZenML for pipeline creation and a separate environment for Airflow. -- **AWS**: Upgraded `sagemaker` to `2.172.0` for compatibility with `protobuf` 4. -- **Evidently**: Updated to versions `0.4.16` to `0.4.22` for Pydantic v2 compatibility. +- **AWS**: Upgraded `sagemaker` to `2.172.0` to support `protobuf` 4. +- **Evidently**: Updated to versions `0.4.16` to `0.4.22` for compatibility with Pydantic v2. 
- **Feast**: Removed extra `redis` dependency for compatibility. -- **GCP/Kubeflow**: Upgraded `kfp` to v2, eliminating Pydantic dependencies. Expect functional changes in vertex step operator. -- **Great Expectations**: Updated dependency to `great-expectations>=0.17.15,<1.0`. -- **MLflow**: Compatible with both Pydantic versions, but may downgrade to v1 if installed incorrectly. -- **Label Studio**: Updated to support Pydantic v2. -- **Skypilot**: Compatibility issues with `azurecli`; `skypilot_azure` integration deactivated. -- **TensorFlow**: Requires `tensorflow>=2.12.0` due to dependency issues with `protobuf`. -- **Tekton**: Updated to use `kfp` v2, ensuring compatibility. +- **GCP & Kubeflow**: Upgraded `kfp` dependency to v2, removing Pydantic v1 requirement. Functional changes may occur; see the [Kubeflow migration guide](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). +- **Great Expectations**: Set dependency to `great-expectations>=0.17.15,<1.0` for Pydantic v2 compatibility. +- **MLflow**: Compatible with both Pydantic versions, but may downgrade Pydantic to v1 during installation. Expect deprecation warnings. +- **Label Studio**: Updated to support Pydantic v2 in its 1.0 version. +- **Skypilot**: `skypilot[azure]` integration deactivated due to incompatibility with Azure CLI. Users should stay on the previous ZenML version until resolved. +- **TensorFlow**: Requires TensorFlow `>=2.12.0` due to dependency changes. Issues may arise on Python 3.8; consider using a higher Python version. +- **Tekton**: Updated to use `kfp` v2, resolving previous compatibility issues. -**Important Note** -Upgrading to ZenML 0.60.0 may cause dependency issues, especially with integrations not supporting Pydantic v2. It is recommended to set up a fresh Python environment for a smoother transition. +#### Upgrade Recommendations +When upgrading to ZenML 0.60.0, users may face dependency issues, especially with integrations not supporting Pydantic v2. It is recommended to set up a fresh Python environment for the upgrade. ================================================== @@ -9826,15 +9815,15 @@ Upgrading to ZenML 0.60.0 may cause dependency issues, especially with integrati # Migration Guide: ZenML 0.39.1 to 0.41.0 -## Overview -ZenML versions 0.40.0 to 0.41.0 introduced a new syntax for defining steps and pipelines. The old syntax is deprecated and will be removed in future releases. +ZenML versions 0.40.0 to 0.41.0 introduced a new syntax for defining steps and pipelines. While the old syntax is still functional, it is deprecated and will be removed in future releases. -## Old Syntax vs. 
New Syntax +## Overview -### Step Definition -**Old Syntax:** +### Old Syntax Example ```python +from typing import Optional from zenml.steps import BaseParameters, Output, StepContext, step +from zenml.pipelines import pipeline class MyStepParameters(BaseParameters): param_1: int @@ -9845,114 +9834,77 @@ def my_step(params: MyStepParameters, context: StepContext) -> Output(int_output result = int(params.param_1 * (params.param_2 or 1)) result_uri = context.get_output_artifact_uri() return result, result_uri + +@pipeline +def my_pipeline(my_step): + my_step() + +step_instance = my_step(params=MyStepParameters(param_1=17)) +pipeline_instance = my_pipeline(my_step=step_instance) +pipeline_instance.run() ``` -**New Syntax:** +### New Syntax Example ```python -from typing import Annotated, Optional, Tuple -from zenml import get_step_context, step +from typing import Optional, Tuple +from zenml import get_step_context, pipeline, step @step -def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: +def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[int, str]: result = int(param_1 * (param_2 or 1)) result_uri = get_step_context().get_output_artifact_uri() return result, result_uri -``` - -### Pipeline Definition -**Old Syntax:** -```python -from zenml.pipelines import pipeline - -@pipeline -def my_pipeline(my_step): - my_step() -``` - -**New Syntax:** -```python -from zenml import pipeline @pipeline def my_pipeline(): my_step(param_1=17) -``` -### Running Steps and Pipelines -**Old Syntax:** -```python -my_step.entrypoint() # Call step -pipeline_instance = my_pipeline(my_step=my_step()) -pipeline_instance.run(schedule=schedule) +my_pipeline() ``` -**New Syntax:** -```python -my_step() # Call step directly -my_pipeline().with_options(enable_cache=False, schedule=schedule)() -``` +## Key Changes -### Fetching Pipeline Runs -**Old Syntax:** -```python -last_run = pipeline_instance.get_runs()[0] -int_output = last_run.get_step["my_step"].outputs["int_output"].read() -``` +### Defining Steps +- **Old Syntax**: Use `BaseParameters` for parameters. +- **New Syntax**: Define parameters directly in the step function. Optionally, use `pydantic.BaseModel` for grouping. -**New Syntax:** -```python -last_run = my_pipeline.last_run -int_output = last_run.steps["my_step"].outputs["int_output"].load() -``` +### Running Steps +- **Old Syntax**: Use `my_step.entrypoint()`. +- **New Syntax**: Call the step directly with `my_step()`. -### Controlling Step Execution Order -**Old Syntax:** -```python -@pipeline -def my_pipeline(step_1, step_2, step_3): - step_3.after(step_1) - step_3.after(step_2) -``` +### Defining Pipelines +- **Old Syntax**: Steps are arguments of the pipeline function. +- **New Syntax**: Call steps directly within the pipeline function. -**New Syntax:** -```python -@pipeline -def my_pipeline(): - step_3(after=["step_1", "step_2"]) -``` +### Configuring Pipelines +- **Old Syntax**: Use `pipeline_instance.configure(...)`. +- **New Syntax**: Use `with_options(...)` method on the pipeline. -### Steps with Multiple Outputs -**Old Syntax:** -```python -@step -def my_step() -> Output(int_output=int, str_output=str): - ... -``` +### Running Pipelines +- **Old Syntax**: Create an instance and call `run()`. +- **New Syntax**: Call the pipeline directly. -**New Syntax:** -```python -@step -def my_step() -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: - ... 
-```
+### Scheduling Pipelines
+- **Old Syntax**: Set schedule in `run(schedule=...)`.
+- **New Syntax**: Use `with_options(schedule=schedule)`.

-### Accessing Run Information Inside Steps
-**Old Syntax:**
-```python
-@step
-def my_step(context: StepContext) -> Any:
-    step_name = context.step_name
-```
+### Fetching Pipeline Runs
+- **Old Syntax**: Access runs via `get_runs()`.
+- **New Syntax**: Use the `last_run` property or `get_pipeline()` method.

-**New Syntax:**
-```python
-@step
-def my_step() -> Any:
-    context = get_step_context()
-    step_name = context.step_name
-```
+### Controlling Step Execution Order
+- **Old Syntax**: Use `step.after(...)`.
+- **New Syntax**: Pass the `after` argument when calling a step.
+
+### Defining Steps with Multiple Outputs
+- **Old Syntax**: Use the `Output` class.
+- **New Syntax**: Use `Tuple` with optional annotations.
+
+### Accessing Run Information Inside Steps
+- **Old Syntax**: Pass `StepContext` as an argument.
+- **New Syntax**: Use `get_step_context()` to access context information.

-For more detailed information on parameterization, scheduling, and fetching metadata, refer to the respective sections in the ZenML documentation.
+For detailed information on parameterizing steps, scheduling pipelines, and fetching metadata, refer to the respective documentation pages.

==================================================

@@ -9960,18 +9912,18 @@ For more detailed information on parameterization, scheduling, and fetching meta

### ZenML Server Connection Guide

-**Authentication with ZenML CLI:**
-To connect to the ZenML server, use the following command:
+**Overview**: Authenticate with the ZenML Server using the ZenML CLI and web-based login.

+**Login Command**:
```bash
zenml login https://...
```
+- This command initiates a browser-based authentication process.
+- You can choose to trust the device:
+  - **Trust**: 30-day token issued.
+  - **Do not trust**: 24-hour token issued.

-This command initiates a browser-based authentication process. You can choose to trust your device, which will issue a 30-day token, or not trust it, resulting in a 24-hour token.
-
-**Note:** Device management for ZenML Pro tenants is not currently supported but is planned for future updates.
-
-**Device Management Commands:**
+**Device Management**:
- List authorized devices:
  ```bash
  zenml authorized-device list
@@ -9980,19 +9932,23 @@ This command initiates a browser-based authentication process. You can choose to
  ```bash
  zenml authorized-device describe <DEVICE_ID>
  ```
-- Invalidate a token for a specific device:
+- Invalidate a token for a device:
  ```bash
  zenml authorized-device lock <DEVICE_ID>
  ```

-### Summary of Steps:
-1. Run `zenml login <URL>` to connect.
+**Steps to Connect**:
+1. Run `zenml login <URL>`.
2. Decide whether to trust the device.
-3. Check authorized devices with `zenml authorized-device list`.
-4. Lock a device token with `zenml authorized-device lock <DEVICE_ID>`.
+3. List authorized devices with `zenml authorized-device list`.
+4. Lock a device if needed with `zenml authorized-device lock <DEVICE_ID>`.
+
+**Security Notice**:
+- Use only trusted devices to maintain security.
+- Regularly manage device trust levels.
+- Lock devices immediately if trust needs to be revoked to protect access to data and infrastructure.

-**Important Security Notice:**
-Always use trusted devices for security. Regularly manage device trust levels and lock any device if trust needs to be revoked, as each token can access sensitive data and infrastructure.
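+
+Taken together, these commands form a simple audit routine. A possible sequence after working from a machine you no longer trust (`<DEVICE_ID>` is a placeholder taken from the list output):
+
+```bash
+zenml authorized-device list                  # find the device in question
+zenml authorized-device describe <DEVICE_ID>  # double-check before locking
+zenml authorized-device lock <DEVICE_ID>      # invalidate its token
+```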
+**Note**: Device management for ZenML Pro tenants is not yet supported but will be available soon. ================================================== @@ -10000,7 +9956,9 @@ Always use trusted devices for security. Regularly manage device trust levels an # Connecting to ZenML -Once [ZenML is deployed](../../../user-guide/production-guide/deploying-zenml.md), you can connect to it using various methods. +Once ZenML is deployed, there are multiple methods to connect to it. + +For detailed deployment instructions, refer to the [production guide](../../../user-guide/production-guide/deploying-zenml.md).  @@ -10008,17 +9966,18 @@ Once [ZenML is deployed](../../../user-guide/production-guide/deploying-zenml.md === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-an-api-token.md === -### Connecting to ZenML Server with an API Token +### Connect with an API Token -API tokens authenticate with the ZenML server for temporary automation tasks, valid for up to 1 hour and scoped to your user account. +API tokens are used to authenticate with the ZenML server for temporary automation tasks, valid for up to 1 hour and scoped to your user account. #### Generating an API Token -1. Go to the ZenML dashboard's Settings page (or the tenant's Settings in ZenML Pro). +To generate a new API token: +1. Go to the server's Settings page in your ZenML dashboard. 2. Select "API Tokens" from the left sidebar. 3. Click "Create new token." A dialog will display your new API token. #### Programmatic Access -Use the generated API tokens for programmatic access to the ZenML server's REST API. This method is ideal when not using the ZenML CLI or Python client and avoids setting up a service account. Detailed usage is documented in the [API reference section](../../../reference/api-reference.md#using-a-short-lived-api-token). +The generated API tokens allow programmatic access to the ZenML server's REST API, useful when not using the ZenML CLI or Python client. For detailed instructions, refer to the [API reference section](../../../reference/api-reference.md#using-a-short-lived-api-token). ================================================== @@ -10026,75 +9985,78 @@ Use the generated API tokens for programmatic access to the ZenML server's REST ### ZenML Service Account and API Key Authentication -To connect to a ZenML server from non-interactive environments (e.g., CI/CD, serverless functions), use a service account and an API key for authentication. +To authenticate with a ZenML server in non-interactive environments (e.g., CI/CD), create a service account and API key: -#### Creating a Service Account -Create a service account and generate an API key: ```bash zenml service-account create <SERVICE_ACCOUNT_NAME> ``` -The API key will be displayed but cannot be retrieved later. -#### Connecting to ZenML Server -You can connect using the API key in two ways: +The API key is displayed upon creation and cannot be retrieved later. Use it to connect your ZenML client via: 1. **CLI Method**: ```bash zenml login https://... --api-key ``` -2. **Environment Variables** (suitable for CI/CD): +2. **Environment Variables** (suitable for automated environments): ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY=<API_KEY> ``` - No need to run `zenml login` after setting these variables. -#### Managing Service Accounts and API Keys -- List service accounts: +Setting these variables allows immediate interaction without needing to run `zenml login`. 
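+
+For example, a CI job can export both variables from its secret store and run a pipeline directly. A minimal sketch (the server URL, secret variable name, and stack name are hypothetical):
+
+```bash
+export ZENML_STORE_URL=https://zenml.example.com
+export ZENML_STORE_API_KEY="${CI_ZENML_API_KEY}"  # injected by the CI secret store
+zenml stack set production  # hypothetical stack name
+python run.py               # runs the pipeline against the server
+```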
+
+### Managing Service Accounts and API Keys
+
+- **List Service Accounts and API Keys**:
  ```bash
  zenml service-account list
-  ```
-- List API keys for a service account:
-  ```bash
  zenml service-account api-key <SERVICE_ACCOUNT_NAME> list
  ```
-- Describe a service account or API key:
+
+- **Describe Service Account or API Key**:
  ```bash
  zenml service-account describe <SERVICE_ACCOUNT_NAME>
  zenml service-account api-key <SERVICE_ACCOUNT_NAME> describe <API_KEY_NAME>
  ```

-#### API Key Rotation
-API keys do not expire, but it's recommended to rotate them regularly:
+### API Key Management
+
+API keys do not expire but should be rotated regularly for security:
+
```bash
zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME>
```
+
To retain the old API key for a specified period (e.g., 60 minutes):
+
```bash
zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME> --retain 60
```

-#### Deactivating Service Accounts and API Keys
+### Deactivating Accounts and Keys
+
To deactivate a service account or API key:
+
```bash
zenml service-account update <SERVICE_ACCOUNT_NAME> --active false
zenml service-account api-key <SERVICE_ACCOUNT_NAME> update <API_KEY_NAME> --active false
```
-Deactivation takes immediate effect.

-#### Summary of Steps
-1. Create a service account: `zenml service-account create`.
-2. Connect using API key: `zenml login <url> --api-key`.
-3. List service accounts: `zenml service-account list`.
-4. List API keys: `zenml service-account api-key <SERVICE_ACCOUNT_NAME> list`.
-5. Rotate API keys: `zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate`.
-6. Deactivate accounts/keys: `zenml service-account update` or `zenml service-account api-key <SERVICE_ACCOUNT_NAME> update`.
+### Summary Steps

-#### Programmatic Access
-Use the API key to obtain short-lived API tokens for secure programmatic access to the ZenML REST API. Refer to the [API reference section](../../../reference/api-reference.md#using-a-service-account-and-an-api-key) for detailed documentation.
+1. Create a service account and API key.
+2. Connect using the API key via CLI or environment variables.
+3. List service accounts and API keys.
+4. Rotate API keys regularly.
+5. Deactivate unused accounts or keys.
+
+### Programmatic Access
+
+API keys can be used to obtain short-lived tokens for secure programmatic access to the ZenML server's REST API. For detailed instructions, refer to the [API reference section](../../../reference/api-reference.md#using-a-service-account-and-an-api-key).
+
+### Security Notice

-#### Security Note
Regularly rotate API keys and deactivate or delete unused service accounts and keys to protect your data and infrastructure.

==================================================

@@ -10105,33 +10067,29 @@ Regularly rotate API keys and deactivate or delete unused service accounts and k

# Infrastructure and Deployment

This section details the infrastructure setup and deployment processes in ZenML.

-### Key Components:
-- **Infrastructure Setup**: Involves configuring cloud resources, networking, and security settings necessary for ZenML operations.
-- **Deployment**: Covers methods for deploying ZenML pipelines and components to various environments (e.g., local, cloud).
+## Key Components

-### Important Considerations:
-- **Cloud Providers**: ZenML supports multiple cloud providers (AWS, GCP, Azure). Choose based on project requirements.
-- **Networking**: Ensure proper network configurations for secure communication between components.
-- **Security**: Implement best practices for authentication and authorization to protect data and resources. +1. **Infrastructure Setup**: + - ZenML supports various cloud providers (AWS, GCP, Azure) and local environments. + - Users can configure their infrastructure using YAML files or through the ZenML CLI. -### Deployment Steps: -1. **Choose Environment**: Select local or cloud deployment. -2. **Configure Resources**: Set up necessary cloud resources (e.g., compute instances, storage). -3. **Deploy Pipelines**: Use ZenML CLI or SDK for deploying pipelines. +2. **Deployment Options**: + - **Local Deployment**: Ideal for development and testing. + - **Cloud Deployment**: Suitable for production workloads, leveraging cloud services for scalability and reliability. -### Example Code Snippet: -```python -from zenml import pipeline +3. **Configuration**: + - Users define their environment settings in a configuration file. + - Important parameters include resource allocation, environment variables, and service endpoints. -@pipeline -def my_pipeline(): - # Define pipeline steps here - pass +4. **Integration**: + - ZenML integrates with CI/CD pipelines for automated deployments. + - Supports version control for tracking changes in infrastructure configurations. -my_pipeline.run() -``` +5. **Monitoring and Maintenance**: + - Tools for monitoring resource usage and performance metrics. + - Regular updates and maintenance are recommended to ensure optimal performance. -This summary encapsulates the critical aspects of infrastructure setup and deployment in ZenML, ensuring that essential technical details are retained for further inquiries. +By following these guidelines, users can effectively set up and deploy their ZenML infrastructure tailored to their specific needs. ================================================== @@ -10139,9 +10097,11 @@ This summary encapsulates the critical aspects of infrastructure setup and deplo ### Integrate with Infrastructure as Code -**Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section outlines how to integrate ZenML with popular IaC tools like [Terraform](https://www.terraform.io/). +**Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure through code rather than manual processes. This section covers how to integrate ZenML with popular IaC tools like [Terraform](https://www.terraform.io/). + + - +By leveraging IaC, you can effectively manage your ZenML stacks and components. ================================================== @@ -10150,15 +10110,14 @@ This summary encapsulates the critical aspects of infrastructure setup and deplo ### Summary: Registering Existing Infrastructure with ZenML for Terraform Users #### Overview -This guide assists advanced users in integrating ZenML with existing Terraform infrastructure. It focuses on managing custom Terraform code using the ZenML provider. +This guide is for advanced users integrating ZenML with existing Terraform setups, focusing on managing custom Terraform code using the ZenML provider. #### Two-Phase Approach -1. **Infrastructure Deployment**: Create cloud resources. -2. **ZenML Registration**: Register resources as ZenML stack components. +1. **Infrastructure Deployment**: Create cloud resources (managed by platform teams). +2. **ZenML Registration**: Register these resources as ZenML stack components. 
#### Phase 1: Infrastructure Deployment -Existing Terraform configurations may include resources like: - +Example of existing GCP infrastructure: ```hcl resource "google_storage_bucket" "ml_artifacts" { name = "company-ml-artifacts" @@ -10173,9 +10132,8 @@ resource "google_artifact_registry_repository" "ml_containers" { #### Phase 2: ZenML Registration -##### Setup the ZenML Provider -Configure the ZenML provider to connect to your ZenML server: - +**Setup the ZenML Provider** +Configure the ZenML provider to connect with your ZenML server: ```hcl terraform { required_providers { @@ -10184,41 +10142,49 @@ terraform { } provider "zenml" { - # Load configuration from environment variables + # Configuration from environment variables } ``` - -Generate an API key with: - +Generate an API key: ```bash zenml service-account create <SERVICE_ACCOUNT_NAME> ``` -##### Create Service Connectors -Service connectors manage authentication: - +**Create Service Connectors** +Establish authentication between components: ```hcl resource "zenml_service_connector" "gcp_connector" { name = "gcp-${var.environment}-connector" type = "gcp" - auth_method = "service-account" + auth_method = "service-account" configuration = { - project_id = var.project_id - service_account_json = file("service-account.json") + project_id = var.project_id + service_account_json = file("service-account.json") } } ``` -##### Register Stack Components -Register various components using a generic pattern: - +**Register Stack Components** +Register various components: ```hcl locals { component_configs = { - artifact_store = { type = "artifact_store", flavor = "gcp", configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } } - container_registry = { type = "container_registry", flavor = "gcp", configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } } - orchestrator = { type = "orchestrator", flavor = "vertex", configuration = { project = var.project_id, region = var.region } } + artifact_store = { + type = "artifact_store" + flavor = "gcp" + configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } + } + container_registry = { + type = "container_registry" + flavor = "gcp" + configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } + } + orchestrator = { + type = "orchestrator" + flavor = "vertex" + configuration = { project = var.project_id, region = var.region } + } } } @@ -10233,9 +10199,8 @@ resource "zenml_stack_component" "components" { } ``` -##### Assemble the Stack +**Assemble the Stack** Combine components into a stack: - ```hcl resource "zenml_stack" "ml_stack" { name = "${var.environment}-ml-stack" @@ -10252,8 +10217,6 @@ resource "zenml_stack" "ml_stack" { - Vertex AI enabled **Step 1: Variables Configuration** -Define variables in `variables.tf`: - ```hcl variable "zenml_server_url" { type = string } variable "zenml_api_key" { type = string, sensitive = true } @@ -10264,11 +10227,23 @@ variable "gcp_service_account_key" { type = string, sensitive = true } ``` **Step 2: Main Configuration** -In `main.tf`, configure providers and resources: - ```hcl -provider "zenml" { server_url = var.zenml_server_url; api_key = var.zenml_api_key } -provider "google" { project = var.project_id; region = var.region } +terraform { + required_providers { + zenml = { source = "zenml-io/zenml" } + google = { source = "hashicorp/google" } + } 
+}
+
+provider "zenml" {
+  server_url = var.zenml_server_url
+  api_key    = var.zenml_api_key
+}
+
+provider "google" {
+  project = var.project_id
+  region  = var.region
+}

resource "google_storage_bucket" "artifacts" {
  name = "${var.project_id}-zenml-artifacts-${var.environment}"
@@ -10285,36 +10260,43 @@ resource "zenml_service_connector" "gcp" {
  type        = "gcp"
  auth_method = "service-account"
  configuration = {
-    project_id = var.project_id
-    service_account_json = var.gcp_service_account_key
+    project_id           = var.project_id
+    service_account_json = var.gcp_service_account_key
  }
}

-# Register components (artifact store, container registry, orchestrator) similarly as shown above.
+resource "zenml_stack_component" "artifact_store" {
+  name          = "gcs-${var.environment}"
+  type          = "artifact_store"
+  flavor        = "gcp"
+  configuration = { path = "gs://${google_storage_bucket.artifacts.name}/artifacts" }
+  connector_id  = zenml_service_connector.gcp.id
+}
+
+# Register the container registry and orchestrator components in the same way
+# as the artifact store above, so that the references below resolve.
+resource "zenml_stack" "gcp_stack" {
+  name = "gcp-${var.environment}"
+  components = {
+    artifact_store     = zenml_stack_component.artifact_store.id
+    container_registry = zenml_stack_component.container_registry.id
+    orchestrator       = zenml_stack_component.orchestrator.id
+  }
+}
```

**Step 3: Outputs Configuration**
-Define outputs in `outputs.tf`:
-
```hcl
output "stack_id" { value = zenml_stack.gcp_stack.id }
output "stack_name" { value = zenml_stack.gcp_stack.name }
-output "artifact_store_path" { value = "${google_storage_bucket.artifacts.name}/artifacts" }
-output "container_registry_uri" { value = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.containers.repository_id}" }
```

**Step 4: terraform.tfvars Configuration**
-Create a `terraform.tfvars` file for variable values:
-
```hcl
zenml_server_url = "https://your-zenml-server.com"
project_id       = "your-gcp-project-id"
region           = "us-central1"
environment      = "dev"
```
-
Store sensitive variables in environment variables:
-
```bash
export TF_VAR_zenml_api_key="your-zenml-api-key"
export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json)
@@ -10329,7 +10311,7 @@ export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json)
   ```bash
   zenml integration install gcp
   ```
-3. Review changes:
+3. Review planned changes:
   ```bash
   terraform plan
   ```
   ```bash
   terraform apply
   ```
-5. Set the active stack:
+5. Set the stack as active:
   ```bash
   zenml stack set $(terraform output -raw stack_name)
   ```
-6. Verify:
+6. Verify configuration:
   ```bash
   zenml stack describe
   ```

-### Conclusion
-This guide provides a streamlined approach to registering existing GCP infrastructure with ZenML using Terraform. Adaptations for AWS and Azure are possible by adjusting provider configurations. Follow best practices for security and version control. For more details, refer to the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest).
+#### Best Practices
+- Use appropriate IAM roles and permissions.
+- Follow security practices for credential handling.
+- Consider Terraform workspaces for multiple environments.
+- Regularly back up Terraform state files (see the backend sketch below).
+- Version control Terraform configurations (excluding sensitive files).
+
+For more details on the ZenML Terraform provider, visit the [ZenML provider documentation](https://registry.terraform.io/providers/zenml-io/zenml/latest).
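+
+For the state backup recommendation above, a remote backend keeps the Terraform state for your stacks durable and shareable. A minimal sketch (the bucket name is a placeholder):
+
+```hcl
+terraform {
+  # Store state in GCS so it survives local machine loss and can be shared
+  backend "gcs" {
+    bucket = "your-terraform-state-bucket"
+    prefix = "zenml/stacks"
+  }
+}
+```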
================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/best-practices.md === -# Best Practices for Using IaC with ZenML +# Summary: Best Practices for Using IaC with ZenML ## Overview -This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform, addressing challenges such as supporting multiple teams, environments, security, and compliance. +This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform. Key challenges include supporting multiple ML teams, maintaining security, and enabling rapid iteration without infrastructure bottlenecks. ## ZenML Approach -ZenML uses stack components as abstractions over infrastructure resources, enabling a component-based architecture for reusability and consistency. +ZenML utilizes stack components as abstractions over infrastructure resources, allowing for a component-based architecture that promotes reusability and consistency. ### Part 1: Stack Component Architecture -- **Problem**: Different teams require varied ML infrastructure configurations. -- **Solution**: Break down infrastructure into reusable modules. +**Problem:** Different teams require varied ML infrastructure configurations. +**Solution:** Create reusable modules that correspond to ZenML stack components. + +**Base Infrastructure Example:** ```hcl -# Base infrastructure module +terraform { + required_providers { + zenml = { source = "zenml-io/zenml" } + google = { source = "hashicorp/google" } + } +} + resource "random_id" "suffix" { byte_length = 6 } module "base_infrastructure" { @@ -10388,7 +10384,6 @@ resource "zenml_service_connector" "base_connector" { } } -# Base stack components resource "zenml_stack_component" "artifact_store" { name = "${var.environment}-artifact-store" type = "artifact_store" @@ -10405,36 +10400,18 @@ resource "zenml_stack" "base_stack" { } ``` -Teams can extend the base stack with specific components: - -```hcl -# Training-specific stack -resource "zenml_stack_component" "training_orchestrator" { - name = "${var.environment}-training-orchestrator" - type = "orchestrator" - flavor = "vertex" - configuration = { location = var.region; machine_type = "n1-standard-8"; gpu_enabled = true; synchronous = true } - connector_id = zenml_service_connector.base_connector.id -} - -resource "zenml_stack" "training_stack" { - name = "${var.environment}-training-stack" - components = { - artifact_store = zenml_stack_component.artifact_store.id - orchestrator = zenml_stack_component.training_orchestrator.id - } -} -``` +Teams can extend this base stack with specific configurations. ### Part 2: Environment Management and Authentication -- **Problem**: Different environments require distinct configurations and authentication methods. -- **Solution**: Use environment-specific configurations and smart authentication. +**Problem:** Different environments require distinct authentication and resource configurations. + +**Solution:** Use an environment configuration pattern with adaptable service connectors. 
```hcl locals { env_config = { - dev = { machine_type = "n1-standard-4"; gpu_enabled = false; auth_method = "service-account"; auth_configuration = { service_account_json = file("dev-sa.json") } } - prod = { machine_type = "n1-standard-8"; gpu_enabled = true; auth_method = "external-account"; auth_configuration = { external_account_json = file("prod-sa.json") } } + dev = { machine_type = "n1-standard-4", gpu_enabled = false, auth_method = "service-account", auth_configuration = { service_account_json = file("dev-sa.json") } } + prod = { machine_type = "n1-standard-8", gpu_enabled = true, auth_method = "external-account", auth_configuration = { external_account_json = file("prod-sa.json") } } } } @@ -10447,23 +10424,16 @@ resource "zenml_service_connector" "env_connector" { content { key = configuration.key; value = configuration.value } } } - -resource "zenml_stack_component" "env_orchestrator" { - name = "${var.environment}-orchestrator" - type = "orchestrator" - flavor = "vertex" - configuration = { location = var.region; machine_type = local.env_config[var.environment].machine_type; gpu_enabled = local.env_config[var.environment].gpu_enabled } - connector_id = zenml_service_connector.env_connector.id -} ``` ### Part 3: Resource Sharing and Isolation -- **Problem**: Need for strict isolation of resources across ML projects. -- **Solution**: Implement resource scoping with project isolation. +**Problem:** ML projects need strict isolation to prevent unauthorized access. + +**Solution:** Implement resource scoping with project isolation. ```hcl locals { - project_paths = { fraud_detection = "projects/fraud_detection/${var.environment}"; recommendation = "projects/recommendation/${var.environment}" } + project_paths = { fraud_detection = "projects/fraud_detection/${var.environment}", recommendation = "projects/recommendation/${var.environment}" } } resource "zenml_stack_component" "project_artifact_stores" { @@ -10483,55 +10453,55 @@ resource "zenml_stack" "project_stacks" { ``` ### Part 4: Advanced Stack Management Practices -1. **Stack Component Versioning** - ```hcl - locals { stack_version = "1.2.0"; common_labels = { version = local.stack_version; managed_by = "terraform"; environment = var.environment } } - resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}"; labels = local.common_labels } - ``` +1. **Stack Component Versioning:** +```hcl +locals { stack_version = "1.2.0" } +resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}" } +``` -2. **Service Connector Management** - ```hcl - resource "zenml_service_connector" "env_connector" { - name = "${var.environment}-${var.purpose}-connector" - type = var.connector_type - auth_method = var.environment == "prod" ? "workload-identity" : "service-account" - resource_type = var.resource_type - resource_id = var.resource_id - labels = merge(local.common_labels, { purpose = var.purpose }) - } - ``` +2. **Service Connector Management:** +```hcl +resource "zenml_service_connector" "env_connector" { + name = "${var.environment}-${var.purpose}-connector" + type = var.connector_type + auth_method = var.environment == "prod" ? "workload-identity" : "service-account" +} +``` -3. 
**Component Configuration Management** - ```hcl - locals { - base_configs = { orchestrator = { location = var.region; project = var.project_id }; artifact_store = { path_prefix = "gs://${var.bucket_name}" } } - env_configs = { dev = { orchestrator = { machine_type = "n1-standard-4" } }; prod = { orchestrator = { machine_type = "n1-standard-8" } } } - } - resource "zenml_stack_component" "configured_component" { - name = "${var.environment}-${var.component_type}" - type = var.component_type - configuration = merge(local.base_configs[var.component_type], try(local.env_configs[var.environment][var.component_type], {})) - } - ``` +3. **Component Configuration Management:** +```hcl +locals { + base_configs = { orchestrator = { location = var.region, project = var.project_id } } +} +resource "zenml_stack_component" "configured_component" { + name = "${var.environment}-${var.component_type}" + type = var.component_type + configuration = merge(local.base_configs[var.component_type], try(local.env_configs[var.environment][var.component_type], {})) +} +``` -4. **Stack Organization and Dependencies** - ```hcl - module "ml_stack" { - source = "./modules/ml_stack" - depends_on = [module.base_infrastructure, module.security] - components = { artifact_store = module.storage.artifact_store_id; container_registry = module.container.registry_id } - labels = merge(local.common_labels, { stack_type = "ml-platform" }) - } - ``` +4. **Stack Organization and Dependencies:** +```hcl +module "ml_stack" { + source = "./modules/ml_stack" + depends_on = [module.base_infrastructure] + components = { artifact_store = module.storage.artifact_store_id } +} +``` -5. **State Management** - ```hcl - terraform { backend "gcs" { prefix = "terraform/state" }; workspace_prefix = "zenml-" } - data "terraform_remote_state" "infrastructure" { backend = "gcs"; config = { bucket = var.state_bucket; prefix = "terraform/infrastructure" } } - ``` +5. **State Management:** +```hcl +terraform { + backend "gcs" { prefix = "terraform/state" } +} +data "terraform_remote_state" "infrastructure" { + backend = "gcs" + config = { bucket = var.state_bucket } +} +``` ## Conclusion -Using ZenML and Terraform for ML infrastructure enables the creation of a flexible, maintainable, and secure environment. Following these best practices ensures a clean infrastructure codebase while supporting efficient ML operations. +Using ZenML with Terraform allows for a flexible, maintainable, and secure ML infrastructure. Following these best practices ensures a clean codebase and effective management of ML operations. ================================================== @@ -10539,120 +10509,144 @@ Using ZenML and Terraform for ML infrastructure enables the creation of a flexib # Service Connectors Guide Summary -This documentation provides a comprehensive guide to managing Service Connectors in ZenML, enabling connections to external resources. Key sections include: - -## Getting Started -- **Terminology**: Familiarize with terms related to Service Connectors, including types, resource types, and resource names. -- **Service Connector Types**: Understand different implementations and their capabilities, such as AWS, GCP, Azure, Kubernetes, and Docker connectors. +This documentation provides a comprehensive guide for managing Service Connectors, enabling ZenML to connect with external resources. 
Key sections include: -## Key Commands -- **List Available Types**: +## Overview +- **Service Connectors** facilitate authentication and connection to various external resources. +- Recommended navigation: + - **Terminology**: Understand key terms related to Service Connectors. + - **Service Connector Types**: Learn about different implementations and their use cases. + - **Registering Service Connectors**: Quick setup for evaluating features. + - **Connecting Stack Components**: Direct connections to resources like Kubernetes or Docker. + +## Terminology +- **Service Connector Types**: Define capabilities and supported resources. Examples include AWS, Azure, and GCP connectors. +- **Resource Types**: Logical classifications of resources (e.g., `kubernetes-cluster`, `docker-registry`). +- **Resource Names**: Unique identifiers for resource instances. + +### Example Commands +- List available Service Connector Types: ```sh zenml service-connector list-types ``` -- **Describe a Type**: +- Describe a specific Service Connector Type: ```sh - zenml service-connector describe-type <type-name> - ``` -- **Register a Service Connector**: - ```sh - zenml service-connector register <name> --type <type> --auto-configure + zenml service-connector describe-type aws ``` ## Service Connector Types -- **Types**: Each connector type supports various resource types and authentication methods. -- **Examples**: AWS Service Connector supports multiple authentication methods (e.g., secret keys, STS tokens) and resource types (e.g., S3 buckets, EKS clusters). +- Built-in types include AWS, GCP, Azure, Kubernetes, and Docker. +- Each type supports various authentication methods and resource types. -## Resource Management -- **Resource Types**: Organizes resources into classes based on access protocols or vendors (e.g., `kubernetes-cluster`, `docker-registry`). -- **Resource Names**: Unique identifiers for resource instances, such as S3 bucket names. - -## Connecting Stack Components -- **Connect Components**: Use the interactive CLI to connect Stack Components to resources: +## Registering Service Connectors +- Service Connectors can be registered in multi-type or single-instance configurations. +- Example command for a multi-type AWS Service Connector: ```sh - zenml artifact-store connect <component-name> -i + zenml service-connector register aws-multi-type --type aws --auto-configure ``` -## Auto-Configuration -- **Feature**: Automatically discovers and extracts configuration from local environments using cloud provider CLIs (e.g., AWS CLI, GCP SDK). - ## Verification -- **Verify Connectors**: Check if Service Connectors are correctly configured and can access specified resources: +- Verify the configuration and credentials of Service Connectors: ```sh zenml service-connector verify <connector-name> ``` -## Local Client Configuration -- **Configure Local CLIs**: Set up local CLI tools (e.g., `kubectl`, Docker) with credentials from Service Connectors: +## Connecting Stack Components +- Connect Stack Components to resources using Service Connectors. 
+- Example for connecting an artifact store: ```sh - zenml service-connector login <connector-name> --resource-type <type> --resource-id <id> + zenml artifact-store connect <component-name> --connector <connector-name> ``` ## Resource Discovery -- **List Resources**: Discover accessible resources through configured Service Connectors: +- Discover accessible resources via Service Connectors: ```sh zenml service-connector list-resources ``` +## Auto-Configuration +- Automatically extract configuration from local environments using CLI tools. + +## Local Client Configuration +- Configure local CLI tools (e.g., `kubectl`, Docker) with Service Connector credentials. + ## End-to-End Examples -- Detailed examples are provided for AWS, GCP, and Azure Service Connectors, demonstrating the complete process from registration to running pipelines. +- Detailed examples for AWS, GCP, and Azure Service Connectors are available for practical guidance. -This guide serves as a foundational resource for effectively managing Service Connectors within ZenML, facilitating secure and efficient connections to various external resources. +This summary encapsulates the essential technical information and commands needed to effectively manage Service Connectors in ZenML. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/gcp-service-connector.md === -### Summary of GCP Service Connector Documentation +### GCP Service Connector Documentation Summary -The **ZenML GCP Service Connector** enables seamless connection to various GCP resources, including GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods: GCP user accounts, service accounts, short-lived OAuth 2.0 tokens, and implicit authentication. By default, it issues short-lived OAuth 2.0 tokens to enhance security. +The **GCP Service Connector** in ZenML facilitates authentication and access to various GCP resources, including Google Cloud Storage (GCS) buckets, Google Kubernetes Engine (GKE) clusters, and Google Container Registry (GCR). It supports multiple authentication methods, including user accounts, service accounts, and OAuth 2.0 tokens, prioritizing security by issuing short-lived tokens by default. #### Key Features: -- **Resource Types**: - - **Generic GCP Resource**: Connects to any GCP service, providing a Google-auth credentials object. - - **GCS Bucket**: Requires specific permissions (e.g., `storage.buckets.list`, `storage.objects.create`). - - **GKE Cluster**: Requires permissions like `container.clusters.list`. - - **GAR/GCR Registry**: Supports both Google Artifact Registry and legacy GCR, requiring permissions for repository management. - - **Authentication Methods**: - - **Implicit Authentication**: Uses Application Default Credentials (ADC) but is disabled by default due to security risks. - - **GCP User Account**: Generates temporary OAuth 2.0 tokens from user credentials. - - **GCP Service Account**: Similar to user accounts but uses service account keys. - - **Service Account Impersonation**: Generates temporary credentials by impersonating another service account. - - **External Account**: Uses workload identity federation for authentication with AWS or Azure credentials. + - **Implicit Authentication**: Uses Application Default Credentials (ADC) but is disabled by default for security. + - **User Account**: Long-lived credentials, generating temporary tokens for clients. 
+ - **Service Account**: Requires a service account key JSON, generating temporary tokens. + - **Impersonation**: Generates temporary STS credentials by impersonating another service account. + - **External Account**: Uses GCP Workload Identity for authentication with AWS or Azure credentials. - **OAuth 2.0 Token**: Requires manual token management. -#### Configuration: +#### Resource Types: +1. **Generic GCP Resource**: For accessing any GCP service using OAuth 2.0 tokens. +2. **GCS Bucket**: Requires specific permissions (e.g., `storage.buckets.list`, `storage.objects.create`). +3. **GKE Cluster**: Requires permissions like `container.clusters.list`. +4. **GAR/GCR Registry**: Supports both Google Artifact Registry and legacy GCR, requiring specific permissions for access. + +#### Prerequisites: - Install the GCP Service Connector using: - ```shell + ```bash pip install "zenml[connectors-gcp]" ``` -- Register a service connector: - ```shell - zenml service-connector register <name> --type gcp --auth-method <method> --auto-configure + or the full integration: + ```bash + zenml integration install gcp + ``` + +#### Example Commands: +- **List Service Connector Types**: + ```bash + zenml service-connector list-types --type gcp ``` -#### Local Client Configuration: -- The GCP Service Connector can configure local `gcloud`, `kubectl`, and Docker CLIs with short-lived credentials. -- Example for Kubernetes CLI: - ```shell - zenml service-connector login <connector-name> --resource-type kubernetes-cluster --resource-id <cluster-name> +- **Register a GCP Service Connector**: + ```bash + zenml service-connector register gcp-implicit --type gcp --auth-method implicit --auto-configure ``` -#### Stack Components: -- Connect Stack Components like GCS Artifact Store, GKE Orchestrator, and GCR Container Registry through the GCP Service Connector. -- Example of connecting a GCS Artifact Store: - ```shell - zenml artifact-store register <name> --flavor gcp --path=gs://<bucket-name> +- **Verify a Service Connector**: + ```bash + zenml service-connector verify gcp-user-account --resource-type kubernetes-cluster ``` -#### End-to-End Example: -1. Configure local GCP CLI and install ZenML integration. -2. Register a multi-type GCP Service Connector. -3. Connect various Stack Components (GCS, GKE, GCR) using the registered connector. -4. Run a simple pipeline to validate the setup. +- **Connect Stack Components**: + - Register a GCS Artifact Store: + ```bash + zenml artifact-store register gcs-zenml-bucket-sl --flavor gcp --path=gs://zenml-bucket-sl + ``` + - Connect to a GKE Cluster: + ```bash + zenml orchestrator register gke-zenml-test-cluster --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads + ``` + +#### Auto-Configuration: +The GCP Service Connector can automatically configure credentials from the local GCP CLI, simplifying setup: +```bash +zenml service-connector register gcp-auto --type gcp --auto-configure +``` + +#### Local Client Provisioning: +The connector can configure local clients (e.g., `kubectl`, Docker) with credentials extracted from the service connector, although these credentials have a short lifetime. -This documentation provides comprehensive guidance on configuring and utilizing the ZenML GCP Service Connector for efficient access to GCP resources. 
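+
+For instance, pointing a local `kubectl` at a GKE cluster through the connector could look like this (the connector name matches the examples above; the cluster name is illustrative):
+
+```bash
+zenml service-connector login gcp-user-account \
+    --resource-type kubernetes-cluster \
+    --resource-id zenml-test-cluster
+```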
+#### End-to-End Workflow Example: +A complete ZenML stack can be set up to connect multiple components (e.g., GKE, GCS, GCR) using a single service connector, enabling seamless integration and resource management. + +This summary captures the essential technical details of the GCP Service Connector, its configuration, usage, and examples, ensuring that critical information is retained while maintaining conciseness. ================================================== @@ -10660,37 +10654,60 @@ This documentation provides comprehensive guidance on configuring and utilizing ### ZenML Service Connectors Overview -ZenML facilitates the integration of MLOps platforms with various cloud providers and infrastructure services (AWS, GCP, Azure, Kubernetes, etc.) through **Service Connectors**. These connectors simplify the management of authentication and authorization, allowing seamless access to resources such as AWS S3 buckets, Kubernetes clusters, and more. +ZenML allows seamless integration with various cloud providers (AWS, GCP, Azure, Kubernetes) to facilitate MLOps workflows. Service Connectors abstract the complexities of authentication and authorization, enabling secure access to infrastructure resources. -#### Key Features of Service Connectors: -- **Abstraction of Complexity**: Service Connectors handle authentication and authorization, reducing the burden on developers. -- **Security Best Practices**: They implement security measures, such as generating short-lived credentials, to minimize risks associated with long-lived credentials. -- **Multi-Resource Access**: Multiple Stack Components can use the same Service Connector to access different resources. +#### Key Points: -#### Use Case Example: Connecting to AWS S3 -1. **Listing Available Service Connector Types**: - ```sh - zenml service-connector list-types +- **Service Connectors**: Simplify the connection between ZenML and external services, handling authentication securely. +- **Use Case**: An example demonstrates connecting ZenML to an AWS S3 bucket using the AWS Service Connector. + +#### Alternatives to Service Connectors: +1. **Direct Authentication**: Embedding credentials directly in Stack Components (not recommended for security). + ```shell + zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --key=AWS_ACCESS_KEY --secret=AWS_SECRET_KEY + ``` +2. **Using ZenML Secrets**: Storing credentials in ZenML secrets. + ```shell + zenml secret create aws --aws_access_key_id=AWS_ACCESS_KEY --aws_secret_access_key=AWS_SECRET_KEY + zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --key='{{aws.aws_access_key_id}}' --secret='{{aws.aws_secret_access_key}}' + ``` +3. **Referencing Secrets**: A better approach by directly referencing the secret. + ```shell + zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --authentication_secret=aws ``` -2. **Describing the AWS Service Connector Type**: +**Drawbacks of Alternatives**: +- Limited support for secrets in some Stack Components. +- Portability issues with credentials tied to specific environments. +- Security risks with long-lived credentials. +- Lack of validation for configured credentials. + +#### Service Connector Benefits: +- Acts as a broker for authentication, keeping main credentials secure on the ZenML server. +- Supports multiple Stack Components using the same Service Connector. +- Facilitates temporary credential generation for enhanced security. + +#### Steps to Use Service Connectors: +1. 
**List Available Service Connector Types**: ```sh - zenml service-connector describe-type aws + zenml service-connector list-types ``` -3. **Registering a Service Connector**: - This connects ZenML to AWS using auto-detected credentials from the local AWS CLI configuration. +2. **Register a Service Connector**: + Example for AWS S3: ```sh zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket ``` -4. **Connecting an Artifact Store to S3**: +3. **Connect Stack Components**: + Example for S3 Artifact Store: ```sh zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-s3 ``` -5. **Running a Simple Pipeline**: +4. **Run a Pipeline**: + Example pipeline code: ```python from zenml import step, pipeline @@ -10711,27 +10728,13 @@ ZenML facilitates the integration of MLOps platforms with various cloud provider simple_pipeline() ``` -6. **Executing the Pipeline**: +5. **Execute the Pipeline**: ```sh python run.py ``` -#### Alternatives to Service Connectors: -- **Embedding Credentials**: Directly embedding authentication information in Stack Components is discouraged due to security risks. -- **Using ZenML Secrets**: Storing credentials in ZenML Secrets is better but not universally supported across all Stack Components. - -#### Security Considerations: -- Long-lived credentials pose risks; Service Connectors mitigate this by issuing temporary credentials. -- Proper IAM permissions are required for accessing resources like S3 buckets. - -#### Additional Resources: -- [Service Connector Guide](service-connectors-guide.md) -- [Security Best Practices](best-security-practices.md) -- [AWS Service Connector Documentation](aws-service-connector.md) -- [GCP Service Connector Documentation](gcp-service-connector.md) -- [Azure Service Connector Documentation](azure-service-connector.md) - -This overview encapsulates the essential elements of ZenML's Service Connectors, their use cases, and security considerations, providing a clear pathway for integration with cloud services. +#### Conclusion +ZenML's Service Connectors provide a robust framework for integrating with cloud services while maintaining security and usability. For further details, refer to the complete guide on Service Connectors and security best practices. ================================================== @@ -10739,60 +10742,59 @@ This overview encapsulates the essential elements of ZenML's Service Connectors, ### Summary of Best Practices for Service Connector Authentication Methods -This documentation outlines best practices for various authentication methods used by Service Connectors, particularly for cloud providers. It emphasizes the importance of selecting appropriate authentication methods based on security and usability. +This documentation outlines best practices for various authentication methods used by Service Connectors, particularly for cloud providers. It emphasizes the importance of security and provides guidelines for selecting appropriate authentication methods. + +#### General Guidelines +- Avoid using primary account passwords as authentication credentials. Prefer alternatives like session tokens, API keys, or API tokens. +- Be cautious about sharing passwords, ensuring they remain within secure environments. -#### Key Authentication Methods +#### Authentication Methods 1. **Username and Password** - - Avoid using primary account passwords for authentication. 
Opt for session tokens, API keys, or API tokens instead. - - Passwords are the least secure method and should not be shared or used for automated workloads. + - Commonly used but the least secure method. + - Cloud platforms typically do not allow direct use of passwords for API authentication; they require exchanging them for long-lived credentials. 2. **Implicit Authentication** - - Provides immediate access to cloud resources without configuration but may limit portability. - - Disabled by default; must be enabled via environment variables or Helm chart settings. - - Utilizes locally stored credentials or environment variables. + - Provides immediate access to cloud resources without configuration. + - Security risk as it may grant access to resources configured for the ZenML Server. + - Requires enabling via environment variables or Helm chart settings. 3. **Long-lived Credentials (API Keys, Account Keys)** - - Preferred for production use, especially when combined with mechanisms for generating short-lived tokens or impersonating accounts. - - Cloud platforms typically do not use passwords directly; instead, they exchange them for long-lived credentials. + - Preferred method for production use. + - Cloud platforms use processes to exchange account credentials for long-lived credentials. + - Different cloud providers have varying names for these credentials (e.g., AWS Access Keys, GCP Service Account Credentials). + - Aim to use service credentials over user credentials for better security. 4. **Generating Temporary and Down-scoped Credentials** - - Temporary credentials are issued to clients, reducing the risk of exposing long-lived credentials. - - Downscoped credentials limit access to only necessary resources, enhancing security. + - Long-lived credentials can be used to issue temporary credentials with limited permissions. + - Example for AWS: + ```sh + zenml service-connector register eks-zenhacks-cluster --type aws --auth-method session-token + ``` 5. **Impersonating Accounts and Assuming Roles** - Requires setup of multiple accounts and roles but offers flexibility and control. - - Long-lived credentials are used to obtain short-lived tokens with specific permissions. + - Long-lived credentials are used to obtain short-lived tokens with restricted permissions. 6. **Short-lived Credentials** - - Temporary credentials can be manually configured or automatically generated, providing temporary access without exposing long-lived credentials. - - Less practical due to the need for frequent updates. - -#### Examples - -- **GCP Implicit Authentication Example:** - ```sh - zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core - ``` - -- **AWS Temporary Credentials Example:** - ```sh - AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token - ``` + - Use temporary credentials for granting limited access without exposing long-lived credentials. 
+ - Example for AWS:
+   ```sh
+   AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token
+   ```

-- **GCP Account Impersonation Example:**
-  ```sh
-  zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl
-  ```

+### Key Takeaways
+- Prioritize security by avoiding direct use of passwords and leveraging long-lived and temporary credentials.
+- Understand the implications of each authentication method on portability and usability.
+- Use impersonation and role assumption for enhanced security and access control.

-### Conclusion
-Choosing the right authentication method is crucial for security and usability in cloud environments. Best practices emphasize minimizing risk by avoiding direct use of passwords, leveraging long-lived credentials, and utilizing temporary or down-scoped credentials where possible.
+Choosing the right authentication method is a trade-off between security, portability, and usability: avoid direct use of passwords, anchor trust in long-lived service credentials, and hand clients only temporary or down-scoped credentials wherever possible.

==================================================

=== File: docs/book/how-to/infrastructure-deployment/auth-management/hyperai-service-connector.md ===

-### HyperAI Service Connector Documentation Summary
+### HyperAI Service Connector Overview

The ZenML HyperAI Service Connector enables authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to linked Stack Components.

@@ -10801,134 +10803,111 @@ The ZenML HyperAI Service Connector enables authentication with HyperAI instance
 $ zenml service-connector list-types --type hyperai
 ```

-#### Supported Resource Types and Authentication Methods
-The connector supports HyperAI instances and multiple SSH authentication methods:
-- RSA key
-- DSA (DSS) key
-- ECDSA key
-- ED25519 key
+#### Supported Authentication Methods
+The connector supports the following SSH key-based authentication methods:
+1. RSA
+2. DSA (DSS)
+3. ECDSA
+4. ED25519

-**Warning:** SSH private keys used in the connector will be shared with all clients running pipelines, granting unrestricted access to HyperAI instances.
+**Note:** SSH private keys are distributed to all clients running pipelines with the HyperAI orchestrator, granting unrestricted access to HyperAI instances.

#### Configuration Requirements
-When configuring the Service Connector, provide:
-- At least one `hostname`
-- `username` for login
-- Optional `ssh_passphrase`
+- At least one `hostname` and a `username` must be provided for login.
+- An optional `ssh_passphrase` can be included.

-You can:
+#### Usage Options
1. Create a separate service connector for each HyperAI instance with different SSH keys.
-2. Use a single SSH key for multiple instances, selecting the instance when creating the HyperAI orchestrator component.
+2. Use a single SSH key for multiple instances, selecting the instance during HyperAI orchestrator component creation.
+
+#### Prerequisites
+To use the HyperAI Service Connector, install the HyperAI integration:
+```shell
+$ zenml integration install hyperai
+```
+
+#### Resource Types
+The connector supports a single resource type: HyperAI instances.
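+
+#### Example Registration
+A sketch of how the configuration fields above might translate into a registration command. The flag names (`--hostname`, `--username`, `--ssh_passphrase`) and the `rsa-key` auth method identifier are inferred from this summary rather than verified against the CLI, so treat them as assumptions and consult `zenml service-connector register --help` for the authoritative options:
+```shell
+# Hypothetical sketch: register a connector for a single HyperAI instance,
+# authenticating with an RSA key (the key material itself is supplied via
+# the connector's key configuration field).
+zenml service-connector register hyperai-instance-1 --type hyperai \
+    --auth-method rsa-key \
+    --hostname=<INSTANCE_IP_OR_DNS> \
+    --username=<SSH_USER> \
+    --ssh_passphrase=<OPTIONAL_PASSPHRASE>
+```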
-#### Auto-configuration Note -The Service Connector does not support auto-discovery of authentication credentials. Feedback regarding this feature can be provided via [Slack](https://zenml.io/slack) or by creating an issue on [GitHub](https://github.com/zenml-io/zenml/issues). +#### Auto-Configuration +The Service Connector does not support auto-discovery of authentication credentials from HyperAI instances. Feedback can be provided via [Slack](https://zenml.io/slack) or by creating an issue on [GitHub](https://github.com/zenml-io/zenml/issues). -#### Stack Component Usage -The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances. +#### Stack Components Usage +The HyperAI Service Connector is utilized by the HyperAI Orchestrator for deploying pipeline runs to HyperAI instances. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/aws-service-connector.md === -### Summary of AWS Service Connector Documentation for ZenML +### Summary of AWS Service Connector Documentation -The **AWS Service Connector** in ZenML enables authentication and access to various AWS resources such as S3 buckets, EKS clusters, and ECR registries. It supports multiple authentication methods, including long-lived AWS secret keys, IAM roles, STS tokens, and implicit authentication. The connector allows for the generation of temporary STS tokens with minimized permissions for enhanced security and can auto-configure credentials from the AWS CLI. +The **ZenML AWS Service Connector** enables authentication and access to AWS resources (S3, EKS, ECR). It supports various authentication methods, including long-lived AWS secret keys, IAM roles, STS tokens, and implicit authentication. The connector generates temporary STS tokens scoped to minimum permissions for resource access and can auto-configure credentials from the AWS CLI. #### Key Features: -- **Resource Types Supported**: - - **Generic AWS Resource**: Connects to any AWS service using a pre-authenticated boto3 session. - - **S3 Bucket**: Requires specific IAM permissions (e.g., `s3:ListBucket`, `s3:GetObject`, etc.) for access. - - **EKS Cluster**: Requires permissions like `eks:ListClusters` and `eks:DescribeCluster`. - - **ECR Registry**: Requires permissions such as `ecr:DescribeRepositories` and `ecr:PutImage`. +- **Resource Types**: + - **Generic AWS Resource**: Connects to any AWS service using a pre-configured boto3 session. + - **S3 Bucket**: Requires specific IAM permissions (e.g., `s3:ListBucket`, `s3:GetObject`). + - **EKS Cluster**: Requires permissions like `eks:ListClusters`. + - **ECR Registry**: Requires permissions such as `ecr:DescribeRepositories`. - **Authentication Methods**: - - **Implicit Authentication**: Uses environment variables or local AWS CLI configurations. + - **Implicit Authentication**: Uses environment variables or IAM roles. Requires enabling via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. - **AWS Secret Key**: Long-lived credentials, not recommended for production. - - **STS Token**: Temporary tokens that require regular updates. - - **IAM Role**: Generates temporary STS credentials by assuming a role. - - **Session Token**: Generates temporary session tokens for IAM users. - - **Federation Token**: Generates temporary tokens for federated users. + - **AWS STS Token**: Temporary tokens, must be refreshed regularly. + - **AWS IAM Role**: Assumes a role to generate temporary STS tokens. 
+ - **AWS Session Token**: Generates temporary tokens for IAM users. + - **AWS Federation Token**: Generates tokens for federated users. #### Configuration and Usage: -1. **Prerequisites**: Install ZenML with AWS integration: - ```bash - pip install "zenml[connectors-aws]" - ``` - or - ```bash - zenml integration install aws - ``` - -2. **Registering a Service Connector**: - ```bash - AWS_PROFILE=your_profile zenml service-connector register your-connector-name --type aws --auth-method implicit --region=us-east-1 - ``` - -3. **List Available Resource Types**: - ```bash - zenml service-connector list-types --type aws - ``` - -4. **Auto-Configuration**: Automatically fetch credentials from the AWS CLI: - ```bash - AWS_PROFILE=your_profile zenml service-connector register your-connector-name --type aws --auto-configure - ``` +- **Prerequisites**: Install ZenML AWS integration via: + ```bash + pip install "zenml[connectors-aws]" + ``` + or + ```bash + zenml integration install aws + ``` -5. **Connecting Stack Components**: Connect various stack components like S3 Artifact Store, EKS Orchestrator, and ECR Container Registry to the registered service connector. +- **Registering Service Connector**: + ```bash + AWS_PROFILE=connectors zenml service-connector register aws-implicit --type aws --auth-method implicit --region=us-east-1 + ``` -#### Example Commands: -- **Verify Access to Resources**: - ```bash - zenml service-connector verify your-connector-name --resource-type s3-bucket - ``` +- **Verifying Access**: + ```bash + AWS_PROFILE=connectors zenml service-connector verify aws-implicit --resource-type s3-bucket + ``` -- **Register an S3 Artifact Store**: - ```bash - zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://your-bucket - ``` +- **Local Client Configuration**: Configure local AWS CLI, Kubernetes, and Docker clients with credentials from the AWS Service Connector. -- **Connect an Orchestrator**: +#### Example Workflow: +1. **Setup AWS CLI** with valid credentials. +2. **Register Service Connector**: ```bash - zenml orchestrator register eks-zenml --flavor kubernetes --kubernetes_namespace=your-namespace + AWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure ``` +3. **Connect Stack Components** (S3, EKS, ECR) using the registered connector. +4. **Run a Pipeline** to validate the setup. -- **Run a Simple Pipeline**: - ```python - from zenml import pipeline, step - - @step - def step_1() -> str: - return "world" - - @step - def step_2(input_one: str) -> None: - print(f"Hello {input_one}!") - - @pipeline - def my_pipeline(): - output = step_1() - step_2(input_one=output) - - if __name__ == "__main__": - my_pipeline() - ``` +#### Important Notes: +- The Service Connector will not function if MFA is enabled on the AWS CLI profile. +- Local AWS CLI profiles are created based on the Service Connector UUID. +- Credentials issued by the Service Connector have a short lifetime and need regular refreshing. -This concise guide captures the essential information about configuring and using the AWS Service Connector with ZenML, ensuring no critical details are omitted. +This documentation provides a comprehensive guide for integrating ZenML with AWS services, ensuring secure and efficient resource management. 
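+
+As a concrete illustration of step 3 in the example workflow above, the following minimal sketch connects stack components through the `aws-demo-multi` connector; the component names and the `zenfiles` bucket are placeholders:
+```bash
+# Register an S3 Artifact Store and route its access through the connector.
+zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles
+zenml artifact-store connect s3-zenfiles --connector aws-demo-multi
+
+# The same multi-type connector can back other components, e.g. an
+# EKS-based orchestrator registered with the kubernetes flavor.
+zenml orchestrator register eks-demo --flavor kubernetes
+zenml orchestrator connect eks-demo --connector aws-demo-multi
+```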
================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/docker-service-connector.md === -# Docker Service Connector Overview +### Docker Service Connector Overview +The ZenML Docker Service Connector facilitates authentication with Docker/OCI container registries and manages Docker clients. It provides pre-authenticated Python clients for Stack Components linked to it. -The ZenML Docker Service Connector enables authentication with Docker or OCI container registries and manages Docker clients for these registries. It provides pre-authenticated Python clients to Stack Components linked to it. - -## Command to List Connector Types +#### Command to List Docker Service Connector Types ```shell zenml service-connector list-types --type docker ``` - -### Output Example +**Output Example:** ``` ┏━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓ ┃ NAME │ TYPE │ RESOURCE TYPES │ AUTH METHODS │ LOCAL │ REMOTE ┃ @@ -10937,24 +10916,21 @@ zenml service-connector list-types --type docker ┗━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛ ``` -## Prerequisites +### Prerequisites - No additional Python packages are needed; all are included in the ZenML package. - Docker must be installed in environments where images are built and pushed. -## Resource Types -The connector supports Docker/OCI container registries identified by the `docker-registry` Resource Type. Formats include: +### Resource Types +The connector supports Docker/OCI container registries, identified by the `docker-registry` resource type. Formats: - DockerHub: `docker.io` or `https://index.docker.io/v1/<repository-name>` - Generic OCI: `https://host:port/<repository-name>` -## Authentication Methods -Authentication uses a username and password or access token, with API tokens recommended over passwords. - -### Registering a DockerHub Connector +### Authentication Methods +Authentication is via username and password or access token, with API tokens recommended. Example command to register: ```sh zenml service-connector register dockerhub --type docker -in ``` - -### Example Command Output +**Example Command Output:** ``` Please enter a name for the service connector [dockerhub]: ... @@ -10966,187 +10942,140 @@ Successfully registered service connector `dockerhub` with access to: ┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ ``` -**Note:** Credentials will be distributed directly to clients; short-lived credentials are not supported. +**Warning:** Credentials are distributed directly to clients and not short-lived. -## Auto-configuration -The connector does not auto-discover or extract credentials from local Docker clients. Feedback is welcome for this feature. +### Auto-configuration +The connector does not support auto-discovery of credentials from local Docker clients. Feedback is welcome via [Slack](https://zenml.io/slack) or [GitHub](https://github.com/zenml-io/zenml/issues). -## Local Client Provisioning -To configure the local Docker client with credentials: +### Local Client Provisioning +To configure the local Docker client: ```sh zenml service-connector login dockerhub ``` - -### Example Command Output +**Example Command Output:** ``` WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json. ... -The 'dockerhub' Docker Service Connector was used to configure the local client. 
+The 'dockerhub' Docker Service Connector was used to configure the local Docker client. ``` -## Stack Components Usage -The Docker Service Connector can be utilized by all Container Registry stack components to authenticate with remote registries, allowing image building and publishing without explicit Docker credentials in the environment. +### Stack Components Usage +The Docker Service Connector can be utilized by all Container Registry stack components for authentication, enabling image building and publishing to private registries without explicit Docker credentials in the environment. -**Warning:** ZenML currently does not support automatic Docker credential configuration in container runtimes like Kubernetes. This feature will be added in a future release. +**Warning:** ZenML does not currently support automatic Docker credential configuration in container runtimes like Kubernetes. This feature will be added in a future release. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/azure-service-connector.md === -### Azure Service Connector Overview - -The ZenML Azure Service Connector enables authentication and access to Azure resources like Blob storage, AKS Kubernetes clusters, and ACR container registries. It supports automatic credential configuration via the Azure CLI and specialized authentication for various Azure services. - -### Prerequisites - -- Install the Azure Service Connector: - - `pip install "zenml[connectors-azure]"` for the connector only. - - `zenml integration install azure` for the entire Azure integration. -- Azure CLI setup is recommended for quick configuration, but not mandatory. - -### Resource Types +### Azure Service Connector Documentation Summary -1. **Generic Azure Resource**: Connects to any Azure service using generic credentials. -2. **Azure Blob Storage Container**: Requires specific IAM permissions (e.g., `Storage Blob Data Contributor`). Resource names can be specified in different formats (URI or name). -3. **AKS Kubernetes Cluster**: Requires permissions to list AKS clusters. Resource names can include resource group details. -4. **ACR Container Registry**: Requires permissions to pull/push images and list registries. Resource names can be specified as URIs or names. +The **ZenML Azure Service Connector** enables authentication and access to Azure resources such as Blob storage, AKS Kubernetes clusters, and ACR container registries. It supports automatic credential configuration via the Azure CLI and can manage specialized authentication for various Azure services. -### Authentication Methods +#### Key Features: +- **Resource Types**: + - **Generic Azure Resource**: Connects to any Azure service using generic azure-identity credentials. + - **Azure Blob Storage**: Requires IAM permissions for read/write access and listing storage accounts/containers. + - **AKS Kubernetes Cluster**: Requires permissions to list AKS clusters and fetch credentials. + - **ACR Container Registry**: Requires permissions to pull/push images and list registries. +#### Authentication Methods: 1. **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Requires explicit enabling due to security risks. - - Example command: + - Example Command: ```sh zenml service-connector register azure-implicit --type azure --auth-method implicit --auto-configure ``` -2. **Service Principal**: Uses Azure client ID and secret for authentication. 
Requires prior creation of an Azure service principal. - - Example command: +2. **Azure Service Principal**: Uses client ID and secret for authentication. + - Example Command: ```sh zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=<tenant_id> --client_id=<client_id> --client_secret=<client_secret> ``` -3. **Access Token**: Uses temporary tokens from Azure CLI, suitable for short-term access. Not recommended for blob storage resources. - -### Local Client Provisioning - -The local Azure CLI, Kubernetes `kubectl`, and Docker CLI can be configured with credentials from the Azure Service Connector. - -- **Kubernetes CLI Configuration**: - ```sh - zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id=<cluster_id> - ``` - -- **Docker CLI Configuration**: - ```sh - zenml service-connector login azure-service-principal --resource-type docker-registry --resource-id=<registry_id> - ``` - -### Stack Components Integration - -The Azure Service Connector can connect various Stack Components, such as: -- Azure Artifact Store to Blob storage. -- Kubernetes Orchestrator to AKS clusters. -- Container Registry to ACR. - -### Example Workflow +3. **Azure Access Token**: Uses temporary tokens, not suitable for long-term use or Azure Blob storage. + - Example Command: + ```sh + zenml service-connector register azure-session-token --type azure --auto-configure + ``` +#### Configuration Steps: 1. **Install Azure Integration**: ```sh - zenml integration install -y azure - ``` - -2. **Register Service Connector**: - ```sh - zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=<tenant_id> --client_id=<client_id> --client_secret=<client_secret> - ``` - -3. **Connect Azure Blob Storage**: - ```sh - zenml artifact-store register azure-demo --flavor azure --path=az://<container_name> - zenml artifact-store connect azure-demo --connector azure-service-principal + pip install "zenml[connectors-azure]" ``` -4. **Connect AKS Orchestrator**: - ```sh - zenml orchestrator register aks-demo-cluster --flavor kubernetes --kubernetes_namespace=<namespace> - zenml orchestrator connect aks-demo-cluster --connector azure-service-principal - ``` +2. **Register Service Connector**: Use the appropriate authentication method based on your requirements. -5. **Connect ACR**: - ```sh - zenml container-registry register acr-demo-registry --flavor azure --uri=<registry_uri> - zenml container-registry connect acr-demo-registry --connector azure-service-principal - ``` +3. **Connect Stack Components**: Connect various components like Artifact Store, Orchestrator, and Container Registry to Azure resources using the registered service connector. -6. **Register and Set Active Stack**: +#### Example Workflow: +1. **Register Service Connector**: ```sh - zenml stack register <stack_name> -a azure-demo -o aks-demo-cluster -c acr-demo-registry -i local --set + zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=<tenant_id> --client_id=<client_id> --client_secret=<client_secret> ``` -7. **Run a Simple Pipeline**: - ```python - from zenml import pipeline, step - - @step - def step_1() -> str: - return "world" - - @step(enable_cache=False) - def step_2(input_one: str, input_two: str) -> None: - print(f"{input_one} {input_two}") +2. 
**Register and Connect Stack Components**: + - **Artifact Store**: + ```sh + zenml artifact-store register azure-demo --flavor azure --path=az://demo-zenmlartifactstore + zenml artifact-store connect azure-demo --connector azure-service-principal + ``` + - **Kubernetes Orchestrator**: + ```sh + zenml orchestrator register aks-demo-cluster --flavor kubernetes --kubernetes_namespace=zenml-workloads + zenml orchestrator connect aks-demo-cluster --connector azure-service-principal + ``` + - **Container Registry**: + ```sh + zenml container-registry register acr-demo-registry --flavor azure --uri=demozenmlcontainerregistry.azurecr.io + zenml container-registry connect acr-demo-registry --connector azure-service-principal + ``` - @pipeline - def my_pipeline(): - output_step_one = step_1() - step_2(input_one="hello", input_two=output_step_one) +3. **Run a Pipeline**: Create a simple pipeline to validate the setup. - if __name__ == "__main__": - my_pipeline() - ``` +#### Important Notes: +- **Permissions**: Ensure the Azure service principal has the necessary permissions for the resources it will access. +- **Auto-configuration Limitations**: Only supports temporary tokens and does not work with Azure Blob storage. +- **Security**: Implicit authentication methods are disabled by default due to potential security risks. -This concise summary captures the essential details of configuring and using the Azure Service Connector with ZenML, including commands and examples for practical implementation. +This summary captures the essential technical details and commands for configuring and using the ZenML Azure Service Connector effectively. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/kubernetes-service-connector.md === -### Kubernetes Service Connector Overview - -The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to any generic cluster using pre-authenticated Kubernetes Python clients. It also supports configuring the local Kubernetes CLI (`kubectl`). +### Kubernetes Service Connector -### Prerequisites +The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to generic clusters via pre-authenticated Kubernetes Python clients and local `kubectl` configuration. -- Install the Kubernetes Service Connector: - - For only the connector: +#### Prerequisites +- Install the connector: + - For only the Kubernetes Service Connector: ```shell pip install "zenml[connectors-kubernetes]" ``` - - For the entire Kubernetes integration: + - For the entire Kubernetes ZenML integration: ```shell zenml integration install kubernetes ``` -- Local `kubectl` configuration is not required for accessing clusters through the connector. +- Local `kubectl` configuration is not required for accessing Kubernetes clusters. -### Resource Types - -- Supports generic Kubernetes clusters identified by the `kubernetes-cluster` Resource Type. - -### Authentication Methods +#### Resource Types +- Supports only `kubernetes-cluster` resource type, identified by a user-friendly name during registration. +#### Authentication Methods 1. Username and password (not recommended for production). 2. Authentication token (with or without client certificates). For local K3D clusters, an empty token can be used. -**Warning**: The connector does not generate short-lived credentials; configured credentials are directly used for authentication. 
Use API tokens with client certificates when possible. - -### Auto-configuration - -Fetch credentials from the local `kubectl` during registration. Example command to register a service connector with auto-configuration: +**Warning:** Credentials configured in the Service Connector are directly distributed to clients; use API tokens with client certificates when possible. +#### Auto-configuration +Fetch credentials from the local `kubectl` during registration: ```sh zenml service-connector register kube-auto --type kubernetes --auto-configure ``` -**Example Output**: +**Example Output:** ``` Successfully registered service connector `kube-auto` with access to resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ @@ -11156,46 +11085,41 @@ Successfully registered service connector `kube-auto` with access to resources: ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ ``` -### Describe Service Connector - -To view details of the registered service connector: - +#### Describe Service Connector +To view details of the registered connector: ```sh zenml service-connector describe kube-auto ``` -**Example Output**: +**Example Output:** ``` -Service connector 'kube-auto' of type 'kubernetes' with id '4315e8eb...' is owned by user 'default'. +Service connector 'kube-auto' of type 'kubernetes' with ID '4315e8eb-fcbd-4938-a4d7-a9218ab372a1' is owned by user 'default'. ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY │ VALUE ┃ ┠──────────────────┼──────────────────────────────────────┨ -┃ ID │ 4315e8eb... ┃ +┃ ID │ 4315e8eb-fcbd-4938-a4d7-a9218ab372a1 ┃ ┃ NAME │ kube-auto ┃ ┃ AUTH METHOD │ token ┃ -┃ RESOURCE TYPES │ 🌀 kubernetes-cluster ┃ ┃ RESOURCE NAME │ 35.175.95.223 ┃ ┃ OWNER │ default ┃ -┃ CREATED_AT │ 2023-05-16 21:45:33.224740 ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ``` -### Local Client Provisioning - -To configure the local Kubernetes client with credentials: +**Note:** Credentials may have a limited lifetime, especially with third-party authentication providers. +#### Local Client Provisioning +Configure the local Kubernetes client with: ```sh zenml service-connector login kube-auto ``` -**Example Output**: +**Example Output:** ``` Updated local kubeconfig with the cluster details. Current context set to '35.185.95.223'. ``` -### Stack Components Use - -The Kubernetes Service Connector is utilized in Orchestrator and Model Deployer stack components, allowing management of Kubernetes workloads without explicit configuration of `kubectl` contexts and credentials in the target environment. +#### Stack Components Use +The Kubernetes Service Connector is utilized in Orchestrator and Model Deployer stack components, allowing management of Kubernetes workloads without explicit `kubectl` configuration in the target environment. ================================================== @@ -11204,25 +11128,25 @@ The Kubernetes Service Connector is utilized in Orchestrator and Model Deployer # Custom Stack Component Flavor in ZenML ## Overview -ZenML allows for the creation of custom stack component flavors to tailor MLOps solutions. This guide explains the concept of flavors, the core abstractions involved, and how to implement a custom flavor. +ZenML allows for the creation of custom stack component flavors to tailor MLOps solutions. This guide explains component flavors, core abstractions, and how to implement a custom stack component flavor. ## Component Flavors - **Component Type**: Broad category defining functionality (e.g., `artifact_store`). 
-- **Flavor**: Specific implementation of a component type (e.g., `local`, `s3`). +- **Flavors**: Specific implementations of a component type (e.g., `local`, `s3`). ## Core Abstractions -1. **StackComponent**: Defines core functionality. Example: +1. **StackComponent**: Defines core functionality. ```python from zenml.stack import StackComponent class BaseArtifactStore(StackComponent): @abstractmethod def open(self, path, mode="r"): - ... + pass @abstractmethod def exists(self, path): - ... + pass ``` 2. **StackComponentConfig**: Configures a stack component instance, separating configuration from implementation. @@ -11234,9 +11158,10 @@ ZenML allows for the creation of custom stack component flavors to tailor MLOps SUPPORTED_SCHEMES: ClassVar[Set[str]] ``` -3. **Flavor**: Combines `StackComponent` and `StackComponentConfig`, defining the flavor's name and type. +3. **Flavor**: Combines `StackComponent` implementation with `StackComponentConfig`. ```python from zenml.stack import Flavor + from zenml.enums import StackComponentType class LocalArtifactStoreFlavor(Flavor): @property @@ -11256,9 +11181,9 @@ ZenML allows for the creation of custom stack component flavors to tailor MLOps return LocalArtifactStore ``` -## Implementing a Custom Flavor -### Configuration Class -Define configuration values and supported schemes: +## Implementing a Custom Stack Component Flavor +### Step 1: Configuration Class +Define the configuration class with required variables. ```python from zenml.artifact_stores import BaseArtifactStoreConfig from zenml.utils.secret_utils import SecretField @@ -11267,11 +11192,14 @@ class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig): SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"} key: Optional[str] = SecretField(default=None) secret: Optional[str] = SecretField(default=None) - ... + token: Optional[str] = SecretField(default=None) + client_kwargs: Optional[Dict[str, Any]] = None + config_kwargs: Optional[Dict[str, Any]] = None + s3_additional_kwargs: Optional[Dict[str, Any]] = None ``` -### Implementation Class -Implement the abstract methods: +### Step 2: Implementation Class +Implement the abstract methods using the S3 file system. ```python import s3fs from zenml.artifact_stores import BaseArtifactStore @@ -11281,9 +11209,15 @@ class MyS3ArtifactStore(BaseArtifactStore): @property def filesystem(self) -> s3fs.S3FileSystem: - if self._filesystem: - return self._filesystem - self._filesystem = s3fs.S3FileSystem(...) + if not self._filesystem: + self._filesystem = s3fs.S3FileSystem( + key=self.config.key, + secret=self.config.secret, + token=self.config.token, + client_kwargs=self.config.client_kwargs, + config_kwargs=self.config.config_kwargs, + s3_additional_kwargs=self.config.s3_additional_kwargs, + ) return self._filesystem def open(self, path, mode="r"): @@ -11293,8 +11227,8 @@ class MyS3ArtifactStore(BaseArtifactStore): return self.filesystem.exists(path=path) ``` -### Flavor Class -Combine the configuration and implementation classes: +### Step 3: Define the Flavor +Combine the configuration and implementation classes. ```python from zenml.artifact_stores import BaseArtifactStoreFlavor @@ -11305,89 +11239,81 @@ class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor): @property def implementation_class(self): + from ... import MyS3ArtifactStore return MyS3ArtifactStore @property def config_class(self): + from ... 
import MyS3ArtifactStoreConfig return MyS3ArtifactStoreConfig ``` ## Registering the Flavor -Register the custom flavor using the ZenML CLI: +Use the ZenML CLI to register the new flavor: ```shell -zenml artifact-store flavor register flavors.my_flavor.MyS3ArtifactStoreFlavor +zenml artifact-store flavor register <path.to.MyS3ArtifactStoreFlavor> ``` ## Usage -Use the custom flavor in your stacks: +Register the artifact store and stack: ```shell zenml artifact-store register <ARTIFACT_STORE_NAME> --flavor=my_s3_artifact_store --path='some-path' zenml stack register <STACK_NAME> --artifact-store <ARTIFACT_STORE_NAME> ``` ## Best Practices -- Execute `zenml init` consistently. +- Execute `zenml init` consistently at the root of your repository. - Test flavors thoroughly before production use. -- Keep code clean and well-documented. +- Keep flavor code clean and well-documented. - Use existing flavors as references for new implementations. -## Further Learning -For specific stack component types, refer to the following: -- [Orchestrator](../../../component-guide/orchestrators/custom.md) -- [Artifact Store](../../../component-guide/artifact-stores/custom.md) -- [Container Registry](../../../component-guide/container-registries/custom.md) -- [Step Operator](../../../component-guide/step-operators/custom.md) -- [Model Deployer](../../../component-guide/model-deployers/custom.md) -- [Feature Store](../../../component-guide/feature-stores/custom.md) -- [Experiment Tracker](../../../component-guide/experiment-trackers/custom.md) -- [Alerter](../../../component-guide/alerters/custom.md) -- [Annotator](../../../component-guide/annotators/custom.md) -- [Data Validator](../../../component-guide/data-validators/custom.md) +## Additional Resources +For more specific stack component types, refer to the links provided in the documentation. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/README.md === -# Managing Stacks & Components +### Managing Stacks & Components -## What is a Stack? -A **stack** in the ZenML framework defines the infrastructure and tooling for pipeline execution. It consists of various components, each serving a specific function, such as: +#### What is a Stack? +A **stack** in ZenML represents the configuration of infrastructure and tooling for executing pipelines. It consists of various components, each serving a specific function, such as: - **Container Registry**: For managing images. - **Kubernetes Cluster**: As an orchestrator. - **Artifact Store**: For storing artifacts. - **Experiment Tracker**: Like MLflow for tracking experiments. -## Organizing Execution Environments -ZenML allows running pipelines across multiple stacks, facilitating testing in different environments. For instance: -1. Data scientists can experiment locally. -2. Move to a staging cloud for advanced testing. -3. Deploy to a production stack when ready. +#### Organizing Execution Environments +ZenML allows running pipelines across multiple stacks, facilitating testing in different environments: +- **Local Development**: Data scientists can experiment locally. +- **Staging**: Transition to a cloud environment for advanced testing. +- **Production**: Deploy on a production-grade stack. -This separation helps: -- Prevent accidental production deployments. -- Reduce costs by using less powerful resources in staging. -- Control access by assigning permissions to specific stacks. 
+**Benefits of Separate Stacks**: +- Prevents accidental production deployments. +- Reduces costs by using less powerful resources in staging. +- Controls access to environments based on user permissions. -## Managing Credentials -Stack components often require credentials for infrastructure interaction. The recommended method in ZenML is using **Service Connectors**, which abstract sensitive information. +#### Managing Credentials +Most stack components require credentials to interact with infrastructure. ZenML recommends using **Service Connectors** to manage these credentials securely. -### Recommended Roles -- Limit Service Connector creation to personnel with direct cloud resource access to minimize credential leakage, enable instant revocation of compromised credentials, and simplify auditing. +**Recommended Roles**: +- Limit Service Connector creation to individuals with direct access to cloud resources to minimize credential leaks, enable instant revocation, and simplify auditing. -### Recommended Workflow +**Recommended Workflow**: 1. Designate a small group to create Service Connectors. -2. Use one connector for development/staging. -3. Create a separate connector for production to avoid accidental resource usage. - -## Deploying and Managing Stacks -Deploying an MLOps stack involves complexities: -- Tools have specific requirements (e.g., Kubernetes for Kubeflow). -- Setting default infrastructure parameters can be challenging. -- Standard installations may require additional configurations for security. -- Ensure all components have the necessary permissions to communicate. -- Clean up resources post-experimentation to avoid unnecessary costs. - -### Documentation Links +2. Create a connector for development/staging. +3. Create a separate connector for production to ensure resource safety. + +#### Deploying and Managing Stacks +Deploying MLOps stacks can be complex due to: +- Specific requirements for tools (e.g., Kubernetes for Kubeflow). +- Difficulty in setting default infrastructure parameters. +- Potential issues with standard tool installations. +- Necessary permissions for components to communicate. +- Challenges in resource cleanup post-experimentation. + +ZenML aims to simplify the provisioning and configuration of stacks. Key documentation includes: - [Deploy a Cloud Stack](./deploy-a-cloud-stack.md) - [Register a Cloud Stack](./register-a-cloud-stack.md) - [Deploy with Terraform](./deploy-a-cloud-stack-with-terraform.md) @@ -11395,13 +11321,11 @@ Deploying an MLOps stack involves complexities: - [Reference Secrets in Configuration](./reference-secrets-in-stack-configuration.md) - [Implement a Custom Stack Component](./implement-a-custom-stack-component.md) -This section provides guidance on provisioning, configuring, and extending stacks and components in ZenML, simplifying the process of running ML pipelines. - ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-requirements.md === -### Summary of Export Stack Requirements +### Export Stack Requirements To obtain the `pip` requirements for a specific stack, use the following CLI command: @@ -11410,7 +11334,7 @@ zenml stack export-requirements <STACK-NAME> --output-file stack_requirements.tx pip install -r stack_requirements.txt ``` -This command exports the requirements to a file named `stack_requirements.txt`, which can then be used to install the necessary packages with `pip`. 
+This command exports the requirements to a file named `stack_requirements.txt`, which can then be used to install the necessary packages. ================================================== @@ -11418,19 +11342,21 @@ This command exports the requirements to a file named `stack_requirements.txt`, ### Summary: Referencing Secrets in Stack Configuration -When configuring stack components that require sensitive information (e.g., passwords, tokens), ZenML allows you to reference secrets securely instead of hardcoding values. Use the syntax `{{<SECRET_NAME>.<SECRET_KEY>}}` to reference a secret. +In ZenML, sensitive information like passwords or tokens can be referenced securely in stack components using secret references. This is done by specifying the attribute with the syntax: `{{<SECRET_NAME>.<SECRET_KEY>}}`. -#### Example: Registering and Using a Secret +#### Example Usage -**Register a Secret:** +**Registering a Secret:** ```shell +# Create a secret named `mlflow_secret` with username and password zenml secret create mlflow_secret \ --username=admin \ --password=abc123 ``` -**Reference in Experiment Tracker:** +**Referencing the Secret:** ```shell +# Register an experiment tracker with secret references zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ @@ -11440,14 +11366,15 @@ zenml experiment-tracker register mlflow \ #### Secret Validation -ZenML validates that all referenced secrets and keys exist before running a pipeline to prevent failures due to missing secrets. The validation behavior can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: +ZenML validates the existence of referenced secrets and keys before running a pipeline to prevent runtime failures. The validation can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: + - `NONE`: Disables validation. - `SECRET_EXISTS`: Validates only the existence of secrets. -- `SECRET_AND_KEY_EXISTS`: (default) Validates both secret existence and key-value pairs. +- `SECRET_AND_KEY_EXISTS`: (default) Validates both the existence of secrets and key-value pairs. #### Fetching Secret Values in Steps -Using centralized secrets management, secrets can be accessed directly in steps via the ZenML `Client` API: +When using centralized secrets management, secrets can be accessed directly in steps via the ZenML `Client` API: ```python from zenml import step @@ -11455,6 +11382,7 @@ from zenml.client import Client @step def secret_loader() -> None: + """Load a secret from the server.""" secret = Client().get_secret(<SECRET_NAME>) authenticate_to_some_api( username=secret.secret_values["username"], @@ -11463,156 +11391,140 @@ def secret_loader() -> None: ``` ### Additional Resources -- **Interact with Secrets**: Learn to create, list, and delete secrets using the ZenML CLI and Python SDK. +- [Interact with secrets](../../../how-to/project-setup-and-management/interact-with-secrets.md): Instructions for creating, listing, and deleting secrets using ZenML CLI and Python SDK. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md === -# Deploy a Cloud Stack with a Single Click - -In ZenML, a **stack** represents your infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure components and defining them in ZenML, which can be complex, especially in remote settings. 
ZenML offers a **1-click deployment feature** that simplifies this process by allowing you to deploy infrastructure on your chosen cloud provider with a single action. +### Deploy a Cloud Stack with a Single Click -## Getting Started +ZenML's **stack** is a key concept representing your infrastructure configuration. Traditionally, creating a stack involves deploying infrastructure components and defining them in ZenML, which can be complex and time-consuming. To simplify this, ZenML offers a **1-click deployment feature** that allows you to deploy infrastructure on your chosen cloud provider effortlessly. -To use the 1-click deployment tool, you need a deployed ZenML instance (not a local server). Follow the setup instructions [here](../../../getting-started/deploying-zenml/README.md). +#### Alternatives for Infrastructure Management +- For more control, consider using **Terraform modules** for infrastructure as code. +- If you already have infrastructure deployed, use the **stack wizard** to register your stack. ### Using the 1-Click Deployment Tool -You can deploy through the **Dashboard** or the **CLI**. - -#### Dashboard Steps +1. **Prerequisites**: Ensure you have a deployed ZenML instance (not local). +2. **Accessing the Tool**: Use either the **dashboard** or **CLI**. -1. Navigate to the stacks page and click "+ New Stack". -2. Select "New Infrastructure". -3. Choose your cloud provider (AWS, GCP, Azure) and configure the stack. +#### Dashboard Deployment Steps +- Navigate to the stacks page and click "+ New Stack". +- Select "New Infrastructure" and choose your cloud provider (AWS, GCP, Azure). -**AWS Deployment:** -- Select region and stack name. -- Click "Deploy in AWS" to be redirected to AWS CloudFormation. +**AWS Deployment**: +- Select a region and stack name. +- Complete configuration and click "Deploy in AWS" to access AWS CloudFormation. - Log in, review, and create the stack. -**GCP Deployment:** -- Select region and stack name. -- Click "Deploy in GCP" to open a Cloud Shell session. -- Review the repository, trust it, and authenticate. -- Follow prompts to configure and deploy using Deployment Manager. - -**Azure Deployment:** -- Select location and stack name. -- Click "Deploy in Azure" to open a Cloud Shell session. -- Paste the provided `main.tf` content and run `terraform init --upgrade` and `terraform apply`. +**GCP Deployment**: +- Select a region and stack name. +- Click "Deploy in GCP" to start a Cloud Shell session. +- Review repository contents, authenticate, and configure deployment. +- Run the provided script to deploy resources and register the stack. -#### CLI Command +**Azure Deployment**: +- Choose a location and stack name. +- Review resources, then click "Deploy in Azure" to open a Cloud Shell. +- Paste the `main.tf` content and run `terraform init --upgrade` and `terraform apply`. -To create a remote stack via CLI, use: +#### CLI Deployment Command ```shell zenml stack deploy -p {aws|gcp|azure} ``` -### Deployment Overview +### Infrastructure Overview by Provider -#### AWS Resources -- S3 bucket (Artifact Store) -- ECR (Container Registry) -- CloudBuild (Image Builder) -- IAM roles for SageMaker -- Necessary permissions for resource access. +**AWS**: +- Resources: S3 bucket (Artifact Store), ECR (Container Registry), CloudBuild project (Image Builder), IAM roles. +- Permissions: Various S3, ECR, CloudBuild, and SageMaker permissions. 
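+
+For example, an end-to-end CLI deployment on AWS might look like the following sketch; the stack name is chosen interactively during deployment, and `run.py` stands in for your pipeline entrypoint:
+```shell
+# Sketch: deploy an AWS stack with the 1-click tool, activate it, and
+# run a pipeline against the newly provisioned infrastructure.
+zenml stack deploy -p aws
+zenml stack set <DEPLOYED_STACK_NAME>
+python run.py
+```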
-#### GCP Resources -- GCS bucket (Artifact Store) -- GCP Artifact Registry (Container Registry) -- Vertex AI (Orchestrator and Step Operator) -- GCP Service Account with necessary permissions. +**GCP**: +- Resources: GCS bucket (Artifact Store), Artifact Registry (Container Registry), Vertex AI (Orchestrator), GCP Service Account. +- Permissions: Roles for GCS, Artifact Registry, Vertex AI, and Cloud Build. -#### Azure Resources -- Resource Group for all resources -- Azure Storage Account and Blob Storage (Artifact Store) -- Azure Container Registry (Container Registry) -- AzureML Workspace (Orchestrator and Step Operator) -- Azure Service Principal with necessary permissions. - -### Conclusion +**Azure**: +- Resources: Resource Group, Storage Account (Artifact Store), Container Registry, AzureML Workspace, Service Principal. +- Permissions: Roles for Storage Account, Container Registry, and AzureML Workspace. -With this 1-click deployment feature, you can easily set up a cloud stack and start running your pipelines remotely. +With this feature, you can deploy a cloud stack with a single click and start running your pipelines in a remote environment. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md === -### Summary of ZenML Stack Wizard Documentation +### Summary of ZenML Stack Registration Documentation -**Overview**: The ZenML stack represents the configuration of your infrastructure. Registering a cloud stack typically involves deploying infrastructure and defining stack components with authentication, which can be complex. The **Stack Wizard** simplifies this by allowing users to register a ZenML cloud stack using existing infrastructure. +**Overview**: ZenML's stack represents the configuration of your infrastructure. The stack wizard simplifies the process of registering a ZenML cloud stack using existing infrastructure, which can be complex and time-consuming. **Deployment Options**: -- If infrastructure is not deployed, use the **1-click deployment tool** or **Terraform modules** for custom management. +- If infrastructure isn't deployed, use the **1-click deployment tool** or **Terraform modules** for more control. ### Using the Stack Wizard -**Access**: -- **Dashboard**: Navigate to the stacks page and click "+ New Stack" to select "Use existing Cloud". -- **CLI**: Use the command: - ```shell - zenml stack register <STACK_NAME> -p {aws|gcp|azure} - ``` +**Access**: Available via CLI and dashboard. + +#### Dashboard Steps: +1. Navigate to the stacks page and click "+ New Stack". +2. Select "Use existing Cloud" and choose your cloud provider. +3. Choose an authentication method and fill in the required fields. -**Service Connector**: -- Required to register a cloud stack. You can use an existing connector or let the wizard create one. -- The wizard checks for auto-configuration of cloud provider credentials. +#### CLI Command: +To register a stack: +```shell +zenml stack register <STACK_NAME> -p {aws|gcp|azure} +``` +- Use `-sc <SERVICE_CONNECTOR_ID_OR_NAME>` for existing service connectors. -**Authentication Methods**: -1. **AWS**: - - Options include AWS Secret Key, STS Token, IAM Role, Session Token, Federation Token. - - Required fields vary by method (e.g., `aws_access_key_id`, `aws_secret_access_key`, `region`). +### Authentication Methods by Provider -2. **GCP**: - - Options include User Account, Service Account, External Account, OAuth 2.0 Token, Service Account Impersonation. 
- - Required fields include `user_account_json`, `project_id`, etc. +**AWS**: +- Options include AWS Secret Key, STS Token, IAM Role, Session Token, and Federation Token. Each requires specific credentials like `aws_access_key_id`, `aws_secret_access_key`, and `region`. -3. **Azure**: - - Options include Service Principal and Access Token. - - Required fields include `client_secret`, `tenant_id`, `client_id`. +**GCP**: +- Options include User Account, Service Account, External Account, OAuth 2.0 Token, and Service Account Impersonation. Required fields include `user_account_json`, `project_id`, and `token`. -### Defining Cloud Components -You will define three essential components for the stack: -- **Artifact Store** -- **Orchestrator** -- **Container Registry** +**Azure**: +- Options include Service Principal and Access Token. Required fields are `client_secret`, `tenant_id`, and `client_id`. -For each component, you can: -- Reuse existing components connected via the defined service connector. -- Create new components from available resources. +### Defining Cloud Components +You will define three major components: +1. **Artifact Store** +2. **Orchestrator** +3. **Container Registry** -**Example Outputs**: -- Available orchestrators and storage options will be displayed for selection. +You can choose to reuse existing components or create new ones based on the resources available through the service connector. ### Conclusion -The wizard streamlines the registration of a cloud stack, enabling users to run pipelines in a remote setting efficiently. +The stack wizard allows for easy registration of a cloud stack, enabling you to run pipelines in a remote setting efficiently. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md === -### Summary: Deploy a Cloud Stack with Terraform +# Deploy a Cloud Stack with Terraform -ZenML provides a set of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to facilitate the provisioning of cloud resources and their integration with ZenML Stacks. These modules enhance the efficiency and scalability of machine learning infrastructure deployments. +ZenML provides a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to facilitate the provisioning of cloud resources and their integration with ZenML Stacks. These modules streamline setup, enabling efficient and scalable deployment of machine learning infrastructure. -#### Pre-requisites -- A deployed ZenML server instance accessible from the target cloud provider (not local). -- Create a service account and API key for programmatic access: +## Prerequisites +- A deployed ZenML server instance accessible from your cloud provider (not a local server). +- Create a service account and API key for programmatic access to the ZenML server using: ```shell zenml service-account create <account-name> ``` -- Install [Terraform](https://www.terraform.io/downloads.html) (version 1.9 or higher). -- Authenticate with your cloud provider using its CLI or SDK. +- Install [Terraform](https://www.terraform.io/downloads.html) (version 1.9+) on your machine. +- Authenticate with your cloud provider's CLI or SDK. -#### Using Terraform Modules -1. Set up the ZenML provider: +## Using Terraform Stack Deployment Modules +1. 
Set up the ZenML Terraform provider using environment variables: ```shell export ZENML_SERVER_URL="https://your-zenml-server.com" export ZENML_API_KEY="<your-api-key>" ``` -2. Create a `main.tf` configuration file: + +2. Create a Terraform configuration file (e.g., `main.tf`): ```hcl terraform { required_providers { @@ -11622,59 +11534,63 @@ ZenML provides a set of [Terraform modules](https://registry.terraform.io/module } provider "zenml" {} - provider "aws" { region = "eu-central-1" } module "zenml_stack" { - source = "zenml-io/zenml-stack/aws" + source = "zenml-io/zenml-stack/<cloud-provider>" zenml_stack_name = "<your-stack-name>" orchestrator = "<your-orchestrator-type>" } - output "zenml_stack_id" { value = module.zenml_stack.zenml_stack_id } - output "zenml_stack_name" { value = module.zenml_stack.zenml_stack_name } + output "zenml_stack_id" { + value = module.zenml_stack.zenml_stack_id + } + output "zenml_stack_name" { + value = module.zenml_stack.zenml_stack_name + } ``` -3. Run the following commands: + +3. Run Terraform commands: ```shell terraform init terraform apply ``` -4. Confirm the changes by typing `yes` when prompted. -5. Once completed, the ZenML stack is created and registered. + Confirm changes by typing `yes` when prompted. -#### Cloud Provider Specifics +4. After provisioning, install required integrations and set the ZenML stack: + ```shell + zenml integration install <list-of-required-integrations> + zenml stack set <zenml_stack_id> + ``` -**AWS:** -- Install AWS CLI and run `aws configure`. -- Example configuration: +## Cloud Provider Specifics +### AWS +- **Authentication**: Install the [AWS CLI](https://aws.amazon.com/cli/) and run `aws configure`. +- **Example Configuration**: ```hcl provider "aws" { region = "eu-central-1" } ``` -- Stack components include S3 Artifact Store, ECR Container Registry, and various orchestrators. -**GCP:** -- Install `gcloud` CLI and run `gcloud init`. -- Example configuration: +### GCP +- **Authentication**: Install the [gcloud CLI](https://cloud.google.com/sdk/gcloud) and run `gcloud init`. +- **Example Configuration**: ```hcl - provider "google" { region = "europe-west3"; project = "my-project" } + provider "google" { region = "europe-west3" project = "my-project" } ``` -- Stack components include GCS Artifact Store, Google Artifact Registry, and various orchestrators. -**Azure:** -- Install Azure CLI and run `az login`. -- Example configuration: +### Azure +- **Authentication**: Install the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/) and run `az login`. +- **Example Configuration**: ```hcl - provider "azurerm" { features {} } + provider "azurerm" { features { resource_group { prevent_deletion_if_contains_resources = false } } } ``` -- Stack components include Azure Storage Account, ACR Container Registry, and various orchestrators. -#### Cleanup +## Cleanup To remove all resources provisioned by Terraform, run: ```shell terraform destroy -``` -This command will also delete the registered ZenML stack. +``` -For more details on specific cloud providers, refer to the respective Terraform module documentation. +This command deletes both the resources and the registered ZenML stack. ================================================== @@ -11682,18 +11598,15 @@ For more details on specific cloud providers, refer to the respective Terraform ### Configuring ZenML's Default Behavior -This guide outlines methods to configure ZenML's behavior in different scenarios. 
+This guide outlines methods to configure ZenML's behavior in various situations. -#### Key Configuration Options: -- **Environment Variables**: Adjust ZenML settings using environment variables for flexibility in different environments. -- **Configuration Files**: Utilize YAML or JSON files for persistent configurations, allowing for easy adjustments and version control. -- **CLI Commands**: Use command-line interface commands to set configurations dynamically during runtime. +Key Points: +- Users can adapt ZenML's settings to suit their needs. +- Configuration options allow for customization of ZenML's functionality. -#### Important Points: -- Ensure to document any changes made for future reference. -- Test configurations in a safe environment before applying them to production. +For visual reference, an image related to ZenML is provided. -For further details, refer to the specific sections on environment variables, configuration files, and CLI commands in the ZenML documentation. + ================================================== @@ -11701,43 +11614,55 @@ For further details, refer to the specific sections on environment variables, co # Project Setup and Management -This section outlines the essential steps for setting up and managing ZenML projects. Key points include: +This section outlines the setup and management of ZenML projects, covering essential technical information. -1. **Project Initialization**: Use `zenml init` to create a new ZenML project. This command sets up the necessary directory structure and configuration files. +## Key Points: -2. **Configuration**: Configure your project by editing the `zenml.yaml` file. This file contains settings for pipelines, integrations, and other project-specific parameters. +1. **Project Initialization**: + - Use `zenml init` to create a new ZenML project. + - This command sets up the necessary directory structure and configuration files. -3. **Version Control**: It's recommended to use Git for version control. Initialize a Git repository in your project directory to track changes. +2. **Configuration**: + - ZenML uses a `.zenml` directory to store configurations. + - Key configurations include: + - **Stacks**: Define the components (e.g., orchestrators, artifact stores) used in the pipeline. + - **Pipelines**: Specify the sequence of steps for data processing and model training. -4. **Environment Management**: Use virtual environments (e.g., `venv` or `conda`) to manage dependencies specific to your ZenML project. - -5. **Pipeline Management**: Define pipelines using decorators and functions. Use `@pipeline` to create a pipeline and `@step` to define individual steps. +3. **Version Control**: + - It is recommended to use Git for version control of the project. + - Ensure that the `.zenml` directory is included in version control to track changes in configurations. -6. **Execution**: Run pipelines using the command `zenml run <pipeline_name>`. Ensure all dependencies are resolved before execution. +4. **Environment Management**: + - Use virtual environments (e.g., `venv`, `conda`) to manage dependencies. + - Install ZenML with `pip install zenml`. -7. **Logging and Monitoring**: Utilize ZenML's built-in logging features to monitor pipeline execution and troubleshoot issues. +5. **Running Pipelines**: + - Execute pipelines using `zenml run <pipeline_name>`. + - Monitor execution status and logs for troubleshooting. -8. **Documentation**: Maintain project documentation to facilitate collaboration and onboarding of new team members. +6. 
-By following these guidelines, you can effectively set up and manage ZenML projects, ensuring a smooth workflow and collaboration.
+By following these guidelines, users can effectively set up and manage ZenML projects, ensuring a streamlined workflow for machine learning tasks.
 
==================================================
 
=== File: docs/book/how-to/project-setup-and-management/interact-with-secrets.md ===
 
-# ZenML Secrets Management Documentation Summary
+# ZenML Secrets Management
 
-## Overview of ZenML Secrets
-ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks.
+## Overview
+ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, identified by a **name** for easy reference in pipelines and stacks.
 
-## Creating a Secret
+## Creating Secrets
 
-### CLI Method
+### CLI
To create a secret named `<SECRET_NAME>` with key-value pairs:
```shell
zenml secret create <SECRET_NAME> --<KEY_1>=<VALUE_1> --<KEY_2>=<VALUE_2>
```
-Alternatively, use JSON or YAML format:
+Alternatively, use JSON/YAML format:
```shell
zenml secret create <SECRET_NAME> --values='{"key1":"value1","key2":"value2"}'
```
@@ -11745,14 +11670,14 @@ For interactive creation:
```shell
zenml secret create <SECRET_NAME> -i
```
-For large values or special characters, read from a file:
+For large values, read from a file:
```bash
zenml secret create <SECRET_NAME> --key=@path/to/file.txt
zenml secret create <SECRET_NAME> --values=@path/to/file.txt
```
Additional CLI commands are available for listing, updating, and deleting secrets.
 
-### Python SDK Method
+### Python SDK
Using the ZenML client API:
```python
from zenml.client import Client
 
@@ -11762,27 +11687,27 @@ client.create_secret(name="my_secret", values={"username": "admin", "password": 
```
Other methods include `get_secret`, `update_secret`, `list_secrets`, and `delete_secret`.
 
-## Scoping Secrets
-Secrets can be scoped to a user, making them accessible only to that user. By default, secrets are scoped to the active user. To create a user-scoped secret:
+## Secret Scoping
+Secrets can be scoped to a user, defaulting to the active user. To create a user-scoped secret:
```shell
zenml secret create <SECRET_NAME> --scope user --<KEY_1>=<VALUE_1>
```
 
-## Accessing Registered Secrets
+## Accessing Secrets
 
-### Secret References
-Components in a stack can reference secrets without hard-coding sensitive information. Use the syntax `{{<SECRET_NAME>.<SECRET_KEY>}}` to reference secrets:
+### Reference in Stack Components
+To reference secrets in stack components, use the syntax: `{{<SECRET_NAME>.<SECRET_KEY>}}`. Example:
```shell
zenml secret create mlflow_secret --username=admin --password=abc123
zenml experiment-tracker register mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}}
```
-ZenML validates the existence of referenced secrets before running a pipeline. You can control validation with the `ZENML_SECRET_VALIDATION_LEVEL` environment variable:
-- `NONE`: Disables validation.
-- `SECRET_EXISTS`: Validates only the existence of secrets.
-- `SECRET_AND_KEY_EXISTS`: Validates both secret and key existence (default).
+ZenML validates the existence of referenced secrets and keys before running a pipeline.
Control validation with the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: +- `NONE`: disables validation. +- `SECRET_EXISTS`: checks if secrets exist. +- `SECRET_AND_KEY_EXISTS`: (default) checks both secret and key existence. ### Fetching Secret Values in Steps -Secrets can be accessed directly in steps using the ZenML `Client` API: +Access secrets in steps using the ZenML `Client` API: ```python from zenml import step from zenml.client import Client @@ -11796,127 +11721,146 @@ def secret_loader() -> None: ) ``` -This summary captures the essential information about managing secrets in ZenML, including creation, scoping, referencing, and accessing secrets programmatically. +This summary captures the essential information regarding ZenML secrets management, including creation, scoping, access, and usage in both CLI and Python SDK contexts. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/README.md === -# Setting up a Well-Architected ZenML Project +# Setting Up a Well-Architected ZenML Project -This guide outlines best practices for structuring ZenML projects to ensure scalability, maintainability, and team collaboration. +## Overview +This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration in machine learning operations (MLOps). -## Importance of a Well-Architected Project -A well-architected ZenML project forms a strong foundation for efficient development, deployment, and maintenance of machine learning models, enabling a robust MLOps pipeline. +## Importance +A well-architected ZenML project serves as a foundation for efficient development, deployment, and maintenance of ML models. ## Key Components ### Repository Structure - Organize folders for pipelines, steps, and configurations. -- Maintain a clear separation of concerns and consistent naming conventions. +- Maintain clear separation of concerns and consistent naming conventions. ### Version Control and Collaboration -- Integrate with Git for code management and collaboration. -- Enables faster pipeline builds and easy tracking of changes. +- Integrate with Git for easy change tracking and team collaboration. +- Speed up pipeline builds by reusing images and code from your repository. ### Stacks, Pipelines, Models, and Artifacts -- **Stacks**: Define infrastructure and tool configurations. -- **Models**: Represent machine learning models and metadata. -- **Pipelines**: Encapsulate ML workflows. -- **Artifacts**: Track data and model outputs. +- **Stacks**: Infrastructure and tool configurations. +- **Models**: ML models and metadata. +- **Pipelines**: Encapsulated ML workflows. +- **Artifacts**: Data and model output tracking. ### Access Management and Roles - Define roles (e.g., data scientists, MLOps engineers). - Set up service connectors and manage authorizations. -- Use ZenML Pro Teams for role assignment. +- Use ZenML Pro Teams for role assignments. ### Shared Components and Libraries - Promote code reuse with custom flavors, steps, and shared libraries. - Handle authentication for specific libraries. ### Project Templates -- Utilize pre-made and custom templates for consistency in new projects. +- Utilize pre-made or custom templates for consistency in project setup. ### Migration and Maintenance - Strategies for migrating legacy code and upgrading ZenML servers. 
## Getting Started -Explore the guides in this section for detailed information on project setup and management. Regularly review and refine your project structure to meet evolving team needs. Following these guidelines will help create a scalable and collaborative MLOps environment. +Explore the guides for detailed information on project setup and management. Regularly review and refine your project structure to adapt to evolving team needs. Following these guidelines will help create a robust and collaborative MLOps environment. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md === -### Summary of ZenML Code Repository Documentation +### Summary of ZenML Code Repository Integration -#### Overview -ZenML allows you to connect your code repository (e.g., GitHub, GitLab) to track code versions and optimize Docker image builds by avoiding unnecessary rebuilds when source files change. +**Overview**: Connecting a Git repository to ZenML helps track code versions for pipeline runs and speeds up Docker image builds by avoiding unnecessary rebuilds. #### Registering a Code Repository 1. **Install Integration**: - ```bash + ```shell zenml integration install <INTEGRATION_NAME> ``` 2. **Register Repository**: - ```bash + ```shell zenml code-repository register <NAME> --type=<TYPE> [--CODE_REPOSITORY_OPTIONS] ``` #### Available Implementations -- **GitHub**: - - Install integration: - ```bash - zenml integration install github - ``` - - Register repository: - ```bash - zenml code-repository register <NAME> --type=github \ - --owner=<OWNER> --repository=<REPOSITORY> --token=<GITHUB_TOKEN> - ``` - - For self-hosted GitHub, include: - ```bash - --api_url=<API_URL> --host=<HOST> - ``` - - **Secure Token Storage**: - ```bash - zenml secret create github_secret --pa_token=<GITHUB_TOKEN> - zenml code-repository register ... --token={{github_secret.pa_token}} - ``` +- **Built-in Support**: ZenML supports GitHub and GitLab natively, with options for custom implementations. -- **GitLab**: - - Install integration: - ```bash - zenml integration install gitlab - ``` - - Register repository: - ```bash - zenml code-repository register <NAME> --type=gitlab \ - --group=<GROUP> --project=<PROJECT> --token=<GITLAB_TOKEN> - ``` - - For self-hosted GitLab, include: - ```bash - --instance_url=<INSTANCE_URL> --host=<HOST> - ``` - - **Secure Token Storage**: - ```bash - zenml secret create gitlab_secret --pa_token=<GITLAB_TOKEN> - zenml code-repository register ... --token={{gitlab_secret.pa_token}} - ``` +##### GitHub Integration +1. **Install GitHub Integration**: + ```shell + zenml integration install github + ``` + +2. **Register GitHub Repository**: + ```shell + zenml code-repository register <NAME> --type=github \ + --owner=<OWNER> --repository=<REPOSITORY> \ + --token=<GITHUB_TOKEN> + ``` + - For GitHub Enterprise, add: + ```shell + --api_url=<API_URL> --host=<HOST> + ``` + +3. **Store GitHub Token Securely**: + ```shell + zenml secret create github_secret --pa_token=<GITHUB_TOKEN> + zenml code-repository register ... --token={{github_secret.pa_token}} + ``` + +##### GitLab Integration +1. **Install GitLab Integration**: + ```shell + zenml integration install gitlab + ``` + +2. 
**Register GitLab Repository**:
+   ```shell
+   zenml code-repository register <NAME> --type=gitlab \
+   --group=<GROUP> --project=<PROJECT> \
+   --token=<GITLAB_TOKEN>
+   ```
+   - For self-hosted GitLab, add:
+     ```shell
+     --instance_url=<INSTANCE_URL> --host=<HOST>
+     ```
+
+3. **Store GitLab Token Securely**:
+   ```shell
+   zenml secret create gitlab_secret --pa_token=<GITLAB_TOKEN>
+   zenml code-repository register ... --token={{gitlab_secret.pa_token}}
+   ```
 
#### Custom Code Repository Development
To implement a custom code repository:
-1. Subclass `BaseCodeRepository` and implement:
-   - `login()`
-   - `download_files(commit: str, directory: str, repo_sub_directory: Optional[str])`
-   - `get_local_context(path: str)`
+1. Subclass `BaseCodeRepository` and implement required methods:
+   ```python
+   from abc import ABC, abstractmethod
+   from typing import Optional
+
+   class BaseCodeRepository(ABC):
+       @abstractmethod
+       def login(self) -> None:
+           pass
 
-2. Register the custom repository:
-   ```bash
+       @abstractmethod
+       def download_files(self, commit: str, directory: str, repo_sub_directory: Optional[str]) -> None:
+           pass
+
+       @abstractmethod
+       def get_local_context(self, path: str) -> Optional["LocalRepositoryContext"]:
+           pass
+   ```
+
+2. **Register Custom Repository**:
+   ```shell
   zenml code-repository register <NAME> --type=custom --source=my_module.MyRepositoryClass [--CODE_REPOSITORY_OPTIONS]
   ```
 
-This documentation provides essential commands and processes for integrating and managing code repositories within ZenML, ensuring efficient pipeline execution and version control.
+This integration allows ZenML to track code changes and maintain the integrity of pipeline executions efficiently.
 
==================================================
 
@@ -11927,7 +11871,7 @@ This documentation provides essential commands and processes for integrating and
 
#### Project Structure
A recommended structure for ZenML projects is as follows:
 
-```
+```text
.
├── .dockerignore
├── Dockerfile
@@ -11936,13 +11880,11 @@ A recommended structure for ZenML projects is as follows:
│   │   ├── loader_step.py
│   │   └── requirements.txt (optional)
│   └── training_step
-│       └── ...
├── pipelines
│   ├── training_pipeline
│   │   ├── training_pipeline.py
│   │   └── requirements.txt (optional)
│   └── deployment_pipeline
-│       └── ...
├── notebooks
│   └── *.ipynb
├── requirements.txt
@@ -11950,12 +11892,12 @@
└── run.py
```
 
-- The `steps` and `pipelines` folders contain the respective components of your project. For simpler projects, you can keep steps at the top level of the `steps` folder.
-- Registering your repository as a code repository allows ZenML to track code versions and can speed up Docker image builds.
+- **Steps and Pipelines**: Store each in separate Python files for modularity. You can keep them in subfolders or at the top level if the project is simpler.
+- **Code Repository**: Registering your repository allows ZenML to track code versions and speeds up Docker image builds.
 
#### Steps
-- Store each step in separate Python files for better organization of utils and dependencies.
-- Use the `logging` module to log messages, which will be recorded in the ZenML dashboard.
+- Keep steps in separate Python files.
+- Use the `logging` module for logging, which will be recorded in the ZenML dashboard.
 
```python
from zenml.logger import get_logger
 
@@ -11968,37 +11910,30 @@ def training_data_loader():
```
 
#### Pipelines
-- Similar to steps, keep pipelines in separate Python files.
-- Separate pipeline execution from definition to avoid immediate execution upon import.
-- Avoid naming pipelines "pipeline" to prevent conflicts with the ZenML decorator.
+- Store pipelines in separate Python files.
+- Separate execution from definition to avoid immediate execution upon import (see the sketch below).
+- Avoid naming pipelines or instances "pipeline" to prevent conflicts.
 
#### .dockerignore
-- Exclude unnecessary files (e.g., data, virtual environments) in the `.dockerignore` to optimize Docker image size and build time.
+Exclude unnecessary files (e.g., data, virtual environments) to optimize Docker image size and build speed.
 
#### Dockerfile (optional)
-- ZenML uses an official Docker image by default. You can customize this with your own `Dockerfile`.
+ZenML uses an official Docker image by default. You can customize this with your own `Dockerfile`.
 
#### Notebooks
-- Organize all Jupyter notebooks in a dedicated folder.
+Organize all notebooks in a dedicated folder.
 
-#### .zen
-- Initialize a `.zen` directory with `zenml init` to define the project scope and resolve import paths.
-- It is crucial for Jupyter notebooks and recommended for Python scripts to have a `.zen` directory in the project root.
+#### .zen Directory
+Run `zenml init` at the project root to define the project scope. This is crucial for resolving import paths and storing configurations. Ensure a `.zen` directory is present when running Jupyter notebooks or Python scripts to avoid issues with import paths.
 
#### run.py
-- Place pipeline runners in the root directory to ensure correct import resolution. If no `.zen` is defined, this also sets the implicit source root.
-
-### Key Points
-- Maintain a clear structure with separate files for steps and pipelines.
-- Use logging for visibility in the ZenML dashboard.
-- Manage Docker builds efficiently with `.dockerignore` and optional `Dockerfile`.
-- Ensure proper initialization of the `.zen` directory for import path resolution.
+Place your pipeline runners in the root directory to ensure proper resolution of imports relative to the project root. If no `.zen` is defined, this also sets the implicit source root.
 
==================================================
 
=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md ===
 
-It appears that the text you intended to provide for summarization is missing. Please provide the documentation text, and I will be happy to summarize it for you.
+The source file contains only an icon descriptor (`--- icon: people-group ---`) and no content to summarize.
 
==================================================
 
=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/access-management.md ===
 
# Access Management and Roles in ZenML
 
-This guide outlines user roles and access management in ZenML, essential for project security and efficiency.
+This guide outlines the management of user roles and responsibilities in ZenML; effective access management is essential for security and efficiency.
 
## Typical Roles in an ML Project
+Common roles include:
- **Data Scientists**: Develop and run pipelines.
- **MLOps Platform Engineers**: Manage infrastructure and stack components.
- **Project Owners**: Oversee ZenML deployment and user access.
 
-Roles may vary in your organization, but responsibilities can be aligned with the above descriptions.
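+A minimal sketch of this separation, using illustrative module paths:
+
+```python
+# pipelines/training_pipeline/training_pipeline.py -- definition only
+from zenml import pipeline
+
+@pipeline
+def training_pipeline():
+    ...
+
+# run.py -- execution lives here, so importing the definition module
+# never triggers a pipeline run
+from pipelines.training_pipeline.training_pipeline import training_pipeline
+
+if __name__ == "__main__":
+    training_pipeline()
+```
+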
+Roles may vary in your organization, but responsibilities can be adapted accordingly. ### Creating Roles -You can create roles in ZenML Pro with specific permissions and assign them to Users or Teams. [Sign up for a free trial](https://cloud.zenml.io/). +You can create roles in ZenML Pro with specific permissions, assigning them to Users or Teams. [Sign up for a free trial](https://cloud.zenml.io/). ## Service Connectors -Service connectors integrate cloud services with ZenML, abstracting credentials and configurations. Ideally, only MLOps Platform Engineers should manage these connectors, while other team members can use them for stack components without accessing sensitive credentials. - -### Example Permissions -- **Data Scientist Role**: Can use connectors to create stack components and run pipelines, but cannot create, update, delete connectors, or access secret values. -- **MLOps Platform Engineer Role**: Has permissions to create, update, delete connectors, and read secret values. +Service connectors integrate external services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors due to their infrastructure knowledge. -### Note -RBAC features are available only in ZenML Pro. Learn more about roles [here](../../../getting-started/zenml-pro/roles.md). +### Role Permissions +- **Data Scientist**: Can use connectors to create stack components and run pipelines, but cannot create, update, delete connectors, or access secret values. +- **MLOps Platform Engineer**: Has permissions to create, update, delete connectors, and read secret values. -## Server Upgrade Responsibilities -- **Decision**: Typically made by Project Owners after team consultations to avoid conflicts. -- **Execution**: MLOps Platform Engineers are responsible for upgrades, ensuring data backups and no service disruptions. +RBAC features are available in ZenML Pro. More details can be found in the [Managing Stacks and Components guide](../../infrastructure-deployment/stack-deployment/README.md). -For detailed upgrade practices, refer to the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md). +## Server Upgrades +Project Owners decide on server upgrades after consulting teams. MLOps Platform Engineers typically handle the upgrade process, ensuring data backup and no service disruption. For best practices, refer to the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md). -## Pipeline Migration and Maintenance -Data Scientists own the pipeline code, while Platform Engineers ensure compatibility with new ZenML versions. Testing and staged upgrades are crucial to prevent workflow disruptions. Data Scientists should review release notes and migration guides. +## Pipeline Maintenance +Data Scientists own pipeline code, while Platform Engineers ensure compatibility with new ZenML versions. Both should review release notes and migration guides during upgrades. ## Best Practices for Access Management -- **Regular Audits**: Periodic reviews of user access and permissions. +To maintain a secure ZenML environment: +- **Regular Audits**: Review user access and permissions periodically. - **Role-Based Access Control (RBAC)**: Streamline permission management. - **Least Privilege**: Grant minimal necessary permissions. -- **Documentation**: Maintain clear records of roles and access policies. 
+- **Documentation**: Keep clear records of roles and access policies. -RBAC and permission assignment are exclusive to ZenML Pro users. - -By adhering to these guidelines, you can maintain a secure ZenML environment that fosters collaboration while ensuring proper access controls. +RBAC is available only for ZenML Pro users. Following these guidelines supports collaboration while ensuring proper access controls. ================================================== @@ -12053,59 +11984,60 @@ By adhering to these guidelines, you can maintain a secure ZenML environment tha # Organizing Stacks, Pipelines, Models, and Artifacts in ZenML -This guide outlines how to effectively organize stacks, pipelines, models, and artifacts in ZenML, which are essential components of your ML project architecture. +This guide outlines the organization of stacks, pipelines, models, and artifacts in ZenML, which are essential for structuring your ML projects effectively. ## Key Concepts -- **Stacks**: Configuration of tools and infrastructure for running pipelines, consisting of components like orchestrators and artifact stores. Stacks enable consistent environments across local, staging, and production setups. +- **Stacks**: Configuration of tools and infrastructure for running pipelines, comprising components like orchestrators and artifact stores. Stacks enable seamless transitions between environments (local, staging, production) and promote reproducibility. -- **Pipelines**: Sequences of steps representing specific tasks in the ML workflow, such as data preparation and model training. Pipelines should be modular, with separate pipelines for different tasks to enhance manageability and collaboration. +- **Pipelines**: A sequence of steps in your ML workflow, automating tasks and providing visibility. It's advisable to separate pipelines by task (e.g., training vs. inference) for modularity and easier management. -- **Models**: Entities that group related pipelines, artifacts, and metadata, acting as a "project" that spans multiple pipelines. Models facilitate data transfer between pipelines. +- **Models**: Entities that group related pipelines, artifacts, and metadata. Models facilitate data transfer between pipelines and can represent a project or workspace. -- **Artifacts**: Outputs of pipeline steps that can be tracked and reused. Artifacts should be named clearly for easy identification, and each pipeline run generates a new version for traceability. +- **Artifacts**: Outputs from pipeline steps that can be tracked and reused. Proper naming and versioning of artifacts ensure traceability and organization. -## Stack Organization +## Stack Management -- Use a single stack for multiple pipelines to reduce configuration overhead and maintain a consistent execution environment. -- Stacks should be created once and reused to minimize errors and enhance reproducibility. +- You do not need a separate stack for each pipeline; multiple pipelines can share a stack. +- Benefits of reusing stacks include reduced configuration overhead, consistent environments, and minimized error risks. -## Pipeline Organization +## Organizing Pipelines, Models, and Artifacts -- Separate pipelines for tasks like training and inference allow for independent execution and easier code management. -- This modular approach enables different team members to work on separate pipelines without conflict. +### Pipelines +- Encompass the entire ML workflow, including data preparation and evaluation. 
+- Benefits of modular pipelines include independent execution, easier code management, and improved collaboration. -## Model and Artifact Management +### Models +- Use models to connect related pipelines and facilitate data handover. The Model Control Plane helps manage model versions and stages. -- Use Models to connect related pipelines and manage the flow of data. -- Artifacts should be tied to Models for better organization and visibility. Log metadata for enhanced tracking in the Model Control Plane. +### Artifacts +- Track and reuse outputs from pipeline steps. Each unique execution produces a new artifact version, ensuring clear history and traceability. ## Example Workflow -In a team scenario with Bob and Alice working on a classification model: -1. They create three pipelines: feature engineering, training, and inference. -2. Both use a `default` stack for local testing. -3. Bob's training pipeline produces model artifacts required by Alice's inference pipeline. -4. They utilize a ZenML Model to link pipelines and artifacts, ensuring Alice can access the correct model version. -5. The Model Control Plane helps manage model versions and promote the best-performing model to production. +1. Team members create three pipelines: feature engineering, training, and inference. +2. They use a shared `default` stack for local testing. +3. Artifacts from the training pipeline (model, metrics) are utilized in the inference pipeline. +4. The Model Control Plane manages model versions, allowing easy access and comparisons. +5. Inference pipelines produce new artifacts, such as prediction datasets. -## Guidelines for Organization +## Rules of Thumb ### Models -- Create one Model per distinct ML use case. -- Use Models to group related resources. -- Manage model versions and stages with the Model Control Plane. +- One model per distinct ML use-case. +- Group related pipelines and artifacts. +- Use the Model Control Plane for version management. ### Stacks -- Maintain separate stacks for different environments. +- Separate stacks for different environments. - Share production and staging stacks for consistency. - Keep local stacks simple for quick iterations. ### Naming and Organization -- Use consistent naming conventions. -- Leverage tags for resource organization. -- Document configurations and dependencies. -- Keep pipeline code modular and reusable. +- Consistent naming conventions for resources. +- Use tags for organization (e.g., `environment:production`). +- Document stack configurations and dependencies. +- Maintain modular and reusable pipeline code. Following these guidelines will help maintain a clean and scalable MLOps workflow as your project evolves. @@ -12115,81 +12047,86 @@ Following these guidelines will help maintain a clean and scalable MLOps workflo # Shared Libraries and Logic for Teams -## Overview -This guide outlines how teams can share code libraries using ZenML to enhance collaboration, standardization, and robustness across projects. It covers what can be shared and how to distribute shared components. +This guide addresses sharing code and libraries within teams using ZenML, focusing on what can be shared and how to distribute shared components. ## What Can Be Shared -ZenML supports sharing various custom components: ### Custom Flavors -- Create in a shared repository. -- Implement as per ZenML documentation. 
-- Register using ZenML CLI: - ```bash - zenml artifact-store flavor register <path.to.MyS3ArtifactStoreFlavor> - ``` +Custom flavors are integrations not built-in with ZenML. To share: +1. Create the flavor in a shared repository. +2. Implement the component as per ZenML documentation. +3. Register using the ZenML CLI: + ```bash + zenml artifact-store flavor register <path.to.MyS3ArtifactStoreFlavor> + ``` ### Custom Steps -- Develop and share via a separate repository. -- Reference as standard Python modules. +Custom steps can be created in a separate repository and referenced like Python modules. ### Custom Materializers -- Create in a shared repository. -- Implement according to ZenML guidelines. -- Import and use in projects. +To share a custom materializer: +1. Create it in a shared repository. +2. Implement as per ZenML documentation. +3. Team members can import and use it. ## How to Distribute Shared Components + ### Shared Private Wheels -- Package Python code for internal use. -- **Benefits**: Easy installation, version and dependency management, privacy. -- **Setup**: - 1. Create a private PyPI server (e.g., AWS CodeArtifact). - 2. Build code into wheel format. - 3. Upload to the server. - 4. Configure pip to use the private server. - 5. Install packages via pip. +This method packages Python code for internal distribution. -### Using Shared Libraries with `DockerSettings` -- ZenML generates a `Dockerfile` at runtime for remote orchestrators. -- Specify shared libraries in `DockerSettings`: - ```python - import os - from zenml.config import DockerSettings - from zenml import pipeline +#### Benefits +- Easy installation via pip +- Simplified version and dependency management +- Privacy through internal hosting - docker_settings = DockerSettings( - requirements=["my-simple-package==0.1.0"], - environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} - ) +#### Setup Steps +1. Create a private PyPI server (e.g., AWS CodeArtifact). +2. Build your code into wheel format. +3. Upload the wheel to the server. +4. Configure pip to use the private server. +5. Install packages like public ones. - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +### Using Shared Libraries with `DockerSettings` +ZenML generates a `Dockerfile` at runtime for remote orchestrators. Use `DockerSettings` to include shared libraries. -- Alternatively, use a requirements file: - ```python - docker_settings = DockerSettings(requirements="/path/to/requirements.txt") +#### Installing Shared Libraries +Specify requirements directly: +```python +import os +from zenml.config import DockerSettings +from zenml import pipeline - @pipeline(settings={"docker": docker_settings}) - def my_pipeline(...): - ... - ``` +docker_settings = DockerSettings( + requirements=["my-simple-package==0.1.0"], + environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} +) -- Example `requirements.txt`: - ``` - --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ - my-simple-package==0.1.0 - ``` +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... +``` +Or use a requirements file: +```python +docker_settings = DockerSettings(requirements="/path/to/requirements.txt") + +@pipeline(settings={"docker": docker_settings}) +def my_pipeline(...): + ... 
+``` +The `requirements.txt` should include: +``` +--extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ +my-simple-package==0.1.0 +``` ## Best Practices - Use version control (e.g., Git) for shared repositories. - Implement access controls for private PyPI servers. - Maintain clear documentation for shared components. - Regularly update shared libraries and communicate changes. -- Consider continuous integration for quality assurance. +- Set up continuous integration for quality assurance. -By following these guidelines, teams can effectively share code and libraries, ensuring consistency and accelerating development within the ZenML framework. +By following these methods, teams can enhance collaboration, maintain consistency, and accelerate development within the ZenML framework. ================================================== @@ -12197,63 +12134,61 @@ By following these guidelines, teams can effectively share code and libraries, e ### Creating Your Own ZenML Template -To create a ZenML template for standardizing and sharing ML workflows, follow these steps: +To standardize and share ML workflows, you can create a ZenML template using the Copier library. Here’s a concise guide: -1. **Create a Repository**: Set up a new repository to store your template's code and configuration files. +1. **Create a Repository**: Set up a new repository for your template code and configuration files. -2. **Define ML Workflows**: Use existing ZenML templates (e.g., the [starter template](https://github.com/zenml-io/template-starter)) as a reference to define your ML steps and pipelines. +2. **Define Workflows**: Implement your ML workflows as ZenML steps and pipelines. You can start by modifying an existing template, such as the [starter template](https://github.com/zenml-io/template-starter). -3. **Create `copier.yml`**: This configuration file specifies template parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. +3. **Create `copier.yml`**: This file defines the template's parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. -4. **Test Your Template**: Use the Copier command to generate a project from your template: +4. **Test Your Template**: Use the Copier CLI to generate a new project from your template: ```bash copier copy https://github.com/your-username/your-template.git your-project ``` -5. **Use Your Template with ZenML**: Initialize a ZenML project using your template: +5. **Use with ZenML**: Initialize your ZenML project using your template: ```bash zenml init --template https://github.com/your-username/your-template.git ``` - To specify a version, use: + For a specific version, use the `--template-tag` option: ```bash zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0 ``` -6. **Keep It Updated**: Regularly update your template to align with best practices in ML workflows. +6. **Keep Updated**: Regularly update your template with best practices and changes in workflows. 
It's recommended to install the `e2e_batch` template with the `--template-with-defaults` flag for reference: -For practical examples, install the `e2e_batch` template using: - -```bash -mkdir e2e_batch -cd e2e_batch -zenml init --template e2e_batch --template-with-defaults -``` + ```bash + mkdir e2e_batch + cd e2e_batch + zenml init --template e2e_batch --template-with-defaults + ``` -This will help you follow along with the documentation effectively. +This guide helps you create a ZenML template for efficient ML project setup. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md === -# ZenML Project Templates Overview +### ZenML Project Templates Overview -ZenML provides project templates to help users quickly understand and build ML pipelines. These templates cover major use cases and include a simple CLI. +ZenML provides project templates to help users quickly understand the framework and build ML pipelines. These templates cover major use cases and include a simple CLI for ease of use. -## Available Project Templates +#### Available Project Templates -| Project Template [Short name] | Tags | Description | +| Project Template [Short Name] | Tags | Description | |-------------------------------|------|-------------| -| [Starter template](https://github.com/zenml-io/template-starter) [<code>starter</code>] | <code>basic</code> <code>scikit-learn</code> | A foundational template for ML, featuring parameterized steps, a model training pipeline, and a flexible configuration using scikit-learn. | -| [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [<code>e2e_batch</code>] | <code>etl</code> <code>hp-tuning</code> <code>model-promotion</code> <code>drift-detection</code> <code>batch-prediction</code> <code>scikit-learn</code> | A comprehensive template with pipelines for data loading, preprocessing, hyperparameter tuning, model training, evaluation, production promotion, data drift detection, and batch inference. | -| [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [<code>nlp</code>] | <code>nlp</code> <code>hp-tuning</code> <code>model-promotion</code> <code>training</code> <code>pytorch</code> <code>gradio</code> <code>huggingface</code> | An NLP training pipeline for BERT or GPT-2, covering tokenization, training, hyperparameter tuning, evaluation, and local testing with Gradio. | +| [Starter Template](https://github.com/zenml-io/template-starter) [*starter*] | `basic`, `scikit-learn` | Includes basic ML components: parameterized steps, a model training pipeline, flexible configuration, and a simple CLI, centered around a scikit-learn use case. | +| [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [*e2e_batch*] | `etl`, `hp-tuning`, `model-promotion`, `drift-detection`, `batch-prediction`, `scikit-learn` | Features two pipelines: data loading, splitting, preprocessing; hyperparameter tuning; model training and evaluation; model promotion; data drift detection; batch inference. | +| [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [*nlp*] | `nlp`, `hp-tuning`, `model-promotion`, `training`, `pytorch`, `gradio`, `huggingface` | A simple NLP pipeline covering tokenization, training, hyperparameter tuning, evaluation, and deployment for BERT or GPT-2 models, with local testing using Gradio. | -**Note:** ZenML is seeking collaboration for real-world project templates. 
Interested users can [join the Slack](https://zenml.io/slack/) for partnership opportunities. +*ZenML is seeking collaboration for real-world project templates. Interested users can join [Slack](https://zenml.io/slack/) to share their projects.* -## Using a Project Template +#### Using a Project Template To use the templates, install ZenML with the templates extras: @@ -12261,9 +12196,9 @@ To use the templates, install ZenML with the templates extras: pip install zenml[templates] ``` -**Important:** These templates differ from 'Run Templates' used for triggering pipelines. More information on Run Templates can be found [here](https://docs.zenml.io/how-to/trigger-pipelines). +Note: These templates differ from 'Run Templates' used for triggering pipelines. More information can be found [here](https://docs.zenml.io/how-to/trigger-pipelines). -To generate a project from a template, use the `--template` flag with the `zenml init` command: +To generate a project from a template, use the `zenml init` command with the `--template` flag: ```bash zenml init --template <short_name_of_template> @@ -12283,9 +12218,9 @@ zenml init --template <short_name_of_template> --template-with-defaults ### ZenML REST API: Creating and Running a Template -**Note:** This feature is available only in [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. +**Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. -### Triggering a Pipeline via REST API +#### Triggering a Pipeline via REST API To trigger a pipeline using the REST API, you must first create a run template for that pipeline. The following steps outline the process: @@ -12301,29 +12236,27 @@ To trigger a pipeline using the REST API, you must first create a run template f - Call: `POST /run_templates/<TEMPLATE_ID>/runs` - Include `PipelineRunConfiguration` in the request body. -### Example Workflow +#### Example Workflow -To re-run a pipeline named `training`, follow these steps: +To re-run a pipeline named `training`, execute the following: -1. **Get Pipeline ID:** +1. **Fetch Pipeline ID:** ```shell curl -X 'GET' \ - '<YOUR_ZENML_SERVER_URL>/api/v1/pipelines?hydrate=false&name=training' \ + '<YOUR_ZENML_SERVER_URL>/api/v1/pipelines?name=training' \ -H 'accept: application/json' \ -H 'Authorization: Bearer <YOUR_TOKEN>' ``` + - Extract `<PIPELINE_ID>` from the response. - Extract `<PIPELINE_ID>` from the response. - -2. **Get Template ID:** +2. **Fetch Template ID:** ```shell curl -X 'GET' \ - '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates?hydrate=false&pipeline_id=<PIPELINE_ID>' \ + '<YOUR_ZENML_SERVER_URL>/api/v1/run_templates?pipeline_id=<PIPELINE_ID>' \ -H 'accept: application/json' \ -H 'Authorization: Bearer <YOUR_TOKEN>' ``` - - Extract `<TEMPLATE_ID>` from the response. + - Extract `<TEMPLATE_ID>` from the response. 3. **Trigger the Pipeline:** ```shell @@ -12336,10 +12269,9 @@ To re-run a pipeline named `training`, follow these steps: "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}} }' ``` + - A successful response indicates the pipeline has been re-triggered with the new configuration. -A successful response indicates that the pipeline has been re-triggered with the specified configuration. - -For more details on obtaining a bearer token, refer to the [API reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically). 
+For details on obtaining a bearer token, refer to the [API reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically). ================================================== @@ -12347,26 +12279,26 @@ For more details on obtaining a bearer token, refer to the [API reference](../.. ### ZenML CLI: Create a Run Template -**Note:** This feature is available only in [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. +**Feature Access**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. -#### Command to Create a Template -Use the ZenML CLI to create a run template with the following command: +**Command**: Use the ZenML CLI to create a run template with the following command: ```bash zenml pipeline create-run-template <PIPELINE_SOURCE_PATH> --name=<TEMPLATE_NAME> ``` -- `<PIPELINE_SOURCE_PATH>`: Use `run.my_pipeline` if your pipeline is named `my_pipeline` in `run.py`. +- `<PIPELINE_SOURCE_PATH>`: Use `run.my_pipeline` if your pipeline is defined in `run.py` as `my_pipeline`. -**Important:** Ensure you have an active **remote stack** when executing this command, or specify one using the `--stack` option. +**Requirements**: Ensure you have an active **remote stack** when executing this command, or specify one using the `--stack` option. ================================================== === File: docs/book/how-to/trigger-pipelines/README.md === -### Trigger a Pipeline in ZenML +### Trigger a Pipeline (Run Templates) -In ZenML, you can trigger a pipeline using your pipeline function. Here’s a concise example: +In ZenML, pipelines can be triggered in various ways, with the simplest method being the direct execution of a pipeline function. +#### Example Code ```python from zenml import step, pipeline @@ -12376,10 +12308,7 @@ def load_data() -> dict: @step def train_model(data: dict) -> None: - total_features = sum(map(sum, data['features'])) - total_labels = sum(data['labels']) - print(f"Trained model using {len(data['features'])} data points. " - f"Feature sum is {total_features}, label sum is {total_labels}.") + print(f"Trained model using {len(data['features'])} data points.") @pipeline def simple_ml_pipeline(): @@ -12389,29 +12318,28 @@ if __name__ == "__main__": simple_ml_pipeline() ``` -### Run Templates - -**Run Templates** are pre-defined, parameterized configurations for ZenML pipelines. They can be executed from the ZenML dashboard or via the Client/REST API, serving as customizable blueprints for pipeline runs. +#### Run Templates +Run Templates are pre-defined, parameterized configurations for ZenML pipelines, allowing for easy execution from the ZenML dashboard or via the Client/REST API. They serve as customizable blueprints for pipeline runs. -**Note:** This feature is exclusive to ZenML Pro users. For access, sign up [here](https://cloud.zenml.io). +**Note:** This feature is exclusive to ZenML Pro users. 
-### Additional Resources -- Use templates: [Python SDK](use-templates-python.md) -- Use templates: [CLI](use-templates-cli.md) -- Use templates: [Dashboard](use-templates-dashboard.md) -- Use templates: [REST API](use-templates-rest-api.md) +#### Additional Resources +- [Use templates: Python SDK](use-templates-python.md) +- [Use templates: CLI](use-templates-cli.md) +- [Use templates: Dashboard](use-templates-dashboard.md) +- [Use templates: REST API](use-templates-rest-api.md) ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md === -### ZenML Dashboard Template Management +### ZenML Dashboard: Creating and Running Templates -**Feature Access**: This functionality is available only in [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. +**Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Creating a Template 1. Navigate to a pipeline run executed on a remote stack (requires a remote orchestrator, artifact store, and container registry). -2. Click `+ New Template`, enter a name, and click `Create`. +2. Click on `+ New Template`, enter a name, and click `Create`. #### Running a Template - To run a template: @@ -12420,29 +12348,30 @@ if __name__ == "__main__": You will be directed to the `Run Details` page, where you can upload a `.yaml` configuration file or modify the configuration using the editor. -Upon execution, the template runs on the same stack as the original run. +Once executed, the new run will occur on the same stack as the original run. ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-python.md === -### ZenML Run Templates Documentation Summary +### ZenML Template Creation and Execution Guide -**Feature Access**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access. +**Feature Availability**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Creating a Template -To create a run template using the ZenML client, you can either: -1. **From a Pipeline Run**: +To create a run template using the ZenML client: + +1. **From an Existing Pipeline Run**: ```python from zenml.client import Client run = Client().get_pipeline_run(<RUN_NAME_OR_ID>) Client().create_run_template(name=<TEMPLATE_NAME>, deployment_id=run.deployment_id) ``` - - **Note**: The pipeline run must be executed on a remote stack. + - **Note**: Select a pipeline run executed on a remote stack (with a remote orchestrator, artifact store, and container registry). -2. **From Pipeline Definition**: +2. **From Pipeline Definition** (requires an active remote stack): ```python from zenml import pipeline @@ -12452,10 +12381,11 @@ To create a run template using the ZenML client, you can either: template = my_pipeline.create_run_template(name=<TEMPLATE_NAME>) ``` - - Requires an active remote stack. #### Running a Template -To run a template, use the following code: + +To run a previously created template: + ```python from zenml.client import Client @@ -12466,10 +12396,12 @@ config = template.config_template Client().trigger_pipeline(template_id=template.id, run_configuration=config) ``` -- The new run will execute on the same stack as the original. +- A new run will execute on the same stack as the original. 
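+A short sketch of overriding a step parameter before triggering; the lookup via `get_run_template` and the step/parameter names ("model_trainer", "model_type") are illustrative:
+
+```python
+from zenml.client import Client
+from zenml.config.pipeline_run_configuration import PipelineRunConfiguration
+
+template = Client().get_run_template(<TEMPLATE_NAME>)
+
+# Override a single step parameter; everything else stays as configured
+# in the template.
+run_config = PipelineRunConfiguration(
+    steps={"model_trainer": {"parameters": {"model_type": "rf"}}}
+)
+
+Client().trigger_pipeline(template_id=template.id, run_configuration=run_config)
+```
+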
#### Advanced Usage: Running a Template from Another Pipeline -You can trigger a pipeline within another pipeline using the following structure: + +You can trigger one pipeline from another: + ```python import pandas as pd from zenml import pipeline, step @@ -12500,24 +12432,22 @@ def trigger_pipeline(df: UnmaterializedArtifact): @pipeline def loads_data_and_triggers_training(): df = load_data() - trigger_pipeline(df) + trigger_pipeline(df) # Triggers the training pipeline ``` -#### Additional Resources -- Learn more about [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) in the SDK Docs. -- More on Unmaterialized Artifacts can be found [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md). +For further details, refer to the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) documentation, as well as information on Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md). ================================================== === File: docs/book/how-to/contribute-to-zenml/README.md === -# Contribute to ZenML +# Contributing to ZenML -Thank you for considering contributing to ZenML! We welcome contributions such as new features, documentation improvements, integrations, or bug reports. +Thank you for considering contributing to ZenML! ## How to Contribute -Refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for best practices and conventions for contributing features, including custom integrations. +We welcome contributions such as new features, documentation improvements, integrations, or bug reports. For detailed guidelines on contributing, including creating custom integrations, please refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md).  @@ -12525,34 +12455,34 @@ Refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/m === File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md === -# Creating an External Integration for ZenML +# Creating an External Integration and Contributing to ZenML -ZenML aims to streamline the MLOps landscape by providing numerous integrations with popular tools. This guide outlines how to contribute your own integration to the ZenML codebase. +ZenML aims to organize the MLOps landscape by providing numerous integrations with popular tools. This guide helps you contribute your own integration to the ZenML codebase. ### Step 1: Plan Your Integration -Identify the categories your integration fits into from the [ZenML categories list](../../component-guide/README.md). An integration may belong to multiple categories, such as cloud integrations (AWS/GCP/Azure) and their respective component types. +Identify the categories your integration belongs to. Categories can be found [here](../../component-guide/README.md). An integration may fit multiple categories, such as cloud integrations (AWS/GCP/Azure) that include container registries and artifact stores. 
### Step 2: Create Stack Component Flavors -Develop individual stack component flavors for your chosen categories. Test them as custom flavors before packaging. For example, to register a custom orchestrator flavor: +Develop individual stack component flavors corresponding to the selected categories. Use the following command to register your flavor: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` -Ensure ZenML is initialized at the root of your repository to resolve the flavor class correctly. List available flavors with: +Ensure ZenML is initialized at the root of your repository to avoid resolution issues. Verify the registration with: ```shell zenml orchestrator flavor list ``` -Refer to the [extensibility documentation](../../component-guide/README.md) for more details. +Refer to the extensibility documentation [here](../../component-guide/README.md) for more details. ### Step 3: Create an Integration Class -Once your flavors are ready, package them into your integration: +Once your flavors are ready, package them into your integration. Follow this checklist: -1. **Clone the ZenML Repository**: Set up your local environment by following the [contributing guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). +1. **Clone the Repo**: Clone the [main ZenML repository](https://github.com/zenml-io/zenml) and set up your local environment as per the [contributing guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). -2. **Create the Integration Directory**: Structure your integration within `src/zenml/integrations/` as follows: +2. **Create Integration Directory**: Create a new folder in `src/zenml/integrations/` for your integration. The structure should look like this: ``` /src/zenml/integrations/ @@ -12562,13 +12492,13 @@ Once your flavors are ready, package them into your integration: └── __init__.py ``` -3. **Define Integration Name**: Add your integration name to `zenml/integrations/constants.py`: +3. **Define Integration Name**: In `zenml/integrations/constants.py`, add: ```python EXAMPLE_INTEGRATION = "<name-of-integration>" ``` -4. **Create the Integration Class**: In `src/zenml/integrations/<YOUR_INTEGRATION>/__init__.py`, subclass the `Integration` class: +4. **Create Integration Class**: In `src/zenml/integrations/<YOUR_INTEGRATION>/__init__.py`, define your integration class: ```python from zenml.integrations.constants import <EXAMPLE_INTEGRATION> @@ -12580,7 +12510,7 @@ class ExampleIntegration(Integration): REQUIREMENTS = ["<INSERT PYTHON REQUIREMENTS HERE>"] @classmethod - def flavors(cls): + def flavors(cls) -> List[Type[Flavor]]: from zenml.integrations.<example_flavor> import <ExampleFlavor> return [<ExampleFlavor>] @@ -12589,10 +12519,10 @@ ExampleIntegration.check_installation() Refer to the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for an example. -5. **Import the Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`. +5. **Import Integration**: Import your integration in `src/zenml/integrations/__init__.py`. ### Step 4: Create a PR -Submit a [Pull Request](https://github.com/zenml-io/zenml/compare) to ZenML for review. Thank you for contributing! +Submit a [PR](https://github.com/zenml-io/zenml/compare) to ZenML for review by core maintainers. Thank you for your contribution! 
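+Once merged and released, users would install your integration like any built-in one, reusing the placeholder name from Step 3:
+
+```shell
+zenml integration install <name-of-integration>
+```
+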
================================================== @@ -12600,13 +12530,13 @@ Submit a [Pull Request](https://github.com/zenml-io/zenml/compare) to ZenML for ### Disabling Rich Traceback Output in ZenML -ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for rich traceback output by default, which aids in debugging. To disable this feature, set the following environment variable: +ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for rich traceback output by default, which aids in debugging pipelines. To disable this feature, set the following environment variable: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` -This change will only affect local pipeline runs. To disable rich tracebacks for remote pipeline runs, set the `ZENML_ENABLE_RICH_TRACEBACK` variable in the pipeline's environment: +This change affects only local pipeline runs. For remote pipeline runs, set the `ZENML_ENABLE_RICH_TRACEBACK` variable in the pipeline's environment: ```python from zenml import pipeline @@ -12619,10 +12549,12 @@ def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options -my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) +my_pipeline = my_pipeline.with_options( + settings={"docker": docker_settings} +) ``` -This configuration ensures that both local and remote pipeline runs will display plain text tracebacks. +This setup ensures plain text traceback output in both local and remote runs. ================================================== @@ -12630,7 +12562,7 @@ This configuration ensures that both local and remote pipeline runs will display # Viewing Logs on the Dashboard -ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will log. +ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will capture. ```python import logging @@ -12642,14 +12574,14 @@ def my_step() -> None: print("World.") # Use print statements as well. ``` -Logs are stored in the artifact store of your stack, and can be viewed on the dashboard only if the ZenML server has access to the artifact store. This is true in two scenarios: +Logs are stored in the artifact store of your stack and can be viewed on the dashboard if the ZenML server has access to the artifact store. Access conditions include: -1. **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. -2. **Deployed ZenML Server**: Logs from runs on a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. +- **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. +- **Deployed ZenML Server**: Logs from a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. -For configuration details on remote artifact stores with service connectors, refer to the production guide. If configured correctly, logs will be displayed on the dashboard. +For configuring a remote artifact store with a service connector, refer to the production guide. Proper configuration allows logs to be displayed on the dashboard. -**Note**: To disable log storage for performance or storage reasons, follow the provided instructions. 
+**Note**: To disable log storage due to performance or storage limits, follow the provided instructions. ================================================== @@ -12659,9 +12591,9 @@ For configuration details on remote artifact stores with service connectors, ref ZenML generates different types of logs across various environments: -1. **ZenML Server**: Produces server logs similar to any FastAPI server. -2. **Client or Runner Environment**: Logs events related to pipeline execution, including pre- and post-run steps. -3. **Execution Environment**: Logs generated during the execution of each pipeline step, typically using Python's `logging` module. +1. **ZenML Server Logs**: Produced by the ZenML server, similar to any FastAPI server. +2. **Client or Runner Logs**: Generated during pipeline execution, capturing events before, after, and during pipeline runs. +3. **Execution Environment Logs**: Created at the orchestrator level when executing pipeline steps, typically using Python's `logging` module. This section outlines how users can manage logging behavior across these environments. @@ -12671,13 +12603,13 @@ This section outlines how users can manage logging behavior across these environ ### Setting Logging Verbosity in ZenML -By default, ZenML logging verbosity is set to `INFO`. To change this, set the environment variable: +ZenML defaults to a logging verbosity level of `INFO`. To change this, set the environment variable: ```bash export ZENML_LOGGING_VERBOSITY=INFO ``` -Available levels are `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. Note that changing this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. To set logging verbosity for remote runs, configure it in the pipeline environment: +Available levels include `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. Note that setting this variable in the client environment (e.g., local machine) does **not** affect remote pipeline runs. For remote runs, set `ZENML_LOGGING_VERBOSITY` in the pipeline environment: ```python from zenml import pipeline @@ -12689,9 +12621,13 @@ docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG" def my_pipeline() -> None: my_step() -# Alternatively, configure pipeline options -my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) -``` +# Alternatively, configure options +my_pipeline = my_pipeline.with_options( + settings={"docker": docker_settings} +) +``` + +This setup ensures that the desired logging verbosity is applied to the appropriate environment. ================================================== @@ -12699,7 +12635,7 @@ my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) # ZenML Logging Configuration -ZenML captures logs during step execution using a logging handler. Users can utilize the Python logging module or print statements, which ZenML will store in the artifact store. +ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will store. ## Example Code ```python @@ -12712,27 +12648,25 @@ def my_step() -> None: print("World.") ``` -Logs can be viewed on the dashboard, but require a connected cloud artifact store with a service connector. For more information, refer to the [view logs documentation](./view-logs-on-the-dasbhoard.md). +Logs are stored in the artifact store of your stack and can be viewed on the dashboard. 
Note: Logs will not be visible unless the ZenML server is connected to a cloud artifact store through a service connector. For more information, refer to [view logs on the dashboard](./view-logs-on-the-dasbhoard.md).

## Disabling Log Storage

-Logs can be disabled in two ways:
-
1. **Using Decorators**:
-   - Disable logging for a specific step:
+   - Disable logging for a step:
     ```python
     @step(enable_step_logs=False)
     def my_step() -> None: ...
     ```
-   - Disable logging for the entire pipeline:
+   - Disable logging for an entire pipeline:
     ```python
     @pipeline(enable_step_logs=False)
     def my_pipeline(): ...
     ```

-2. **Using Environment Variables**:
-   Set the environment variable `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true` in the execution environment. This takes precedence over decorator parameters.
+2. **Using Environment Variable**:
+   Set `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true` in the execution environment (orchestrator level). This variable takes precedence over decorator parameters.
    ```python
    from zenml import pipeline
    from zenml.config import DockerSettings

@@ -12742,15 +12676,15 @@ Logs can be disabled in two ways:
    @pipeline(settings={"docker": docker_settings})
    def my_pipeline() -> None:
        my_step()
-    ```
-This configuration allows for flexible management of log storage in ZenML pipelines.
+
+    # Alternatively, configure options after definition
+    my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings})
+    ```

==================================================

=== File: docs/book/how-to/control-logging/disable-colorful-logging.md ===

-### How to Disable Colorful Logging in ZenML
+### Disabling Colorful Logging in ZenML

ZenML enables colorful logging by default for better readability. To disable this feature, set the following environment variable:

```bash
ZENML_LOGGING_COLORS_DISABLED=true
```

-Setting this variable in the client environment (e.g., local machine) will also disable colorful logging for remote pipeline runs. To disable it locally while keeping it enabled for remote runs, set the variable in your pipeline's environment as shown below:
+Setting this variable in the client environment (e.g., local machine) will also disable colorful logging for remote pipeline runs. To disable it locally while keeping it enabled for remote runs, set the variable in the pipeline run's environment:

```python
from zenml import pipeline
@@ -12770,17 +12704,19 @@ docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "
def my_pipeline() -> None:
    my_step()

-# Alternatively, configure pipeline options
-my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings})
-```
+# Alternatively, configure options
+my_pipeline = my_pipeline.with_options(
+    settings={"docker": docker_settings}
+)
+```

-This allows for flexible logging configurations based on the execution environment.
+This setup allows flexibility in managing logging preferences across different environments.

==================================================

=== File: docs/book/how-to/control-logging/set-logging-format.md ===

-### Summary: Setting the Logging Format in ZenML
+### Setting the Logging Format in ZenML

To change the default logging format in ZenML, use the following environment variable:

```bash
export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s'
```

-The logging format must adhere to the `%`-string formatting style.
For available attributes, refer to the [Python logging documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes). +The logging format must follow the `%`-string formatting style. Refer to the [Python logging documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes) for available attributes. -**Important Note:** Setting this variable in the client environment (e.g., local machine) will not affect remote pipeline runs. To configure logging for remote runs, set the `ZENML_LOGGING_FORMAT` in the pipeline environment as shown below: +**Important Note:** Setting this variable in the client environment (e.g., local machine) will not affect remote pipeline runs. To configure logging format for remote runs, set the `ZENML_LOGGING_FORMAT` in the pipeline environment: ```python from zenml import pipeline @@ -12802,9 +12738,11 @@ docker_settings = DockerSettings(environment={"ZENML_LOGGING_FORMAT": "%(asctime def my_pipeline() -> None: my_step() -# Alternatively, configure options -my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) -``` +# Alternatively, configure pipeline options +my_pipeline = my_pipeline.with_options( + settings={"docker": docker_settings} +) +``` This ensures that the specified logging format is applied to both local and remote pipeline executions. @@ -12812,28 +12750,31 @@ This ensures that the specified logging format is applied to both local and remo === File: docs/book/how-to/model-management-metrics/README.md === -# Model Management and Metrics +# Model Management and Metrics in ZenML -This section details the management of models and tracking of metrics in ZenML. +This section outlines the processes for managing models and tracking metrics within ZenML. ## Key Components -1. **Model Management**: - - ZenML facilitates versioning, deployment, and monitoring of machine learning models. - - Models can be registered, updated, and accessed through a centralized repository. +1. **Model Management**: + - ZenML provides tools for versioning, storing, and retrieving machine learning models. + - Models can be registered and organized for easy access and deployment. 2. **Metrics Tracking**: - - Metrics can be logged during training and evaluation phases. - - ZenML supports integration with various metric tracking tools for visualization and analysis. + - Metrics can be logged and monitored throughout the model lifecycle. + - ZenML integrates with various tracking tools to visualize performance metrics. 3. **Version Control**: - - Each model version is tracked, allowing users to revert to previous versions if necessary. + - Each model version is tracked to ensure reproducibility and facilitate comparisons. + - Users can specify versioning strategies during model training. 4. **Deployment**: - - Models can be deployed to different environments (e.g., production, staging) seamlessly. + - Models can be deployed to different environments directly from ZenML. + - Supports various deployment options, including cloud services and on-premises solutions. -5. **Monitoring**: - - Continuous monitoring of model performance is enabled, with alerts for significant deviations. +5. **Integration**: + - ZenML integrates with popular ML frameworks and tools for seamless workflow management. + - Users can customize integrations based on their project needs. ## Example Code Snippet @@ -12841,16 +12782,14 @@ This section details the management of models and tracking of metrics in ZenML. 
-from zenml.model import Model
+from zenml import Model, log_metadata

# Register a model
-model = Model.register(name="my_model", version="1.0")
+model = Model(name="my_model", version="1.0")  # registered on first pipeline use or via Client().create_model()

# Log metrics
-model.log_metrics({"accuracy": 0.95, "loss": 0.05})
+log_metadata(
+    metadata={"accuracy": 0.95, "loss": 0.05},
+    model_name="my_model",
+    model_version="1.0",
+)
-
-# Deploy the model
-model.deploy(environment="production")
```

-This concise overview provides essential information on managing models and tracking metrics using ZenML, ensuring clarity without losing critical details.
+This concise overview provides essential information on managing models and tracking metrics in ZenML, ensuring users can effectively utilize these features.

==================================================

=== File: docs/book/how-to/model-management-metrics/track-metrics-metadata/grouping-metadata.md ===

### Grouping Metadata in the Dashboard

-To organize metadata in the ZenML dashboard, use a dictionary of dictionaries in the `metadata` parameter. This groups metadata into cards, enhancing visualization and understanding.
-
-**Example of Grouping Metadata:**
+To organize metadata in the ZenML dashboard, use a dictionary of dictionaries in the `metadata` parameter. This allows for grouping metadata into distinct cards, enhancing visualization and comprehension.

+#### Example Code:
```python
from zenml import log_metadata
from zenml.metadata.metadata_types import StorageSize

@@ -12883,7 +12821,7 @@ log_metadata(
)
```

-In the ZenML dashboard, "model_metrics" and "data_details" will be displayed as separate cards with their respective key-value pairs.
+In the ZenML dashboard, "model_metrics" and "data_details" will be displayed as separate cards, each containing their respective key-value pairs.

==================================================

=== File: docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md ===

# ZenML: Tracking and Comparing Metrics and Metadata

## Overview
-ZenML provides a unified `log_metadata` function to log and manage metrics and metadata across models, artifacts, steps, and runs.
+ZenML offers a unified `log_metadata` function for logging and managing metrics and metadata across models, artifacts, steps, and runs.

## Logging Metadata

### Basic Usage
-You can log metadata within a step using the `log_metadata` function:
-
+To log metadata within a step:
```python
from zenml import step, log_metadata

@@ -12906,11 +12843,10 @@ from zenml import step, log_metadata
def my_step() -> ...:
    log_metadata(metadata={"accuracy": 0.91})
```
-This logs the `accuracy` for the step and its associated pipeline run.
+This logs the `accuracy` for the step, its pipeline run, and the model version if provided.

### Comprehensive Example
-Here's an example of logging various metrics in a machine learning pipeline:
-
+In a machine learning pipeline, you can log various metadata types:
```python
from zenml import step, pipeline, log_metadata

@@ -12940,24 +12876,22 @@ def telemetry_pipeline():
This data can be visualized in the ZenML Pro dashboard.

## Visualizing and Comparing Metadata (Pro)
-Once metadata is logged, use the Experiment Comparison tool in the ZenML Pro dashboard to analyze and compare metrics across runs.
+Once metadata is logged, use the Experiment Comparison tool in ZenML Pro to analyze and compare metrics across runs.

### Comparison Views
-The tool offers:
1. **Table View**: Compare metadata with automatic change tracking.
2. **Parallel Coordinates Plot**: Visualize relationships between metrics.

-You can compare up to 20 pipeline runs and support any numerical metadata (`float` or `int`).
-You can compare up to 20 pipeline runs and support any numerical metadata (`float` or `int`). +The tool supports comparison of up to 20 runs and any numerical metadata (`float` or `int`). ### Additional Use-Cases -The `log_metadata` function can target various entities (model, artifact, step, run). For more details, refer to: -- [Log metadata to a step](attach-metadata-to-a-step.md) -- [Log metadata to a run](attach-metadata-to-a-run.md) -- [Log metadata to an artifact](attach-metadata-to-an-artifact.md) -- [Log metadata to a model](attach-metadata-to-a-model.md) +The `log_metadata` function allows specifying the target entity (model, artifact, step, or run). For more details, refer to: +- Log metadata to a step +- Log metadata to a run +- Log metadata to an artifact +- Log metadata to a model -### Deprecation Notice -Older methods for logging metadata (e.g., `log_model_metadata`, `log_artifact_metadata`, `log_step_metadata`) are deprecated. Use `log_metadata` for future implementations. +**Note**: Older methods like `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata` are deprecated. Use `log_metadata` for future implementations. ================================================== @@ -12969,7 +12903,7 @@ In ZenML, you can log metadata to a pipeline run using the `log_metadata` functi #### Logging Metadata Within a Run -When logging metadata from a pipeline step, use `log_metadata` to attach metadata to the current run. The metadata key follows the `step_name::metadata_key` pattern, allowing reuse of keys across different steps. +When logging metadata from within a pipeline step, use `log_metadata` to attach it to the current run. The metadata key follows the `step_name::metadata_key` pattern, allowing reuse across steps. ```python from typing import Annotated @@ -13001,7 +12935,7 @@ def train_model(dataset: pd.DataFrame) -> Annotated[ #### Manually Logging Metadata -You can attach metadata to a specific pipeline run post-execution using the run ID: +You can also log metadata to a specific pipeline run using identifiers like the run ID, useful for post-execution metrics. ```python from zenml import log_metadata @@ -13014,7 +12948,7 @@ log_metadata( #### Fetching Logged Metadata -Retrieve logged metadata using the ZenML Client: +To retrieve logged metadata, use the ZenML Client: ```python from zenml.client import Client @@ -13025,17 +12959,17 @@ run = client.get_pipeline_run("run_id_name_or_prefix") print(run.run_metadata["metadata_key"]) ``` -**Note:** The fetched value will always reflect the latest entry for the specified key. +**Note:** The fetched value for a specific key reflects the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md === -### Fetch Metadata During Pipeline Composition +### Fetching Metadata During Pipeline Composition -#### Pipeline Configuration Using `PipelineContext` +#### Pipeline Configuration with `PipelineContext` -To access pipeline configuration during composition, use the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. +To access pipeline configuration during composition, utilize the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. 
**Example Code:** ```python @@ -13089,7 +13023,7 @@ def my_step(): step_name = step_context.step_run.name ``` -You can also retrieve the output storage URI and the Materializer class used for saving outputs: +You can also retrieve the output storage URI and the associated Materializer class for saving outputs: ```python from zenml import step, get_step_context @@ -13101,7 +13035,7 @@ def my_step(): materializer = step_context.get_output_materializer() # Materializer class ``` -For more details on the `StepContext` attributes and methods, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). +For more details on the attributes and methods available in `StepContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). ================================================== @@ -13109,10 +13043,10 @@ For more details on the `StepContext` attributes and methods, refer to the [SDK ### Summary: Attaching Metadata to Artifacts in ZenML -In ZenML, metadata enhances artifacts by providing context and details like size and performance metrics, accessible via the ZenML dashboard. +In ZenML, metadata enhances artifacts by providing context and details such as size, structure, and performance metrics. This metadata is viewable in the ZenML dashboard, aiding in artifact inspection and comparison across pipeline runs. #### Logging Metadata for Artifacts -Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. Metadata can include JSON-serializable values, including ZenML types like `Uri`, `Path`, `DType`, and `StorageSize`. +Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. Metadata can be any JSON-serializable value, including ZenML types like `Uri`, `Path`, `DType`, and `StorageSize`. **Example of Logging Metadata:** ```python @@ -13135,12 +13069,12 @@ def process_data_step(dataframe: pd.DataFrame) -> pd.DataFrame: ``` #### Selecting the Artifact for Metadata Logging -1. **Using `infer_artifact`**: Automatically selects the output artifact of the step. -2. **Name and Version Provided**: Uses both to identify the specific artifact version. -3. **Artifact Version ID Provided**: Directly fetches the specified artifact version. +1. **Using `infer_artifact`**: Automatically infers output artifacts if used within a step. +2. **Name and Version**: Specify both to attach metadata to a specific artifact version. +3. **Artifact Version ID**: Use directly to fetch and attach metadata to that version. 
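For orientation, the three targeting options above might look as follows in code. The metric values, artifact name, and version are placeholders; the keyword arguments are the ones described in this section:

```python
from zenml import log_metadata

# 1. Inside a step: attach metadata to the step's output artifact
log_metadata(metadata={"rows_processed": 420}, infer_artifact=True)

# 2. Target a specific artifact version by name and version
log_metadata(
    metadata={"rows_processed": 420},
    artifact_name="processed_data",
    artifact_version="20",
)

# 3. Target an artifact version directly by its ID
log_metadata(
    metadata={"rows_processed": 420},
    artifact_version_id="...",  # placeholder UUID
)
```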
#### Fetching Logged Metadata -To retrieve logged metadata, use the ZenML Client: +To retrieve logged metadata, utilize the ZenML Client: ```python from zenml.client import Client @@ -13148,10 +13082,10 @@ client = Client() artifact = client.get_artifact_version("my_artifact", "my_version") print(artifact.run_metadata["metadata_key"]) ``` -*Note: Fetching metadata by key returns the latest entry.* +*Note: The returned value reflects the latest entry for the specified key.* #### Grouping Metadata in the Dashboard -To organize metadata into cards in the ZenML dashboard, pass a dictionary of dictionaries to the `metadata` parameter: +To organize metadata into cards in the ZenML dashboard, pass a dictionary of dictionaries in the `metadata` parameter: ```python log_metadata( metadata={ @@ -13169,7 +13103,7 @@ log_metadata( artifact_version="version", ) ``` -This groups `model_metrics` and `data_details` into separate cards for better visualization in the dashboard. +In the dashboard, `model_metrics` and `data_details` will appear as separate cards with their respective key-value pairs. ================================================== @@ -13177,12 +13111,12 @@ This groups `model_metrics` and `data_details` into separate cards for better vi ### Summary: Attaching Metadata to a Step in ZenML -In ZenML, you can log metadata to a specific step using the `log_metadata` function, which accepts a dictionary of key-value pairs. This metadata can include any JSON-serializable values, such as custom classes (`Uri`, `Path`, `DType`, `StorageSize`). +In ZenML, you can log metadata for a specific step using the `log_metadata` function, which allows attaching a dictionary of key-value pairs as metadata. The metadata can include any JSON-serializable values, including custom classes like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Step -When called within a step, `log_metadata` attaches the metadata to the executing step and its pipeline run, making it suitable for logging metrics available during execution. +When `log_metadata` is called within a step, it attaches the metadata to the currently executing step and its pipeline run, making it suitable for logging metrics available during execution. -**Example:** +**Example: Logging Metadata During Step Execution** ```python from typing import Annotated import pandas as pd @@ -13199,12 +13133,12 @@ def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactCon return classifier ``` -**Note:** If a pipeline step execution is cached, the cached run will copy the original step's metadata, excluding any manually generated entries post-execution. +**Note:** If a pipeline execution is cached, the cached step run will copy the original step's metadata, excluding any manually generated metadata post-execution. #### Manually Logging Metadata After Execution You can log metadata for a specific step after execution using identifiers for the pipeline, step, and run. 
-**Example:** +**Example: Manually Logging Metadata** ```python from zenml import log_metadata @@ -13218,16 +13152,17 @@ log_metadata(metadata={"additional_info": {"a_number": 3}}, step_id="step_id") #### Fetching Logged Metadata To fetch logged metadata, use the ZenML Client: -**Example:** +**Example: Fetching Metadata** ```python from zenml.client import Client client = Client() step = client.get_pipeline_run("pipeline_id").steps["step_name"] + print(step.run_metadata["metadata_key"]) ``` -**Note:** Fetching metadata with a specific key will return the latest entry. +**Note:** When fetching metadata by key, the returned value reflects the latest entry. ================================================== @@ -13235,10 +13170,10 @@ print(step.run_metadata["metadata_key"]) ### Summary: Attaching Metadata to a Model in ZenML -ZenML enables logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, or customer-specific details, aiding in model performance management across versions. +ZenML enables logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, and customer-specific details, aiding in model management and performance interpretation across versions. #### Logging Metadata for Models -To log metadata, use the `log_metadata` function, which attaches key-value pairs to a model, including metrics and JSON-serializable values. +To log metadata, use the `log_metadata` function, which allows attaching key-value pairs, including metrics and JSON-serializable values like `Uri`, `Path`, and `StorageSize`. **Example:** ```python @@ -13250,28 +13185,27 @@ from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: + """Train a model and log metadata.""" classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... - log_metadata( - metadata={ - "evaluation_metrics": { - "accuracy": accuracy, - "precision": precision, - "recall": recall - } - }, - infer_model=True, - ) + log_metadata(metadata={ + "evaluation_metrics": { + "accuracy": accuracy, + "precision": precision, + "recall": recall + } + }, infer_model=True) + return classifier ``` -In this example, metadata is logged for the model rather than the classifier artifact, useful for summarizing multiple pipeline steps. +In this example, metadata is linked to the model rather than the classifier artifact, useful for summarizing various pipeline steps. #### Selecting Models with `log_metadata` -ZenML offers options for attaching metadata to model versions: +ZenML offers flexible options for attaching metadata to model versions: 1. **Using `infer_model`**: Automatically infers the model from the step context. -2. **Model Name and Version**: Attach metadata to a specified model version. -3. **Model Version ID**: Directly fetch and attach metadata to a specific version. +2. **Model Name and Version**: Specify both to attach metadata to a specific version. +3. **Model Version ID**: Directly provide an ID to fetch and attach metadata. 
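A quick sketch of these three options (the model name, version, and metric values are placeholders):

```python
from zenml import log_metadata

# 1. Inside a step whose pipeline or step has a model configured
log_metadata(metadata={"f1_score": 0.89}, infer_model=True)

# 2. Target a specific model version by name and version
log_metadata(
    metadata={"f1_score": 0.89},
    model_name="my_model",
    model_version="1.2.3",
)

# 3. Target a model version directly by its ID
log_metadata(
    metadata={"f1_score": 0.89},
    model_version_id="...",  # placeholder UUID
)
```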
#### Fetching Logged Metadata To retrieve attached metadata, use the ZenML Client: @@ -13281,24 +13215,19 @@ from zenml.client import Client client = Client() model = client.get_model_version("my_model", "my_version") + print(model.run_metadata["metadata_key"]) ``` -**Note**: Fetching metadata with a specific key returns the latest entry. +**Note**: Fetching metadata by key returns the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/logging-metadata.md === -### Tracking Metadata in ZenML - -ZenML supports special metadata types to capture specific information. Key types include: +### Summary: Tracking Metadata in ZenML -- **Uri**: Represents a dataset source URI. -- **Path**: Specifies the filesystem path to a script. -- **DType**: Describes data types of specific columns. -- **StorageSize**: Indicates the size of processed data in bytes. +ZenML supports special metadata types to capture specific information, including `Uri`, `Path`, `DType`, and `StorageSize`. Below is an example of how to use these types: -#### Example Usage: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path @@ -13317,23 +13246,29 @@ log_metadata( ) ``` -This example demonstrates how to log metadata using these special types, ensuring consistency and interpretability in metadata logging. +**Key Points:** +- **Uri**: Represents a dataset source URI. +- **Path**: Specifies the filesystem path to a script. +- **DType**: Describes data types for specific columns. +- **StorageSize**: Indicates the size of processed data in bytes. + +These types standardize metadata logging, ensuring consistency and interpretability. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-artifacts-from-model.md === -### Summary of Documentation on Loading Artifacts from a Model +# Summary of Loading Artifacts from a Model This documentation explains how to load artifacts from a model in a two-pipeline project, where the first pipeline handles training and the second performs batch inference using the trained model artifacts. -#### Key Points: +## Key Points: -1. **Model Context**: Use `get_pipeline_context().model` to access the model context during pipeline execution. This context is not evaluated during pipeline compilation, as the production version may change before execution. +1. **Model Context**: Use `get_pipeline_context().model` to access the model context during pipeline execution. This context is not evaluated during pipeline compilation, as the production model version may change before execution. -2. **Artifact Loading**: The method `model.get_model_artifact("trained_model")` retrieves the trained model artifact, which is stored for delayed materialization until the step runs. +2. **Artifact Loading**: Use `model.get_model_artifact("trained_model")` to load the trained model artifact during the step execution. This ensures that the correct version is used. -3. **Alternative Approach**: You can also use `Client` methods to directly fetch the model version: +3. **Alternative Method**: You can also use the `Client` class to directly retrieve the model version: ```python from zenml.client import Client @@ -13347,9 +13282,9 @@ This documentation explains how to load artifacts from a model in a two-pipeline ) ``` -4. 
**Execution Timing**: The evaluation of the model artifact occurs only when the step is executed, ensuring that the most current version is used. +4. **Execution Timing**: The evaluation of the model artifact occurs only when the step is actually running, ensuring that the latest version is utilized. -This concise overview captures the essential technical details regarding artifact loading in ZenML pipelines. +This concise overview retains essential technical details while eliminating redundancy. ================================================== @@ -13357,19 +13292,16 @@ This concise overview captures the essential technical details regarding artifac # Model Versions Overview -Model versions track different iterations of your training process, providing dashboard and API functionality for the ML lifecycle. You can associate model versions with stages and promote them to production. Versions are created automatically during training, but can also be explicitly named using the `version` argument in the `Model` object. +Model versions allow tracking of different iterations in the machine learning training process, facilitating the ML lifecycle with dashboard and API functionalities. You can associate model versions with stages (e.g., production) and link them to non-technical artifacts like datasets. ## Explicitly Naming Model Versions -To explicitly name a model version: +To explicitly name a model version, use the `version` argument in the `Model` object. If omitted, ZenML generates a version number automatically. ```python from zenml import Model, step, pipeline -model = Model( - name="my_model", - version="1.0.5" -) +model = Model(name="my_model", version="1.0.5") @step(model=model) def svc_trainer(...) -> ...: @@ -13380,19 +13312,14 @@ def training_pipeline(...): # training happens here ``` -If the model version exists, it is automatically associated with the pipeline. +## Templated Naming for Model Versions -## Using Name Templates for Model Versions - -For semantic naming in continuous projects, use templated names in the `version` and/or `name` arguments: +For continuous projects, use templated names in the `version` and/or `name` arguments for unique, semantically meaningful model versions. ```python from zenml import Model, step, pipeline -model = Model( - name="{team}_my_model", - version="experiment_with_phi_3_{date}_{time}" -) +model = Model(name="{team}_my_model", version="experiment_with_phi_3_{date}_{time}") @step(model=model) def llm_trainer(...) -> ...: @@ -13403,15 +13330,11 @@ def training_pipeline(...): # training happens here ``` -This will produce unique model versions with names like `experiment_with_phi_3_2024_08_30_12_42_53`. Substitutions can be set in the `@pipeline` decorator, `pipeline.with_options`, or `@step` decorator. - -### Standard Substitutions -- `{date}`: current date (e.g., `2024_11_27`) -- `{time}`: current time in UTC (e.g., `11_07_09_326492`) +When executed, this will produce a model version with a runtime-evaluated name like `experiment_with_phi_3_2024_08_30_12_42_53`. Standard substitutions include `{date}` and `{time}`. ## Fetching Model Versions by Stage -Assign stages (e.g., `production`, `staging`) to model versions for semantic retrieval. Update a model version's stage via CLI: +Assign stages (e.g., `production`, `staging`) to model versions for semantic retrieval. 
Update the model version's stage via CLI: ```shell zenml model version update MODEL_NAME --stage=STAGE @@ -13422,10 +13345,7 @@ Fetch a model version by stage: ```python from zenml import Model, step, pipeline -model = Model( - name="my_model", - version="production" -) +model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: @@ -13438,38 +13358,28 @@ def training_pipeline(...): ## Autonumbering of Versions -ZenML automatically numbers model versions. If no version is specified, a new version is generated: +ZenML automatically numbers model versions. If no version is specified, a new version is generated. For example: ```python from zenml import Model, step -model = Model( - name="my_model", - version="even_better_version" -) +model = Model(name="my_model", version="even_better_version") @step(model=model) def svc_trainer(...) -> ...: ... ``` -ZenML tracks the iteration sequence: +This creates a new version, incrementing the sequence. ```python from zenml import Model -earlier_version = Model( - name="my_model", - version="really_good_version" -).number # == 5 - -updated_version = Model( - name="my_model", - version="even_better_version" -).number # == 6 +earlier_version = Model(name="my_model", version="really_good_version").number # == 5 +updated_version = Model(name="my_model", version="even_better_version").number # == 6 ``` -This ensures proper versioning throughout the model's lifecycle. +This structure allows for effective management and retrieval of model versions throughout the ML lifecycle. ================================================== @@ -13477,15 +13387,15 @@ This ensures proper versioning throughout the model's lifecycle. # Use the Model Control Plane -A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and business data related to machine learning products. It can be viewed as a "project" or "workspace." +A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and business data, representing your ML products' business logic. It can be viewed as a "project" or "workspace." **Key Points:** -- A ZenML Model typically includes a technical model (the model file with weights and parameters), training data, and production predictions. +- The technical model, which includes model files with weights and parameters, is a primary artifact associated with a ZenML Model. Other relevant artifacts include training data and production predictions. - Models are first-class entities in ZenML, managed through a unified API and the ZenML Pro dashboard. -- Models capture lineage information and support version staging (e.g., `Production`), allowing for business rule-based promotion of model versions. -- The Model Control Plane provides a centralized interface for managing models, integrating pipelines, artifacts, and the technical model. +- Models capture lineage information and support version staging (e.g., `Production` stage) to facilitate decision-making based on business rules. +- The Model Control Plane provides a centralized interface for managing models, integrating pipelines, artifacts, and business data with the technical model. -For a detailed example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). +For a comprehensive example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). 
================================================== @@ -13502,13 +13412,13 @@ To register a model using the CLI, use the following command: zenml model register iris_logistic_regression --license=... --description=... ``` -For more options, run `zenml model register --help`. You can also add tags using the `--tag` option. +For additional options, run `zenml model register --help`. Tags can be added using the `--tag` option. ## Explicit Dashboard Registration ZenML Pro users can register models directly from the cloud dashboard. ## Explicit Python SDK Registration -To register a model using the Python SDK: +Register a model using the Python SDK as follows: ```python from zenml import Model @@ -13523,7 +13433,7 @@ Client().create_model( ``` ## Implicit Registration by ZenML -Models are commonly registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator: +Models can also be registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator: ```python from zenml import pipeline @@ -13541,7 +13451,7 @@ def train_and_promote_model(): ... ``` -Running this pipeline creates a new model version while linking to the artifacts. +Running this pipeline creates a new model version, linking it to the associated artifacts. ================================================== @@ -13549,11 +13459,11 @@ Running this pipeline creates a new model version while linking to the artifacts # Linking Model Binaries/Data to Models in ZenML -ZenML allows linking artifacts generated during pipeline runs to models, enabling lineage tracking and transparency for training, evaluation, and inference processes. +ZenML allows linking artifacts generated during pipeline runs to models, facilitating lineage tracking and transparency in training, evaluation, and inference processes. ## Configuring the Model at Pipeline Level -You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorator: +You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorators: ```python from zenml import Model, pipeline @@ -13565,11 +13475,11 @@ def my_pipeline(): ... ``` -This links all artifacts from the pipeline run to the specified model configuration. +This links all artifacts from the pipeline run to the specified model. ## Saving Intermediate Artifacts -To save progress during long-running steps (e.g., epoch-based training), use the `save_artifact` utility. If the step has a Model context configured, it will automatically link to the model. +To save intermediate results, use the `save_artifact` utility function. If the step is configured with a Model context, the artifacts will be automatically linked. ```python from zenml import step, Model @@ -13586,9 +13496,9 @@ def trainer(trn_dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactCon return model ``` -## Linking Artifacts Explicitly +## Explicitly Linking Artifacts -To link an artifact to a model outside of a step context, use the `link_artifact_to_model` function. You need the artifact and model configuration. +To link an artifact to a model outside of a step, use the `link_artifact_to_model` function. You need the artifact ready for linking and the model configuration. 
```python from zenml import step, Model, link_artifact_to_model, save_artifact @@ -13603,28 +13513,29 @@ existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_ar link_artifact_to_model(artifact_version_id=existing_artifact.id, model=Model(name="MyModel", version="0.2.42")) ``` -This approach allows for flexibility in linking artifacts to models as needed. +This documentation provides essential methods for linking artifacts to models in ZenML, ensuring efficient tracking and management of model versions and associated data. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-artifacts-via-a-model.md === -### Summary: Structuring an MLOps Project +### Structuring an MLOps Project -#### Overview -An MLOps project typically consists of multiple pipelines, including: -- **Feature Engineering Pipeline**: Prepares raw data for training. -- **Training Pipeline**: Trains models using data from the feature engineering pipeline. -- **Inference Pipeline**: Runs batch predictions on trained models. -- **Deployment Pipeline**: Deploys trained models to production. +This documentation outlines how to structure an MLOps project by connecting artifacts through pipelines. Key components include: -The structure of these pipelines can vary based on project requirements. Information transfer between pipelines is essential, particularly regarding artifacts, models, and metadata. +- **Pipelines**: Essential for managing the flow of data and models. Common types include: + - **Feature Engineering Pipeline**: Prepares raw data. + - **Training Pipeline**: Trains models using data from the feature engineering pipeline. + - **Inference Pipeline**: Runs predictions on trained models. + - **Deployment Pipeline**: Deploys models to production. -#### Common Patterns for Artifact Exchange +The structure of these pipelines can vary based on project requirements. + +#### Artifact Exchange Patterns 1. **Artifact Exchange via `Client`**: - - Use the ZenML Client to exchange datasets between pipelines. - - Example: + - Use the ZenML Client to transfer artifacts between pipelines. + - Example code: ```python from zenml import pipeline from zenml.client import Client @@ -13642,11 +13553,11 @@ The structure of these pipelines can vary based on project requirements. Informa model_evaluator(model, sklearn_classifier) ``` - **Note**: Artifacts are referenced, not materialized, in the `@pipeline` function. + **Note**: Artifacts are references, not materialized in memory during the pipeline function. 2. **Artifact Exchange via `Model`**: - - Use ZenML Model as a reference point for artifacts. - - Example: + - Use ZenML Model as a reference point instead of individual artifact IDs. + - Example code: ```python from zenml import step, get_step_context @@ -13657,11 +13568,10 @@ The structure of these pipelines can vary based on project requirements. Informa return predictions ``` - - Alternatively, resolve the artifact at the pipeline level: + Alternatively, resolve the artifact at the pipeline level: ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages - import pandas as pd @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: @@ -13669,22 +13579,21 @@ The structure of these pipelines can vary based on project requirements. 
Informa @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): - model = get_pipeline_context().model.get_model_artifact("trained_model") + model = get_pipeline_context().model inference_data = load_data() - predict(model=model, data=inference_data) + predict(model=model.get_model_artifact("trained_model"), data=inference_data) if __name__ == "__main__": do_predictions() ``` -#### Conclusion -Choose between artifact exchange methods based on project needs and preferences. Both approaches are valid for managing artifacts and models within MLOps pipelines. +Both artifact exchange methods are valid; the choice depends on user preference and project needs. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/associate-a-pipeline-with-a-model.md === -### Summary of Documentation on Associating a Pipeline with a Model +### Summary of Documentation: Associating a Pipeline with a Model To associate a pipeline with a model in ZenML, use the following code structure: @@ -13704,9 +13613,9 @@ def my_pipeline(): ... ``` -This code associates the pipeline with the specified model. If the model exists, a new version is created. To attach the pipeline to an existing model version, specify it accordingly. +- This code associates the pipeline with the specified model. If the model exists, a new version is created. To attach the pipeline to an existing model version, specify it accordingly. -Model configuration can also be moved to a configuration file, as shown below: +Model configurations can also be defined in configuration files, as shown below: ```yaml model: @@ -13715,7 +13624,7 @@ model: tags: ["classifier", "sgd"] ``` -This allows for better organization and management of model settings. +This allows for better organization and management of model attributes. ================================================== @@ -13723,7 +13632,7 @@ This allows for better organization and management of model settings. ### Delete a Model -Deleting a model or a specific version removes all links to artifacts and pipeline runs, along with all associated metadata. +Deleting a model or a specific model version removes all links between the Model entity and its artifacts and pipeline runs, along with all associated metadata. #### Deleting All Versions of a Model @@ -13739,7 +13648,7 @@ from zenml.client import Client Client().delete_model(<MODEL_NAME>) ``` -#### Deleting a Specific Version of a Model +#### Delete a Specific Version of a Model **CLI:** ```shell @@ -13759,40 +13668,34 @@ Client().delete_model_version(<MODEL_VERSION_ID>) # Model Promotion in ZenML -## Stages -ZenML Model versions progress through various lifecycle stages, which serve as metadata to indicate their state. The stages include: +## Stages and Promotion +ZenML Model versions can progress through various lifecycle stages, which serve as metadata to indicate their state. The stages include: - **staging**: Prepared for production. - **production**: Actively running in production. - **latest**: Represents the most recent version (not a promotion target). - **archived**: No longer relevant, moving out of other stages. ### Promotion Methods -Models can be promoted using the following methods: - -#### CLI Promotion -Use the ZenML CLI to promote a model version: -```bash -zenml model version update iris_logistic_regression --stage=... -``` +1. 
**CLI**: Use the command: + ```bash + zenml model version update iris_logistic_regression --stage=... + ``` -#### Cloud Dashboard Promotion -This feature will soon be available for promoting model versions directly from the ZenML Pro dashboard. +2. **Cloud Dashboard**: Upcoming feature to promote models directly from the ZenML Pro dashboard. -#### Python SDK Promotion -The most common method for promoting models: -```python -from zenml import Model -from zenml.enums import ModelStages +3. **Python SDK**: Common method for promotion: + ```python + from zenml import Model + from zenml.enums import ModelStages -MODEL_NAME = "iris_logistic_regression" -model = Model(name=MODEL_NAME, version="1.2.3") -model.set_stage(stage=ModelStages.PRODUCTION) + model = Model(name="iris_logistic_regression", version="1.2.3") + model.set_stage(stage=ModelStages.PRODUCTION) -latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST) -latest_model.set_stage(stage=ModelStages.STAGING) -``` + latest_model = Model(name="iris_logistic_regression", version=ModelStages.LATEST) + latest_model.set_stage(stage=ModelStages.STAGING) + ``` -In a pipeline context, retrieve the model from the step context: +Within a pipeline, use the step context to set the model stage: ```python from zenml import get_step_context, step, pipeline from zenml.enums import ModelStages @@ -13809,7 +13712,7 @@ def train_and_promote_model(): ``` ## Fetching Model Versions by Stage -To load the correct model version, specify the stage as follows: +Load specific model versions using the stage: ```python from zenml import Model, step, pipeline @@ -13821,10 +13724,10 @@ def svc_trainer(...) -> ...: @pipeline(model=model) def training_pipeline(...): - # training logic here + ... ``` -This configuration ensures the specified model version is used throughout the pipeline. +This configuration ensures the correct model version is used throughout the pipeline. ================================================== @@ -13835,7 +13738,7 @@ This configuration ensures the specified model version is used throughout the pi ## Loading a Model in Code ### 1. Load the Active Model in a Pipeline -You can load the active model to access its metadata and associated artifacts. +You can load the active model to access its metadata and associated artifacts. ```python from zenml import step, pipeline, get_step_context, Model @@ -13853,7 +13756,7 @@ def my_step(): ``` ### 2. Load Any Model via the Client -You can also load models using the `Client` to retrieve specific model versions. +You can also load models using the `Client` class. ```python from zenml import step @@ -13871,7 +13774,7 @@ def model_evaluator_step(): staging_zenml_model = None ``` -This documentation provides methods to load models in ZenML, either through an active pipeline context or using the Client to access specific model versions. +This documentation provides methods to load models in ZenML, either through the active model in a pipeline or by using the Client to access any model. ================================================== @@ -13879,93 +13782,91 @@ This documentation provides methods to load models in ZenML, either through an a # Advanced Topics in ZenML -This section addresses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the framework. +This section discusses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the framework. ## Key Features -1. 
**Custom Components**: Users can create custom components to extend ZenML's capabilities. Components can be defined using Python functions or classes and can integrate with various ML libraries.
+1. **Custom Components**: Users can create custom components to extend ZenML's capabilities. Components can be defined using Python functions or classes and should implement the required interfaces.

-2. **Pipelines**: ZenML allows the construction of complex pipelines that can be configured with different steps, including data ingestion, preprocessing, model training, and evaluation.
+2. **Pipelines**: Advanced pipeline configurations allow for dynamic pipeline creation and execution. Users can leverage conditional logic and parameterization to build flexible workflows.

-3. **Artifact Management**: ZenML manages artifacts generated during pipeline execution, allowing users to track and version datasets, models, and other outputs.
+3. **Artifact Management**: ZenML supports artifact tracking and versioning, enabling users to manage outputs from various pipeline steps efficiently.

-4. **Integrations**: ZenML supports integrations with various tools and platforms, enabling seamless workflows with services like AWS, GCP, and Azure.
+4. **Integrations**: ZenML integrates with various tools and platforms, including cloud services and ML frameworks, allowing for seamless data handling and model deployment.

-5. **Versioning**: Users can version their pipelines and components, ensuring reproducibility and traceability of experiments.
+5. **Secrets Management**: Securely manage sensitive information using ZenML's built-in secrets management feature, which allows for the safe storage and retrieval of credentials.
+
+6. **Experiment Tracking**: Users can track experiments and their results, facilitating reproducibility and comparison of different model versions and configurations.

## Configuration

-- **Settings**: ZenML can be configured through a configuration file or environment variables, allowing customization of settings such as storage backends and orchestrators.
+- **Settings**: ZenML can be configured through a configuration file or environment variables, allowing users to customize settings such as backend services and logging levels.

-- **Secrets Management**: ZenML provides mechanisms for managing sensitive information, ensuring secure handling of credentials and API keys.
+- **Version Control**: It is recommended to use version control for ZenML pipelines and components to ensure consistency and facilitate collaboration.

## Example Code Snippet

```python
-from zenml.pipelines import pipeline
+from zenml import pipeline
-from zenml.steps import step
-
-@step
-def data_ingestion():
-    # Code to ingest data
-    pass
-
-@step
-def model_training(data):
-    # Code to train model
-    pass

@pipeline
-def training_pipeline():
-    data = data_ingestion()
-    model_training(data)
+def my_pipeline():
+    # Chain placeholder steps; outputs are passed directly between steps
+    raw_data = custom_component()
+    another_component(raw_data)

-# Run the pipeline
-training_pipeline()
+# Execute the pipeline
+my_pipeline()
```

-This code demonstrates how to define a simple pipeline with data ingestion and model training steps.
-
-## Conclusion
-
-Advanced configurations in ZenML enable users to tailor their ML workflows, ensuring flexibility and efficiency in model development and deployment.
+This concise overview highlights the advanced features and configurations available in ZenML, emphasizing customization, integration, and management capabilities essential for advanced users.
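To make the secrets-management point above concrete, a minimal sketch (the secret name and key are placeholders):

```shell
# Store a credential as a ZenML secret
zenml secret create my_api_secret --api_key=<YOUR_API_KEY>
```

```python
from zenml.client import Client

# Retrieve the stored value, e.g. inside a step
secret = Client().get_secret("my_api_secret")
api_key = secret.secret_values["api_key"]
```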
================================================== === File: docs/book/how-to/data-artifact-management/README.md === -# Data and Artifact Management in ZenML - -This section addresses the management of data and artifacts within ZenML, focusing on key functionalities and processes. - -## Key Components - -1. **Data Management**: Involves handling datasets used in machine learning workflows, ensuring proper storage, retrieval, and versioning. - -2. **Artifact Management**: Refers to managing outputs generated during the ML pipeline, such as models, metrics, and visualizations. - -## Important Functions +### Data and Artifact Management in ZenML -- **Data Versioning**: ZenML allows users to version datasets, enabling reproducibility and tracking changes over time. - -- **Artifact Storage**: Artifacts can be stored in various backends (e.g., local file systems, cloud storage) for easy access and management. +This section details the management of data and artifacts within ZenML, focusing on key functionalities and best practices. -- **Integration with ML Workflows**: Data and artifacts are integrated seamlessly into ML pipelines, ensuring that all components work cohesively. +#### Key Concepts: +- **Data Management**: Involves handling datasets used in machine learning workflows, ensuring data integrity and accessibility. +- **Artifact Management**: Refers to the storage and retrieval of outputs generated during the ML pipeline, such as models, metrics, and visualizations. -## Code Example +#### Important Features: +- **Versioning**: ZenML supports version control for datasets and artifacts, allowing users to track changes and revert to previous versions. +- **Storage Backends**: ZenML integrates with various storage solutions (e.g., AWS S3, GCP, local file systems) for flexible data and artifact storage. +- **Data Lineage**: The framework provides tools to trace the origin and transformation of data throughout the pipeline, enhancing reproducibility. +#### Code Example: ```python from zenml import pipeline +from zenml.steps import step + +@step +def load_data(): + # Load and return dataset + pass + +@step +def process_data(data): + # Process and return transformed data + pass @pipeline def my_pipeline(): - data = load_data() # Load data - processed_data = preprocess(data) # Preprocess data - model = train_model(processed_data) # Train model - save_artifact(model) # Save model artifact + data = load_data() + processed_data = process_data(data) ``` -This concise overview highlights the essential aspects of data and artifact management in ZenML, ensuring clarity and focus on critical functionalities. +This example illustrates a simple pipeline with data loading and processing steps, showcasing how ZenML structures workflows. + +#### Best Practices: +- Regularly update and document data and artifact versions. +- Utilize appropriate storage backends based on project requirements. +- Implement data lineage tracking for better transparency and reproducibility. + +This summary encapsulates the essential aspects of data and artifact management in ZenML, providing a foundational understanding for further exploration or inquiry. ================================================== @@ -13973,20 +13874,20 @@ This concise overview highlights the essential aspects of data and artifact mana ### Types of Visualizations in ZenML -ZenML automatically saves visualizations for various data types, accessible via the ZenML dashboard or Jupyter notebooks using the `artifact.visualize()` method. 
+ZenML automatically saves visualizations for various data types, accessible via the ZenML dashboard or in Jupyter notebooks using the `artifact.visualize()` method.

 **Default Visualizations Include:**
-- **Statistical Representation**: Displays a Pandas DataFrame as a PNG image.
-- **Drift Detection Reports**: Generated by tools like Evidently, Great Expectations, and whylogs.
-- **Hugging Face Datasets Viewer**: Embedded as an HTML iframe.
+- **Statistical Representation:** Visualizes a Pandas DataFrame as a PNG image.
+- **Drift Detection Reports:** Generated by tools like Evidently, Great Expectations, and whylogs.
+- **Hugging Face Datasets Viewer:** Displayed as an HTML iframe.

-Visualizations can be viewed in the ZenML dashboard or directly in Jupyter notebooks, providing flexibility in data analysis and presentation.
+These visualizations enhance data analysis and monitoring within ZenML workflows.

==================================================

=== File: docs/book/how-to/data-artifact-management/visualize-artifacts/README.md ===

---- icon: chart-scatter description: Configuring ZenML for data visualizations in the dashboard. ---
+---
+icon: chart-scatter
+description: Configuring ZenML for data visualizations in the dashboard.
+---

 # Visualize Artifacts

@@ -14003,46 +13904,39 @@ ZenML allows easy association of visualizations with data and artifacts.

=== File: docs/book/how-to/data-artifact-management/visualize-artifacts/creating-custom-visualizations.md ===

-# Creating Custom Visualizations in ZenML
-
-ZenML allows you to associate custom visualizations with artifacts using supported types:
-
-- **HTML:** Embedded HTML visualizations (e.g., data validation reports)
-- **Image:** Visualizations of image data (e.g., Pillow images)
-- **CSV:** Tables (e.g., pandas DataFrame output)
-- **Markdown:** Markdown strings or pages
-- **JSON:** JSON strings or objects
-
-## Methods to Add Custom Visualizations
+### Creating Custom Visualizations in ZenML

-1. **Special Return Types:** If your step outputs HTML, Markdown, CSV, or JSON data, cast them to a specific type:
-   - `zenml.types.HTMLString`
-   - `zenml.types.MarkdownString`
-   - `zenml.types.CSVString`
-   - `zenml.types.JSONString`
+ZenML supports several visualization types for artifacts, including:
+- **HTML**: Embedded HTML visualizations (e.g., data validation reports).
+- **Image**: Visualizations of image data (e.g., Pillow images).
+- **CSV**: Tables (e.g., pandas DataFrame output).
+- **Markdown**: Markdown strings or pages.
+- **JSON**: JSON strings or objects.

-   **Example:**
-   ```python
-   from zenml.types import CSVString
+#### Adding Custom Visualizations
+You can add custom visualizations in three ways:
+1. **Special Return Types**: Return HTML, Markdown, CSV, or JSON data by casting to specific types.
+2. **Custom Materializers**: Define visualization logic for specific data types.
+3. **Custom Return Type Class**: Create a custom class and materializer for unique visualizations.

-   @step
-   def my_step() -> CSVString:
-       return CSVString("a,b,c\n1,2,3")
-   ```
+#### Visualization via Special Return Types
+To visualize data, cast it to the appropriate type and return it:

-2. **Custom Materializers:** Define visualization logic for specific data types by overriding the `save_visualizations()` method in a custom materializer.
+```python
+from zenml.types import CSVString

-3. **Custom Return Type Class:** Create a custom class and materializer for other visualizations.
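+
+# Returning a CSVString (rather than a plain str) tells ZenML to render
+# this output as a CSV table visualization in the dashboard.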
+@step +def my_step() -> CSVString: + return CSVString("a,b,c\n1,2,3") +``` -## Visualization via Special Return Types +For matplotlib visualizations, embed the image in an HTML string: -### Example of Matplotlib Plot as HTML ```python import matplotlib.pyplot as plt import base64 import io from zenml.types import HTMLString -from zenml import step, pipeline @step def create_matplotlib_visualization() -> HTMLString: @@ -14058,20 +13952,13 @@ def create_matplotlib_visualization() -> HTMLString: html = f'<div style="text-align: center;"><img src="data:image/png;base64,{image_base64}" style="max-width: 100%; height: auto;"></div>' return HTMLString(html) - -@pipeline -def visualization_pipeline(): - create_matplotlib_visualization() - -if __name__ == "__main__": - visualization_pipeline() ``` -## Visualization via Materializers - -### Example: Visualizing Matplotlib Figures +#### Visualization via Materializers +To visualize all artifacts of a certain type, override the `save_visualizations()` method in a custom materializer. -1. **Custom Class:** +**Example: Matplotlib Figure Visualization** +1. **Custom Class**: ```python from pydantic import BaseModel @@ -14079,7 +13966,7 @@ if __name__ == "__main__": figure: Any ``` -2. **Materializer:** +2. **Materializer**: ```python class MatplotlibMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MatplotlibVisualization,) @@ -14091,7 +13978,7 @@ if __name__ == "__main__": return {visualization_path: VisualizationType.IMAGE} ``` -3. **Step:** +3. **Step**: ```python @step def create_matplotlib_visualization() -> MatplotlibVisualization: @@ -14101,12 +13988,7 @@ if __name__ == "__main__": return MatplotlibVisualization(figure=fig) ``` -### Workflow Summary -- The step creates and returns a `MatplotlibVisualization`. -- ZenML invokes `MatplotlibMaterializer` to save the figure as a PNG. -- The dashboard displays the PNG when viewing the artifact. - -For further examples, refer to the Hugging Face datasets materializer for dataset visualizations. +When this step is used in a pipeline, ZenML automatically handles visualization saving and display in the dashboard. For further examples, refer to the Hugging Face datasets materializer. ================================================== @@ -14124,65 +14006,61 @@ def my_step(): @pipeline(enable_artifact_visualization=False) def my_pipeline(): ... -``` +``` -This configuration prevents the generation of visual artifacts during execution. +This configuration prevents visualizations from being generated for the specified step or pipeline. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/visualizations-in-dashboard.md === -### Displaying Visualizations in the ZenML Dashboard +### Displaying Visualizations in the Dashboard -To display visualizations on the ZenML dashboard, the following steps must be taken: +#### Accessing Visualizations +To display visualizations on the ZenML dashboard, the ZenML server must have access to the artifact store where visualizations are stored. #### Configuring a Service Connector -- Visualizations are stored in the artifact store. To view them on the dashboard, the ZenML server must have access to this store. -- Configure a [service connector](../../infrastructure-deployment/auth-management/README.md) to grant the server permission to access the artifact store. -- For example, refer to the [AWS S3](../../../component-guide/artifact-stores/s3.md) documentation for specific configurations. 
+- Visualizations are typically stored in the [artifact store](../../../component-guide/artifact-stores/artifact-stores.md).
+- Users must configure a [service connector](../../infrastructure-deployment/auth-management/README.md) to grant the server permission to access the artifact store.
+- For example, refer to the [AWS S3](../../../component-guide/artifact-stores/s3.md) documentation for setup details.

-**Note:** When using the default/local artifact store with a deployed ZenML, the server cannot access local files, resulting in visualizations not being displayed. Use a service connector with a remote artifact store to view visualizations.
+**Note:** When using the default/local artifact store with a deployed ZenML server, the server cannot access local files, so no visualizations are displayed. A remote artifact store with an enabled service connector is required to view visualizations.

 #### Configuring Artifact Stores
-- If visualizations from a pipeline run are missing, the ZenML server may lack the necessary dependencies or permissions for the artifact store.
-- Consult the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores) for further guidance.
-
-
+If visualizations from a pipeline run are missing, check whether the ZenML server has the necessary dependencies and permissions for the artifact store. For more details, see the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores).

==================================================

=== File: docs/book/how-to/data-artifact-management/complex-usecases/README.md ===

-It seems that there is no documentation text provided for summarization. Please provide the text you'd like me to summarize, and I'll be happy to assist!
+This README currently contains no content to summarize.

==================================================

=== File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md ===

-### Summary of Unmaterialized Artifacts in ZenML
+### Summary of Skipping Materialization of Artifacts in ZenML

-**Overview**: ZenML pipelines are data-centric, where steps are connected through their inputs and outputs, managed by **materializers** that handle serialization and deserialization of artifacts in the artifact store.
+**Unmaterialized Artifacts**: In ZenML, a pipeline is structured around the data flow between steps, where each step reads and writes artifacts to the artifact store. **Materializers** handle the serialization and deserialization of these artifacts. However, there are scenarios where you may want to skip materialization and use a reference to an artifact instead, via `zenml.materializers.UnmaterializedArtifact`, which exposes the artifact's unique storage path through its `uri` property.

-**Unmaterialized Artifacts**: In certain scenarios, you may want to skip materialization and use a reference to an artifact instead. This is done using `zenml.materializers.UnmaterializedArtifact`, which provides access to the artifact's unique storage path via the `uri` property.
-
-**Warning**: Skipping materialization can have unintended consequences for downstream tasks that depend on materialized artifacts. Use this feature cautiously.
+**Warning**: Skipping materialization can lead to unintended consequences for downstream tasks that depend on materialized artifacts. Use this feature only when necessary.
### How to Skip Materialization -To use an unmaterialized artifact in a step, specify `UnmaterializedArtifact` as the type: +To use an unmaterialized artifact, import `UnmaterializedArtifact` and specify it as the type in the step: ```python from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import step @step -def my_step(my_artifact: UnmaterializedArtifact): +def my_step(my_artifact: UnmaterializedArtifact): pass ``` ### Code Example -The following example demonstrates how to implement unmaterialized artifacts in a pipeline: +The following example demonstrates the use of unmaterialized artifacts in a pipeline: ```python from typing_extensions import Annotated @@ -14216,7 +14094,7 @@ def example_pipeline(): example_pipeline() ``` -### Additional Resources +In this pipeline, `s1` and `s2` produce identical artifacts, with `s3` consuming materialized artifacts and `s4` consuming unmaterialized artifacts, allowing direct access to their URIs. For further details on using `UnmaterializedArtifact`, refer to the documentation on triggering pipelines from another pipeline. @@ -14224,95 +14102,112 @@ For further details on using `UnmaterializedArtifact`, refer to the documentatio === File: docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md === -### Summary of ZenML Artifact Registration Documentation +# ZenML Artifact Registration Documentation Summary -This documentation explains how to register external data as ZenML artifacts for future use, focusing on both folders and files, as well as handling model checkpoints during training with PyTorch Lightning. +This documentation explains how to register external data as ZenML artifacts for future use, focusing on folders and files, as well as managing checkpoints during training with PyTorch Lightning. -#### Registering Existing Data as ZenML Artifacts +## Register Existing Data as ZenML Artifacts -1. **Register Existing Folder**: - - You can register an entire folder as a ZenML artifact. 
- - Example code: - ```python - import os - from uuid import uuid4 - from pathlib import Path - from zenml.client import Client - from zenml import register_artifact +### Register Existing Folder +To register an existing folder as a ZenML artifact: + +```python +import os +from uuid import uuid4 +from pathlib import Path +from zenml.client import Client +from zenml import register_artifact - prefix = Client().active_stack.artifact_store.path - preexisting_folder = os.path.join(prefix, f"my_test_folder_{uuid4()}") - os.mkdir(preexisting_folder) - with open(os.path.join(preexisting_folder, "test_file.txt"), "w") as f: - f.write("test") +prefix = Client().active_stack.artifact_store.path +preexisting_folder = os.path.join(prefix, f"my_test_folder_{uuid4()}") - register_artifact(folder_or_file_uri=preexisting_folder, name="my_folder_artifact") +# Create folder and file +os.mkdir(preexisting_folder) +with open(os.path.join(preexisting_folder, "test_file.txt"), "w") as f: + f.write("test") - temp_artifact_folder_path = Client().get_artifact_version(name_id_or_prefix="my_folder_artifact").load() - assert os.path.isdir(temp_artifact_folder_path) - ``` +# Register the folder as an artifact +register_artifact(folder_or_file_uri=preexisting_folder, name="my_folder_artifact") + +# Load and verify the artifact +temp_artifact_folder_path = Client().get_artifact_version(name_id_or_prefix="my_folder_artifact").load() +assert isinstance(temp_artifact_folder_path, Path) +assert os.path.isdir(temp_artifact_folder_path) +``` + +### Register Existing File +To register an existing file as a ZenML artifact: + +```python +import os +from uuid import uuid4 +from pathlib import Path +from zenml.client import Client +from zenml import register_artifact + +prefix = Client().active_stack.artifact_store.path +preexisting_file = os.path.join(prefix, f"my_test_file_{uuid4()}.txt") + +# Create file +with open(preexisting_file, "w") as f: + f.write("test") + +# Register the file as an artifact +register_artifact(folder_or_file_uri=preexisting_file, name="my_file_artifact") + +# Load and verify the artifact +temp_artifact_file_path = Client().get_artifact_version(name_id_or_prefix="my_file_artifact").load() +assert isinstance(temp_artifact_file_path, Path) +``` -2. **Register Existing File**: - - You can also register a single file as a ZenML artifact. 
- Example code:
- ```python
- import os
- from uuid import uuid4
- from pathlib import Path
- from zenml.client import Client
- from zenml import register_artifact
+## Register Checkpoints of a PyTorch Lightning Training Run
+To register all checkpoints during a PyTorch Lightning training run:

- prefix = Client().active_stack.artifact_store.path
- preexisting_file = os.path.join(prefix, f"my_test_file_{uuid4()}.txt")
- with open(preexisting_file, "w") as f:
-     f.write("test")
+```python
+import os
+from zenml.client import Client
+from zenml import register_artifact
+from pytorch_lightning import Trainer
+from pytorch_lightning.callbacks import ModelCheckpoint
+from uuid import uuid4

- register_artifact(folder_or_file_uri=preexisting_file, name="my_file_artifact")
+prefix = Client().active_stack.artifact_store.path
+default_root_dir = os.path.join(prefix, uuid4().hex)

- temp_artifact_file_path = Client().get_artifact_version(name_id_or_prefix="my_file_artifact").load()
- ```
+# Define and fit the model (assumes a LightningModule instance `model` defined elsewhere)
+trainer = Trainer(
+    default_root_dir=default_root_dir,
+    callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1, filename="checkpoint-{epoch:02d}")]
+)
+trainer.fit(model)

-#### Registering Checkpoints in PyTorch Lightning
+# Register checkpoints as artifacts
+register_artifact(default_root_dir, name="all_my_model_checkpoints")
+```

-1. **Register All Checkpoints**:
-   - You can register all checkpoints from a PyTorch Lightning training run as a ZenML artifact.
-   - Example code:
-   ```python
-   from zenml.client import Client
-   from zenml import register_artifact
-   from pytorch_lightning import Trainer
-   from pytorch_lightning.callbacks import ModelCheckpoint
-   from uuid import uuid4
+### Custom Checkpoint Callback
+To register each checkpoint as a separate artifact version, extend the `ModelCheckpoint` class:

-   prefix = Client().active_stack.artifact_store.path
-   default_root_dir = os.path.join(prefix, uuid4().hex)
+```python
+import os
+from zenml.client import Client
+from zenml import register_artifact
+from zenml import get_step_context
+from pytorch_lightning.callbacks import ModelCheckpoint

-   trainer = Trainer(default_root_dir=default_root_dir, callbacks=[ModelCheckpoint(every_n_epochs=1)])
-   trainer.fit(model)
-   register_artifact(default_root_dir, name="all_my_model_checkpoints")
-   ```
+class ZenMLModelCheckpoint(ModelCheckpoint):
+    def __init__(self, artifact_name: str, *args, **kwargs):
+        zenml_model = get_step_context().model
+        self.artifact_name = artifact_name
+        self.default_root_dir = os.path.join(Client().active_stack.artifact_store.path, str(zenml_model.version))
+        super().__init__(*args, **kwargs)

-2. **Register Checkpoints as Separate Artifact Versions**:
-   - Extend the `ModelCheckpoint` to register each checkpoint as a separate artifact version.
- - Example code: - ```python - from zenml import register_artifact - from pytorch_lightning.callbacks import ModelCheckpoint - - class ZenMLModelCheckpoint(ModelCheckpoint): - def __init__(self, artifact_name: str, *args, **kwargs): - # Initialization logic - self.artifact_name = artifact_name - super().__init__(*args, **kwargs) - - def on_train_epoch_end(self, trainer, pl_module): - super().on_train_epoch_end(trainer, pl_module) - register_artifact(os.path.join(self.dirpath, self.filename_format.format(epoch=trainer.current_epoch)), self.artifact_name) - ``` + def on_train_epoch_end(self, trainer, pl_module): + super().on_train_epoch_end(trainer, pl_module) + register_artifact(os.path.join(self.dirpath, self.filename_format.format(epoch=trainer.current_epoch)), self.artifact_name) +``` -#### Full Example: PyTorch Lightning Training Pipeline +## Example Pipeline +A complete example of a PyTorch Lightning training pipeline with checkpoint registration: -A complete example of a training pipeline using PyTorch Lightning with artifact linkage for checkpoints: ```python from zenml import step, pipeline from pytorch_lightning import Trainer, LightningModule @@ -14332,149 +14227,228 @@ def train_model(model, train_loader): # Train model and register checkpoints pass +@step +def predict(checkpoint_file): + # Load model and make predictions + pass + @pipeline def train_pipeline(): train_loader = get_data() model = get_model() train_model(model, train_loader) + predict(get_pipeline_context().model.get_artifact("my_model_ckpts")) if __name__ == "__main__": train_pipeline() ``` -### Important Notes -- Artifacts can be treated like any other ZenML artifacts, with full functionality. -- Ensure checkpoint settings (e.g., `save_top_k=-1`) to prevent deletion of older checkpoints. +This summary captures the essential technical details for registering artifacts in ZenML, including folder and file registration, checkpoint management, and an example pipeline. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md === -### Summary of Scaling Strategies for Big Data in ZenML +# Scaling Strategies for Big Data in ZenML -This documentation outlines strategies for managing large datasets in ZenML, focusing on scaling pipelines as data size increases. +This documentation outlines how to manage large datasets in ZenML, providing strategies for scaling pipelines based on dataset sizes. -#### Dataset Size Thresholds +## Dataset Size Thresholds 1. **Small datasets (up to a few GB)**: Handled in-memory with pandas. 2. **Medium datasets (up to tens of GB)**: Require chunking or out-of-core processing. 3. **Large datasets (hundreds of GB or more)**: Necessitate distributed processing frameworks. -#### Strategies for Small Datasets -1. **Efficient Data Formats**: Use formats like Parquet for better performance. +## Strategies for Small Datasets +1. **Efficient Data Formats**: Use formats like Parquet instead of CSV. + ```python import pyarrow.parquet as pq class ParquetDataset(Dataset): + def __init__(self, data_path: str): + self.data_path = data_path + def read_data(self) -> pd.DataFrame: return pq.read_table(self.data_path).to_pandas() + + def write_data(self, df: pd.DataFrame): + pq.write_table(pa.Table.from_pandas(df), self.data_path) ``` 2. **Data Sampling**: Implement sampling methods. 
+ ```python class SampleableDataset(Dataset): def sample_data(self, fraction: float = 0.1) -> pd.DataFrame: return self.read_data().sample(frac=fraction) + + @step + def analyze_sample(dataset: SampleableDataset) -> Dict[str, float]: + sample = dataset.sample_data() + return {"mean": sample["value"].mean(), "std": sample["value"].std()} ``` -3. **Optimize Pandas Operations**: Use efficient operations to minimize memory usage. +3. **Optimize Pandas Operations**: Use efficient operations to reduce memory usage. + ```python @step def optimize_processing(df: pd.DataFrame) -> pd.DataFrame: df['new_column'] = df['column1'] + df['column2'] + df['mean_normalized'] = df['value'] - np.mean(df['value']) return df ``` -#### Handling Medium Datasets -- **Chunking for CSV Datasets**: Process large files in chunks. - ```python - class ChunkedCSVDataset(Dataset): - def read_data(self): - for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size): - yield chunk - ``` +## Handling Medium Datasets +### Chunking for CSV Datasets +Implement chunking in Dataset classes. -- **Data Warehouses**: Use platforms like Google BigQuery for distributed processing. - ```python - @step - def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset: - client = bigquery.Client() - query_job = client.query("SELECT column1, AVG(column2) FROM `{dataset.table_id}` GROUP BY column1") - query_job.result() - ``` +```python +class ChunkedCSVDataset(Dataset): + def __init__(self, data_path: str, chunk_size: int = 10000): + self.data_path = data_path + self.chunk_size = chunk_size -#### Approaches for Very Large Datasets -- **Distributed Computing Frameworks**: Use Apache Spark, Ray, or Dask for large datasets. - - **Apache Spark**: - ```python - from pyspark.sql import SparkSession + def read_data(self): + for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size): + yield chunk - @step - def process_with_spark(input_data: str) -> None: - spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate() - df = spark.read.csv(input_data, header=True) - df.groupBy("column1").agg({"column2": "mean"}).write.csv("output_path") - spark.stop() - ``` +@step +def process_chunked_csv(dataset: ChunkedCSVDataset) -> pd.DataFrame: + return pd.concat(process_chunk(chunk) for chunk in dataset.read_data()) - - **Ray**: - ```python - import ray +def process_chunk(chunk: pd.DataFrame) -> pd.DataFrame: + return chunk +``` - @step - def process_with_ray(input_data: str) -> None: - ray.init() - results = ray.get([process_partition.remote(part) for part in partitions]) - ray.shutdown() - ``` +### Leveraging Data Warehouses +Utilize data warehouses like Google BigQuery for distributed processing. - - **Dask**: - ```python - import dask.dataframe as dd +```python +@step +def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset: + client = bigquery.Client() + query = f""" + SELECT column1, AVG(column2) as avg_column2 + FROM `{dataset.table_id}` + GROUP BY column1 + """ + job_config = bigquery.QueryJobConfig(destination=f"{dataset.project}.{dataset.dataset}.processed_data") + client.query(query, job_config=job_config).result() + return BigQueryDataset(table_id=result_table_id) +``` - @step - def create_dask_dataframe(): - return dd.from_pandas(pd.DataFrame({'A': range(1000)}), npartitions=4) - ``` +## Approaches for Very Large Datasets +### Using Distributed Computing Frameworks +#### Apache Spark +Initialize and use Spark in ZenML. 
- - **Numba**: - ```python - from numba import jit +```python +from pyspark.sql import SparkSession - @jit(nopython=True) - def numba_function(x): - return x * x + 2 * x - 1 - ``` +@step +def process_with_spark(input_data: str) -> None: + spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate() + df = spark.read.csv(input_data, header=True) + result = df.groupBy("column1").agg({"column2": "mean"}) + result.write.csv("output_path", header=True, mode="overwrite") + spark.stop() +``` -#### Important Considerations -1. **Environment Setup**: Ensure required frameworks are installed. -2. **Resource Management**: Coordinate resource allocation with ZenML orchestration. -3. **Error Handling**: Implement cleanup for frameworks like Spark and Ray. +#### Ray +Initialize and use Ray directly. + +```python +import ray + +@step +def process_with_ray(input_data: str) -> None: + ray.init() + + @ray.remote + def process_partition(partition): + return processed_partition + + data = load_data(input_data) + partitions = split_data(data) + results = ray.get([process_partition.remote(part) for part in partitions]) + combined_results = combine_results(results) + save_results(combined_results, "output_path") + ray.shutdown() +``` + +#### Dask +Integrate Dask for parallel computing. + +```python +import dask.dataframe as dd + +@step +def create_dask_dataframe(): + return dd.from_pandas(pd.DataFrame({'A': range(1000), 'B': range(1000, 2000)}), npartitions=4) + +@step +def process_dask_dataframe(df: dd.DataFrame) -> dd.DataFrame: + return df.map_partitions(lambda x: x ** 2) + +@step +def compute_result(df: dd.DataFrame) -> pd.DataFrame: + return df.compute() +``` + +#### Numba +Use Numba for JIT compilation. + +```python +from numba import jit + +@jit(nopython=True) +def numba_function(x): + return x * x + 2 * x - 1 + +@step +def load_data() -> np.ndarray: + return np.arange(1000000) + +@step +def apply_numba_function(data: np.ndarray) -> np.ndarray: + return numba_function(data) +``` + +## Important Considerations +1. **Environment Setup**: Ensure necessary frameworks are installed. +2. **Resource Management**: Coordinate resource allocation with ZenML. +3. **Error Handling**: Implement error handling for cleanup. 4. **Data I/O**: Use intermediate storage for large datasets. 5. **Scaling**: Ensure infrastructure supports the scale of computation. -#### Choosing the Right Strategy -Consider dataset size, processing complexity, infrastructure, update frequency, and team expertise when selecting a scaling strategy. Start simple and scale as needed, leveraging ZenML's architecture to adapt data processing strategies as projects grow. For more details on custom Dataset classes, refer to the [custom dataset classes](datasets.md). +## Choosing the Right Scaling Strategy +- Assess dataset size and processing complexity. +- Consider infrastructure and team expertise. +- Start with simpler strategies and scale as needed. + +By following these strategies, you can effectively manage and scale your ZenML pipelines to handle datasets of any size. For further details, refer to [custom dataset classes](datasets.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md === -### Structuring an MLOps Project +### Summary: Structuring an MLOps Project An MLOps project typically consists of multiple pipelines, including: - **Feature Engineering Pipeline**: Prepares raw data for training. 
- **Training Pipeline**: Trains models using data from the feature engineering pipeline. -- **Inference Pipeline**: Runs batch predictions on trained models, often using pre-processed data. -- **Deployment Pipeline**: Deploys trained models to production endpoints. +- **Inference Pipeline**: Runs batch predictions on the trained model. +- **Deployment Pipeline**: Deploys the trained model to a production endpoint. -The structure of these pipelines can vary based on project requirements, and sharing information (artifacts, models, and metadata) between them is essential. +The structure of these pipelines can vary based on project requirements, and often requires sharing artifacts (models, datasets, metadata) between them. -#### Pattern 1: Artifact Exchange through `Client` +#### Artifact Exchange Patterns -In this pattern, the ZenML Client facilitates the exchange of artifacts between pipelines. For example, a feature engineering pipeline generates datasets that are then used by a training pipeline. +**Pattern 1: Artifact Exchange through `Client`** + +This pattern involves using the ZenML Client to facilitate data transfer between pipelines. For example, in a feature engineering pipeline, datasets are prepared and then accessed in the training pipeline: -**Example Code:** ```python from zenml import pipeline from zenml.client import Client @@ -14488,16 +14462,17 @@ def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") + sklearn_classifier = model_trainer(train_data) model_evaluator(model, sklearn_classifier) ``` -*Note: Artifacts are referenced, not materialized in memory, limiting logic use during compilation.* -#### Pattern 2: Artifact Exchange through a `Model` +*Note*: Artifacts are referenced, not materialized in memory, meaning no logic can be applied to them during compilation. -This pattern uses the ZenML Model as a reference point. For instance, a training pipeline (`train_and_promote`) produces models that are promoted based on accuracy. An inference pipeline (`do_predictions`) retrieves the latest promoted model without needing artifact IDs. +**Pattern 2: Artifact Exchange through a `Model`** + +In this pattern, the ZenML Model serves as the reference point for artifacts. For instance, a training pipeline (`train_and_promote`) generates models, promoting them based on accuracy. The inference pipeline (`do_predictions`) retrieves the latest promoted model without needing artifact IDs: -**Example Code:** ```python from zenml import step, get_step_context @@ -14507,14 +14482,14 @@ def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: predictions = pd.Series(model.predict(data)) return predictions ``` -*Note: Disabling caching avoids unexpected results.* -Alternatively, you can resolve the artifact at the pipeline level: +To avoid caching issues, either disable caching in the step or resolve artifacts at the pipeline level: -**Example Code:** ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages +import pandas as pd +from sklearn.base import ClassifierMixin @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: @@ -14530,7 +14505,7 @@ if __name__ == "__main__": do_predictions() ``` -Both artifact exchange patterns are valid; the choice depends on user preference. 
+Both artifact exchange approaches are valid; the choice depends on user preference. ================================================== @@ -14539,24 +14514,28 @@ Both artifact exchange patterns are valid; the choice depends on user preference # Custom Dataset Classes and Complex Data Flows in ZenML ## Overview -ZenML allows for the creation of custom Dataset classes to manage complex data sources and flows efficiently. This is particularly useful for handling multiple data sources, complex data structures, and custom processing logic. +ZenML allows for the creation of custom Dataset classes to manage data loading, processing, and saving from various sources, such as CSV files and databases. This is essential for handling complex data flows in machine learning projects. ## Custom Dataset Classes -Custom Dataset classes encapsulate data loading, processing, and saving logic. Key classes include: +Custom Dataset classes encapsulate data logic and are beneficial when: +1. Working with multiple data sources. +2. Handling complex data structures. +3. Implementing custom processing logic. + +### Example Implementation +Here’s a base `Dataset` class with implementations for CSV and BigQuery: -### Base Dataset Class ```python from abc import ABC, abstractmethod import pandas as pd +from google.cloud import bigquery +from typing import Optional class Dataset(ABC): @abstractmethod def read_data(self) -> pd.DataFrame: pass -``` -### CSV Dataset Implementation -```python class CSVDataset(Dataset): def __init__(self, data_path: str, df: Optional[pd.DataFrame] = None): self.data_path = data_path @@ -14566,11 +14545,6 @@ class CSVDataset(Dataset): if self.df is None: self.df = pd.read_csv(self.data_path) return self.df -``` - -### BigQuery Dataset Implementation -```python -from google.cloud import bigquery class BigQueryDataset(Dataset): def __init__(self, table_id: str, project: Optional[str] = None): @@ -14584,50 +14558,64 @@ class BigQueryDataset(Dataset): def write_data(self) -> None: job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE") - job = self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config) - job.result() + self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config).result() ``` ## Custom Materializers -Materializers in ZenML handle the serialization and deserialization of artifacts. Custom Materializers are necessary for custom Dataset classes. +Materializers handle the serialization and deserialization of artifacts. Custom Materializers are necessary for custom Dataset classes. 
-### CSV Dataset Materializer +### Example Materializers ```python +from zenml.materializers import BaseMaterializer +from zenml.io import fileio +from zenml.enums import ArtifactType +import json +import tempfile +import pandas as pd + class CSVDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (CSVDataset,) - + ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA + def load(self, data_type: Type[CSVDataset]) -> CSVDataset: - # Load CSV data - dataset = CSVDataset(temp_path) - dataset.read_data() - return dataset + with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: + with fileio.open(os.path.join(self.uri, "data.csv"), "rb") as source_file: + temp_file.write(source_file.read()) + return CSVDataset(temp_file.name) def save(self, dataset: CSVDataset) -> None: df = dataset.read_data() - df.to_csv(temp_path, index=False) -``` + with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: + df.to_csv(temp_file.name, index=False) + with open(temp_file.name, "rb") as source_file: + with fileio.open(os.path.join(self.uri, "data.csv"), "wb") as target_file: + target_file.write(source_file.read()) -### BigQuery Dataset Materializer -```python class BigQueryDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (BigQueryDataset,) - + ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA + def load(self, data_type: Type[BigQueryDataset]) -> BigQueryDataset: - # Load BigQuery dataset - return dataset + with fileio.open(os.path.join(self.uri, "metadata.json"), "r") as f: + metadata = json.load(f) + return BigQueryDataset(table_id=metadata["table_id"], project=metadata["project"]) def save(self, bq_dataset: BigQueryDataset) -> None: + metadata = {"table_id": bq_dataset.table_id, "project": bq_dataset.project} + with fileio.open(os.path.join(self.uri, "metadata.json"), "w") as f: + json.dump(metadata, f) if bq_dataset.df is not None: bq_dataset.write_data() ``` -## Managing Complexity in Pipelines -Design flexible pipelines to handle multiple data sources. +## Pipeline Management +Designing flexible pipelines for multiple data sources is crucial. Below is an example of a pipeline that handles both CSV and BigQuery datasets: -### Example Pipeline ```python +from zenml import step, pipeline + @step(output_materializer=CSVDatasetMaterializer) -def extract_data_local(data_path: str) -> CSVDataset: +def extract_data_local(data_path: str = "data/raw_data.csv") -> CSVDataset: return CSVDataset(data_path) @step(output_materializer=BigQueryDatasetMaterializer) @@ -14637,33 +14625,33 @@ def extract_data_remote(table_id: str) -> BigQueryDataset: @step def transform(dataset: Dataset) -> pd.DataFrame: df = dataset.read_data() - return df.copy() # Apply transformations here + return df.copy() # Apply transformations @pipeline -def etl_pipeline(mode: str): +def etl_pipeline(mode: str = "develop"): raw_data = extract_data_local() if mode == "develop" else extract_data_remote(table_id="project.dataset.raw_table") transformed_data = transform(raw_data) ``` ## Best Practices -1. **Common Base Class**: Use the `Dataset` base class for consistent handling. -2. **Specialized Steps**: Implement separate steps for loading different datasets. -3. **Flexible Pipelines**: Use configuration parameters for adaptable pipelines. -4. **Modular Design**: Create steps for specific tasks to promote reuse and maintenance. +1. **Common Base Class**: Use the `Dataset` base class for consistent handling of data sources. +2. **Specialized Steps**: Create separate steps for loading different datasets. +3. 
**Flexible Pipelines**: Design pipelines that adapt to various data sources using configuration parameters. +4. **Modular Design**: Create steps for specific tasks to promote code reuse and maintainability. -By adhering to these practices, ZenML pipelines can effectively manage complex data flows and adapt to evolving project requirements. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). +By following these practices, you can effectively manage complex data flows and maintain flexibility in your ZenML pipelines. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md === -### Summary +### Summary of Documentation -Artifacts in ZenML can be accessed not only through direct upstream steps but also from other pipelines. This is facilitated by the ZenML client, allowing you to fetch metadata and artifacts as needed. +Artifacts in ZenML can be accessed not only from direct upstream steps but also from different pipelines. This is facilitated through the ZenML client, allowing the fetching of metadata and artifacts. #### Key Points: -- Use the ZenML client to fetch artifacts from various sources. -- Artifacts can be accessed even if they are not from directly upstream steps. +- Metadata can be fetched using the ZenML client as outlined in the metadata guide. +- Artifacts can be retrieved from various sources, not limited to direct upstream steps. #### Example Code: ```python @@ -14677,10 +14665,10 @@ def my_step(): accuracy = output.run_metadata["accuracy"].value ``` -This code snippet demonstrates how to retrieve a specific artifact version and access its metadata. +This method enables the use of pre-existing artifacts stored in the artifact store, enhancing flexibility in pipeline design. -#### Additional Resources: -- [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md): Learn about the `ExternalArtifact` type and artifact passing between steps. +#### See Also: +- [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md) - Information on `ExternalArtifact` and artifact transfer between steps. ================================================== @@ -14688,62 +14676,62 @@ This code snippet demonstrates how to retrieve a specific artifact version and a ### ZenML Artifact Naming Overview -In ZenML pipelines, managing artifact names is crucial for tracking outputs, especially when reusing steps with different inputs. ZenML allows for both static and dynamic naming of artifacts, utilizing type annotations to determine names. Artifacts with identical names are saved with incremented version numbers. +In ZenML pipelines, managing artifact names is crucial for tracking outputs, especially when reusing steps with different inputs. ZenML allows for both static and dynamic naming of artifacts, utilizing type annotations to determine names and incrementing version numbers for duplicates. #### Naming Strategies -- **Static Naming**: Defined as string literals. - ```python - @step - def static_single() -> Annotated[str, "static_output_name"]: - return "null" - ``` +1. **Static Naming**: Defined as string literals. + ```python + @step + def static_single() -> Annotated[str, "static_output_name"]: + return "null" + ``` -- **Dynamic Naming**: Generated at runtime using string templates with standard or custom placeholders. +2. 
**Dynamic Naming**: Generated at runtime using string templates. + - **Standard Placeholders**: + - `{date}`: Current date (e.g., `2024_11_18`) + - `{time}`: Current time (e.g., `11_07_09_326492`) + ```python + @step + def dynamic_single_string() -> Annotated[str, "name_{date}_{time}"]: + return "null" + ``` - - **Standard Placeholders**: - - `{date}`: Current date (e.g., `2024_11_18`) - - `{time}`: Current time (e.g., `11_07_09_326492`) - ```python - @step - def dynamic_single_string() -> Annotated[str, "name_{date}_{time}"]: - return "null" - ``` + - **Custom Placeholders**: Defined via the `substitutions` parameter. + ```python + @step(substitutions={"custom_placeholder": "some_substitute"}) + def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]: + return "null" + ``` - - **Custom Placeholders**: Defined via the `substitutions` parameter. - ```python - @step(substitutions={"custom_placeholder": "some_substitute"}) - def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]: - return "null" - ``` + - **Using `with_options`**: + ```python + @step + def extract_data(source: str) -> Annotated[str, "{stage}_dataset"]: + return "my data" - - **Using `with_options`**: - ```python - @step - def extract_data(source: str) -> Annotated[str, "{stage}_dataset"]: - return "my data" - - @pipeline - def extraction_pipeline(): - extract_data.with_options(substitutions={"stage": "train"})(source="s3://train") - extract_data.with_options(substitutions={"stage": "test"})(source="s3://test") - ``` + @pipeline + def extraction_pipeline(): + extract_data.with_options(substitutions={"stage": "train"})(source="s3://train") + extract_data.with_options(substitutions={"stage": "test"})(source="s3://test") + ``` -#### Multiple Output Handling + **Substitution Scope**: + - Defined in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`. -Combine naming options for steps returning multiple artifacts: -```python -@step -def mixed_tuple() -> Tuple[ - Annotated[str, "static_output_name"], - Annotated[str, "name_{date}_{time}"], -]: - return "static_namer", "str_namer" -``` +3. **Multiple Output Handling**: Combine naming strategies for multiple artifacts. + ```python + @step + def mixed_tuple() -> Tuple[ + Annotated[str, "static_output_name"], + Annotated[str, "name_{date}_{time}"], + ]: + return "static_namer", "str_namer" + ``` #### Caching Behavior -When caching is enabled, output artifact names remain consistent across runs. Example: +When caching is enabled, artifact names remain consistent across runs. The following example demonstrates this: ```python from typing_extensions import Annotated from typing import Tuple @@ -14764,116 +14752,88 @@ def my_pipeline(): if __name__ == "__main__": run_without_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=False)() run_with_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=True)() - assert set(run_without_cache.steps["demo"].outputs.keys()) == set(run_with_cache.steps["demo"].outputs.keys()) + + assert set(run_without_cache.steps["demo"].outputs.keys()) == set( + run_with_cache.steps["demo"].outputs.keys() + ) + print(list(run_without_cache.steps["demo"].outputs.keys())) +``` + +**Output Example**: +``` +['name_2024_11_21_14_27_33_750134', 'name_resolution'] ``` -This will produce consistent output artifact names across cached and non-cached runs. 
+This summary encapsulates the key technical details regarding artifact naming in ZenML, including naming strategies, dynamic capabilities, and caching behavior. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md === -### Summary: Using Materializers to Pass Custom Data Types Through Steps in ZenML +### Summary: Using Materializers to Pass Custom Data Types in ZenML Pipelines #### Overview -ZenML pipelines are data-centric, where steps are connected through their inputs and outputs. **Materializers** are crucial for defining how artifacts are serialized, stored, and retrieved from the artifact store. +ZenML pipelines are data-centric, where steps are connected through their inputs and outputs. Each step acts as an independent process, interacting with an artifact store. **Materializers** are crucial for defining how artifacts are serialized, stored, and retrieved. #### Built-In Materializers -ZenML includes built-in materializers for common data types, which operate without user intervention: - -| Materializer | Handled Data Types | Storage Format | -|--------------|---------------------|-----------------| -| BuiltInMaterializer | `bool`, `float`, `int`, `str`, `None` | `.json` | -| BytesMaterializer | `bytes` | `.txt` | -| BuiltInContainerMaterializer | `dict`, `list`, `set`, `tuple` | Directory | -| NumpyMaterializer | `np.ndarray` | `.npy` | -| PandasMaterializer | `pd.DataFrame`, `pd.Series` | `.csv` (or `.gzip` if `parquet` installed) | -| PydanticMaterializer | `pydantic.BaseModel` | `.json` | -| ServiceMaterializer | `zenml.services.service.BaseService` | `.json` | -| StructuredStringMaterializer | `zenml.types.CSVString`, `zenml.types.HTMLString`, `zenml.types.MarkdownString` | `.csv`, `.html`, `.md` | +ZenML provides several built-in materializers for common data types, which operate automatically: +- **BuiltInMaterializer**: Handles `bool`, `float`, `int`, `str`, `None` (stored as `.json`). +- **BytesMaterializer**: Handles `bytes` (stored as `.txt`). +- **BuiltInContainerMaterializer**: Handles `dict`, `list`, `set`, `tuple` (stored as directories). +- **NumpyMaterializer**: Handles `np.ndarray` (stored as `.npy`). +- **PandasMaterializer**: Handles `pd.DataFrame`, `pd.Series` (stored as `.csv` or `.gzip`). +- **PydanticMaterializer**: Handles `pydantic.BaseModel` (stored as `.json`). +- **ServiceMaterializer**: Handles `zenml.services.service.BaseService` (stored as `.json`). +- **StructuredStringMaterializer**: Handles `zenml.types.CSVString`, `HTMLString`, `MarkdownString` (stored as `.csv`, `.html`, or `.md`). **Warning**: The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions. #### Integration Materializers -ZenML provides integration-specific materializers activated by installing the respective integration. Examples include: +ZenML also offers integration-specific materializers activated by installing respective integrations. Each integration has its own materializer for specific data types, such as: +- **BentoMaterializer** for `bentoml.Bento` (stored as `.bento`). +- **DeepchecksResultMaterializer** for `deepchecks.CheckResult` (stored as `.json`). +- **PandasMaterializer** for `pandas.DataFrame` and `pandas.Series`, etc. 
-| Integration | Materializer | Handled Data Types | Storage Format | -|-------------|--------------|---------------------|-----------------| -| bentoml | BentoMaterializer | `bentoml.Bento` | `.bento` | -| deepchecks | DeepchecksResultMaterializer | `deepchecks.CheckResult`, `deepchecks.SuiteResult` | `.json` | -| huggingface | HFDatasetMaterializer | `datasets.Dataset`, `datasets.DatasetDict` | Directory | -| lightgbm | LightGBMBoosterMaterializer | `lgbm.Booster` | `.txt` | - -**Warning**: For Docker-based orchestrators, specify required integrations in `DockerSettings` for materializers to be available. +**Note**: For Docker-based orchestrators, specify required integrations in `DockerSettings`. #### Custom Materializers -To use a custom materializer, define it and associate it with the appropriate data type: +To create a custom materializer: +1. **Define the Materializer**: Subclass `BaseMaterializer`, specifying `ASSOCIATED_TYPES` and `ASSOCIATED_ARTIFACT_TYPE`. +2. **Implement Load and Save**: Override `load()` and `save()` methods for serialization/deserialization. +Example: ```python -from zenml.materializers.base_materializer import BaseMaterializer -from zenml.enums import ArtifactType - -class MyObj: - ... - class MyMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MyObj,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA -@step(output_materializers=MyMaterializer) -def my_first_step() -> MyObj: - return MyObj("my_object") -``` - -For multiple outputs, use a dictionary to specify materializers: - -```python -@step(output_materializers={"1": MyMaterializer1, "2": MyMaterializer2}) -def my_first_step() -> Tuple[Annotated[MyObj1, "1"], Annotated[MyObj2, "2"]]: - return MyObj1(), MyObj2() -``` - -Materializers can also be defined in YAML configuration files. - -#### Global Materializer Configuration -To set a custom materializer globally: - -```python -from zenml.materializers.materializer_registry import materializer_registry - -class FastPandasMaterializer(BaseMaterializer): - ... + def load(self, data_type: Type[MyObj]) -> MyObj: + with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f: + return MyObj(name=f.read()) -materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer) + def save(self, my_obj: MyObj) -> None: + with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f: + f.write(my_obj.name) ``` -#### Developing a Custom Materializer -Implement the `BaseMaterializer` interface, defining `load()` and `save()` methods for serialization and deserialization: - +3. **Configure Steps**: Use the `@step` decorator or `.configure()` method to specify the materializer. ```python -class BaseMaterializer: - def load(self, data_type: Type[Any]) -> Any: - ... - - def save(self, data: Any) -> None: - ... +@step(output_materializers=MyMaterializer) +def my_first_step() -> MyObj: + return MyObj("my_object") ``` -#### Example of Custom Materialization -Here’s a basic example of using a custom materializer with a class `MyObj`: +4. **Global Materializer**: Register a custom materializer globally to override built-in ones using `materializer_registry`. 
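+
+A minimal sketch of the global-registration pattern from point 4 (`FastPandasMaterializer` is a hypothetical custom materializer; its `load()`/`save()` logic is elided):
+
+```python
+import pandas as pd
+
+from zenml.enums import ArtifactType
+from zenml.materializers.base_materializer import BaseMaterializer
+from zenml.materializers.materializer_registry import materializer_registry
+
+class FastPandasMaterializer(BaseMaterializer):
+    ASSOCIATED_TYPES = (pd.DataFrame,)
+    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA
+    # Implement load()/save() with the faster (de)serialization of your choice
+
+# Route all pd.DataFrame artifacts through the custom materializer
+materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer)
+```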
+#### Example Pipeline ```python -class MyMaterializer(BaseMaterializer): - ASSOCIATED_TYPES = (MyObj,) - ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA - - def load(self, data_type: Type[MyObj]) -> MyObj: - with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f: - return MyObj(name=f.read()) +@step +def my_first_step() -> MyObj: + return MyObj("my_object") - def save(self, my_obj: MyObj) -> None: - with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f: - f.write(my_obj.name) +@step +def my_second_step(my_obj: MyObj) -> None: + logging.info(f"The following object was passed: `{my_obj.name}`") @pipeline def first_pipeline(): @@ -14883,7 +14843,8 @@ def first_pipeline(): first_pipeline() ``` -This example demonstrates the creation and usage of a custom materializer to handle the serialization of a custom object type within a ZenML pipeline. +### Conclusion +Materializers in ZenML are essential for managing custom data types across pipeline steps. By defining custom materializers, users can ensure robust serialization and deserialization of artifacts, enhancing pipeline performance and reliability. ================================================== @@ -14891,17 +14852,17 @@ This example demonstrates the creation and usage of a custom materializer to han ### Delete an Artifact -Artifacts cannot be deleted directly to avoid breaking the ZenML database with dangling references. However, you can delete artifacts that are no longer referenced by any pipeline runs using the following command: +Artifacts cannot be deleted directly to avoid breaking the ZenML database. However, you can delete artifacts that are no longer referenced by any pipeline runs using the following command: ```shell zenml artifact prune ``` -By default, this command removes artifacts from the artifact store and deletes their database entries. You can modify this behavior with the following flags: +By default, this command removes artifacts from the underlying artifact store and the database. You can modify this behavior with the following flags: - `--only-artifact`: Deletes only the artifact. -- `--only-metadata`: Deletes only the database entry. +- `--only-metadata`: Deletes only the metadata entry. -If you encounter errors while pruning (often due to locally stored artifacts that no longer exist), you can use the `--ignore-errors` flag to proceed with pruning while ignoring these errors. Warning messages will still be displayed in the terminal. +If you encounter errors while pruning (often due to locally stored artifacts that no longer exist), you can use the `--ignore-errors` flag to continue the process, though warning messages will still be displayed. ================================================== @@ -14909,13 +14870,9 @@ If you encounter errors while pruning (often due to locally stored artifacts tha ### Summary of Documentation on Using `Annotated` for Multiple Outputs -The `Annotated` type in ZenML allows you to return multiple named outputs from a step, enhancing artifact retrieval and dashboard readability. +The `Annotated` type allows you to return multiple outputs from a step in a pipeline, each with a specific name for easy retrieval and improved dashboard readability. -#### Key Points: -- **Functionality**: Use `Annotated` to name outputs for easy identification. -- **Example Step**: The `clean_data` function processes a pandas DataFrame and returns four outputs: `x_train`, `x_test`, `y_train`, and `y_test`. 
- -#### Code Example: +#### Code Example ```python from typing import Annotated, Tuple import pandas as pd @@ -14934,10 +14891,14 @@ def clean_data(data: pd.DataFrame) -> Tuple[ return train_test_split(x, y, test_size=0.2, random_state=42) ``` -#### Explanation: -- The `clean_data` step splits the input DataFrame into features (`x`) and target (`y`). -- It uses `train_test_split` to generate training and testing datasets. -- Outputs are returned as a tuple with annotations for clarity, aiding in later retrieval and improving dashboard presentation. +#### Key Points +- The `clean_data` function accepts a pandas DataFrame and returns a tuple containing: + - `x_train`: Training features + - `x_test`: Testing features + - `y_train`: Training target + - `y_test`: Testing target +- Each output is annotated for easy identification, facilitating retrieval in the pipeline and enhancing dashboard clarity. +- The function uses `train_test_split` from scikit-learn to partition the data into training and testing sets. ================================================== @@ -14945,10 +14906,14 @@ def clean_data(data: pd.DataFrame) -> Tuple[ ### Summary of ZenML Step Outputs and Pipeline -In ZenML, step outputs are stored in an artifact store, facilitating caching, lineage, and auditability. Utilizing type annotations for outputs enhances transparency, aids in data passing between steps, and allows for serialization/deserialization (termed 'materialize'). +**Overview:** +Step outputs in ZenML are stored in an artifact store, enabling caching, lineage, and auditability. Utilizing type annotations enhances transparency, facilitates data passing between steps, and allows for serialization/deserialization (materialization). -#### Code Overview +**Key Points:** +- Use type annotations for outputs to improve coding practices and data handling. +- Steps can be defined to process and pass data effectively. +**Code Example:** ```python @step def load_data(parameter: int) -> Dict[str, Any]: @@ -14965,13 +14930,14 @@ def train_model(data: Dict[str, Any]) -> None: @pipeline def simple_ml_pipeline(parameter: int): - dataset = load_data(parameter) # Output from load_data - train_model(dataset) # Input to train_model + dataset = load_data(parameter) + train_model(dataset) ``` -### Key Points -- **Steps**: `load_data` generates training data and labels; `train_model` processes this data to train a model. -- **Pipeline**: `simple_ml_pipeline` connects the two steps, demonstrating data flow in ZenML. +**Explanation:** +- `load_data`: Takes an integer parameter and returns a dictionary with training data and labels. +- `train_model`: Accepts the dictionary, computes totals, and simulates model training. +- `simple_ml_pipeline`: Chains `load_data` and `train_model`, passing the output of the former as input to the latter, illustrating data flow in a ZenML pipeline. ================================================== @@ -14979,20 +14945,17 @@ def simple_ml_pipeline(parameter: int): ### Organizing Data with Tags in ZenML -ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow and discoverability. This guide outlines how to assign tags to artifacts and models. +ZenML allows the use of tags to organize and filter machine learning artifacts and models, enhancing workflow efficiency and discoverability. 
#### Assigning Tags to Artifacts - -To tag artifact versions in a step or pipeline, use the `tags` property of `ArtifactConfig`: +To tag artifact versions in a step or pipeline, use the `tags` property of `ArtifactConfig`. **Python SDK Example:** ```python from zenml import step, ArtifactConfig @step -def training_data_loader() -> ( - Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])] -): +def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])]: ... ``` @@ -15004,46 +14967,35 @@ zenml artifacts update iris_dataset -t sklearn # Tag the artifact version zenml artifacts versions update iris_dataset raw_2023 -t sklearn ``` -Tags like `sklearn` and `pre-training` will be assigned to all artifacts created by this step. ZenML Pro users can tag artifacts directly in the cloud dashboard. + +Tags like `sklearn` and `pre-training` will be assigned to artifacts created by this step. ZenML Pro users can also tag artifacts directly in the cloud dashboard. #### Assigning Tags to Models +Tags can also be applied to models for semantic organization. When creating a model version with the `Model` object, specify tags as key-value pairs. -Models can also be tagged for semantic organization. Tags can be specified as key-value pairs when creating a model version: +**Important Note:** Models created implicitly during a pipeline run will not inherit tags from the `Model` class. **Python SDK Example:** ```python from zenml.models import Model tags = ["experiment", "v1", "classification-task"] - -model = Model( - name="iris_classifier", - version="1.0.0", - tags=tags, -) +model = Model(name="iris_classifier", version="1.0.0", tags=tags) @pipeline(model=model) def my_pipeline(...): ... ``` -You can also create or register models with tags using the SDK: +You can also create or register models and versions with tags: ```python from zenml.client import Client -Client().create_model( - name="iris_logistic_regression", - tags=["classification", "iris-dataset"], -) - -Client().create_model_version( - model_name_or_id="iris_logistic_regression", - name="2", - tags=["version-1", "experiment-42"], -) +Client().create_model(name="iris_logistic_regression", tags=["classification", "iris-dataset"]) +Client().create_model_version(model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"]) ``` -To add tags to existing models using the CLI: +**CLI Example for Existing Models:** ```shell # Tag an existing model zenml model update iris_logistic_regression --tag "classification" @@ -15052,7 +15004,7 @@ zenml model update iris_logistic_regression --tag "classification" zenml model version update iris_logistic_regression 2 --tag "experiment3" ``` -**Note:** During pipeline runs, models may be created implicitly without tags from the `Model` class. Tags can be managed through the SDK or ZenML Pro UI. +This tagging functionality helps in managing and organizing both artifacts and models effectively within the ZenML ecosystem. ================================================== @@ -15060,44 +15012,42 @@ zenml model version update iris_logistic_regression 2 --tag "experiment3" ### ZenML Data Storage Overview -ZenML integrates data versioning and lineage tracking into its core functionality, automatically managing artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them through a dashboard, enhancing insights, reproducibility, and reliability in machine learning workflows. 
+ZenML integrates data versioning and lineage tracking into its core functionality, automatically managing artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them through a dashboard, enhancing insights and reproducibility in machine learning workflows. #### Artifact Creation and Caching -When a ZenML pipeline runs, it checks for changes in inputs, outputs, parameters, or configurations. Each step generates a new directory in the artifact store: - -- If a step is new or modified, ZenML creates a unique directory structure with a new ID and stores the data using appropriate materializers. -- If unchanged, ZenML may cache the step, saving time and computational resources, allowing users to focus on experimentation without rerunning unchanged pipeline parts. +- **Pipeline Execution**: Each run checks for changes in inputs, outputs, parameters, or configurations. +- **Artifact Store**: New or modified steps create a unique directory in the [Artifact Store](../../../component-guide/artifact-stores/artifact-stores.md) with a unique ID. +- **Caching**: Unchanged steps may be cached to save time and resources, allowing focus on experimenting with configurations without rerunning unchanged parts. +- **Lineage Tracking**: ZenML allows tracing artifacts back to their origins, providing insights into the processing sequence, which is crucial for reproducibility and identifying issues. -This lineage tracking ensures reproducibility and helps identify issues in pipelines. For details on artifact naming, versioning, and tagging, refer to the documentation on [artifact management](../../../user-guide/starter-guide/manage-artifacts.md). +For artifact management details, refer to the [documentation on artifact versioning and configuration](../../../user-guide/starter-guide/manage-artifacts.md). #### Saving and Loading Artifacts with Materializers -Materializers are essential for ZenML's artifact management, handling the serialization and deserialization of artifacts to ensure consistent storage and retrieval. Each materializer saves data in unique directories within the artifact store. - -- ZenML provides built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. Custom materializers can be created by extending the `BaseMaterializer` class. - -**Warning:** The built-in `CloudpickleMaterializer` can save any object but is not production-ready due to compatibility issues across Python versions and potential security risks from malicious file uploads. For robust serialization, consider building custom materializers. +- **Materializers**: Essential for serialization/deserialization of artifacts, ensuring consistent storage and retrieval. Each materializer saves data in unique directories within the artifact store. +- **Customization**: Users can define custom serialization logic by extending the `BaseMaterializer` class. ZenML includes built-in materializers for common data types and uses `cloudpickle` for unsupported types. +- **Warning**: The built-in [CloudpickleMaterializer](https://sdkdocs.zenml.io/latest/core_code_docs/core-materializers/#zenml.materializers.cloudpickle_materializer.CloudpickleMaterializer) is not production-ready due to potential compatibility issues across Python versions and security risks from arbitrary code execution. -During pipeline execution, ZenML uses materializers to save and load artifacts via the `fileio` system, facilitating artifact caching and lineage tracking. 
An example of a default materializer (the `numpy` materializer) can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). +When pipelines run, ZenML employs materializers to manage artifacts using the `fileio` system, facilitating interaction with various data formats and enabling caching and lineage tracking. An example of a default materializer, the `numpy` materializer, can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md === -# Loading Artifacts into Memory +# Loading Artifacts into Memory in ZenML -ZenML pipelines typically consume artifacts produced by one another directly. However, for external data, such as artifacts from non-ZenML sources, use the [ExternalArtifact](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline). For data exchange between ZenML pipelines, late materialization is essential, allowing the use of not-yet-existing artifacts as step inputs. +ZenML pipelines typically consume artifacts produced by one another, but external data may also be needed. For external artifacts from non-ZenML sources, use [ExternalArtifact](../../../user-guide/starter-guide/manage-artifacts.md#consuming-external-artifacts-within-a-pipeline). For data created by other ZenML pipelines, late materialization is essential, allowing the use of artifacts that do not yet exist at compilation time. -## Use Cases for Artifact Exchange -1. **Semantic Grouping**: Group data products using ZenML Models. -2. **Client Methods**: Use the [ZenML Client](../../../reference/python-client.md#client-methods) for data exchange. +## Key Use Cases for Artifact Exchange +1. **Semantic Grouping**: Use ZenML Models to group data products. +2. **Client Methods**: Utilize [ZenML Client](../../../reference/python-client.md#client-methods) for artifact exchange. -**Recommendation**: Use models for grouping and accessing artifacts across pipelines. Learn how to load artifacts from a ZenML Model [here](../../model-management-metrics/model-control-plane/load-artifacts-from-model.md). +**Recommendation**: Use models for grouping artifacts across pipelines. Learn how to load artifacts from a ZenML Model [here](../../model-management-metrics/model-control-plane/load-artifacts-from-model.md). -## Example: Using Client Methods to Exchange Artifacts +## Exchanging Artifacts with Client Methods -If not using the Model Control Plane, you can still exchange data with late materialization. Below is a simplified version of the `do_predictions` pipeline: +If not using the Model Control Plane, artifacts can still be exchanged with late materialization. Below is a revised version of the `do_predictions` pipeline: ```python from typing import Annotated @@ -15113,7 +15063,7 @@ def predict(model1: ClassifierMixin, model2: ClassifierMixin, model1_metric: flo @step def load_data() -> pd.DataFrame: - # load inference data + # Load inference data ... 
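+
+# NOTE: `model_42`, referenced in `do_predictions` below, is assumed to be
+# fetched in an elided part of this example, for instance:
+# model_42 = Client().get_artifact_version("trained_model", version="42")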
@pipeline
@@ -15122,15 +15072,15 @@ def do_predictions():
     metric_42 = model_42.run_metadata["MSE"].value
     model_latest = Client().get_artifact_version("trained_model")
     metric_latest = model_latest.run_metadata["MSE"].value
+    inference_data = load_data()
-
    predict(model1=model_42, model2=model_latest, model1_metric=metric_42, model2_metric=metric_latest, data=inference_data)

if __name__ == "__main__":
    do_predictions()
```

-In this example, the `predict` step compares models based on the MSE metric, ensuring predictions are made using the best model. The `load_data` step loads inference data, while artifact retrieval occurs at execution time, ensuring the latest versions are used.
+In this code, the `predict` step compares models based on their MSE metrics to select the best one for predictions. The `load_data` step is included to load inference data. Calls to `Client().get_artifact_version()` and accessing `run_metadata` are evaluated at execution time, ensuring the latest artifact versions are used.

==================================================

@@ -15138,7 +15088,7 @@ In this example, the `predict` step compares models based on the MSE metric, ens

# ZenML Integrations Guide

-ZenML facilitates seamless integration with popular tools in the data science and machine learning ecosystem. This guide outlines how to connect ZenML with these tools effectively.
+ZenML integrates seamlessly with popular tools in the data science and machine learning ecosystem. This guide provides instructions on how to set up these integrations.

==================================================

=== File: docs/book/how-to/popular-integrations/skypilot.md ===

-### Summary of ZenML SkyPilot VM Orchestrator Documentation
+### Summary: Using SkyPilot with ZenML

-**Overview**: The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across various cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost efficiency and high GPU availability.
+**Overview**: The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, enhancing cost efficiency and GPU availability.

-#### Prerequisites:
-- Install ZenML SkyPilot integration for your cloud provider:
+#### Prerequisites
+- Install ZenML SkyPilot integration for your cloud provider:
  ```bash
  zenml integration install <PROVIDER> skypilot_<PROVIDER>
  ```
-- Docker must be installed and running.
-- A remote artifact store and container registry in your ZenML stack.
-- A remote ZenML deployment.
-- Permissions to provision VMs on your cloud provider.
-- A service connector configured for authentication (not required for Lambda Labs).
+- Ensure Docker is running.
+- Set up a remote artifact store and container registry in your ZenML stack.
+- Have a remote ZenML deployment.
+- Obtain necessary permissions for VM provisioning.
+- Configure a service connector for cloud authentication (not needed for Lambda Labs).

-#### Configuration Steps:
+#### Configuration Steps

**For AWS, GCP, Azure**:
1. Install SkyPilot integration and connectors.
-2. Register a service connector with necessary permissions.
+2. Register a service connector with the required credentials.
3. Register and connect the orchestrator to the service connector.
4. Register and activate a stack with the orchestrator.
```bash zenml service-connector register <PROVIDER>-skypilot-vm -t <PROVIDER> --auto-configure -zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_<PROVIDER> +zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_<PROVIDER> zenml orchestrator connect <ORCHESTRATOR_NAME> --connector <PROVIDER>-skypilot-vm zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` **For Lambda Labs**: -1. Install SkyPilot Lambda integration. -2. Register a secret with your Lambda Labs API key. +1. Install the SkyPilot Lambda integration. +2. Register a secret with your API key. 3. Register the orchestrator using the API key secret. 4. Register and activate a stack with the orchestrator. @@ -15188,10 +15138,10 @@ zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_lambda --api_key={{l zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set ``` -#### Running a Pipeline: -Once configured, run any ZenML pipeline using the SkyPilot VM Orchestrator, with each step executing in a Docker container on a provisioned VM. +#### Running a Pipeline +After configuration, run any ZenML pipeline with the SkyPilot VM Orchestrator. Each step executes in a Docker container on a provisioned VM. -#### Additional Configuration: +#### Additional Configuration Customize the orchestrator using cloud-specific `Settings` objects: ```python @@ -15202,13 +15152,13 @@ skypilot_settings = Skypilot<PROVIDER>OrchestratorSettings( memory="16", accelerators="V100:2", use_spot=True, - region=<REGION>, + region=<REGION> ) @pipeline(settings={"orchestrator": skypilot_settings}) ``` -Configure resources per step as needed: +Specify resources per step: ```python high_resource_settings = Skypilot<PROVIDER>OrchestratorSettings(...) @@ -15218,28 +15168,28 @@ def resource_intensive_step(): ... ``` -For detailed options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). +For advanced options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). ================================================== === File: docs/book/how-to/popular-integrations/kubeflow.md === -### Summary of Kubeflow Orchestrator Documentation +### Summary of ZenML Kubeflow Orchestrator Documentation The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow Pipelines without needing to write Kubeflow code. #### Prerequisites To use the Kubeflow Orchestrator, ensure you have: -- ZenML `kubeflow` integration installed: `zenml integration install kubeflow` +- ZenML `kubeflow` integration: `zenml integration install kubeflow` - Docker installed and running - `kubectl` installed (optional) - A Kubernetes cluster with Kubeflow Pipelines - A remote artifact store and container registry in your ZenML stack -- A remote ZenML server deployed to the cloud -- Kubernetes context name pointing to the remote cluster (optional) +- A remote ZenML server deployed in the cloud +- Kubernetes context name (optional) #### Configuring the Orchestrator -You can configure the orchestrator in two ways: +Configuration can be done in two ways: 1. **Using a Service Connector** (recommended for cloud-managed clusters): ```bash @@ -15249,21 +15199,21 @@ You can configure the orchestrator in two ways: zenml stack update -o <ORCHESTRATOR_NAME> ``` -2. **Using `kubectl` Context**: +2. 
**Using `kubectl` context**: ```bash zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=kubeflow --kubernetes_context=<KUBERNETES_CONTEXT> zenml stack update -o <ORCHESTRATOR_NAME> ``` #### Running a Pipeline -To run a ZenML pipeline: +To run a ZenML pipeline using the Kubeflow Orchestrator: ```python python your_pipeline.py ``` This creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI. #### Additional Configuration -Further configure the orchestrator with `KubeflowOrchestratorSettings`: +Further configuration can be done with `KubeflowOrchestratorSettings`: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings @@ -15284,7 +15234,7 @@ For multi-tenant setups, register the orchestrator with the `kubeflow_hostname`: ```bash zenml orchestrator register <NAME> --flavor=kubeflow --kubeflow_hostname=<KUBEFLOW_HOSTNAME> ``` -Provide namespace, username, and password in the settings: +Provide credentials in the settings: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="admin", @@ -15303,37 +15253,35 @@ For more details, refer to the [full Kubeflow Orchestrator documentation](../../ # Azure Stack Setup for ZenML Pipelines -This guide provides a streamlined process to set up a minimal production stack on Azure for running ZenML pipelines. +This guide provides a concise process to set up a minimal production stack on Azure for running ZenML pipelines. ## Prerequisites - Active Azure account - ZenML installed - ZenML Azure integration: `zenml integration install azure` -## Steps +## Steps to Set Up Azure Stack ### 1. Set Up Credentials -Create a service principal: -1. Go to Azure Portal > App Registrations > `+ New registration`. -2. Name it and register. +Create a service principal via Azure App Registrations: +1. Navigate to App Registrations in the Azure portal. +2. Click `+ New registration`, name it, and register. 3. Note the Application ID and Tenant ID. -4. Under `Certificates & secrets`, create a client secret and note its value. +4. Under `Certificates & secrets`, create a client secret and save the secret value. ### 2. Create Resource Group and AzureML Instance -1. Go to Azure Portal > Resource Groups > `+ Create`. -2. After creation, click `+ Create` in the resource group overview. -3. Select `Azure Machine Learning` from the marketplace to create an AzureML workspace. +1. Go to the Azure portal and create a resource group. +2. In the resource group, click `+ Create` and select `Azure Machine Learning` to create an AzureML workspace. Consider creating a container registry as well. ### 3. Create Role Assignments -1. In your resource group, go to `Access control (IAM)` > `+ Add role assignment`. -2. Assign the following roles: +1. In the resource group, navigate to `Access control (IAM)` and click `+ Add` for a new role assignment. +2. Assign the following roles to your registered app: - AzureML Compute Operator - AzureML Data Scientist - AzureML Registry User -3. Select your registered app by its ID for each role. ### 4. Create a Service Connector -Register the ZenML Azure Service Connector: +Register a ZenML Azure Service Connector: ```bash zenml service-connector register azure_connector --type azure \ --auth-method service-principal \ @@ -15343,45 +15291,45 @@ zenml service-connector register azure_connector --type azure \ ``` ### 5. Create Stack Components -- **Artifact Store (Azure Blob Storage)**: - 1. Create a container in your AzureML workspace's storage account. 
- 2. Register the artifact store: - ```bash - zenml artifact-store register azure_artifact_store -f azure \ - --path=<PATH_TO_YOUR_CONTAINER> \ - --connector azure_connector - ``` +#### Artifact Store (Azure Blob Storage) +1. Create a container in your AzureML workspace's storage account. +2. Register the artifact store: +```bash +zenml artifact-store register azure_artifact_store -f azure \ + --path=<PATH_TO_YOUR_CONTAINER> \ + --connector azure_connector +``` -- **Orchestrator (AzureML)**: - Register the orchestrator: - ```bash - zenml orchestrator register azure_orchestrator -f azureml \ - --subscription_id=<YOUR_AZUREML_SUBSCRIPTION_ID> \ - --resource_group=<NAME_OF_YOUR_RESOURCE_GROUP> \ - --workspace=<NAME_OF_YOUR_AZUREML_WORKSPACE> \ - --connector azure_connector - ``` +#### Orchestrator (AzureML) +Register the orchestrator: +```bash +zenml orchestrator register azure_orchestrator -f azureml \ + --subscription_id=<YOUR_AZUREML_SUBSCRIPTION_ID> \ + --resource_group=<NAME_OF_YOUR_RESOURCE_GROUP> \ + --workspace=<NAME_OF_YOUR_AZUREML_WORKSPACE> \ + --connector azure_connector +``` -- **Container Registry (Azure Container Registry)**: - Register the container registry: - ```bash - zenml container-registry register azure_container_registry -f azure \ - --uri=<URI_TO_YOUR_AZURE_CONTAINER_REGISTRY> \ - --connector azure_connector - ``` +#### Container Registry (Azure Container Registry) +Register the container registry: +```bash +zenml container-registry register azure_container_registry -f azure \ + --uri=<URI_TO_YOUR_AZURE_CONTAINER_REGISTRY> \ + --connector azure_connector +``` ### 6. Create a Stack Register the Azure ZenML stack: -```shell +```bash zenml stack register azure_stack \ - -o azure_orchestrator \ - -a azure_artifact_store \ - -c azure_container_registry \ - --set + -o azure_orchestrator \ + -a azure_artifact_store \ + -c azure_container_registry \ + --set ``` ### 7. Run a Pipeline -Define and run a simple ZenML pipeline: +Define and execute a simple ZenML pipeline: ```python from zenml import pipeline, step @@ -15397,13 +15345,13 @@ if __name__ == "__main__": azure_pipeline() ``` Save as `run.py` and execute: -```shell +```bash python run.py ``` ## Next Steps - Explore ZenML's production guide for best practices. -- Check out ZenML integrations with other tools. +- Investigate ZenML integrations with other tools. - Join the ZenML community for support and networking. ================================================== @@ -15412,18 +15360,18 @@ python run.py # Minimal GCP Stack Setup Guide -This guide outlines the steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. +This guide provides steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. ## Steps to Set Up -### 1. Choose a GCP Project -Select or create a Google Cloud project in the console. Ensure a billing account is attached. +### 1) Choose a GCP Project +Select or create a GCP project in the console. Ensure a billing account is attached. ```bash gcloud projects create <PROJECT_ID> --billing-project=<BILLING_PROJECT> ``` -### 2. Enable GCloud APIs +### 2) Enable GCloud APIs Enable the following APIs in your GCP project: - Cloud Functions API - Cloud Run Admin API @@ -15431,20 +15379,20 @@ Enable the following APIs in your GCP project: - Artifact Registry API - Cloud Logging API -### 3. 
Create a Dedicated Service Account
+### 3) Create a Dedicated Service Account
Create a service account with the following roles:
- AI Platform Service Agent
- Storage Object Admin

-### 4. Create a JSON Key for Your Service Account
-Generate a JSON key for the service account and note the file path.
+### 4) Create a JSON Key for the Service Account
+Generate a JSON key for the service account.

```bash
export JSON_KEY_FILE_PATH=<JSON_KEY_FILE_PATH>
```

-### 5. Create a Service Connector in ZenML
-Authenticate ZenML with GCP using the service account.
+### 5) Create a Service Connector in ZenML
+Authenticate ZenML with GCP using the service connector.

```bash
zenml integration install gcp \
@@ -15455,52 +15403,63 @@ zenml integration install gcp \
 --project_id=<GCP_PROJECT_ID>
```

-### 6. Create Stack Components
+### 6) Create Stack Components
+
#### Artifact Store
Create a GCS bucket and register it as an artifact store.

```bash
export ARTIFACT_STORE_NAME=gcp_artifact_store
-zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp --path=gs://<YOUR_BUCKET_NAME>
+
+zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp \
+  --path=gs://<YOUR_BUCKET_NAME>
+
zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i
```

#### Orchestrator
-Register Vertex AI as the orchestrator.
+Use Vertex AI as the orchestrator.

```bash
export ORCHESTRATOR_NAME=gcp_vertex_orchestrator
+
+zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex \
+  --project=<PROJECT_NAME> --location=europe-west2
+
-zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex --project=<PROJECT_NAME> --location=europe-west2
zenml orchestrator connect ${ORCHESTRATOR_NAME} -i
```

#### Container Registry
-Register a GCP container registry.
+Register a container registry.

```bash
export CONTAINER_REGISTRY_NAME=gcp_container_registry
+
zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri=<GCR-URI>
+
zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i
```

-### 7. Create Stack
-Register the complete stack.
+### 7) Create Stack
+Register the stack with the created components.

```bash
export STACK_NAME=gcp_stack
+
-zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
+zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} \
+  -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
```

## Cleanup
-To delete the project and all associated resources:
+To remove created resources, delete the project.

```bash
gcloud projects delete <PROJECT_ID_OR_NUMBER>
```

## Best Practices
-- **IAM and Least Privilege**: Grant only necessary permissions and regularly review IAM roles.
-- **Resource Labeling**: Use consistent labeling for GCP resources.
+- **Use IAM and Least Privilege**: Grant only necessary permissions and regularly audit IAM roles.
+- **Resource Labeling**: Implement consistent labeling for GCP resources.

```bash
gcloud storage buckets update gs://your-bucket-name --update-labels=project=zenml,environment=production
@@ -15518,29 +15477,30 @@ gcloud billing budgets create --billing-account=BILLING_ACCOUNT_ID --display-nam
gsutil versioning set on gs://your-bucket-name
```

-By following these steps and best practices, you can efficiently set up and manage a GCP stack for ZenML projects.
+By following these steps and best practices, you can effectively set up and manage a GCP stack for ZenML projects.
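+
+As a quick end-to-end check of the newly registered stack, you can run a minimal pipeline, mirroring the Azure and AWS guides in this document (the step and pipeline names below are illustrative):
+
+```python
+from zenml import pipeline, step
+
+@step
+def hello_world() -> str:
+    return "Hello from Vertex AI!"
+
+@pipeline
+def gcp_pipeline():
+    hello_world()
+
+if __name__ == "__main__":
+    gcp_pipeline()
+```
+
+Save this as `run.py` and execute it with `python run.py`; with the stack set via `--set`, the run is submitted to the configured orchestrator.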
================================================== === File: docs/book/how-to/popular-integrations/kubernetes.md === -### Summary of ZenML Kubernetes Orchestrator Documentation +### Summary: Deploying ZenML Pipelines on Kubernetes -The ZenML Kubernetes Orchestrator enables the deployment of ML pipelines on a Kubernetes cluster without the need for Kubernetes coding. It serves as a simpler alternative to orchestrators like Airflow or Kubeflow. +The ZenML Kubernetes Orchestrator enables the execution of ML pipelines on a Kubernetes cluster without requiring Kubernetes code. It serves as a simpler alternative to orchestrators like Airflow or Kubeflow. #### Prerequisites To use the Kubernetes Orchestrator, ensure you have: -- ZenML `kubernetes` integration installed: `zenml integration install kubernetes` -- Docker and `kubectl` installed +- ZenML `kubernetes` integration: `zenml integration install kubernetes` +- Docker installed and running +- `kubectl` installed - A remote artifact store and container registry in your ZenML stack - A deployed Kubernetes cluster -- Optionally, a configured `kubectl` context pointing to the cluster +- (Optional) Configured `kubectl` context pointing to the cluster #### Deploying the Orchestrator -A Kubernetes cluster is required to run the orchestrator. Various deployment methods exist across cloud providers or custom infrastructure; refer to the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for more information. +You need a Kubernetes cluster to run the orchestrator. Various deployment methods exist, which can be explored in the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md). #### Configuring the Orchestrator -You can configure the orchestrator in two ways: +Configuration can be done in two ways: 1. **Using a Service Connector** (recommended for cloud-managed clusters): ```bash @@ -15557,50 +15517,49 @@ You can configure the orchestrator in two ways: ``` #### Running a Pipeline -To run a ZenML pipeline with the Kubernetes Orchestrator: +To run a ZenML pipeline using the Kubernetes Orchestrator: ```bash python your_pipeline.py ``` -This command will create a Kubernetes pod for each pipeline step. Use `kubectl` commands to interact with the pods. For further details, consult the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md). +This command creates a Kubernetes pod for each pipeline step. You can manage the pods with `kubectl` commands. + +For more advanced configurations, refer to the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md). ================================================== === File: docs/book/how-to/popular-integrations/mlflow.md === -# MLflow Experiment Tracker with ZenML +### MLflow Experiment Tracker with ZenML -## Overview -The ZenML MLflow Experiment Tracker integration allows logging and visualization of pipeline step information using MLflow without additional coding. +The ZenML MLflow Experiment Tracker integration allows for logging and visualizing pipeline step information using MLflow without additional coding. -## Prerequisites +#### Prerequisites - Install ZenML MLflow integration: ```bash zenml integration install mlflow -y ``` - Set up an MLflow deployment (local or remote with proxied artifact storage). -## Configuring the Experiment Tracker -### 1. Local Deployment -- Suitable for local ZenML runs; no additional configuration needed. 
- ```bash - zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow - zenml stack register custom_stack -e mlflow_experiment_tracker ... --set - ``` +#### Configuring the Experiment Tracker -### 2. Remote Deployment -- Requires authentication configuration (use ZenML secrets for production): - ```bash - zenml secret create mlflow_secret --username=<USERNAME> --password=<PASSWORD> - - zenml experiment-tracker register mlflow --flavor=mlflow \ - --tracking_username={{mlflow_secret.username}} \ - --tracking_password={{mlflow_secret.password}} ... - ``` +1. **Local Deployment**: + - Suitable for local ZenML runs. No extra configuration needed. + ```bash + zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow + zenml stack register custom_stack -e mlflow_experiment_tracker ... --set + ``` + +2. **Remote Deployment**: + - Requires authentication (use ZenML secrets for production). + ```bash + zenml secret create mlflow_secret --username=<USERNAME> --password=<PASSWORD> + zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ... + ``` -## Using the Experiment Tracker +#### Using the Experiment Tracker To log information in a pipeline step: 1. Enable the experiment tracker with the `@step` decorator. -2. Use MLflow's logging capabilities: +2. Use MLflow's logging capabilities. ```python import mlflow @@ -15612,15 +15571,16 @@ To log information in a pipeline step: mlflow.log_artifact(...) ``` -## Viewing Results -Retrieve the MLflow experiment URL for a ZenML run: +#### Viewing Results +Retrieve the URL for the MLflow experiment: ```python last_run = client.get_pipeline("<PIPELINE_NAME>").last_run trainer_step = last_run.get_step("<STEP_NAME>") tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value ``` +This URL links to your MLflow instance UI or local experiment file. -## Additional Configuration +#### Additional Configuration Further configure the experiment tracker using `MLFlowExperimentTrackerSettings`: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings @@ -15633,27 +15593,26 @@ mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "val ) ``` -For more details, refer to the full [MLflow Experiment Tracker documentation](../../component-guide/experiment-trackers/mlflow.md). +For more details, refer to the full MLflow Experiment Tracker documentation. ================================================== === File: docs/book/how-to/popular-integrations/aws-guide.md === -# AWS Stack Setup for ZenML Pipelines +### Summary of AWS Stack Setup for ZenML Pipelines -This guide provides a concise process to set up a minimal production stack on AWS for running ZenML pipelines. +This guide provides a streamlined process to set up a minimal AWS stack for running ZenML pipelines, focusing on creating an IAM role with scoped permissions for AWS resource access. -## Prerequisites -- An active AWS account with permissions for S3, SageMaker, ECR, and ECS. +#### Prerequisites +- Active AWS account with permissions for S3, SageMaker, ECR, and ECS. - ZenML installed. - AWS CLI installed and configured. -## Steps to Set Up AWS Stack +#### Steps to Set Up AWS Stack -### 1. Set Up Credentials and Local Environment -1. **Choose AWS Region**: Select the region for deployment (e.g., `us-east-1`). -2. **Create IAM Role**: - - Get your AWS account ID: +1. 
**Set Up Credentials and Local Environment** + - Choose an AWS region for deployment. + - Retrieve your AWS account ID: ```shell aws sts get-caller-identity --query Account --output text ``` @@ -15673,7 +15632,7 @@ This guide provides a concise process to set up a minimal production stack on AW ] } ``` - - Create the IAM role: + - Create IAM role: ```shell aws iam create-role --role-name zenml-role --assume-role-policy-document file://assume-role-policy.json ``` @@ -15683,101 +15642,87 @@ This guide provides a concise process to set up a minimal production stack on AW aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess ``` -3. **Install ZenML AWS Integration**: - ```shell - zenml integration install aws s3 -y - ``` + - Install AWS and S3 integrations for ZenML: + ```shell + zenml integration install aws s3 -y + ``` -### 2. Create a Service Connector in ZenML -Register an AWS Service Connector to authenticate with AWS: -```shell -zenml service-connector register aws_connector \ - --type aws \ - --auth-method iam-role \ - --role_arn=<ROLE_ARN> \ - --region=<YOUR_REGION> \ - --aws_access_key_id=<YOUR_ACCESS_KEY_ID> \ - --aws_secret_access_key=<YOUR_SECRET_ACCESS_KEY> -``` - -### 3. Create Stack Components -#### Artifact Store (S3) -1. Create an S3 bucket: - ```shell - aws s3api create-bucket --bucket your-bucket-name - ``` -2. Register the S3 Artifact Store: +2. **Create a Service Connector in ZenML** ```shell - zenml artifact-store register cloud_artifact_store -f s3 --path=s3://your-bucket-name --connector aws_connector + zenml service-connector register aws_connector \ + --type aws \ + --auth-method iam-role \ + --role_arn=<ROLE_ARN> \ + --region=<YOUR_REGION> \ + --aws_access_key_id=<YOUR_ACCESS_KEY_ID> \ + --aws_secret_access_key=<YOUR_SECRET_ACCESS_KEY> ``` -#### Orchestrator (SageMaker Pipelines) -1. Create a SageMaker domain (follow AWS documentation). -2. Register the SageMaker orchestrator: - ```shell - zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region=<YOUR_REGION> --execution_role=<ROLE_ARN> - ``` +3. **Create Stack Components** + - **Artifact Store (S3)**: Create an S3 bucket: + ```shell + aws s3api create-bucket --bucket your-bucket-name + ``` + - Register S3 Artifact Store: + ```shell + zenml artifact-store register cloud_artifact_store -f s3 --path=s3://your-bucket-name --connector aws_connector + ``` + - **Orchestrator (SageMaker)**: Create a SageMaker domain (follow AWS documentation). + - Register SageMaker Pipelines orchestrator: + ```shell + zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region=<YOUR_REGION> --execution_role=<ROLE_ARN> + ``` + - **Container Registry (ECR)**: Create ECR repository: + ```shell + aws ecr create-repository --repository-name zenml --region <YOUR_REGION> + ``` + - Register ECR container registry: + ```shell + zenml container-registry register ecr-registry --flavor=aws --uri=<ACCOUNT_ID>.dkr.ecr.<YOUR_REGION>.amazonaws.com --connector aws_connector + ``` -#### Container Registry (ECR) -1. Create an ECR repository: - ```shell - aws ecr create-repository --repository-name zenml --region <YOUR_REGION> - ``` -2. Register the ECR container registry: +4. 
**Create Stack** ```shell - zenml container-registry register ecr-registry --flavor=aws --uri=<ACCOUNT_ID>.dkr.ecr.<YOUR_REGION>.amazonaws.com --connector aws_connector + export STACK_NAME=aws_stack + zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set ``` -### 4. Create Stack -Register the stack: -```shell -export STACK_NAME=aws_stack -zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set -``` - -### 5. Run a Pipeline -Define and run a simple ZenML pipeline: -```python -from zenml import pipeline, step +5. **Run a ZenML Pipeline** + Define and execute a simple pipeline: + ```python + from zenml import pipeline, step -@step -def hello_world() -> str: - return "Hello from SageMaker!" + @step + def hello_world() -> str: + return "Hello from SageMaker!" -@pipeline -def aws_sagemaker_pipeline(): - hello_world() + @pipeline + def aws_sagemaker_pipeline(): + hello_world() -if __name__ == "__main__": - aws_sagemaker_pipeline() -``` -Execute: -```shell -python run.py -``` + if __name__ == "__main__": + aws_sagemaker_pipeline() + ``` + Execute the pipeline: + ```shell + python run.py + ``` -## Cleanup -To avoid charges, delete unused resources: +#### Cleanup +To avoid charges, delete resources: ```shell -# Delete S3 bucket aws s3 rm s3://your-bucket-name --recursive aws s3api delete-bucket --bucket your-bucket-name - -# Delete SageMaker domain aws sagemaker delete-domain --domain-id <DOMAIN_ID> - -# Delete ECR repository aws ecr delete-repository --repository-name zenml --force - -# Detach policies and delete IAM role aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess aws iam delete-role --role-name zenml-role ``` -## Conclusion -This guide outlined the setup of an AWS stack for ZenML, including IAM roles, service connectors, and stack components. Following these steps allows for efficient management of machine learning pipelines using AWS services. +#### Conclusion +This guide outlines the setup of an AWS stack for ZenML, enabling scalable machine learning pipelines. Key benefits include scalability, reproducibility, collaboration, and flexibility. For best practices, consider IAM role management, resource tagging, cost management, and backup strategies. ================================================== @@ -15785,49 +15730,49 @@ This guide outlined the setup of an AWS stack for ZenML, including IAM roles, se # ZenML Core Concepts Summary -**ZenML** is an open-source MLOps framework designed for building portable, production-ready MLOps pipelines, facilitating collaboration among data scientists, ML engineers, and MLOps developers. Key concepts are categorized into three threads: +**ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines, facilitating collaboration among data scientists, ML engineers, and MLOps developers. The framework is structured around three main threads: -| **Category** | **Focus** | -|---------------|------------------------------------------------------------| -| **1. Development** | Designing machine learning workflows. | -| **2. Execution** | Utilizing MLOps tooling/infrastructure during execution. 
| -| **3. Management** | Establishing and maintaining efficient production solutions. | +1. **Development**: Designing machine learning workflows. +2. **Execution**: Utilizing MLOps tools and infrastructure during workflow execution. +3. **Management**: Establishing and maintaining production-grade solutions. ## 1. Development ### Steps -- Functions defined with the `@step` decorator. -- Example: - ```python - @step - def step_1() -> str: - return "world" - ``` +- Steps are functions marked with the `@step` decorator. They can have typed inputs and outputs. +```python +@step +def step_1() -> str: + return "world" + +@step(enable_cache=False) +def step_2(input_one: str, input_two: str) -> str: + return f"{input_one} {input_two}" +``` ### Pipelines -- A pipeline is a series of steps, defined using decorators or classes. -- Example: - ```python - @pipeline - def my_pipeline(): - output_step_one = step_1() - step_2(input_one="hello", input_two=output_step_one) - - if __name__ == "__main__": - my_pipeline() - ``` +- A pipeline is a series of steps defined using Python decorators or classes. Steps can only call other steps within a pipeline. +```python +@pipeline +def my_pipeline(): + output_step_one = step_1() + step_2(input_one="hello", input_two=output_step_one) + +if __name__ == "__main__": + my_pipeline() +``` ### Artifacts -- Represent data passing through steps, tracked and stored in the artifact store. +- Artifacts are data inputs/outputs tracked and stored by ZenML, serialized/deserialized by **Materializers**. ### Models -- Represent outputs of training processes, including weights and metadata. +- Models encapsulate outputs of training processes and associated metadata, managed centrally via the ZenML API. ### Materializers -- Define serialization/deserialization of artifacts, using `BaseMaterializer` class. +- Materializers define how artifacts are serialized/deserialized, based on the `BaseMaterializer` class. ### Parameters & Settings -- Steps receive input artifacts and parameters, which are stored for reproducibility. +- Steps receive artifacts as input and produce outputs, with parameters stored by ZenML for reproducibility. Settings configure runtime infrastructure. ### Model Versions - A model consists of multiple versions, linking all entities to a centralized view. @@ -15835,115 +15780,119 @@ This guide outlined the setup of an AWS stack for ZenML, including IAM roles, se ## 2. Execution ### Stacks & Components -- **Stacks**: Collections of components for MLOps functions (e.g., orchestrators, artifact stores). -- **Orchestrator**: Coordinates step execution in a pipeline. -- **Artifact Store**: Houses tracked and versioned artifacts. +- A **Stack** is a collection of components (orchestrators, artifact stores) necessary for executing a pipeline. + +### Orchestrator +- The orchestrator coordinates step execution in a pipeline, managing dependencies. ZenML includes a default local orchestrator for exploration. + +### Artifact Store +- The artifact store tracks and versions all data passing through the pipeline, enabling features like data caching. ### Flavor -- Base abstractions for stack components, allowing for custom solutions. +- Flavors are tailored solutions for stack components, allowing for built-in and custom implementations. ### Stack Switching -- Easily switch between local and cloud stacks via CLI commands. +- ZenML allows easy switching between local and cloud stacks via a CLI command. ## 3. 
Management ### ZenML Server -- Required for remote stack components and managing ZenML entities (pipelines, steps, models). +- A ZenML Server is required for remote stack components, managing pipelines, steps, and models. ### Server Deployment -- Options include ZenML Pro SaaS or self-hosted environments. +- Users can deploy a ZenML server via the ZenML Pro SaaS or self-hosted environments. ### Metadata Tracking -- ZenML Server tracks metadata for pipeline runs, aiding in troubleshooting. +- The ZenML Server tracks metadata around pipeline runs, aiding in troubleshooting. ### Secrets Management -- Centralized store for sensitive data, configurable with various backends (e.g., AWS Secrets Manager). +- The server acts as a centralized secrets store for sensitive data, configurable with various backends. ### Collaboration -- Enables team structures for sharing resources and streamlining workflows. +- ZenML promotes collaboration among team members, allowing sharing of pipelines, runs, and stacks. -### Dashboard & VS Code Extension -- Dashboard visualizes pipelines and stacks; VS Code extension allows interaction with ZenML resources directly from the editor. +### Dashboard +- The ZenML Dashboard visualizes pipelines, stacks, and components, facilitating user collaboration. -This summary encapsulates the essential concepts and functionalities of ZenML, providing a clear understanding for further exploration or inquiry. +### VS Code Extension +- A VS Code extension allows interaction with ZenML stacks and resources directly from the editor. + +This summary encapsulates the core concepts of ZenML, providing essential details for understanding its functionality and structure. ================================================== === File: docs/book/getting-started/installation.md === -### ZenML Installation and Getting Started +# ZenML Installation and Getting Started -**ZenML** is a Python package that can be installed via `pip`: +## Installation +**ZenML** is a Python package installable via `pip`: ```shell pip install zenml ``` -**Supported Python Versions:** ZenML supports **Python 3.9, 3.10, 3.11, and 3.12**. +**Supported Python Versions:** 3.9, 3.10, 3.11, 3.12. -### Dashboard Installation +## Dashboard Installation To use the ZenML web dashboard locally, install the optional server dependencies: ```shell pip install "zenml[server]" ``` -**Recommendation:** Use a virtual environment (e.g., `virtualenvwrapper` or `pyenv-virtualenv`) for installation. +**Recommendation:** Use a virtual environment (e.g., `virtualenvwrapper`, `pyenv-virtualenv`). -### MacOS Installation (Apple Silicon) -For Macs with Apple Silicon (M1, M2), set the following environment variable to maintain server connections: +## MacOS (Apple Silicon) +Set this environment variable for local server connections: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` -This is only necessary for local server use; it can be skipped if using ZenML as a client. - -### Nightly Builds -To install the nightly build (unstable), use: +## Nightly Builds +For the latest unstable features, install the nightly build: ```shell pip install zenml-nightly ``` -### Verifying Installation -Check installation success with: +## Verifying Installation +Check installation success: -**Bash:** +Bash: ```bash zenml version ``` -**Python:** +Python: ```python import zenml print(zenml.__version__) ``` -For more details, visit the [PyPi package page](https://pypi.org/project/zenml). 
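+
+To pin the client to a specific release (for example, to match the version of a remote ZenML server), standard pip version specifiers work; the version number below is illustrative:
+
+```shell
+pip install "zenml[server]==0.70.0"
+```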
- -### Docker Usage +## Docker Usage ZenML is available as a Docker image: -**To start ZenML in a bash environment:** +Start ZenML in a bash environment: ```shell docker run -it zenmldocker/zenml /bin/bash ``` -**To run the ZenML server with Docker:** +Run the ZenML server: ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ``` -### Deploying the Server -To run ZenML locally with the dashboard: +## Deploying the Server +For local use with the dashboard: ```shell pip install "zenml[server]" -zenml login --local # opens the dashboard locally +zenml login --local ``` -For advanced features, deploy a centrally-accessible ZenML server. You can either [self-host](deploying-zenml/README.md) or register for a free [ZenML Pro](https://cloud.zenml.io/signup?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) account. +For advanced features, consider a centrally-deployed ZenML server. Options include self-hosting or registering for a free ZenML Pro account. ================================================== @@ -15951,44 +15900,33 @@ For advanced features, deploy a centrally-accessible ZenML server. You can eithe # ZenML System Architecture Overview -This guide outlines the deployment options for ZenML, including ZenML OSS (self-hosted), ZenML Pro (SaaS or self-hosted), and their respective components. +This guide outlines the deployment options for ZenML, including self-hosted OSS, SaaS, and self-hosted ZenML Pro. ## ZenML OSS (Self-hosted) +- **ZenML OSS Server**: A FastAPI app managing metadata for pipelines, artifacts, and stacks. In ZenML Pro, this is referred to as a "Tenant." +- **OSS Metadata Store**: Stores all tenant metadata, including ML metadata for pipelines and models. +- **OSS Dashboard**: A ReactJS app displaying pipelines and runs. +- **Secrets Store**: Secure storage for credentials to access customer infrastructure. Accessible by ZenML Pro API. -A ZenML OSS deployment includes: - -- **ZenML OSS Server**: FastAPI app managing metadata for pipelines, artifacts, and stacks. -- **OSS Metadata Store**: Stores ML metadata, including tracking and versioning information. -- **OSS Dashboard**: ReactJS app displaying pipelines and runs. -- **Secrets Store**: Secure storage for credentials needed to access infrastructure services. - -ZenML OSS is available under the Apache 2.0 license. For deployment details, refer to the [deployment guide](./deploying-zenml/README.md). +ZenML OSS is free under the Apache 2.0 license. For deployment instructions, refer to the [deployment guide](./deploying-zenml/README.md). ## ZenML Pro (SaaS or Self-hosted) - -ZenML Pro enhances OSS with additional components: - - **ZenML Pro Control Plane**: Central management for all tenants. -- **Pro Dashboard**: Extended functionality over the OSS dashboard. +- **Pro Dashboard**: Enhanced dashboard functionality over the OSS dashboard. - **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management. -- **Pro Add-ons**: Python modules for enhanced functionality. -- **Identity Provider**: Flexible authentication options, integrating with Auth0 for cloud deployments or supporting custom OIDC for self-hosted setups. +- **Pro Add-ons**: Python modules for additional features. +- **Identity Provider**: Supports flexible authentication, integrating with Auth0 for cloud deployments or custom OIDC for self-hosted setups. -ZenML Pro can be deployed as a fully-managed SaaS or self-hosted solution, allowing for air-gapped deployments. 
+ZenML Pro enhances productivity and can be deployed as SaaS or on-premises. Existing ZenML OSS deployments can be integrated into ZenML Pro. ### ZenML Pro SaaS Architecture - -In SaaS deployments, ZenML services are hosted by the ZenML team, with customer secrets managed by the ZenML Pro Control Plane. ML metadata is stored on ZenML infrastructure, while actual ML data artifacts reside on customer cloud storage. This setup is easy to start and manage. - -#### Hybrid SaaS Option - -Customers can opt for a hybrid model where secrets are stored on their side, connecting their secret store to the ZenML server. +- All ZenML services are hosted by ZenML Team, with ML metadata stored on their infrastructure while actual ML data artifacts remain on customer cloud. +- A hybrid option allows customers to connect their secret store to ZenML, ensuring credentials are stored on the customer side. ### ZenML Pro Self-Hosted Architecture +- All services, data, and secrets are deployed on the customer's cloud for maximum security. -For self-hosted deployments, all services, data, and secrets are managed on the customer cloud, ensuring maximum security for sensitive data. - -For more information on ZenML Pro, sign up for a free trial [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). +For more details on core concepts, refer to the [ZenML OSS](../getting-started/core-concepts.md) and [ZenML Pro](../getting-started/zenml-pro/core-concepts.md) guides. Interested users can sign up for a free trial of ZenML Pro [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). ================================================== @@ -15998,165 +15936,185 @@ For more information on ZenML Pro, sign up for a free trial [here](https://cloud ## Overview Deploying ZenML to a production environment provides benefits such as: -1. **Scalability**: Handles large-scale workloads for faster results. +1. **Scalability**: Handles large-scale workloads for faster processing. 2. **Reliability**: Ensures high availability and fault tolerance. -3. **Collaboration**: Facilitates teamwork and model iteration. +3. **Collaboration**: Facilitates teamwork through a shared environment. ## Components A ZenML deployment includes: - **FastAPI server** with SQLite or MySQL database - **Python Client** for server interaction -- **ReactJS dashboard** (open-source companion) -- (Optional) **ZenML Pro API + Database + Dashboard** +- **ReactJS dashboard** (optional) +- **ZenML Pro API + Database + Dashboard** (optional) + +For detailed architecture, refer to the [system architecture documentation](../system-architectures.md). -## ZenML Python Client -The ZenML client is a Python package for server interaction, installed via `pip`. It provides: +### ZenML Python Client +The ZenML client is a Python package for server interaction, installable via `pip`. It provides: - Command-line interface (`zenml`) for managing stacks and secrets. - Framework for authoring and deploying pipelines. -- Access to metadata via the Python SDK for custom automations. +- Access to metadata via Python SDK for custom automation. + +Full documentation for the Python SDK is available [here](https://sdkdocs.zenml.io/latest/). ## Deployment Scenarios -Initially, ZenML operates locally with an SQLite database for pipelines and configurations. Users can start with `zenml login --local` for local server access. 
For production, the server must be centrally deployed for team collaboration and access to cloud components. +Initially, ZenML operates locally with an SQLite database for pipelines and configurations. Use `zenml login --local` to start a local server. For production, deploy the ZenML server centrally to enable cloud components and team collaboration. -## Deployment Options -1. **Managed Deployment**: Using ZenML Pro, where ZenML manages server maintenance while keeping data secure. -2. **Self-hosted Deployment**: Deploy ZenML in your environment using: - - [Docker](./deploy-with-docker.md) - - [Helm](./deploy-with-helm.md) - - [HuggingFace Spaces](./deploy-using-huggingface-spaces.md) +## How to Deploy ZenML +Deploying the ZenML Server is essential for production-grade ML projects. Options include: -## Documentation Links +1. **Managed Deployment**: Use ZenML Pro to create and manage servers (tenants) with secure data handling. +2. **Self-hosted Deployment**: Deploy on your infrastructure using methods like Docker, Helm, or HuggingFace Spaces. + +Both options cater to different organizational needs. + +### Deployment Options +Documentation for deployment strategies: - [Deploying ZenML using ZenML Pro](../zenml-pro/README.md) - [Deploy with Docker](./deploy-with-docker.md) - [Deploy with Helm](./deploy-with-helm.md) - [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md) -This concise summary retains essential technical details for understanding ZenML deployment while eliminating redundancy. - ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md === -### Deploying ZenML on HuggingFace Spaces - -HuggingFace Spaces is a platform for hosting ML projects that allows for quick deployment of ZenML. It offers a free hosted ZenML server, ideal for experimentation without infrastructure overhead. For production use, enable [persistent storage](https://huggingface.co/docs/hub/en/spaces-storage) to prevent data loss. +### Summary: Deploying ZenML to Hugging Face Spaces -#### Deployment Steps - -1. **Create a Space**: Click [here](https://huggingface.co/new-space?template=zenml/zenml) to set up your ZenML app. Specify: - - Owner (personal account or organization) - - Space name - - Visibility (set to 'Public' for local connections) - -2. **Select Machine Type**: Choose a higher-tier machine to avoid auto-shutdowns. For persistent storage, set up a MySQL database. - -3. **Customize Appearance**: Modify the `README.md` file in "Files and Versions" for titles, emojis, and colors. Refer to the [HuggingFace configuration guide](https://huggingface.co/docs/hub/spaces-config-reference) for details. +**Overview**: Hugging Face Spaces allows for quick, free deployment of ZenML, ideal for testing without infrastructure overhead. For production, ensure persistent storage is enabled to avoid data loss. -4. **Access Your Space**: After creation, wait for the status to change from 'Building' to 'Running'. If the ZenML login UI is not visible, refresh the page. Use the "Embed this Space" option to get the "Direct URL" (e.g., `https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space`) to initialize the ZenML server. +**Deployment Steps**: +1. **Create a Space**: + - Go to [Hugging Face Spaces](https://huggingface.co/new-space?template=zenml/zenml). + - Specify: + - Owner (personal or organization) + - Space name + - Visibility (set to 'Public' for local machine connection) + +2. 
**Machine Selection**: + - Choose a higher-tier machine to avoid auto-shutdown. Consider setting up a MySQL database for persistence. + +3. **Customize Appearance**: + - Modify `README.md` in "Files and Versions" for title, emojis, and colors. Refer to [Hugging Face configuration guide](https://huggingface.co/docs/hub/spaces-config-reference) for details. + +4. **Accessing ZenML**: + - After creation, wait for the status to change to 'Running'. If the ZenML login UI is not visible, refresh the page. + - Use the "Embed this Space" option to get the "Direct URL": `https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space`. + - Follow instructions to initialize the ZenML server and create an admin account. + +**Connecting from Local Machine**: +- Use the following command after installing ZenML: + ```shell + zenml login '<YOUR_HF_SPACES_DIRECT_URL>' + ``` +- Access the ZenML dashboard directly via the "Direct URL". -#### Connecting to ZenML Server +**Configuration Options**: +- Default is an SQLite non-persistent database. For persistence, modify the `Dockerfile` in the root directory. Refer to [advanced configuration documentation](deploy-with-docker.md#advanced-server-configuration-options) for details. +- Use Hugging Face's 'Repository secrets' for managing secrets in the `Dockerfile`. -To connect from your local machine, use the following command (after installing ZenML): +**Security Note**: If using a cloud secrets backend, update your ZenML server password via the Dashboard to secure access. -```shell -zenml login '<YOUR_HF_SPACES_DIRECT_URL>' -``` +**Troubleshooting**: +- View logs by clicking "Open Logs" for server status. For further assistance, contact the [Slack channel](https://zenml.io/slack/). -You can also access the ZenML dashboard directly via the "Direct URL". +**Upgrading ZenML**: +- The default space uses the latest version. To update, select 'Factory reboot' in 'Settings' (note: this wipes existing data unless using a MySQL database). To use an earlier version, modify the `Dockerfile`'s `FROM` statement. -#### Configuration Options +This summary captures the essential steps and considerations for deploying ZenML on Hugging Face Spaces while maintaining critical technical details. -- **Database**: By default, ZenML uses an SQLite non-persistent database. For a persistent database, modify the `Dockerfile` in your Space's root directory. For advanced configuration options, refer to [our Docker documentation](deploy-with-docker.md#advanced-server-configuration-options). -- **Secrets Management**: Use HuggingFace's 'Repository secrets' for managing secrets in your `Dockerfile`. If using a cloud secrets backend, update your ZenML Server password via the Dashboard to secure access. +================================================== -#### Troubleshooting +=== File: docs/book/getting-started/deploying-zenml/deploy-with-helm.md === -For issues, check server logs by clicking "Open Logs". For further assistance, contact us on our [Slack channel](https://zenml.io/slack/). +# Summary of Deploying ZenML in a Kubernetes Cluster with Helm -#### Upgrading ZenML Server +## Overview +ZenML can be deployed in a Kubernetes cluster using a Helm chart, which can be found on the [ArtifactHub repository](https://artifacthub.io/packages/helm/zenml/zenml). This documentation covers prerequisites, configuration, and deployment scenarios. -The default space uses the latest ZenML version. 
To update, select 'Factory reboot' in the 'Settings' tab (note: this will wipe existing data unless using a MySQL persistent database). To use an earlier version, update the `FROM` statement in the `Dockerfile`. +## Prerequisites +- A Kubernetes cluster +- Recommended: MySQL-compatible database (version 8.0 or higher) +- Installed and configured [Kubernetes client](https://kubernetes.io/docs/tasks/tools/#kubectl) +- Installed [Helm](https://helm.sh/docs/intro/install/) +- Optional: External Secrets Manager (e.g., AWS Secrets Manager, GCP Secrets Manager) -================================================== +## Helm Configuration +Review the [`values.yaml` file](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) for customizable settings. Collect necessary information for database and secrets management configuration. -=== File: docs/book/getting-started/deploying-zenml/deploy-with-helm.md === +### Database Configuration +Using an external MySQL-compatible database is recommended for production: +- Hostname and port +- Database username and password (create a dedicated user with restricted privileges) +- Database name (can be created by ZenML) +- Optional: SSL certificates for secure connections -### Summary: Deploying ZenML in a Kubernetes Cluster with Helm +### Secrets Management Configuration +Using an external secrets management service is recommended: +- **AWS**: Region, access key ID, secret access key +- **GCP**: Project ID, service account with access +- **Azure**: Key Vault name, tenant ID, client ID, client secret +- **HashiCorp Vault**: Vault server URL, access token -#### Overview -ZenML can be deployed in a Kubernetes cluster using a Helm chart. The chart is available on the [ArtifactHub repository](https://artifacthub.io/packages/helm/zenml/zenml). +## Optional Cluster Services +Consider installing: +- **Ingress service** (e.g., nginx-ingress) for HTTP/HTTPS exposure +- **cert-manager** for TLS certificate management -#### Prerequisites -- Kubernetes cluster -- Recommended: MySQL-compatible database (version 8.0+) -- Installed and configured [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) -- Installed [Helm](https://helm.sh/docs/intro/install/) -- Optional: External Secrets Manager (e.g., AWS Secrets Manager, GCP Secrets Manager) +## ZenML Helm Installation -#### ZenML Helm Configuration -- Review the [`values.yaml` file](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) for customizable settings. -- Collect information for database and secrets management service configurations. - -##### Database Configuration -If using an external MySQL database: -- Hostname, port, username, and password. -- Database name (will be created if not existing). -- Optional: SSL certificates for secure connections. - -##### Secrets Management Configuration -If using an external secrets management service: -- **AWS**: Region, access key ID, secret access key. -- **GCP**: Project ID, service account with access. -- **Azure**: Key Vault name, tenant ID, client ID, client secret. -- **HashiCorp Vault**: Vault server URL, access token. - -#### Optional Cluster Services -- **Ingress Service**: Recommended for exposing HTTP services. -- **cert-manager**: For managing TLS certificates. - -#### ZenML Helm Installation -1. Pull the Helm chart: +### Configure the Helm Chart +To customize the Helm chart: +1. Pull the chart: ```bash helm pull oci://public.ecr.aws/zenml/zenml --version <VERSION> --untar ``` -2. 
Create a `custom-values.yaml` based on `values.yaml` and customize it (e.g., database URL, Ingress configuration). -3. Install the Helm chart: - ```bash - helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml - ``` +2. Create a `custom-values.yaml` file based on `values.yaml`, modifying: + - Database URL: `mysql://<username>:<password>@<hostname>:<port>/<database>` + - TLS certificates if using SSL + - Ingress configuration for hostname and TLS -#### Connecting to the Deployed ZenML Server -- Activate the ZenML server via its URL to create an admin account. -- Connect your local client: - ```bash - zenml login https://zenml.example.com:8080 --no-verify-ssl - ``` -- To disconnect: - ```bash - zenml logout - ``` +### Install the Helm Chart +Run the following command to install: +```bash +helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml +``` -#### Deployment Scenarios -1. **Minimal Deployment**: Uses SQLite and ClusterIP service (not exposed). - ```yaml - zenml: - ingress: - enabled: false - ``` - Access via port-forwarding: - ```bash - kubectl -n zenml-server port-forward svc/zenml-server 8080:8080 - zenml login http://localhost:8080 - ``` +### Connect to the Deployed ZenML Server +Activate the ZenML server by visiting its URL. To connect your local client: +```bash +zenml login https://zenml.example.com:8080 --no-verify-ssl +``` +To disconnect: +```bash +zenml logout +``` + +## Deployment Scenarios + +### Minimal Deployment +For testing, use a temporary SQLite database: +```yaml +zenml: + ingress: + enabled: false +``` +Access via port-forwarding: +```bash +kubectl -n zenml-server port-forward svc/zenml-server 8080:8080 +zenml login http://localhost:8080 +``` -2. **Basic Deployment with Local Database**: Uses Ingress with TLS. - Install `cert-manager` and `nginx-ingress`: +### Basic Deployment with Local Database +Expose ZenML using Ingress with TLS: +1. Install cert-manager and nginx-ingress: ```bash helm repo add jetstack https://charts.jetstack.io + helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace ``` - Create a `ClusterIssuer` for TLS: +2. Create a ClusterIssuer for Let's Encrypt: ```bash kubectl apply -f - <<EOF apiVersion: cert-manager.io/v1 @@ -16176,34 +16134,70 @@ If using an external secrets management service: class: nginx EOF ``` - Configure Helm values for Ingress: - ```yaml - zenml: - ingress: - enabled: true - annotations: - cert-manager.io/cluster-issuer: "letsencrypt-staging" - tls: - enabled: true - generateCerts: false - ``` -3. **Shared Ingress Controller**: Use a dedicated hostname or URL path if the root path is in use. - -#### Secrets Store Configuration -- Default: SQL database as secrets store. -- To use an external service, configure it in Helm values. -- Backup secrets store can be configured similarly. - -#### Database Backup and Recovery -- Automated backups before upgrades. -- Backup strategies: `disabled`, `in-memory`, `database`, `dump-file`. +3. 
Configure Helm values for ZenML: +```yaml +zenml: + ingress: + enabled: true + annotations: + cert-manager.io/cluster-issuer: "letsencrypt-staging" + tls: + enabled: true + generateCerts: false +``` + +### Shared Ingress Controller +If the root URL path is in use: +- **Dedicated hostname**: Use a service like nip.io to create a new DNS name. +- **Dedicated URL path**: Expose ZenML at a specific path (e.g., `/zenml`). + +## Secrets Store Configuration +ZenML defaults to using the SQL database for secrets. To use an external service, configure it in the Helm values. Backup secrets store can also be configured for high availability. + +### Database Backup and Recovery +Automated backups occur before upgrades. Backup strategies include: +- `disabled`: No backup +- `in-memory`: Fast but not persistent +- `database`: Copies to a backup database +- `dump-file`: Dumps to a file, optionally on a persistent volume + +### Custom CA Certificates +To connect using custom CAs: +1. Direct injection: +```yaml +zenml: + certificates: + customCAs: + - name: "my-custom-ca" + certificate: | + -----BEGIN CERTIFICATE----- + ... + -----END CERTIFICATE----- +``` +2. Reference existing Kubernetes secrets: +```yaml +zenml: + certificates: + secretRefs: + - name: "my-secret" + key: "ca.crt" +``` -#### Custom CA Certificates and Proxy Configuration -- Custom CA certificates can be injected directly or referenced from Kubernetes secrets. -- Proxy settings can be configured for external connections. +### HTTP Proxy Configuration +Configure proxy settings if needed: +```yaml +zenml: + proxy: + enabled: true + httpProxy: "http://proxy.example.com:8080" + httpsProxy: "http://proxy.example.com:8080" + additionalNoProxy: + - "internal.example.com" + - "10.0.0.0/8" +``` -This summary retains critical technical details while streamlining the content for clarity. +This summary captures the essential steps and configurations for deploying ZenML in a Kubernetes cluster using Helm while retaining critical technical details. ================================================== @@ -16211,14 +16205,15 @@ This summary retains critical technical details while streamlining the content f ### Summary: Deploying ZenML in a Docker Container -**Overview**: The ZenML server can be deployed using the Docker container image `zenmldocker/zenml-server`. It supports various deployment methods, including Docker, docker-compose, and serverless platforms like Cloud Run. +#### Overview +ZenML can be deployed using the Docker container image [`zenmldocker/zenml-server`](https://hub.docker.com/r/zenmldocker/zenml/). This guide covers configuration options and deployment use cases. #### Local Deployment For a quick local deployment, use the ZenML CLI: ```bash zenml login --local --docker ``` -This command sets up a local ZenML server with a shared SQLite database. +This command sets up a ZenML server in a Docker container with a shared SQLite database. #### Configuration Options When deploying a custom ZenML server, configure the following environment variables: @@ -16227,30 +16222,34 @@ When deploying a custom ZenML server, configure the following environment variab - SQLite: `sqlite:////path/to/zenml.db` - MySQL: `mysql://username:password@host:port/database` -- **ZENML_STORE_SSL_CA, ZENML_STORE_SSL_CERT, ZENML_STORE_SSL_KEY**: For SSL connections to MySQL. - +- **ZENML_STORE_SSL_CA**: Custom CA certificate for MySQL SSL connections. +- **ZENML_STORE_SSL_CERT**: Client SSL certificate for MySQL. 
+- **ZENML_STORE_SSL_KEY**: Client SSL private key for MySQL. - **ZENML_LOGGING_VERBOSITY**: Controls log verbosity (`NOTSET`, `ERROR`, `WARN`, `INFO`, `DEBUG`, `CRITICAL`). - -- **ZENML_STORE_BACKUP_STRATEGY**: Defines the backup strategy (e.g., `in-memory`, `database`, `dump-file`). - -- **ZENML_SERVER_RATE_LIMIT_ENABLED, ZENML_SERVER_LOGIN_RATE_LIMIT_MINUTE, ZENML_SERVER_LOGIN_RATE_LIMIT_DAY**: Manage API rate limiting. +- **ZENML_STORE_BACKUP_STRATEGY**: Defines the database backup strategy (e.g., `in-memory`, `database`, `dump-file`). +- **ZENML_SERVER_RATE_LIMIT_ENABLED**: Enables rate limiting for the API. +- **ZENML_SERVER_LOGIN_RATE_LIMIT_MINUTE**: Requests allowed per minute for login. +- **ZENML_SERVER_LOGIN_RATE_LIMIT_DAY**: Requests allowed per day for login. If no `ZENML_STORE_*` variables are set, an SQLite database is created at `/zenml/.zenconfig/local_stores/default_zen_store/zenml.db`. -#### Secret Store Configuration -By default, the SQL database serves as the secrets store. To use an external service (AWS, GCP, Azure, HashiCorp), configure: +#### Secrets Management +The ZenML server uses an SQL database as a default secrets store. To configure an external secrets management service (e.g., AWS Secrets Manager, GCP Secrets Manager), set the following: -- **ZENML_SECRETS_STORE_TYPE**: Set to the appropriate service (e.g., `aws`, `gcp`, `azure`, `hashicorp`, `custom`). -- **ZENML_SECRETS_STORE_AUTH_METHOD** and **ZENML_SECRETS_STORE_AUTH_CONFIG**: For authentication. +- **ZENML_SECRETS_STORE_TYPE**: Type of secrets store (e.g., `sql`, `aws`, `gcp`, `azure`, `hashicorp`, `custom`). +- **ZENML_SECRETS_STORE_AUTH_METHOD**: Authentication method for the secrets store. +- **ZENML_SECRETS_STORE_AUTH_CONFIG**: Configuration for authentication in JSON format. -For AWS, ensure permissions for actions like `secretsmanager:GetSecretValue` on secrets prefixed with `zenml/`. +For AWS, GCP, and Azure, specific permissions and roles must be configured for the service accounts used. #### Running ZenML Server To run the ZenML server with Docker: ```bash docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server ``` -For persistent storage: +This command starts a ZenML server with a temporary SQLite database. Access the dashboard at `http://localhost:8080` and create an admin user. + +For persistent storage, mount a directory: ```bash mkdir zenml-server docker run -it -d -p 8080:8080 --name zenml \ @@ -16258,12 +16257,12 @@ docker run -it -d -p 8080:8080 --name zenml \ zenmldocker/zenml-server ``` -#### Connecting to MySQL -To run a MySQL container: +#### Using MySQL Database +To run a MySQL database alongside ZenML: ```bash docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 ``` -Connect ZenML to MySQL: +Connect ZenML to MySQL by setting `ZENML_STORE_URL`: ```bash docker run -it -d -p 8080:8080 --name zenml \ --env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \ @@ -16271,7 +16270,7 @@ docker run -it -d -p 8080:8080 --name zenml \ ``` #### Using Docker Compose -Create a `docker-compose.yml`: +For multi-container setups, use Docker Compose: ```yaml version: "3.9" services: @@ -16284,31 +16283,30 @@ services: environment: - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml ``` -Run with: +Start the containers: ```bash -docker compose up -d +docker compose -p zenml up -d ``` #### Backup and Recovery -ZenML automatically backs up the database in-memory before migrations. 
Configure backup strategies with `ZENML_STORE_BACKUP_STRATEGY` for long-term solutions. +ZenML automatically backs up the database in-memory before migrations. Configure backup strategies using `ZENML_STORE_BACKUP_STRATEGY` (e.g., `disabled`, `in-memory`, `database`, `dump-file`). #### Troubleshooting -Check logs with: +Check logs: - CLI: `zenml logs -f` -- Docker: `docker logs zenml -f` -- Docker Compose: `docker compose logs -f` +- Manual Docker: `docker logs zenml -f` +- Docker Compose: `docker compose -p zenml logs -f` -This guide provides essential information for deploying and managing ZenML in a Docker environment, covering configuration, secret management, and backup strategies. +This summary provides essential information for deploying and configuring ZenML in a Docker environment, including local setups, MySQL integration, and secrets management. ================================================== === File: docs/book/getting-started/deploying-zenml/secret-management.md === -### Secret Store Configuration and Management - -#### Centralized Secrets Store -ZenML offers a centralized secrets management system for secure registration and management of secrets. Metadata (name, ID, owner, scope) is stored in the ZenML server database, while actual secret values are managed separately in the ZenML Secrets Store. In local deployments, secrets are stored in an SQLite database; in remote deployments, they are stored in the configured secrets management back-end. Supported back-ends include: +# Secret Store Configuration and Management +## Centralized Secrets Store +ZenML provides a centralized secrets management system for secure registration and management of secrets. Metadata (name, ID, owner, scope) is stored in the ZenML server database, while actual secret values are managed separately through the ZenML Secrets Store. In local deployments, secrets are stored in the SQLite database; in remote deployments, they are stored in the configured secrets management back-end. Supported back-ends include: - Default SQL database - AWS Secrets Manager - GCP Secret Manager @@ -16316,24 +16314,23 @@ ZenML offers a centralized secrets management system for secure registration and - HashiCorp Vault - Custom implementations -#### Configuration and Deployment -The secrets store back-end is configured at deployment time, requiring selection of a back-end and authentication mechanism. ZenML reuses the [Service Connector](../../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md) for authentication. It is advised to use the principle of least privilege for credentials. The secrets store configuration can be updated at any time, allowing for easy switching between back-ends. Follow the [secrets migration strategy](secret-management.md#secrets-migration-strategy) to minimize downtime during changes. - -#### Backup Secrets Store -ZenML can connect to a secondary Secrets Store for high availability, backup, and disaster recovery. Ensure the backup store is in a different location or type than the primary store to avoid issues. The server prioritizes the primary store but will use the backup if the primary is unreachable. The CLI commands `zenml secret backup` and `zenml secret restore` facilitate migration between stores. +## Configuration and Deployment +Secrets store back-end configuration occurs at deployment time. This includes selecting a back-end and authentication mechanism, using ZenML Service Connector authentication methods. 
Follow the principle of least privilege for credentials. The secrets store can be updated by modifying the ZenML Server configuration and redeploying. For migration strategies, refer to the documented guidelines. -#### Secrets Migration Strategy -When changing the provider or location of secrets, follow this migration process: +## Backup Secrets Store +ZenML can connect to a secondary Secrets Store for high availability and disaster recovery. Ensure the backup store is in a different location than the primary to avoid issues. The server prioritizes the primary store for read/write operations, falling back to the backup if necessary. Use the CLI commands: +- `zenml secret backup` to back up secrets +- `zenml secret restore` to restore secrets -1. Configure ZenML to use the new store (Secrets Store B) as secondary. +## Secrets Migration Strategy +To change the provider or location of secrets, follow this migration strategy: +1. Configure ZenML to use the new store as the secondary. 2. Redeploy the server. -3. Use `zenml secret backup` to transfer secrets from the current store (Secrets Store A) to Secrets Store B. -4. Reconfigure ZenML to make Secrets Store B the primary store and remove Secrets Store A. +3. Use `zenml secret backup` to transfer secrets from the primary to the secondary. +4. Set the new store as primary and remove the old one. 5. Redeploy the server. -This strategy is unnecessary if the location of secrets remains unchanged, such as updating credentials or authentication methods. - -For further details on deployment strategies, refer to the deployment guide. +This strategy is unnecessary if only updating credentials or authentication methods without changing the store location. For ZenML Pro users, configure your cloud backend based on deployment scenarios. ================================================== @@ -16341,7 +16338,7 @@ For further details on deployment strategies, refer to the deployment guide. ### Custom Secret Stores -The secrets store is essential for managing secrets in ZenML, responsible for storing, updating, and deleting secret values, while metadata is stored in an SQL database. The interface for all secrets store back-ends is defined in `zenml.zen_stores.secrets_stores.secrets_store_interface`, which includes the following key methods: +The secrets store is essential for managing secret values in ZenML, handling storage, updates, and deletions, while metadata is stored in an SQL database. The interface for all secrets store back-ends is defined in `zenml.zen_stores.secrets_stores.secrets_store_interface` and includes the following key methods: ```python class SecretsStoreInterface(ABC): @@ -16355,7 +16352,7 @@ class SecretsStoreInterface(ABC): @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: - """Get secret values for an existing secret.""" + """Retrieve secret values for an existing secret.""" @abstractmethod def update_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: @@ -16370,9 +16367,11 @@ class SecretsStoreInterface(ABC): To create a custom secrets store: -1. Inherit from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the abstract methods from the interface. Set `SecretsStoreType.CUSTOM` as the `TYPE`. -2. If configuration is needed, inherit from `SecretsStoreConfiguration` to define parameters, using this as the `CONFIG_TYPE`. -3. Ensure your code is included in the ZenML server's container image. 
Configure the server to use your custom store via environment variables or helm chart values, as detailed in the deployment guide. +1. **Inherit from Base Class**: Create a class that inherits from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the methods from `SecretsStoreInterface`. Set `SecretsStoreType.CUSTOM` as the `TYPE`.
+
+2. **Configuration Class**: If needed, create a configuration class inheriting from `SecretsStoreConfiguration` for your parameters, and use it as the `CONFIG_TYPE`.
+
+3. **ZenML Server Configuration**: Ensure your code is in the container image for the ZenML server. Configure the server to use your custom secrets store via environment variables or helm chart values, as detailed in the [deployment guide](./README.md).
 
 For complete documentation, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface).
 
 ==================================================
 
 === File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md ===
 
-### Summary: Deploying ZenML with Custom Docker Images
+### Deploying ZenML with Custom Docker Images
 
-#### Overview
-Deploying ZenML typically uses the default `zenmlhub/zenml-server` Docker image, but custom images may be necessary for:
-- Custom artifact stores requiring visualizations or step logs.
-- Forked ZenML repositories with modifications.
+Deploying ZenML typically uses the default `zenmldocker/zenml-server` Docker image, but custom images may be necessary in certain scenarios, such as:
 
-**Note:** Custom Docker images are only supported for Docker or Helm deployments.
+- Enabling artifact visualizations or step logs for a custom artifact store.
+- Deploying a server based on a forked ZenML repository with modifications.
 
-#### Building and Pushing a Custom ZenML Server Docker Image
-1. **Set Up a Container Registry:** Create a Docker Hub account and repository.
-2. **Clone ZenML Repository:**
+**Note:** Custom Docker images can only be deployed using [Docker](deploy-with-docker.md) or [Helm](deploy-with-helm.md).
+
+### Building and Pushing a Custom ZenML Server Docker Image
+
+1. **Set Up a Container Registry:** Create a free account on a registry like [Docker Hub](https://hub.docker.com/).
+2. **Clone ZenML Repository:** Check out the desired branch, e.g., for version 0.41.0:
   ```bash
   git checkout release/0.41.0
   ```
@@ -16399,7 +16399,7 @@ Deploying ZenML typically uses the default `zenmlhub/zenml-server` Docker image,
   ```bash
   cp docker/base.Dockerfile docker/custom.Dockerfile
   ```
-4. **Modify Dockerfile:**
+4. **Modify the Dockerfile:**
   - Add dependencies:
     ```bash
     RUN pip install <my_package>
     ```
@@ -16414,32 +16414,37 @@ Deploying ZenML typically uses the default `zenmlhub/zenml-server` Docker image,
   docker push <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG>
   ```
 
-#### Deploying ZenML with Your Custom Image
-Adjust your deployment strategy to use the custom Docker image.
+**Tip:** To verify your custom image locally, refer to the [Deploy a custom ZenML image via Docker](deploy-with-custom-image.md#deploy-a-custom-zenml-image-via-docker) section. 
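+
+Putting the steps above together, the end-to-end flow looks roughly like this sketch (the repository URL and the exact `docker build` invocation are assumptions; only the checkout, copy, install, and push commands are shown verbatim above):
+
+```bash
+# Check out the ZenML release the custom image should be based on
+git clone https://github.com/zenml-io/zenml.git && cd zenml
+git checkout release/0.41.0
+
+# Derive a custom Dockerfile from the stock server image definition
+cp docker/base.Dockerfile docker/custom.Dockerfile
+# (edit docker/custom.Dockerfile here, e.g. add `RUN pip install <my_package>`)
+
+# Build and publish the image to your registry
+docker build -f docker/custom.Dockerfile -t <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> .
+docker push <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG>
+```
+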
-##### Deploy via Docker -Refer to the ZenML Docker Deployment Guide and replace `zenmldocker/zenml-server` with your custom image: -```bash -docker run -it -d -p 8080:8080 --name zenml <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> -``` +### Deploying ZenML with Your Custom Image -For `docker-compose`, modify `docker-compose.yml`: -```yaml -services: - zenml: - image: <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> -``` +#### Via Docker + +Familiarize yourself with the general [ZenML Docker Deployment Guide](deploy-with-docker.md). Replace `zenmldocker/zenml-server` with your custom image reference: + +- Run the ZenML server: + ```bash + docker run -it -d -p 8080:8080 --name zenml <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> + ``` +- Adjust `docker-compose.yml`: + ```yaml + services: + zenml: + image: <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME>:<IMAGE_TAG> + ``` + +#### Via Helm + +Refer to the general [ZenML Helm Deployment Guide](deploy-with-helm.md). Modify the `image` section in `values.yaml`: -##### Deploy via Helm -Refer to the ZenML Helm Deployment Guide and update the `values.yaml` file: ```yaml zenml: image: repository: <YOUR_CONTAINER_REGISTRY>/<IMAGE_NAME> tag: <IMAGE_TAG> -``` +``` -This summary captures the essential steps and commands for deploying ZenML with custom Docker images while omitting redundant explanations. +This summary captures the essential steps and commands for deploying ZenML with custom Docker images while maintaining clarity and conciseness. ================================================== @@ -16447,14 +16452,14 @@ This summary captures the essential steps and commands for deploying ZenML with ### ZenML Pro Teams Overview -**Teams in ZenML Pro**: A feature for managing groups of users within organizations and tenants, enhancing user management and access control. +**Teams** in ZenML Pro facilitate efficient user management within organizations and tenants. A team is a collection of users that functions as a single entity, allowing for streamlined management of permissions and access control. -#### Key Benefits: +#### Key Benefits of Teams 1. **Group Management**: Manage permissions for multiple users simultaneously. -2. **Organizational Structure**: Reflects company or project team structures. +2. **Organizational Structure**: Align teams with your company's structure or project groups. 3. **Simplified Access Control**: Assign roles to teams instead of individual users. -#### Creating and Managing Teams: +#### Creating and Managing Teams - **Creation Steps**: 1. Navigate to Organization settings. 2. Click on the "Teams" tab. @@ -16465,28 +16470,28 @@ This summary captures the essential steps and commands for deploying ZenML with - Description (optional) - Initial team members -#### Adding Users to Teams: +#### Adding Users to Teams 1. Go to the "Teams" tab in Organization settings. 2. Select the desired team. 3. Click "Add Members". 4. Choose users to add. -#### Assigning Teams to Tenants: -1. Go to tenant settings. -2. Click on the "Members" tab, then "Teams". +#### Assigning Teams to Tenants +1. Go to the tenant settings page. +2. Click on the "Members" tab, then the "Teams" tab. 3. Select "Add Team". 4. Choose the team and assign a role. -#### Team Roles and Permissions: -- Roles assigned to teams (e.g., Admin, Editor, Viewer) grant all members the corresponding permissions. For instance, assigning "Editor" role to a team gives all members Editor permissions in that tenant. 
+#### Team Roles and Permissions +Assigning a role (e.g., Admin, Editor, Viewer) to a team within a tenant grants all team members the associated permissions. For instance, assigning the "Editor" role means all team members will have Editor permissions in that tenant. -#### Best Practices: -1. **Reflect Organization**: Create teams that mirror company structure. -2. **Custom Roles**: Utilize custom roles for detailed access control. +#### Best Practices for Using Teams +1. **Reflect Your Organization**: Create teams that represent your company's structure. +2. **Combine with Custom Roles**: Use custom roles for detailed access control. 3. **Regular Audits**: Review team memberships and roles periodically. -4. **Documentation**: Keep clear records of each team's purpose and associated projects or tenants. +4. **Document Team Purposes**: Keep clear documentation on each team's purpose and associated projects. -By utilizing Teams in ZenML Pro, organizations can enhance user management, streamline access control, and improve MLOps workflows. +By utilizing Teams in ZenML Pro, organizations can enhance user management, simplify access control, and optimize MLOps workflows. ================================================== @@ -16494,20 +16499,20 @@ By utilizing Teams in ZenML Pro, organizations can enhance user management, stre # ZenML Pro Overview -ZenML Pro enhances the Open Source ZenML product with several key features: +ZenML Pro enhances the Open Source ZenML product with several advanced features: - **Managed Deployment**: Deploy multiple ZenML servers (tenants). - **User Management**: Create organizations and teams for scalable user management. -- **Role-Based Access Control**: Implement customizable roles for secure resource management. -- **Model and Artifact Control**: Utilize the Model Control Plane and Artifact Control Plane for improved ML asset management. -- **Triggers and Run Templates**: Create and run templates via the dashboard or API for quick pipeline iterations. -- **Early-Access Features**: Access pro-specific features like triggers, filters, and usage reports. +- **Access Control**: Implement role-based access control with customizable roles. +- **Model and Artifact Control**: Utilize the Model Control Plane and Artifact Control Plane for better tracking of ML assets. +- **Triggers and Run Templates**: Create and run templates via the dashboard or API for efficient pipeline management. +- **Early Access Features**: Access pro-specific features like triggers, filters, and usage reports. -For more information, visit the [ZenML website](https://zenml.io/pro). +For more details, visit the [ZenML website](https://zenml.io/pro). -## Deployment Scenarios: SaaS vs Self-Hosted +## Deployment Scenarios -ZenML Pro can be deployed as a SaaS solution, simplifying server management and allowing focus on MLOps workflows. It can also be fully self-hosted. For more details, refer to the [self-hosted deployment guide](./self-hosted.md) or [book a demo](https://www.zenml.io/book-your-demo). +ZenML Pro can be deployed as a SaaS solution or fully self-hosted. The SaaS option simplifies server management, allowing focus on MLOps workflows. For self-hosted deployment, refer to the [self-hosted deployment guide](./self-hosted.md) or [book a demo](https://www.zenml.io/book-your-demo). 
### Key Resources - [Tenants](./tenants.md) @@ -16516,103 +16521,77 @@ ZenML Pro can be deployed as a SaaS solution, simplifying server management and - [Roles](./roles.md) - [Self-Hosted Deployments](./self-hosted.md) -For a free assessment of ZenML Pro, create an account [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). - ================================================== === File: docs/book/getting-started/zenml-pro/self-hosted.md === # ZenML Pro Self-Hosted Deployment Guide Summary -## Overview -This guide outlines the installation of ZenML Pro, including the Control Plane and Tenant servers, in a Kubernetes cluster. Access to private ZenML Pro container images and infrastructure components (Kubernetes, database, load balancer, Ingress controller, SSL certificates, and DNS) is required. +This document outlines the installation process for ZenML Pro, including the Control Plane and Tenant servers, in a self-hosted Kubernetes environment. -### Important Notes -- SSO and Run Templates features are not available in the on-prem version. -- Access to private container images requires a demo booking. +## Overview +- ZenML Pro requires access to private container images and infrastructure including a Kubernetes cluster, database server, load balancer, Ingress controller, HTTPS certificates, and DNS rules. +- Features like Single Sign-On (SSO) and Run Templates are not available in the on-prem version. ## Preparation and Prerequisites - ### Software Artifacts -- **ZenML Pro Control Plane Artifacts**: - - **API Server Images**: - - AWS: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api` - - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-api` - - **Dashboard Images**: - - AWS: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` - - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-dashboard` - - **Helm Chart**: `oci://public.ecr.aws/zenml/zenml-pro` - -- **ZenML Pro Tenant Server Artifacts**: - - **Tenant Server Images**: - - AWS: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` - - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-server` - - **Open Source Helm Chart**: `oci://public.ecr.aws/zenml/zenml` - -- **ZenML Pro Client Artifacts**: Available at `zenmldocker/zenml` on Docker Hub. +- **Control Plane Artifacts**: + - AWS: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api`, `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` + - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-api`, `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-dashboard` + - Helm Chart: `oci://public.ecr.aws/zenml/zenml-pro` + +- **Tenant Server Artifacts**: + - AWS: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` + - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-server` + - Helm Chart: `oci://public.ecr.aws/zenml/zenml` + +- **Client Artifacts**: Public ZenML client image at `zenmldocker/zenml`. ### Accessing ZenML Pro Container Images -- **AWS**: Set up an IAM user/role with `AmazonEC2ContainerRegistryReadOnly` policy. -- **GCP**: Create a service account with access to the Artifact Registry. +- **AWS**: Set up an IAM user/role with `AmazonEC2ContainerRegistryReadOnly` policy to access ECR. +- **GCP**: Create a service account with access to Artifact Registry. 
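+
+With those credentials in place, Docker can be authenticated against the private registries listed above. A sketch using standard AWS and GCP CLI commands (these invocations are assumptions, not taken from the ZenML docs; match the region and host to the registry you pull from):
+
+```bash
+# AWS: log Docker in to the private ZenML Pro ECR registry
+aws ecr get-login-password --region eu-west-1 \
+  | docker login --username AWS --password-stdin 715803424590.dkr.ecr.eu-west-1.amazonaws.com
+
+# GCP: configure Docker to use gcloud credentials for the Artifact Registry host
+gcloud auth configure-docker europe-west3-docker.pkg.dev
+```
+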
### Air-Gapped Installation -For environments without internet access, download required artifacts on an internet-connected machine and transfer them to the air-gapped environment. +- Download required artifacts on a machine with internet access and transfer them to the air-gapped environment. -## Infrastructure Requirements -1. **Kubernetes Cluster**: A functional cluster is required. -2. **Database Server**: MySQL or Postgres for the Control Plane; MySQL for Tenant servers. +### Infrastructure Requirements +1. **Kubernetes Cluster**: Essential for deployment. +2. **Database Server**: MySQL or Postgres for Control Plane; MySQL only for Tenant servers. 3. **Ingress Controller**: For HTTP(S) traffic routing. -4. **Domain Name**: FQDN for the Control Plane and tenants. -5. **SSL Certificate**: Required for securing traffic. - -## Stage 1: Install the ZenML Pro Control Plane +4. **Domain Name**: FQDN for Control Plane and tenants. +5. **SSL Certificate**: Required for secure connections. +## Stage 1: Install ZenML Pro Control Plane ### Set up Credentials -Create a Kubernetes secret for image pull access if necessary. +Create a Kubernetes secret for pulling images from the container registry. ### Configure the Helm Chart -Customize the Helm chart values, focusing on database credentials, server URL, and image repositories. +Customize the Helm chart with necessary configurations (database credentials, server URL, etc.). ### Install the Helm Chart -Run the following command to install the Helm chart: +Run the Helm command to install the Control Plane: ```bash helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version <version> --values my-values.yaml ``` -### Verify Installation -Check the status of the deployed workloads: -```bash -kubectl -n zenml-pro get all -``` - -### Install CA Certificates -Install custom CA certificates on client machines if using self-signed certificates. - -### Onboard Additional Users -Use a script to create user accounts in ZenML Pro. - ## Stage 2: Enroll and Deploy ZenML Pro Tenants - -### Enrolling a Tenant +### Enroll a Tenant Run the `enroll-tenant.py` script to create a tenant entry and generate a Helm values file. -### Configure the ZenML Pro Tenant Helm Chart +### Configure the Tenant Helm Chart Fill in the necessary values in the generated YAML file. -### Deploy the ZenML Pro Tenant Server -Install the tenant server using Helm: +### Deploy the Tenant Server +Use Helm to install the tenant server: ```bash helm --namespace zenml-pro-<tenant-id> upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version <version> --values zenml-<tenant-id>-values.yaml ``` -### Accessing the Tenant -Log in to the tenant dashboard and CLI using the provided URLs. - ## Day 2 Operations: Upgrades and Updates 1. Upgrade the ZenML Pro Control Plane first, then tenant servers. -2. Use Helm commands to upgrade both components, ensuring compatibility. +2. Use Helm commands to upgrade to new versions while preserving existing configurations if needed. -This summary captures the essential steps and requirements for deploying ZenML Pro in a self-hosted environment while omitting redundant explanations. +This summary captures the essential steps and configurations for deploying ZenML Pro in a self-hosted environment while ensuring critical details are retained. 
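+
+To make the upgrade order concrete, a sketch that reuses the charts and namespaces from the installation steps above (`--reuse-values` is a standard Helm flag that keeps the currently deployed configuration; pass `--values` instead to change it):
+
+```bash
+# 1. Upgrade the ZenML Pro Control Plane first
+helm --namespace zenml-pro upgrade zenml-pro oci://public.ecr.aws/zenml/zenml-pro \
+  --version <new-version> --reuse-values
+
+# 2. Then upgrade each tenant server
+helm --namespace zenml-pro-<tenant-id> upgrade zenml oci://public.ecr.aws/zenml/zenml \
+  --version <new-version> --reuse-values
+```
+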
================================================== @@ -16620,23 +16599,23 @@ This summary captures the essential steps and requirements for deploying ZenML P # ZenML Pro Core Concepts -ZenML Pro features a distinct entity hierarchy compared to the open-source version. Key concepts include: +ZenML Pro features a distinct entity hierarchy compared to the open-source version. Key components include: - **Organization**: A collection of users, teams, and tenants. -- **Tenant**: An isolated ZenML server deployment containing project resources. +- **Tenant**: An isolated ZenML server deployment containing all project resources. - **Teams**: Groups of users within an organization for resource management. -- **Users**: Individual accounts on a ZenML Pro instance. +- **Users**: Individual accounts on ZenML Pro. - **Roles**: Control user actions within a tenant or organization. -- **Templates**: Pipeline runs that can be reconfigured and re-executed. +- **Templates**: Re-runnable pipeline configurations. -For more details, refer to the linked pages: +For detailed information on each concept, refer to the following links: -| Concept | Description | Link | -|--------------------|-------------------------------------------------|-----------------------| -| Organizations | Managing organizations in ZenML Pro | [organization.md](./organization.md) | -| Tenants | Working with tenants in ZenML Pro | [tenants.md](./tenants.md) | -| Teams | Team management in ZenML Pro | [teams.md](./teams.md) | -| Roles & Permissions| Role-based access control in ZenML Pro | [roles.md](./roles.md) | +| Concept | Description | Link | +|---------------------|--------------------------------------------------|--------------------------| +| Organizations | Managing organizations in ZenML Pro. | [organization.md](./organization.md) | +| Tenants | Working with tenants in ZenML Pro. | [tenants.md](./tenants.md) | +| Teams | Team management in ZenML Pro. | [teams.md](./teams.md) | +| Roles & Permissions | Role-based access control in ZenML Pro. | [roles.md](./roles.md) | ================================================== @@ -16644,49 +16623,51 @@ For more details, refer to the linked pages: # ZenML Pro: Roles and Permissions -ZenML Pro utilizes a role-based access control (RBAC) system to manage permissions for users and teams. This guide outlines the available roles, assignment methods, and custom role creation. +ZenML Pro utilizes a role-based access control (RBAC) system for managing permissions within organizations and tenants. This guide outlines available roles, assignment processes, and custom role creation. ## Organization-Level Roles -Three predefined organization roles are available: +ZenML Pro offers three predefined organization roles: -1. **Org Admin**: Full control, can manage members, tenants, billing, and assign roles. -2. **Org Editor**: Manages tenants and teams, but cannot access subscription info or delete the organization. +1. **Org Admin**: Full control; can manage members, tenants, billing, and assign roles. +2. **Org Editor**: Manages tenants and teams; cannot access subscription info or delete the organization. 3. **Org Viewer**: Read-only access to tenants. ### Assigning Organization Roles 1. Go to Organization settings. -2. Click "Members" to update roles or use "Add members" to invite new users. +2. Click "Members" to update roles or use "Add members" to invite new members. **Notes**: -- Admins can add themselves to any tenant role. +- Organization admins can add themselves to any tenant role. 
- Editors and viewers cannot add themselves to tenants they are not part of. - Custom organization roles can be created via the [ZenML Pro API](https://cloudapi.zenml.io/). ## Tenant-Level Roles -Tenant roles define permissions within a specific ZenML tenant. Predefined roles include: +Tenant roles govern user permissions within a specific tenant. Predefined roles include: -1. **Admin**: Full control over tenant resources. -2. **Editor**: Can create and share resources but cannot modify or delete. +1. **Admin**: Full control over the tenant. +2. **Editor**: Can create and share resources; cannot modify or delete. 3. **Viewer**: Read-only access. ### Custom Roles To create a custom role: -1. Access the tenant settings page. -2. Click "Roles" and select "Add Custom Role". +1. Access the tenant settings. +2. Click "Roles" and select "Add Custom Role." 3. Name the role, choose a base role, and edit permissions. -**Permissions can be set for**: -- Artifacts, Models, Model Versions, Pipelines, Runs, Stacks, Components, Secrets, Service Connectors. +Custom roles can define permissions for various resources, including: +- Artifacts, Models, Pipelines, etc. + +**Permissions**: Create, Read, Update, Delete, Share. ### Managing Role Permissions -1. Go to Roles in tenant settings. -2. Select a role and click "Edit Permissions". -3. Adjust permissions as needed. +1. Go to the Roles page in tenant settings. +2. Select a role and click "Edit Permissions." +3. Adjust permissions as necessary. ## Sharing Individual Resources -Users can share specific resources through the dashboard. +Users can share individual resources through the dashboard. ## Best Practices 1. **Least Privilege**: Assign minimal necessary permissions. @@ -16694,7 +16675,7 @@ Users can share specific resources through the dashboard. 3. **Use Custom Roles**: Tailor roles for specific team needs. 4. **Document Roles**: Keep records of custom roles and their purposes. -Utilizing ZenML Pro's RBAC ensures appropriate access levels, enhancing security and collaboration in MLOps projects. +By implementing ZenML Pro's RBAC, organizations can maintain security while fostering collaboration in MLOps projects. ================================================== @@ -16702,140 +16683,146 @@ Utilizing ZenML Pro's RBAC ensures appropriate access levels, enhancing security # Organizations in ZenML Pro -ZenML Pro organizes your work experience around the concept of an **Organization**, the highest structure in the ZenML Cloud environment. An organization typically includes a group of users and one or more [tenants](./tenants.md). +In ZenML Pro, an **Organization** is the top-level structure that encompasses users and one or more [tenants](./tenants.md). ## Inviting Team Members -To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Once part of the organization, users can access all tenants they are authorized for. +To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The user will receive an invitation email. Once part of the organization, users can access all tenants they are authorized for. ## Managing Organization Settings -Organization settings, including billing and member roles, are managed at the organization level. Access these settings by clicking your profile picture in the top right corner and selecting "Settings". 
+Organization settings, including billing and member roles, are accessible by clicking your profile picture in the top right corner and selecting "Settings". ## API Operations -Additional operations related to organizations can be performed through the API. More details are available at [ZenML Cloud API](https://cloudapi.zenml.io/). +Additional operations related to Organizations can be performed via the API. More details are available at [ZenML Cloud API](https://cloudapi.zenml.io/). ================================================== === File: docs/book/getting-started/zenml-pro/pro-api.md === -### ZenML Pro API Overview +# ZenML Pro API Overview -The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, enabling interaction with ZenML resources for both SaaS and self-hosted instances. Key functionalities include management of tenants, organizations, users, roles, and authentication. +The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, enabling interaction with ZenML resources for both SaaS and self-hosted instances. Key functionalities include managing tenants, organizations, users, roles, and implementing role-based access control (RBAC). -#### Authentication +## Authentication -To authenticate API requests: -- **Browser Authentication**: If logged into ZenML Pro, use the same session for API requests via the OpenAPI docs at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). -- **API Tokens**: For programmatic access, generate API tokens valid for 1 hour. +### Browser Authentication +For users logged into ZenML Pro, requests can be authenticated directly in the OpenAPI documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). -**Token Generation Steps**: -1. Go to organization settings in the ZenML Pro dashboard. -2. Select "API Tokens" from the sidebar. -3. Click "Create new token" and copy the generated token. +### API Tokens +API tokens are used for programmatic access and are valid for 1 hour. To generate a token: +1. Go to the organization settings in the ZenML Pro dashboard. +2. Select "API Tokens" and click "Create new token." +3. Use the token as a bearer token in HTTP requests. -**Example Requests**: -- **cURL**: +**Example Requests:** +- **cURL:** ```bash curl -H "Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/users/me ``` -- **Wget**: +- **Wget:** ```bash wget -qO- --header="Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/users/me ``` -- **Python**: +- **Python:** ```python import requests - response = requests.get("https://cloudapi.zenml.io/users/me", headers={"Authorization": f"Bearer YOUR_API_TOKEN"}) + + response = requests.get( + "https://cloudapi.zenml.io/users/me", + headers={"Authorization": f"Bearer YOUR_API_TOKEN"} + ) print(response.json()) ``` -**Important Notes**: +**Important Notes:** - Tokens expire after 1 hour and cannot be retrieved post-generation. -- Tokens are user-scoped and inherit permissions. +- Tokens are user-specific and inherit permissions. 
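+
+The same bearer token works against any of the management endpoints listed below, for example listing the tenants visible to the authenticated user (a sketch; the `GET /tenants` endpoint is documented under "Key API Endpoints"):
+
+```bash
+# List tenants the token's user has access to
+curl -H "Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/tenants
+```
+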
-#### Tenant Programmatic Access - -Access the tenant API similarly to the ZenML OSS server API using: +### Tenant Programmatic Access +Access the tenant API via: - Temporary API tokens -- Service accounts with API keys +- Service account API keys -#### Key API Endpoints +## Key API Endpoints -- **Tenant Management**: - - List tenants: `GET /tenants` - - Create tenant: `POST /tenants` - - Get tenant details: `GET /tenants/{tenant_id}` - - Update tenant: `PATCH /tenants/{tenant_id}` +### Tenant Management +- List tenants: `GET /tenants` +- Create a tenant: `POST /tenants` +- Get tenant details: `GET /tenants/{tenant_id}` +- Update a tenant: `PATCH /tenants/{tenant_id}` -- **Organization Management**: - - List organizations: `GET /organizations` - - Create organization: `POST /organizations` - - Get organization details: `GET /organizations/{organization_id}` - - Update organization: `PATCH /organizations/{organization_id}` +### Organization Management +- List organizations: `GET /organizations` +- Create an organization: `POST /organizations` +- Get organization details: `GET /organizations/{organization_id}` +- Update an organization: `PATCH /organizations/{organization_id}` -- **User Management**: - - List users: `GET /users` - - Get current user: `GET /users/me` - - Update user: `PATCH /users/{user_id}` +### User Management +- List users: `GET /users` +- Get current user: `GET /users/me` +- Update user: `PATCH /users/{user_id}` -- **Role-Based Access Control**: - - Create role: `POST /roles` - - Assign role: `POST /roles/{role_id}/assignments` - - Check permissions: `GET /permissions` +### Role-Based Access Control +- Create a role: `POST /roles` +- Assign a role: `POST /roles/{role_id}/assignments` +- Check permissions: `GET /permissions` -#### Error Handling and Rate Limiting +## Error Handling +Standard HTTP status codes indicate request success or failure. Error responses include messages and additional details. -The API uses standard HTTP status codes for success or failure. Error responses include messages with details. Rate limiting may apply, and a 429 status code indicates exceeding the limit; implement backoff and retry logic accordingly. +## Rate Limiting +The API may enforce rate limits. Exceeding these may result in a 429 (Too Many Requests) status code. Implement backoff and retry logic accordingly. -For comprehensive details, refer to the full API documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). +For comprehensive API details, refer to the full documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ================================================== === File: docs/book/getting-started/zenml-pro/tenants.md === -# ZenML Pro Tenants Overview +### ZenML Pro Tenants Overview -## Tenants -- **Definition**: Tenants are isolated deployments of the ZenML server, each with its own users, roles, and resources. All ZenML Pro activities (pipelines, stacks, runs, connectors) are scoped to a tenant. -- **Features**: ZenML Pro offers enhanced features beyond the open-source version. +**Tenants** are isolated deployments of the ZenML server, each with its own users, roles, and resources. All ZenML Pro activities, including pipelines, stacks, and runs, are scoped to a tenant. The ZenML server in a tenant includes enhanced features beyond the open-source version. -## Creating a Tenant +#### Creating a Tenant +To create a tenant: 1. Navigate to your organization page. -2. Click "+ New Tenant". -3. Name your tenant and click "Create Tenant". +2. 
Click "+ New Tenant." +3. Enter a name and click "Create Tenant." + +Alternatively, you can create a tenant via the Cloud API using the `POST /organizations` endpoint at `https://cloudapi.zenml.io/`. -Alternatively, create a tenant via the Cloud API using the `POST /organizations` endpoint at `https://cloudapi.zenml.io/`. +#### Organizing Tenants +Effective tenant organization is crucial for MLOps management. Consider the following dimensions: -## Organizing Tenants -### By Development Stage -- **Staging Tenants**: For development, testing, and experimentation. -- **Production Tenants**: For live services, with stricter access controls and performance optimization. +1. **Development Stage**: + - **Staging Tenants**: For development, testing, and experimentation. + - **Production Tenants**: For live services, requiring stricter access controls and monitoring. -### By Business Logic -1. **Project-based**: Separate tenants for different ML projects (e.g., Recommendation System, NLP). -2. **Team-based**: Align tenants with teams (e.g., Data Science, ML Engineering). -3. **Data Sensitivity**: Classify tenants based on data sensitivity (e.g., Public, Internal, Confidential). +2. **Business Logic**: + - **Project-based Separation**: Tenants for different ML projects (e.g., Recommendation System, NLP). + - **Team-based Separation**: Align with organizational structure (e.g., Data Science Team). + - **Data Sensitivity Levels**: Separate tenants based on data classification (e.g., Public, Internal). -### Best Practices -- **Naming Conventions**: Use clear, descriptive names. +#### Best Practices for Tenant Organization +- **Clear Naming Conventions**: Use descriptive names for easy identification. - **Access Control**: Implement role-based access control. -- **Documentation**: Maintain documentation for each tenant. +- **Documentation**: Maintain clear records of tenant purposes. - **Regular Reviews**: Periodically assess tenant structure. - **Scalability**: Design for future growth. -## Using Your Tenant -- Tenants enable running pipelines, experiments, and utilizing Pro features such as: - - Model Control Plane - - Artifact Control Plane - - Pipeline execution from the Dashboard - - Creating templates from pipeline runs +#### Using Your Tenant +A tenant allows you to run pipelines and experiments with Pro-only features such as: +- Model Control Plane +- Artifact Control Plane +- Running pipelines from the Dashboard +- Creating templates from pipeline runs -### Accessing Tenant Documentation -- Each tenant has a connection URL to access the ZenML server and OpenAPI specification. Visit `<TENANT_URL>/docs` for available methods, including pipeline execution via REST API. +#### Accessing Tenant Documentation +Each tenant has a connection URL for the `zenml` client and to access the OpenAPI specification. Visit `<TENANT_URL>/docs` for a list of executable methods, including REST API pipeline execution. -For further details, refer to the API documentation. +For further details on API access, refer to the API reference documentation. ================================================== @@ -16843,85 +16830,69 @@ For further details, refer to the API documentation. # ZenML API Reference Summary -## Overview -The ZenML server is a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local instances, access the docs at `http://127.0.0.1:8237/docs`. 
+The ZenML server operates as a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local setups, access the documentation at `http://127.0.0.1:8237/docs` after logging in with `zenml login --local`.

-## Accessing the API Programmatically
+## Programmatic API Access with Bearer Tokens

-### Using a Short-Lived API Token
-1. Generate a short-lived API token (valid for 1 hour) from the API Tokens page in your ZenML dashboard.
-2. Use the token as a bearer token in HTTP requests.
+### Short-lived API Token
+1. Generate a short-lived API token (valid for 1 hour) from the ZenML dashboard.
+2. Use the token as a bearer token in HTTP requests. Example commands:

-**Example Requests:**
- **Curl:**
  ```bash
  curl -H "Authorization: Bearer YOUR_API_TOKEN" https://your-zenml-server/api/v1/current-user
  ```
+
- **Wget:**
  ```bash
  wget -qO- --header="Authorization: Bearer YOUR_API_TOKEN" https://your-zenml-server/api/v1/current-user
  ```

-- **Python:**
-  ```python
-  import requests
-  response = requests.get(
-      "https://your-zenml-server/api/v1/current-user",
-      headers={"Authorization": f"Bearer YOUR_API_TOKEN"}
-  )
+- **Python:**
+  ```python
+  import requests
+
+  response = requests.get(
+      "https://your-zenml-server/api/v1/current-user",
+      headers={"Authorization": "Bearer YOUR_API_TOKEN"},
+  )
  print(response.json())
  ```

**Important Notes:**
- Tokens expire after 1 hour and cannot be retrieved post-generation.
-- Tokens are user-scoped and inherit permissions.
-- For long-term access, consider using a service account and API key.
+- Tokens inherit user permissions.
+- For long-term access, consider using a service account.

-### Using a Service Account and API Key
+### Service Account and API Key
1. Create a service account:
   ```shell
   zenml service-account create myserviceaccount
   ```
-   This will provide a `<ZENML_API_KEY>`.
-2. Obtain an API token using the API key via a POST request to `/api/v1/login`.
+   This will provide a `<ZENML_API_KEY>`; store it securely.
+2. Obtain an API token using the API key with a POST request to `/api/v1/login`. Example commands:

-**Example Requests:**
- **Curl:**
  ```bash
  curl -X POST -d "password=<YOUR_API_KEY>" https://your-zenml-server/api/v1/login
  ```
+
- **Wget:**
  ```bash
  wget -qO- --post-data="password=<YOUR_API_KEY>" --header="Content-Type: application/x-www-form-urlencoded" https://your-zenml-server/api/v1/login
  ```
+
- **Python:**
  ```python
  import requests
-
-  response = requests.post(
-      "https://your-zenml-server/api/v1/login",
-      data={"password": "<YOUR_API_KEY>"},
-      headers={"Content-Type": "application/x-www-form-urlencoded"}
-  )
+
+  response = requests.post(
+      "https://your-zenml-server/api/v1/login",
+      data={"password": "<YOUR_API_KEY>"},
+      headers={"Content-Type": "application/x-www-form-urlencoded"},
+  )
  print(response.json())
  ```

-**Response Example:**
-```json
-{
-  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
-  "token_type": "bearer",
-  "expires_in": 3600
-}
-```
-
-3. Use the obtained API token for authenticated requests as shown previously.
+3. Use the obtained API token in the `Authorization` header for requests, similar to the short-lived token example.

**Important Notes:**
-- Tokens are scoped to the service account and inherit permissions.
+- Tokens are scoped to the service account and inherit its permissions.
- Tokens expire after a configured duration (typically 1 hour).
-- Handle API tokens securely; rotate compromised keys via the ZenML dashboard or command line.
+- Rotate compromised API keys via the ZenML dashboard or command line.
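+
+As a minimal end-to-end sketch combining the two steps above — the server URL and API key are placeholders for your own values, and the `access_token` field is the one returned by `/api/v1/login`:
+
+```python
+import requests
+
+ZENML_SERVER = "https://your-zenml-server"  # placeholder: your server URL
+API_KEY = "<YOUR_API_KEY>"  # placeholder: the service account API key
+
+# Exchange the API key for a short-lived API token.
+login = requests.post(
+    f"{ZENML_SERVER}/api/v1/login",
+    data={"password": API_KEY},
+    headers={"Content-Type": "application/x-www-form-urlencoded"},
+)
+login.raise_for_status()
+token = login.json()["access_token"]
+
+# Use the token as a bearer token on subsequent requests.
+response = requests.get(
+    f"{ZENML_SERVER}/api/v1/current-user",
+    headers={"Authorization": f"Bearer {token}"},
+)
+print(response.json())
+```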
+ +This summary captures the essential details for accessing the ZenML API programmatically, including methods for obtaining and using API tokens. ================================================== @@ -16929,81 +16900,97 @@ The ZenML server is a FastAPI application, with OpenAPI-compliant documentation ### ZenML Global Settings Overview -The **ZenML Global Config Directory** stores global settings for ZenML installations, typically located at: -- **Linux:** `~/.config/zenml` -- **Mac:** `~/Library/Application Support/zenml` -- **Windows:** `C:\Users\%USERNAME%\AppData\Local\zenml` +The global settings for ZenML are stored in the **ZenML Global Config Directory**, typically located at: + +- **Linux**: `~/.config/zenml` +- **Mac**: `~/Library/Application Support/zenml` +- **Windows**: `C:\Users\%USERNAME%\AppData\Local\zenml` -The default path can be overridden with the `ZENML_CONFIG_PATH` environment variable. To retrieve the current config directory, use: +You can override the default path using the `ZENML_CONFIG_PATH` environment variable. To retrieve the current config directory, use: ```shell zenml status python -c 'from zenml.utils.io_utils import get_global_config_directory; print(get_global_config_directory())' ``` -**Warning:** Do not manually alter or delete files in the global config directory. Use CLI commands for management: +**Warning**: Avoid manually altering or deleting files in the global config directory. Use CLI commands for management: + - `zenml analytics` - Manage analytics settings - `zenml clean` - Reset configuration to default - `zenml downgrade` - Downgrade ZenML version to match the installed package -Upon first run, ZenML initializes the global config directory, creating a default configuration and stack: +### Initialization -``` +On first run, ZenML initializes the global config directory and creates a default stack: + +```plaintext Initializing the ZenML global configuration version to 0.13.2 Creating default user 'default' ... Creating default stack for user 'default'... ``` -#### Global Config Directory Structure +The directory structure after initialization: -After initialization, the directory structure includes: ``` /home/stefan/.config/zenml -├── config.yaml # Global Configuration Settings -└── local_stores # Local data storage for stack components - ├── <UUID> # Local Store paths +├── config.yaml <- Global Configuration +└── local_stores <- Local component data + ├── <UUID> <- Local Store paths └── default_zen_store - └── zenml.db # SQLite database for ZenML data + └── zenml.db <- SQLite database for ZenML data ``` -**Key Configurations in `config.yaml`:** -```yaml -active_stack_id: ... -analytics_opt_in: true -store: - database: ... - url: ... - username: ... -user_id: d980f13e-05d1-4765-92d2-1dc7eb7addb7 -version: 0.13.2 -``` +### Configuration Details + +1. **`config.yaml`**: Contains global settings like client ID, database configuration, analytics options, and active Stack. + + Example content: + ```yaml + active_stack_id: ... + analytics_opt_in: true + store: + database: ... + url: ... + username: ... + user_id: <UUID> + version: 0.13.2 + ``` -#### Usage Analytics +2. **`local_stores`**: Subdirectories for local stack components, each named by UUID. + +3. **`zenml.db`**: Default SQLite database for storing stack and component information. + +### Usage Analytics + +ZenML collects anonymized usage statistics to improve the tool. You can opt out with: -ZenML collects anonymized usage statistics to improve the tool. 
Users can opt-out using:
```bash
zenml analytics opt-out
```

-Analytics are processed through a central ZenML server before being sent to Segment for aggregation.
+Analytics are processed through a central ZenML analytics server before being aggregated via Segment.

-#### Version Mismatch (Downgrading)
+### Version Mismatch (Downgrading)

If you encounter a version mismatch error:
+
```shell
`The ZenML global configuration version (%s) is higher than the version of ZenML currently being used (%s).`
```
-You can downgrade the configuration with:
+
+To downgrade the global configuration version, run:
+
```shell
zenml downgrade
```

-**Warning:** Downgrading may lead to unexpected behavior or data loss. To reset, run:
+**Warning**: Downgrading may cause unexpected behavior. To reset to default, use:
+
```shell
zenml clean
```

-This documentation provides essential information on managing ZenML global settings, directory structure, analytics, and handling version mismatches effectively.
+This will purge the local database and reinitialize the global configuration.

==================================================

@@ -17015,27 +17002,27 @@ This documentation provides essential information on managing ZenML global setti

## Key Questions and Guidance

-- **Contributing to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small changes, open a pull request. For larger features, discuss on [Slack](https://zenml.io/slack/) or [create an issue](https://github.com/zenml-io/zenml/issues/new/choose).
+- **Contributing to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small changes, open a pull request. For larger features, discuss on [Slack](https://zenml.io/slack/) or create an issue.

-- **Adding Custom Components**: Start with the [general documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For specific types, like custom orchestrators, see the dedicated section [here](../component-guide/orchestrators/custom.md).
+- **Adding Custom Components**: Start with the [custom stack component documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For specific types, like custom orchestrators, see the relevant section [here](../component-guide/orchestrators/custom.md).

-- **Mitigating Dependency Clashes**: Visit our page on [handling dependencies](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md).
+- **Mitigating Dependency Clashes**: Consult the [handling dependencies documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md) for solutions.

-- **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for stack components details deployment on popular cloud providers.
+- **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for each stack component explains deployment on popular cloud providers.

-- **Self-Hosting ZenML**: Check the documentation on [self-hosted deployments](../getting-started/deploying-zenml/README.md).
+- **Self-Hosting ZenML**: Review the [self-hosted ZenML deployment documentation](../getting-started/deploying-zenml/README.md) for options.

-- **Hyperparameter Tuning**: Refer to our guide on [hyperparameter tuning](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md).
+- **Hyperparameter Tuning**: Learn more in our [hyperparameter tuning guide](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md). -- **Resetting ZenML**: Use `zenml clean` to reset your client and wipe local metadata. This is destructive; consult us on [Slack](https://zenml.io/slack/) if unsure. +- **Resetting ZenML Client**: Run `zenml clean` to reset your client and wipe local metadata. This is destructive; consult us on [Slack](https://zenml.io/slack/) if unsure. -- **Dynamic Pipelines and Steps**: Read about composing steps and pipelines in our [starter guide](../user-guide/starter-guide/create-an-ml-pipeline.md). Code examples are also in the hyperparameter tuning guide. +- **Dynamic Pipelines and Steps**: Read about composing steps and pipelines in the [starter guide](../user-guide/starter-guide/create-an-ml-pipeline.md) and check the hyperparameter tuning guide for code examples. -- **Using Project Templates**: Utilize [project templates](../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) for quick setup. The Starter template (`starter`) is recommended. +- **Using Project Templates**: Use [project templates](../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) for quick setup. The Starter template (`starter`) is recommended for most cases. - **Upgrading ZenML**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, see the [upgrade documentation](../how-to/manage-zenml-server/upgrade-zenml-server.md). -- **Using Specific Stack Components**: Refer to the [component guide](../component-guide/README.md) for usage tips on each integration. +- **Using Specific Stack Components**: Refer to the [component guide](../component-guide/README.md) for tips on using each integration and component with ZenML.  @@ -17045,100 +17032,105 @@ This documentation provides essential information on managing ZenML global setti # Environment Variables for ZenML -ZenML allows control over its behavior through several pre-defined environment variables. Below are the key variables, their default values, and options: +ZenML allows configuration through several pre-defined environment variables: ## Logging Configuration -- **Verbosity**: +- **Verbosity**: Controls the logging level. ```bash export ZENML_LOGGING_VERBOSITY=INFO # Options: INFO, WARN, ERROR, CRITICAL, DEBUG ``` -- **Format**: + +- **Format**: Sets the logging output format. ```bash export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s' ``` ## Step Logs -- **Disable Step Logs Storage**: +- **Disable Step Logs Storage**: Prevents storing logs from steps. ```bash - export ZENML_DISABLE_STEP_LOGS_STORAGE=false # Set to true to disable storage + export ZENML_DISABLE_STEP_LOGS_STORAGE=false # Set to true to disable ``` ## Repository Path -- **ZenML Repository Path**: +- **ZenML Repository Path**: Specifies where ZenML looks for its repository. ```bash export ZENML_REPOSITORY_PATH=/path/to/somewhere ``` ## Analytics -- **Opt-out of Analytics**: +- **Opt-out of Analytics**: Disable analytics tracking. ```bash export ZENML_ANALYTICS_OPT_IN=false ``` ## Debugging and Execution Control -- **Debug Mode**: +- **Debug Mode**: Enables developer mode. ```bash export ZENML_DEBUG=true ``` -- **Active Stack**: + +- **Active Stack**: Sets the active stack by UUID. ```bash export ZENML_ACTIVE_STACK_ID=<UUID-OF-YOUR-STACK> ``` -- **Prevent Pipeline Execution**: + +- **Prevent Pipeline Execution**: Stops pipeline execution when true. 
```bash - export ZENML_PREVENT_PIPELINE_EXECUTION=false # Set to true to prevent execution + export ZENML_PREVENT_PIPELINE_EXECUTION=false # Set to true to prevent ``` -## Traceback and Logging -- **Disable Rich Traceback**: +## Traceback and Logging Options +- **Rich Traceback**: Enable or disable rich traceback. ```bash export ZENML_ENABLE_RICH_TRACEBACK=true # Set to false to disable ``` -- **Disable Colorful Logging**: + +- **Colorful Logging**: Disable colorful logging. ```bash export ZENML_LOGGING_COLORS_DISABLED=true ``` ## Stack Validation and Code Repository -- **Disable Stack Validation**: +- **Skip Stack Validation**: Disable stack validation. ```bash export ZENML_SKIP_STACK_VALIDATION=true ``` -- **Ignore Untracked Files**: - Set `ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=True` to ignore untracked files. + +- **Ignore Untracked Files**: Allows untracked files in code repositories. + ```bash + export ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=true + ``` ## Global Config Path -- **ZenML Global Config Path**: +- **ZenML Global Config Path**: Sets the path for the global config file. ```bash export ZENML_CONFIG_PATH=/path/to/somewhere ``` ## Client Configuration -- **Connect to ZenML Server**: +- **Store URL and API Key**: Connects the ZenML client to a server. ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY=<API_KEY> ``` -For additional details on server configuration, refer to the ZenML Server documentation. +For further details, refer to the respective sections in the ZenML documentation. ================================================== === File: docs/book/reference/python-client.md === -# ZenML Python Client Documentation Summary +### ZenML Python Client Overview -## Overview -The ZenML Python `Client` allows programmatic interaction with ZenML resources such as pipelines, runs, and stacks, which are stored in a database within your ZenML instance. For other programming languages, resources can be accessed via REST API endpoints. +The ZenML Python `Client` enables programmatic interaction with various ZenML resources, such as pipelines, runs, and stacks, stored in a database within your ZenML instance. For other programming languages, interaction is possible via REST API endpoints. -## Usage Example +#### Usage Example To fetch the last 10 pipeline runs for the current stack: ```python from zenml.client import Client client = Client() - my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, @@ -17150,28 +17142,27 @@ for pipeline_run in my_runs_on_current_stack: print(pipeline_run.name) ``` -## Main ZenML Resources -### Pipelines, Runs, Artifacts +#### Main ZenML Resources - **Pipelines**: Tracked pipelines. - **Pipeline Runs**: Details of executed runs. -- **Run Templates**: Templates for pipeline execution. +- **Run Templates**: Templates for running pipelines. - **Step Runs**: Steps of pipeline runs. -- **Artifacts**: Information on artifacts generated. +- **Artifacts**: Information on artifacts from runs. - **Schedules**: Metadata for scheduled runs. - **Builds**: Docker images for pipelines. - **Code Repositories**: Connected git repositories. -### Stacks, Infrastructure, Authentication +For stacks and infrastructure: - **Stack**: Registered stacks. - **Stack Components**: Components like orchestrators and artifact stores. - **Flavors**: Available stack component flavors. -- **User**: Registered users (default user for local runs). 
-- **Secrets**: Authentication secrets in the ZenML Secret Store. -- **Service Connectors**: Connectors for infrastructure integration. +- **User**: Registered users. +- **Secrets**: Authentication secrets. +- **Service Connectors**: Connectors to infrastructure. + +#### Client Methods +**List Methods**: Retrieve lists of resources. -## Client Methods -### Reading and Writing Resources -**List Methods**: ```python client.list_pipeline_runs( stack_id=client.active_stack_model.id, @@ -17180,18 +17171,20 @@ client.list_pipeline_runs( size=10, ) ``` -- Returns a `Page` of resources (default 50 results). Modify with `size` and `page` arguments. +Returns a `Page` of resources, defaulting to 50 results. Modify with `size` or `page` arguments. + +**Get Methods**: Fetch specific resources by ID, name, or prefix. -**Get Methods**: ```python client.get_pipeline_run("413cfb42-a52c-4bf1-a2fd-78af2f7f0101") # By ID -client.get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") # By Name +client.get_pipeline_run("first_pipeline-2023_06_20-16") # By Name Prefix ``` -**Create, Update, and Delete Methods**: Available for some resources; check the Client SDK documentation for specifics. +**Create, Update, Delete Methods**: Available for certain resources; check the Client SDK for specifics. + +#### Active User and Stack +Access the current user and stack: -### Active User and Stack -Access the current user and stack information: ```python my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, @@ -17199,12 +17192,12 @@ my_runs_on_current_stack = client.list_pipeline_runs( ) ``` -## Resource Models -Client methods return **Response Models** (Pydantic Models) that validate the returned data. For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. +#### Resource Models +Client methods return **Response Models**, which are Pydantic Models ensuring data validation. For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. -**Request, Update, and Filter Models** are used for server API endpoints but not for Client methods. For details on model fields, refer to the ZenML Models SDK Documentation. +**Request, Update, and Filter Models** are used for server API endpoints but not for Client methods. For detailed field information, refer to the ZenML Models SDK Documentation. -This summary captures the essential technical details and usage of the ZenML Python Client while maintaining clarity and conciseness. +This summary provides a concise overview of the ZenML Python Client, its usage, resources, methods, and models, ensuring critical information is retained for effective understanding and application. ================================================== @@ -17212,19 +17205,19 @@ This summary captures the essential technical details and usage of the ZenML Pyt ### ZenML Community & Content Overview -The ZenML community provides various ways to connect with the development team and enhance understanding of the framework. +The ZenML community provides various channels for engagement and support: -- **Slack Channel**: Join the [ZenML Slack channel](https://zenml.io/slack) for community support, discussions, and project sharing. It's a key resource for getting help and finding answers to common questions. +- **Slack Channel**: Join the [ZenML Slack channel](https://zenml.io/slack) for direct interaction with the core team and community discussions. It's a great resource for questions and sharing projects. 
-- **Social Media**: Follow us on [LinkedIn](https://www.linkedin.com/company/zenml) and [Twitter](https://twitter.com/zenml_io) for updates on releases, events, and MLOps. Engagement through comments and shares is encouraged. +- **Social Media**: Follow us on [LinkedIn](https://www.linkedin.com/company/zenml) and [Twitter](https://twitter.com/zenml_io) for updates on releases, events, and MLOps. Engage with our posts to help spread the word. -- **YouTube Channel**: Our [YouTube channel](https://www.youtube.com/c/ZenML) offers video tutorials and workshops for visual learners. +- **YouTube Channel**: Access our [YouTube channel](https://www.youtube.com/c/ZenML) for video tutorials and workshops that guide you through the ZenML framework. -- **Public Roadmap**: Contribute to our [public roadmap](https://zenml.io/roadmap) by sharing feature ideas or voting on existing suggestions, which helps guide ZenML's development. +- **Public Roadmap**: Check our [public roadmap](https://zenml.io/roadmap) to provide feedback and vote on feature priorities, helping shape ZenML's development. -- **Blog**: Visit our [Blog](https://zenml.io/blog/) for articles on tool implementation, new features, and team insights. +- **Blog**: Visit our [Blog](https://zenml.io/blog/) for articles on new features, implementation processes, and insights from our team. -- **Podcast**: Listen to our [Podcast](https://podcast.zenml.io/) for interviews and discussions on machine learning, deep learning, and MLOps. +- **Podcast**: Listen to our [Podcast](https://podcast.zenml.io/) for interviews and discussions on machine learning, deep learning, and MLOps with industry experts. - **Newsletter**: Subscribe to our [Newsletter](https://zenml.io/newsletter-signup) for updates on open-source tooling and ZenML news. @@ -17232,67 +17225,68 @@ The ZenML community provides various ways to connect with the development team a === File: docs/book/reference/llms-txt.md === -# Summary of llms.txt Documentation for ZenML +## Summary of llms.txt Documentation for ZenML -## About llms.txt -The `llms.txt` file format, proposed by [llmstxt.org](https://llmstxt.org/), provides a standardized way to supply information for LLMs to answer questions about products or websites. It includes background information, guidance, and links to detailed markdown files. The `llms.txt` file serves as a summary of ZenML documentation for answering basic questions. The base version is available at [zenml.io/llms.txt](https://zenml.io/llms.txt). +### Overview of llms.txt +The `llms.txt` file format, proposed by [llmstxt.org](https://llmstxt.org/), standardizes information delivery to assist LLMs in answering questions about products/websites. It combines human and LLM readability with a structured format suitable for processing methods like parsers and regex. The ZenML `llms.txt` file serves as a summary of its documentation, facilitating basic inquiries about ZenML. The base version is accessible at [zenml.io/llms.txt](https://zenml.io/llms.txt). 
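+
+As a rough sketch of how such a file is consumed, you can fetch it and prepend it to an LLM prompt as grounding context. The snippet below only uses `requests` and plain string assembly; the question text is an arbitrary example:
+
+```python
+import requests
+
+# Fetch the base llms.txt summary of the ZenML documentation.
+response = requests.get("https://zenml.io/llms.txt", timeout=30)
+response.raise_for_status()
+docs_context = response.text
+
+# Instruct the model to answer only from the provided text to avoid hallucinations.
+question = "How do I register a ZenML stack?"
+prompt = (
+    "Answer strictly based on the ZenML documentation below. "
+    "If the answer is not in the text, say so.\n\n"
+    f"{docs_context}\n\nQuestion: {question}"
+)
+print(f"Prompt assembled with {len(docs_context)} characters of context.")
+```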
-## Available llms.txt Files -ZenML offers multiple `llms.txt` files to cover its extensive documentation, accessible via ZenML's [HuggingFace dataset](https://huggingface.co/datasets/zenml/llms.txt): +### Available llms.txt Files +ZenML provides multiple `llms.txt` files tailored to different documentation sections, available on the ZenML [HuggingFace dataset](https://huggingface.co/datasets/zenml/llms.txt): -| File | Tokens | Purpose | -|--------------------------|--------|--------------------------------------------------------------| -| [llms.txt](https://zenml.io/llms.txt) | 120k | Basic ZenML concepts and getting started information | -| [component-guide.txt](https://zenml.io/component-guide.txt) | 180k | Details about ZenML integrations and stack components | -| [how-to-guides.txt](https://zenml.io/how-to-guides.txt) | 75k | Summarized how-to guides for common ZenML workflows | -| [llms-full.txt](https://zenml.io/llms-full.txt) | 600k | Complete, unabridged ZenML documentation | +| File | Tokens | Purpose | +|--------------------------|--------|-----------------------------------------------------------| +| [llms.txt](https://zenml.io/llms.txt) | 120k | Basic concepts and getting started information | +| [component-guide.txt](https://zenml.io/component-guide.txt) | 180k | Details on ZenML integrations and stack components | +| [how-to-guides.txt](https://zenml.io/how-to-guides.txt) | 75k | Summarized how-to guides for common workflows | +| [llms-full.txt](https://zenml.io/llms-full.txt) | 600k | Complete ZenML documentation, unabridged | ### File Details -1. **llms.txt**: Covers [User Guides](../user-guide/starter-guide/README.md) and [Getting Started](../getting-started/installation.md) sections. -2. **component-guide.txt**: Contains details on all [stack components in ZenML](../component-guide/README.md). -3. **how-to-guides.txt**: Summarizes the [how-to section](../how-to/manage-zenml-server/README.md) of documentation. -4. **llms-full.txt**: Comprehensive ZenML documentation for the most accurate answers. - -## How to Use the llms.txt Files -- Select the file relevant to your inquiry about ZenML. -- Each file prefixes text with its filename, aiding in referencing when answering questions. -- You can combine files for enhanced accuracy, provided your context window allows it. -- Instruct the LLM to avoid answers not directly sourced from the text to prevent hallucinations. +1. **llms.txt**: Covers User Guides and Getting Started sections; ideal for basic questions. +2. **component-guide.txt**: Contains information on stack components and integrations. +3. **how-to-guides.txt**: Summarized pages from the how-to section; useful for process-related queries. +4. **llms-full.txt**: Comprehensive documentation for in-depth answers. + +### Usage Recommendations +- Select the appropriate file based on the information needed. +- Each file prefixes text with its filename, aiding in referencing during responses. +- Combine files if context allows for more accurate answers. +- Instruct the LLM to provide answers directly sourced from the text to prevent hallucinations. - Use models with large context windows, like Gemini, due to high token counts. ================================================== === File: docs/book/reference/faq.md === -# ZenML FAQ Summary +### ZenML FAQ Summary -### Purpose of ZenML -ZenML was created to address challenges faced while deploying machine learning models in production, aiming for a simple, production-ready solution for large-scale ML pipelines. 
+#### Purpose of ZenML +ZenML was developed to address challenges in deploying machine learning models in production, providing a simple, production-ready solution for large-scale ML pipelines. -### ZenML vs. Orchestrators -ZenML is not just another orchestrator like Airflow or Kubeflow. Instead, it is a framework that allows you to run pipelines on any orchestrator, coordinating with various components of an ML system. Standard orchestrators are supported out-of-the-box, and users can create custom orchestrators for more control. +#### ZenML vs. Orchestrators +ZenML is not just another orchestrator like Airflow or Kubeflow. It is a framework that allows users to run pipelines on any orchestrator, coordinating with all components of an ML system. Users can also create custom orchestrators for more control. -### Tool Integration -For integration with tools, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md) for instructions and sample code. The ZenML team is continuously adding more integrations, and users can contribute ideas to the [roadmap](https://zenml.io/roadmap) and [upvote features](https://zenml.io/discussion). ZenML is designed to be extensible with other tools. +#### Tool Integration +For integration with tools, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md) for instructions and sample code. The ZenML team continuously adds new integrations, and users can suggest features via the [roadmap](https://zenml.io/roadmap) and [discussion page](https://zenml.io/discussion). ZenML is extensible, allowing integration with various tools in the ML process. -### OS Support +#### OS Support - **Windows**: Officially supported via WSL; limited functionality outside WSL. -- **Macs with Apple Silicon**: Supported, but set the following environment variable for local server use: +- **Apple Silicon Macs**: Supported with the environment variable: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` + This is necessary for local server use but not required for CLI use with a deployed server. -### Custom Tool Integration -For guidance on integrating custom tools, refer to the [custom stack component guide](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). +#### Custom Tool Integration +Extending ZenML for custom tools depends on the tool and its MLOps category. A comprehensive guide is available [here](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). -### Contribution +#### Community Contribution To contribute, start with issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). -### Community Engagement -Join the [Slack group](https://zenml.io/slack/) for community support and discussions. +#### Community Engagement +Join the [Slack group](https://zenml.io/slack/) for community support and inquiries. -### License -ZenML is licensed under the Apache License Version 2.0. Full license details are available in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). Contributions are also licensed under this license. +#### Licensing +ZenML is licensed under the Apache License Version 2.0. Contributions to the project will also be licensed under this agreement. 
The full license is available in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). ================================================== @@ -17303,42 +17297,37 @@ ZenML is licensed under the Apache License Version 2.0. Full license details are The ZenML Starter Guide is designed for MLOps engineers and data scientists to build robust ML platforms using the ZenML framework. It provides foundational knowledge and tools for managing machine learning operations. ## Key Topics Covered: -- **Creating Your First ML Pipeline**: Learn how to set up a basic ML pipeline. -- **Understanding Caching**: Explore caching mechanisms between pipeline steps. -- **Managing Data and Versioning**: Techniques for data management and version control. -- **Tracking ML Models**: Methods for tracking and managing machine learning models. +- **Creating your first ML pipeline**: Guidance on setting up a basic ML pipeline. +- **Understanding caching between pipeline steps**: Techniques for caching previous executions to improve efficiency. +- **Managing data and data versioning**: Best practices for handling data and maintaining version control. +- **Tracking your machine learning models**: Methods for monitoring and managing ML models throughout their lifecycle. ## Prerequisites: - A Python environment and `virtualenv` installed. -By the end of the guide, users will complete a starter project, marking the beginning of their MLOps journey with ZenML. - -For additional support, refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal ZenML functions and classes. +By the end of the guide, users will complete a starter project, marking their entry into MLOps with ZenML. For additional support, refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal functions and classes. -Prepare your development environment to get started! +Prepare your development environment and start your MLOps journey with ZenML! ================================================== === File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md === -### ZenML Pipeline Overview - -ZenML simplifies the creation of production-ready machine learning (ML) pipelines by decoupling stages like data ingestion, preprocessing, and model evaluation into modular **Steps** that can be integrated into an end-to-end **Pipeline**. This approach enhances manageability, reusability, and scalability, ensuring reproducibility and efficiency. - -#### Installation +### Summary of ZenML Pipeline Documentation -Before starting, install ZenML and initialize your project: +**Overview of ZenML Pipelines** +ZenML facilitates the creation of modular and scalable machine learning (ML) pipelines by decoupling stages like data ingestion, preprocessing, and model evaluation into **Steps** that integrate into an end-to-end **Pipeline**. This structure enhances reproducibility and efficiency in ML workflows. 
+**Installation**
+To get started, install ZenML and initialize your project:
```shell
pip install "zenml[server]"
-zenml login --local # Launches the dashboard locally
-zenml init # Set up project repository
+zenml login --local
+zenml init # Recommended for new projects
```

-### Simple ML Pipeline Example
-
-Here’s how to set up a basic pipeline in ZenML:
-
+**Creating a Simple ML Pipeline**
+A basic ML pipeline can be set up as follows:
```python
from zenml import pipeline, step

@@ -17363,127 +17352,94 @@ def simple_ml_pipeline():
 if __name__ == "__main__":
     run = simple_ml_pipeline()
```
-
-Run the script with:
-
+Run the script using:
```bash
python run.py
```

-### Dashboard Exploration
-
+**Dashboard Exploration**
After execution, view results in the ZenML Dashboard by running:
-
```bash
zenml login --local
```
+Access the dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/) with the username **"default"**.

-Access the dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/) and log in with the username **"default"**. Explore execution history, artifacts, and DAG visualization.
-
-### Steps and Artifacts
-
-Each function in the pipeline is a `step`, connected by `artifacts`, which are the outputs of these functions. ZenML automatically tracks artifacts, parameters, and configurations for reproducibility.
+**Understanding Steps and Artifacts**
+Each function in the pipeline is a `step`, connected by `artifacts` (returned objects). ZenML automatically tracks artifacts, parameters, and configurations for reproducibility.

-### Expanding to a Full ML Workflow
-
-Using the Iris dataset, we can create a more complex workflow:
-
-#### Required Imports
-
-```python
-from typing_extensions import Annotated, Tuple
-import pandas as pd
-from sklearn.datasets import load_iris
-from sklearn.model_selection import train_test_split
-from sklearn.base import ClassifierMixin
-from sklearn.svm import SVC
-from zenml import pipeline, step
-```
-
-Install required packages:
-
-```bash
-pip install matplotlib
-zenml integration install sklearn -y
-```
-
-#### Data Loader Step
-
-Define a data loader with multiple outputs:
-
-```python
-@step
-def training_data_loader() -> Tuple[
-    Annotated[pd.DataFrame, "X_train"],
-    Annotated[pd.DataFrame, "X_test"],
-    Annotated[pd.Series, "y_train"],
-    Annotated[pd.Series, "y_test"],
-]:
-    iris = load_iris(as_frame=True)
-    return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
-```
-
-#### Training Step
+**Expanding to a Full ML Workflow**
+For a complete workflow using the Iris dataset, follow these steps:

-Create a training step for the SVC classifier:
-
-```python
-@step
-def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[
-    Annotated[ClassifierMixin, "trained_model"],
-    Annotated[float, "training_acc"],
-]:
-    model = SVC(gamma=gamma)
-    model.fit(X_train.to_numpy(), y_train.to_numpy())
-    return model, model.score(X_train.to_numpy(), y_train.to_numpy())
-```
+1. **Imports and Requirements**:
+   ```python
+   from typing_extensions import Annotated, Tuple
+   import pandas as pd
+   from sklearn.datasets import load_iris
+   from sklearn.model_selection import train_test_split
+   from sklearn.base import ClassifierMixin
+   from sklearn.svm import SVC
+   from zenml import pipeline, step
+   ```
+   Install additional requirements:
+   ```bash
+   pip install matplotlib
+   zenml integration install sklearn -y
+   ```

-#### Pipeline Definition
-
-Combine the steps into a pipeline:
+2. 
**Data Loader**:
+   ```python
+   @step
+   def training_data_loader() -> Tuple[
+       Annotated[pd.DataFrame, "X_train"],
+       Annotated[pd.DataFrame, "X_test"],
+       Annotated[pd.Series, "y_train"],
+       Annotated[pd.Series, "y_test"],
+   ]:
+       iris = load_iris(as_frame=True)
+       return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
+   ```

-```python
-@pipeline
-def training_pipeline(gamma: float = 0.002):
-    X_train, X_test, y_train, y_test = training_data_loader()
-    svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)
+3. **Training Step**:
+   ```python
+   @step
+   def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[
+       Annotated[ClassifierMixin, "trained_model"],
+       Annotated[float, "training_acc"],
+   ]:
+       model = SVC(gamma=gamma)
+       model.fit(X_train.to_numpy(), y_train.to_numpy())
+       return model, model.score(X_train.to_numpy(), y_train.to_numpy())
+   ```

-if __name__ == "__main__":
-    training_pipeline(gamma=0.0015)
-```
+4. **Pipeline Definition**:
+   ```python
+   @pipeline
+   def training_pipeline(gamma: float = 0.002):
+       X_train, X_test, y_train, y_test = training_data_loader()
+       svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train)

-### YAML Configuration
+   if __name__ == "__main__":
+       training_pipeline()
+   ```

+**Configuration with YAML**
You can configure pipeline runs using a YAML file:
```python
training_pipeline = training_pipeline.with_options(config_path='/local/path/to/config.yaml')
training_pipeline()
```
-
-Example YAML file:
-
+Sample YAML configuration:
```yaml
parameters:
  gamma: 0.01
```

-Generate a template config file with:
-
-```python
-training_pipeline.write_run_configuration_template(path='/local/path/to/config.yaml')
-```
-
-### Full Code Example
-
-Here’s the complete code for the ML pipeline:
-
+**Full Code Example**
+Here’s a complete script combining all the components:
```python
from typing_extensions import Tuple, Annotated
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC
from zenml import pipeline, step

@@ -17507,24 +17463,24 @@ if __name__ == "__main__":
    training_pipeline()
```

-This summary encapsulates the key points and technical details needed to understand and implement ZenML pipelines effectively.
+This summary captures the essential information and code snippets necessary for understanding and implementing ZenML pipelines.

==================================================

=== File: docs/book/user-guide/starter-guide/track-ml-models.md ===

-# Summary of ZenML Model Control Plane Documentation
+# ZenML Model Control Plane Overview

-## Overview
-ZenML's Model Control Plane (MCP) provides a centralized way to manage ML models, which consist of multiple versions, pipelines, artifacts, and metadata. A ZenML Model encapsulates the business logic of an ML product and includes technical models, training data, and predictions.
+## ZenML Model Definition
+A **ZenML Model** is an entity that groups pipelines, artifacts, metadata, and business data into a unified structure, encapsulating the business logic of an ML product. Key artifacts associated with a model include the technical model (model file with weights and parameters), training data, and production predictions.

-## Key Concepts
-- **Model**: An entity that groups pipelines, artifacts, and metadata.
-- **Technical Model**: The actual model file(s) containing weights and parameters.
-**Model Management**: Models can be managed via the ZenML API, CLI, or ZenML Pro dashboard.
+## Model Management
+Models are managed through the ZenML API and can be viewed via:
+- **CLI**: Use `zenml model list` to list all models.
+- **ZenML Pro Dashboard**: Offers visualization capabilities.

-## Model Configuration in Pipelines
-Models can be linked to pipelines, ensuring all artifacts generated during runs are associated with the specified model. This allows for lineage tracking.
+## Configuring a Model in a Pipeline
+To link a model to a pipeline, pass a `Model` object either at the pipeline or step level. This ensures all artifacts generated during the pipeline run are associated with the model, enabling lineage tracking.

### Example Code
```python
-from zenml import pipeline, Model
+from zenml import pipeline, step, Model

model = Model(name="iris_classifier", version=None, license="Apache 2.0", description="A classification model for the iris dataset.")

+@step(model=model)
+def svc_trainer(...):
+    ...
+
@pipeline(model=model)
def training_pipeline(gamma: float = 0.002):
    X_train, X_test, y_train, y_test = training_data_loader()
@@ -17541,19 +17501,20 @@ if __name__ == "__main__":
    training_pipeline()
```

-## Viewing Models and Versions
-Models can be viewed and managed using:
-- **CLI**:
-  - `zenml model list` to list all models.
-  - `zenml model version list <MODEL_NAME>` for model versions.
-- **ZenML Pro Dashboard**: Offers visualizations of models and associated artifacts.
+## Viewing Model Versions and Artifacts
+To list model versions and associated artifacts:
+- **CLI**:
+  - `zenml model version list <MODEL_NAME>`
+  - `zenml model version runs <MODEL_NAME> <MODEL_VERSIONNAME>`
+  - `zenml model version data_artifacts <MODEL_NAME> <MODEL_VERSIONNAME>`
+- **ZenML Pro Dashboard**: Visualizes runs and artifacts.

-## Fetching Models in Pipelines
-Models can be accessed in pipeline steps using `get_step_context()` or `get_pipeline_context()`.
+## Fetching the Model in a Pipeline
+Models can be accessed through the `StepContext` or `PipelineContext`.

### Example Code
```python
-from zenml import get_step_context, step, pipeline
+from zenml import get_step_context, get_pipeline_context, step, pipeline

@step
def svc_trainer(X_train, y_train, gamma=0.001):
@@ -17564,27 +17525,36 @@ def training_pipeline(gamma=0.002):
    model = get_pipeline_context().model
```

-## Logging Metadata
-Models can log metadata using the `log_model_metadata` method, allowing for tracking of performance metrics.
+## Logging Metadata to the Model
+Models can log metadata using the `log_model_metadata` method.

### Example Code
```python
-from zenml import log_model_metadata
+from zenml import step, log_model_metadata

@step
def svc_trainer(X_train, y_train, gamma=0.001):
-    model = get_step_context().model
+    ...
    log_model_metadata(model_name="iris_classifier", metadata={"accuracy": float(accuracy)})
```

+## Accessing Model Metadata
+To retrieve logged metadata:
+```python
+from zenml.client import Client
+
+model_version = Client().get_model_version('iris_classifier')
+accuracy = model_version.run_metadata["accuracy"].value
+```
+
## Model Stages
Models can exist in various stages:
- **staging**: Ready for production.
-- **production**: Active in production.
+- **production**: Currently in use.
- **latest**: Most recent version.
- **archived**: No longer relevant.
-### Example Code +### Example Code for Stages ```python model = Model(name="iris_classifier", version="latest") model.set_stage(stage="production", force=True) @@ -17595,25 +17565,26 @@ model.set_stage(stage="production", force=True) - Update to production: `zenml model version update <MODEL_NAME> <MODEL_VERSIONNAME> -s production` ## Conclusion -ZenML's Model Control Plane facilitates effective management of ML models, enabling traceability and reproducibility in ML workflows. For further details, refer to the dedicated Model Management guide. +ZenML's Model Control Plane provides powerful features for managing ML models, including versioning, metadata logging, and stage management. For in-depth usage, refer to the dedicated Model Management guide. ================================================== === File: docs/book/user-guide/starter-guide/starter-project.md === -### Starter Project Overview +### Summary of Starter Project Documentation -This documentation outlines the steps to initiate a simple MLOps project using ZenML, covering essential components such as pipelines, artifacts, and models. +#### Overview +This documentation guides you through a simple starter project to apply foundational MLOps concepts, including pipelines, artifacts, and models. #### Getting Started - -1. **Set Up Environment**: Create a fresh virtual environment and install dependencies: +1. **Create a Virtual Environment**: Begin with a fresh virtual environment. +2. **Install Dependencies**: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` -2. **Initialize Project with ZenML Templates**: +3. **Set Up Project with ZenML Templates**: ```bash mkdir zenml_starter cd zenml_starter @@ -17621,7 +17592,7 @@ This documentation outlines the steps to initiate a simple MLOps project using Z pip install -r requirements.txt ``` - **Alternative Method**: If the above steps fail, clone the starter example: + **Alternative Setup**: If the above steps don't work, clone the starter template: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/mlops_starter @@ -17630,15 +17601,20 @@ This documentation outlines the steps to initiate a simple MLOps project using Z ``` #### Learning Outcomes - -You can follow along with the [Jupyter notebook](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/quickstart.ipynb) or refer to the [README file](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/README.md). You will execute three key pipelines: +You will run three exemplary pipelines: - **Feature Engineering Pipeline**: Loads and prepares data for training. -- **Training Pipeline**: Trains a model using the preprocessed dataset. -- **Batch Inference Pipeline**: Runs predictions on new data with the trained model. +- **Training Pipeline**: Loads the preprocessed dataset and trains a model. +- **Batch Inference Pipeline**: Makes predictions on new data using the trained model. + +#### Next Steps +Experiment with ZenML to solidify your understanding. Once comfortable, proceed to the [production guide](../production-guide/) for advanced topics. 
-#### Conclusion and Next Steps +#### Additional Resources +- Accompanying Jupyter notebook: [Quickstart Notebook](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/quickstart.ipynb) +- README for further instructions: [Quickstart README](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/README.md) +- ZenML starter template: [Starter Template](https://github.com/zenml-io/template-starter) -This concludes the introductory chapter of your MLOps journey with ZenML. Experiment with ZenML to solidify your understanding, and when ready, proceed to the [production guide](../production-guide/) for further learning. +This concludes the initial chapter of your MLOps journey with ZenML. ================================================== @@ -17646,16 +17622,14 @@ This concludes the introductory chapter of your MLOps journey with ZenML. Experi # ZenML Artifact Management Overview -ZenML provides a framework for managing and versioning data artifacts in machine learning workflows, ensuring reproducibility and traceability. This guide covers artifact versioning, naming, metadata management, and external artifact handling. +## Introduction +ZenML automates the versioning and management of artifacts in machine learning workflows, ensuring reproducibility and traceability. This guide covers how to name, organize, and utilize artifacts effectively within the ZenML framework. ## Managing Artifacts - -Artifacts are outputs from ZenML steps and pipelines, automatically versioned and stored in the artifact store. +Artifacts are outputs from steps and pipelines, automatically versioned and stored. Proper configuration is essential for efficient pipeline development. ### Naming Artifacts - -Use the `Annotated` object to assign human-readable names to artifacts for better discoverability: - +Use the `Annotated` object to assign human-readable names to outputs: ```python from typing_extensions import Annotated import pandas as pd @@ -17674,13 +17648,10 @@ def feature_engineering_pipeline(): if __name__ == "__main__": feature_engineering_pipeline() ``` - -Unspecified outputs default to `{pipeline_name}::{step_name}::output`. Use ZenML CLI or dashboard to list artifacts. +**Default Naming:** Unnamed outputs follow the pattern `{pipeline_name}::{step_name}::output`. ### Manual Versioning - -ZenML auto-versions artifacts but allows custom versions via `ArtifactConfig` for critical runs: - +ZenML auto-versions artifacts, but you can specify custom versions using `ArtifactConfig`: ```python from zenml import step, ArtifactConfig @@ -17688,30 +17659,28 @@ from zenml import step, ArtifactConfig def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]: ... ``` - -Custom versions must be unique and can be listed using CLI or dashboard. +**Note:** Custom versions must be unique. 
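+
+Named versions can later be retrieved programmatically. A minimal sketch using the `Client`, assuming the `iris_dataset` artifact with the custom `raw_2023` version registered in the snippet above:
+
+```python
+from zenml.client import Client
+
+client = Client()
+# Fetch the custom-versioned artifact registered by the step above.
+artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset", version="raw_2023")
+df = artifact.load()  # materialize the stored DataFrame
+print(df.shape)
+```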
### Adding Metadata and Tags
-
-Extend artifacts with metadata and tags:
-
+You can enrich artifacts with metadata and tags:
```python
-from zenml import step, get_step_context, ArtifactConfig
-from typing_extensions import Annotated
-
@step
-def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"metadata_key": "metadata_value"}, tags=["tag_name"])]:
+def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"key": "value"}, tags=["tag_name"])]:
    return "string"
```

-### Comparing Metadata (Pro Feature)
+## Comparing Metadata Across Runs (Pro)
+The ZenML Pro dashboard offers tools for visualizing and analyzing metadata across pipeline runs, including:
+- **Table View:** Compare metadata values, track changes, and filter results.
+- **Parallel Coordinates View:** Identify relationships between metadata parameters.

-The ZenML Pro dashboard includes an Experiment Comparison tool for visualizing metadata changes across runs, offering table and parallel coordinates views.
-
-### Specifying Artifact Types
-
-Assign types to artifacts for better filtering in the dashboard:
+### Accessing the Comparison Tool
+1. Navigate to a pipeline in the dashboard.
+2. Click "Compare" and select runs.
+3. Switch between views.

+## Artifact Types
+Specify artifact types for better filtering and visualization:
```python
from zenml import ArtifactConfig, step
from zenml.enums import ArtifactType

@@ -17722,9 +17691,7 @@ def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactT
```

## Consuming External Artifacts
-
-Use `ExternalArtifact` to initialize artifacts from external sources:
-
+Use `ExternalArtifact` to initialize artifacts from non-ZenML sources:
```python
import numpy as np
from zenml import ExternalArtifact, pipeline, step

@@ -17742,78 +17709,37 @@ if __name__ == "__main__":
    printing_pipeline()
```

-### Fetching Artifacts from Other Pipelines
-
-Use the `Client` to fetch artifacts from previous runs:
-
-```python
-from uuid import UUID
-import pandas as pd
-from zenml import step, pipeline
-from zenml.client import Client
-
-@step
-def trainer(dataset: pd.DataFrame):
-    ...
-
-@pipeline
-def training_pipeline():
-    client = Client()
-    dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
-    trainer(dataset=dataset_artifact)
-
-if __name__ == "__main__":
-    training_pipeline()
-```
-
## Managing Non-ZenML Artifacts
-
-You can also manage artifacts created outside ZenML:
-
+You can save predictions or other artifacts produced externally:
```python
-from zenml.client import Client
-from zenml import save_artifact
+from zenml import save_artifact

-model = ... # Fetch or create your model
+model = ...
prediction = model.predict([[1, 1, 1, 1]])
save_artifact(prediction, name="iris_predictions")
```

-Load artifacts using:
-
-```python
-load_artifact("iris_predictions")
-```
-
## Linking Existing Data
-
-Link external data as ZenML artifacts:
-
+Link pre-existing data as ZenML artifacts:
```python
import os
-from zenml.client import Client
-from zenml import register_artifact
-from pytorch_lightning import Trainer
+from zenml import register_artifact
+from zenml.client import Client
+from pytorch_lightning import Trainer
+from pytorch_lightning.callbacks import ModelCheckpoint
from uuid import uuid4

prefix = Client().active_stack.artifact_store.path
default_root_dir = os.path.join(prefix, uuid4().hex)
-model = ...
-trainer = Trainer(default_root_dir=default_root_dir) +trainer = Trainer(default_root_dir=default_root_dir, callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1)]) trainer.fit(model) register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` ## Logging Metadata - -Log metadata for artifacts: - +Associate metadata with artifacts using `log_artifact_metadata`: ```python -from zenml import step, log_artifact_metadata - @step -def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]: +def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) @@ -17821,9 +17747,7 @@ def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.n ``` ## Code Example - -Here’s a consolidated example of artifact management: - +Here’s a complete example combining the concepts: ```python from typing import Optional, Tuple from typing_extensions import Annotated @@ -17834,27 +17758,27 @@ from sklearn.svm import SVC from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata, save_artifact, load_artifact, Client @step -def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset", tags=["digits"])]: +def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset")]: digits = load_digits() return (digits.images.reshape((len(digits.images), -1)), digits.target) @step -def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]: +def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline -def model_finetuning_pipeline(dataset_version: Optional[str] = None, model_version: Optional[str] = None): +def model_finetuning_pipeline(dataset_version: Optional[str] = None): client = Client() dataset = client.get_artifact_version(name_id_or_prefix="my_dataset", version=dataset_version) if dataset_version else versioned_data_loader_step() - model = client.get_artifact_version(name_id_or_prefix="my_model", version=model_version) + model = client.get_artifact_version(name_id_or_prefix="my_model") model_finetuner_step(model=model, dataset=dataset) def main(): untrained_model = SVC(gamma=0.001) - save_artifact(untrained_model, name="my_model", version="1", tags=["SVC", "untrained"]) + save_artifact(untrained_model, name="my_model", version="1") model_finetuning_pipeline() model_finetuning_pipeline(dataset_version="1") latest_trained_model = load_artifact("my_model") @@ -17865,7 +17789,7 @@ if __name__ == "__main__": main() ``` -This example demonstrates artifact creation, versioning, metadata logging, and consumption within a ZenML pipeline. +This overview provides a comprehensive guide to managing artifacts in ZenML, ensuring efficient and reproducible machine learning workflows. For further details, refer to the [ZenML documentation](https://zenml.io/docs). 
==================================================
@@ -17873,43 +17797,46 @@ This summary captures the essential points of ZenML's caching mechanism, includi

### Summary of ZenML Caching Documentation

-**Overview**: ZenML facilitates iterative development of machine learning pipelines through caching, which speeds up execution by reusing outputs from previous runs when inputs, parameters, or code remain unchanged.
+#### Overview
+ZenML enhances the iterative development of machine learning pipelines through step caching, which reuses outputs from previous runs when inputs, parameters, or code remain unchanged. Caching is enabled by default, allowing faster execution by avoiding unnecessary re-runs of steps.

-**Caching Behavior**:
-- Caching is enabled by default in ZenML.
-- Outputs are stored in the artifact store, allowing reuse in subsequent runs.
-- If no changes are detected, ZenML will use cached outputs, avoiding unnecessary re-execution of steps.
+#### Caching Behavior
+- **Default Caching**: ZenML automatically caches outputs in the artifact store, enabling faster subsequent runs.
+- **Client-Side Caching**: When running pipelines without a schedule, cached steps are computed on the client machine, saving time and resources. To prevent client-side caching, set the `ZENML_PREVENT_CLIENT_SIDE_CACHING` environment variable to `True`.
+- **Manual Caching Control**: Caching does not detect changes in external inputs or file systems, so steps that depend on them should have caching disabled explicitly.

-**Client-Side Caching**:
-- When running pipelines without a schedule, cached steps are computed on the client machine, saving time and costs.
-- To prevent client-side caching, set the environment variable `ZENML_PREVENT_CLIENT_SIDE_CACHING=True`.
+```python
+@step(enable_cache=False)
+def load_data_from_external_system(...) -> ...:
+    # This step will always be run
+    ...
+```

-**Manual Caching Control**:
-- Caching does not automatically detect changes in external inputs or file systems. Use `enable_cache=False` for steps that depend on such changes.
+#### Configuring Caching
+1. **Pipeline Level**: Caching can be controlled at the pipeline level using the `@pipeline` decorator.

-**Configuring Caching**:
-1. **Pipeline Level**: Set caching policy in the `@pipeline` decorator.
-   ```python
-   @pipeline(enable_cache=False)
-   def first_pipeline(...):
-       pass
-   ```
-   This disables caching for all steps unless overridden at the step level.
+```python
+@pipeline(enable_cache=False)
+def first_pipeline(...):
+    """Pipeline with cache disabled unless overridden at the step level."""
+```

-2. **Runtime Control**: Override caching settings at runtime using `with_options`.
-   ```python
-   first_pipeline = first_pipeline.with_options(enable_cache=False)
-   ```
+2. **Runtime Configuration**: Caching settings can be overridden at runtime using `with_options`.

-3. **Step Level**: Configure caching for individual steps.
-   ```python
-   @step(enable_cache=False)
-   def import_data_from_api(...):
-       pass
-   ```
-   This can also be adjusted using `with_options`.
+```python
+first_pipeline = first_pipeline.with_options(enable_cache=False)
+```
+
+3. **Step Level**: Caching can also be configured for individual steps using the `@step` decorator.
+ +```python +@step(enable_cache=False) +def import_data_from_api(...): + """Import most up-to-date data from public API""" +``` + +#### Example Code +The following code demonstrates caching in a simple training pipeline: -**Code Example**: ```python from typing_extensions import Tuple, Annotated import pandas as pd @@ -17940,13 +17867,16 @@ def training_pipeline(gamma: float = 0.002): if __name__ == "__main__": training_pipeline() - training_pipeline(gamma=0.0001) # Caching behavior changes due to parameter change + logger.info("\n\nFirst step cached, second not due to parameter change") + training_pipeline(gamma=0.0001) + logger.info("\n\nFirst step cached, second not due to settings") svc_trainer = svc_trainer.with_options(enable_cache=False) - training_pipeline() # Disable cache for second step - training_pipeline.with_options(enable_cache=False)() # Disable cache for entire pipeline + training_pipeline() + logger.info("\n\nCaching disabled for the entire pipeline") + training_pipeline.with_options(enable_cache=False)() ``` -This summary captures the essential points of ZenML's caching mechanism, including configuration options and a concise code example demonstrating its usage. +This code illustrates how ZenML handles caching, including scenarios where caching is preserved or disabled based on input changes or explicit settings. ================================================== @@ -17955,22 +17885,21 @@ This summary captures the essential points of ZenML's caching mechanism, includi ### Summary of ZenML Pipeline Configuration Documentation #### Overview -This documentation details how to configure a ZenML pipeline, particularly focusing on adding compute resources and utilizing a YAML configuration file. +This documentation explains how to configure a ZenML pipeline to add compute resources and manage dependencies through a YAML configuration file. #### Configuring the Pipeline -To configure a pipeline, the `run.py` script sets the configuration path and executes the pipeline: +To configure the pipeline, the following code snippet is used to set the configuration path and execute the pipeline: ```python pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml") training_pipeline_configured = training_pipeline.with_options(**pipeline_args) training_pipeline_configured() ``` - -The configuration file `training_rf.yaml` can be defined in YAML format. +The `training_rf.yaml` file contains the pipeline configuration. #### YAML Configuration Breakdown -1. **Docker Settings**: +1. **Docker Settings** ```yaml settings: docker: @@ -17979,9 +17908,9 @@ The configuration file `training_rf.yaml` can be defined in YAML format. requirements: - pyarrow ``` - - Specifies Docker settings for the pipeline, including required libraries. + This section specifies Docker settings, including required libraries like `pyarrow` and the `sklearn` integration. -2. **Model Association**: +2. **Model Association** ```yaml model: name: breast_cancer_classifier @@ -17990,37 +17919,39 @@ The configuration file `training_rf.yaml` can be defined in YAML format. description: A breast cancer classifier tags: ["breast_cancer", "classifier"] ``` - - Associates a ZenML model with the pipeline. + This section associates a ZenML model with the pipeline, allowing tracking of model versions. -3. **Parameters**: +3. **Parameters** ```yaml parameters: model_type: "rf" # Choose between rf/sgd ``` - - Defines parameters expected by the pipeline. 
+   This key defines parameters expected by the pipeline, such as `model_type`.

-#### Scaling Compute on the Cloud
-To scale resources, modify the `training_rf.yaml` file:
+#### Scaling Compute Resources
+To scale compute resources, modify the `training_rf.yaml` file as follows:

```yaml
settings:
  orchestrator:
    memory: 32 # in GB
+
steps:
  model_trainer:
    settings:
      orchestrator:
        cpus: 8
```
-- This configuration allocates 32 GB of memory for the pipeline and 8 CPU cores for the model trainer step.
+This configuration allocates 32 GB of memory for the entire pipeline and 8 CPU cores for the model trainer step.

-**For Microsoft Azure Users**:
-Use the Kubernetes orchestrator with the following configuration:
+##### Azure Users
+For Azure, the configuration should look like this:

```yaml
settings:
  resources:
    memory: "32GB"
+
steps:
  model_trainer:
    settings:
@@ -18029,17 +17960,17 @@ steps:
```

#### Running the Pipeline
-Execute the pipeline with the command:
+To execute the pipeline with the new configuration, use the command:

```bash
python run.py --training-pipeline
```

#### Additional Notes
-- Not all orchestrators support `ResourceSettings`. For more details, refer to the ZenML documentation on runtime configuration and GPU training.
-- The `with_options` method is one way to configure a pipeline; direct configuration in decorators is also possible but should be avoided to maintain code clarity.
+- Not all orchestrators support `ResourceSettings`.
+- For more information on settings and GPU attachment, refer to the ZenML documentation on runtime configuration and GPU training.

-This summary encapsulates the essential technical details and instructions for configuring and scaling a ZenML pipeline while omitting redundant explanations.
+This summary captures the essential details of configuring and scaling a ZenML pipeline while maintaining clarity and conciseness.

==================================================

=== File: docs/book/user-guide/production-guide/remote-storage.md ===

### Transitioning to Remote Artifact Storage

#### Overview
-Transitioning to remote artifact storage enhances collaboration and scalability for production workloads by storing artifacts in the cloud, making them accessible from anywhere with the right permissions.
-
-#### Connecting Remote Storage
-When using remote storage, the only change is that artifacts are stored centrally.
+Transitioning from local artifact storage to remote storage enhances collaboration and scalability in production environments. Remote storage allows artifacts to be stored in the cloud, making them accessible from anywhere with the right permissions.

#### Provisioning and Registering a Remote Artifact Store
-ZenML supports various artifact store flavors. Here are instructions for major cloud providers:
+ZenML supports various artifact store flavors. Below are instructions for major cloud providers:

-**AWS S3**
-1. Install AWS CLI: [AWS CLI Documentation](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
-2. Install ZenML S3 integration:
+##### AWS
+1. Install the ZenML S3 integration:
   ```shell
   zenml integration install s3 -y
   ```
-3. Register S3 Artifact Store:
+2. Register S3 Artifact Store:
   ```shell
   zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name
   ```

-**GCP GCS**
-1. Install Google Cloud CLI: [Google Cloud Documentation](https://cloud.google.com/sdk/docs/install-sdk).
-2. Install ZenML GCP integration:
+##### GCP
+1. Install the ZenML GCP integration:
   ```shell
   zenml integration install gcp -y
   ```
-3. Register GCS Artifact Store:
+2. Register GCS Artifact Store:
   ```shell
   zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name
   ```

-**Azure Blob Storage**
-1. Install Azure CLI: [Azure Documentation](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli).
-2. Install ZenML Azure integration:
+##### Azure
+1. Install the ZenML Azure integration:
   ```shell
   zenml integration install azure -y
   ```
-3. Register Azure Artifact Store:
+2. Register Azure Artifact Store:
   ```shell
   zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name
   ```

-**Other Providers**
-You can use cloud-agnostic solutions like Minio or create a custom stack component.
+##### Other Providers
+You can create a remote artifact store using cloud-agnostic solutions like Minio or by implementing a custom stack component.

#### Configuring Permissions with Service Connectors
-Service connectors manage credentials for stack components to access cloud infrastructure securely.
+Service connectors manage credentials for stack components to access cloud infrastructure. They store credentials as secrets and broker temporary tokens for access.

-**AWS Service Connector**
+##### AWS Service Connector
```shell
AWS_PROFILE=<AWS_PROFILE> zenml service-connector register cloud_connector --type aws --auto-configure
```

-**GCP Service Connector**
+##### GCP Service Connector
```shell
zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@<PATH_TO_SERVICE_ACCOUNT_JSON> --project_id=<PROJECT_ID> --generate_temporary_tokens=False
```

-**Azure Service Connector**
+##### Azure Service Connector
```shell
zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>
```

-Attach the service connector to the artifact store:
+#### Connecting the Service Connector
+Connect the service connector to the remote artifact store:
```shell
zenml artifact-store connect cloud_artifact_store --connector cloud_connector
```
@@ -18129,12 +18055,12 @@ zenml artifact-store connect cloud_artifact_store --connector cloud_connector
 python run.py --training-pipeline
 ```

-Artifacts will be stored in remote storage, making them accessible for future runs. You can list artifact versions:
+Artifacts will be stored in the remote storage, making them accessible for future runs and by team members. You can list artifact versions:
```shell
zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')"
```

-By connecting remote storage, you enable a collaborative and scalable MLOps workflow, ensuring artifacts are part of a cloud-based ecosystem.
+By integrating remote storage, you enhance collaboration and scalability in your MLOps workflow, ensuring artifacts are part of a cloud-based ecosystem.

==================================================

=== File: docs/book/user-guide/production-guide/README.md ===

# Production Guide Summary

-The ZenML Production Guide is designed for ML practitioners looking to implement MLOps in a workplace setting, building upon the Starter Guide. It focuses on transitioning from local pipeline execution to running pipelines in the cloud.
+The ZenML production guide is designed for ML practitioners looking to implement MLOps in a production environment, building on the concepts from the Starter Guide.
It focuses on transitioning from local pipeline execution to cloud production. ## Key Topics Covered: -- **Deploying ZenML** -- **Understanding Stacks** -- **Connecting Remote Storage** -- **Cloud Orchestration** -- **Configuring Pipeline for Scalable Compute** -- **Connecting a Code Repository** - -### Prerequisites: -- A Python environment with `virtualenv` installed. -- A major cloud provider (AWS, GCP, Azure) with corresponding CLIs installed and authorized. +- **Deploying ZenML**: Instructions for setting up ZenML in a production environment. +- **Understanding Stacks**: Overview of the components and configurations needed for MLOps. +- **Connecting Remote Storage**: Guidelines for integrating cloud storage solutions. +- **Orchestrating on the Cloud**: Techniques for managing and executing pipelines in the cloud. +- **Configuring the Pipeline for Scalability**: Strategies to ensure pipelines can scale with demand. +- **Connecting a Code Repository**: Steps to link your codebase with ZenML. -By following this guide, you will complete an end-to-end MLOps project that can serve as inspiration for your own initiatives. +## Prerequisites: +- A Python environment with `virtualenv` installed. +- A major cloud provider (AWS, GCP, Azure) with the respective CLIs installed and authorized. -### Additional Resources: -Refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal ZenML functions and classes for further assistance. +By following this guide, you will complete an end-to-end MLOps project that serves as a template for your own initiatives. For further assistance, refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal ZenML functions and classes. ================================================== === File: docs/book/user-guide/production-guide/cloud-orchestration.md === -### Summary: Orchestrating MLOps Pipelines on the Cloud - -**Overview**: Transitioning MLOps pipelines from local execution to cloud environments enhances scalability and robustness. Key components for cloud orchestration include: +### Summary: Orchestrating Pipelines on the Cloud with ZenML -- **Orchestrator**: Manages workflow and execution of pipelines. -- **Container Registry**: Stores Docker container images. -- **Remote Storage**: Complements the cloud stack. +**Overview**: Transitioning from local to cloud-based MLOps pipelines enhances scalability and robustness. Key components include: +- **Orchestrator**: Manages workflow execution. +- **Container Registry**: Stores Docker images of your pipeline. -**Cloud Stack Setup**: -1. **Skypilot**: Recommended orchestrator for public cloud, provisions a VM to execute pipelines. -2. **Docker**: Used to package code and dependencies into images, which are pushed to the container registry. +**Basic Cloud Stack**: +- **Skypilot**: Recommended orchestrator for public cloud, provisions a VM to execute pipelines. +- **Docker**: Used to package and ship code to the cloud. **Pipeline Execution Sequence**: -1. User runs a pipeline, triggering `run.py` where ZenML interprets the `@pipeline` function. -2. Client retrieves stack configuration. -3. Client builds and pushes an image to the container registry. -4. Client creates a run in the orchestrator (e.g., Skypilot). -5. Orchestrator pulls the image from the registry to execute the pipeline. +1. User runs a pipeline via `run.py`, which reads the `@pipeline` function. +2. Client fetches stack info from the server. +3. Client builds and pushes a Docker image to the container registry. +4. 
Client creates a run in the orchestrator, provisioning a VM.
+5. Orchestrator pulls the Docker image to execute the pipeline.
6. Artifacts are stored in the artifact store (cloud storage).
7. Pipeline status is reported back to the ZenML server.

-### Provisioning Components
+### Provisioning and Registering Components

**AWS Setup**:
1. Install integrations:
@@ -18219,7 +18140,7 @@ Refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal ZenML functions
   ```
3. Register orchestrator:
   ```shell
   zenml orchestrator register cloud_orchestrator -f vm_gcp
   zenml orchestrator connect cloud_orchestrator --connector cloud_connector
   ```
4. Register container registry:
   ```shell
   zenml container-registry register cloud_container_registry -f gcp --uri=<REGION>.gcr.io/<PROJECT_ID>
   zenml container-registry connect cloud_container_registry --connector cloud_connector
   ```

-**Azure Setup** (using Kubernetes):
+**Azure Setup**:
1. Install integrations:
   ```shell
   zenml integration install azure kubernetes -y
   ```
2. Register service connector:
   ```shell
   zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id=<TENANT_ID> --client_id=<CLIENT_ID> --client_secret=<CLIENT_SECRET>
   ```
-3. Register orchestrator:
+3. Register Kubernetes orchestrator:
   ```shell
   zenml orchestrator register cloud_orchestrator --flavor kubernetes
   zenml orchestrator connect cloud_orchestrator --connector cloud_connector
@@ -18249,7 +18170,7 @@ Refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal ZenML functions
   ```

### Running a Pipeline
-1. Register the cloud stack:
+1. Register a new stack:
   ```shell
   zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry
   ```
@@ -18262,65 +18183,74 @@ Refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal ZenML functions
   python run.py --training-pipeline
   ```

-### Conclusion
-With these steps, users can efficiently run MLOps pipelines on cloud infrastructure, leveraging ZenML's integrations for orchestrators and container registries. For further exploration, refer to the Component Guide for additional stack components.
+This process allows the pipeline to execute in the cloud, with logs streamed back to the user. For further exploration of stack components, refer to the ZenML Component Guide.

==================================================

=== File: docs/book/user-guide/production-guide/understand-stacks.md ===

-# Summary of ZenML Stack Documentation
-
-## Overview
-This documentation explains how to switch the infrastructure backend of your ZenML code. A **stack** is the configuration of tools and infrastructure for running pipelines. By default, ZenML uses a **default** stack if no other configuration is specified.
+# Summary: Switching Infrastructure Backend in ZenML

-### Key Concepts
-- **Separation of Code and Configuration**: ZenML separates the code domain (user's Python code) from the infrastructure domain (the stack). This allows for easy switching of environments without altering the code.
-- **Active Stack**: The stack currently in use for running pipelines can be checked with `zenml stack list`.
+## Understanding Stacks
+A **stack** in ZenML is the configuration of tools and infrastructure for running machine learning pipelines. By default, pipelines run on the `default` stack unless specified otherwise.
ZenML acts as a translation layer, allowing code to run on any configured stack without modifying the code itself. -### Stack Components -1. **Orchestrator**: Executes the pipeline code. The default orchestrator runs locally. - - List orchestrators: `zenml orchestrator list` - -2. **Artifact Store**: Persists outputs of pipeline steps. By default, this is also local. - - List artifact stores: `zenml artifact-store list` +### Stack Configuration +- Use `zenml stack describe` to view details of the active stack: + ```bash + zenml stack describe + ``` +- Use `zenml stack list` to see all registered stacks: + ```bash + zenml stack list + ``` -3. **Additional Components**: Other components include experiment trackers and model deployers. A crucial component is the **container registry**, which stores containerized images. +### Components of a Stack +A stack includes at least: +- **Orchestrator**: Executes pipeline code (e.g., local Python thread). + ```bash + zenml orchestrator list + ``` +- **Artifact Store**: Persists outputs of pipeline steps. + ```bash + zenml artifact-store list + ``` -### Registering a Stack -To create a new stack, follow these steps: +Additional components can include experiment trackers and model deployers. A crucial component is the **container registry**, which stores containerized images of the code and environment. -1. **Create an Artifact Store**: - ```bash - zenml artifact-store register my_artifact_store --flavor=local - ``` +## Registering a Stack +### Create an Artifact Store +Register a local artifact store: +```bash +zenml artifact-store register my_artifact_store --flavor=local +``` -2. **Create a New Stack**: - ```bash - zenml stack register a_new_local_stack -o default -a my_artifact_store - ``` +### Create a Local Stack +After creating the artifact store, register a new stack: +```bash +zenml stack register a_new_local_stack -o default -a my_artifact_store +``` -3. **Inspect the New Stack**: - ```bash - zenml stack describe a_new_local_stack - ``` +### Inspecting the Stack +To view the stack details: +```bash +zenml stack describe a_new_local_stack +``` ### Switching Stacks -If using the VS Code extension, you can easily switch stacks via the sidebar. +If using the ZenML VS Code extension, you can switch stacks easily via the sidebar. -### Running a Pipeline -To run a pipeline on the new stack: -1. Set the stack as active: +## Running a Pipeline on the New Local Stack +1. Set the new stack as active: ```bash zenml stack set a_new_local_stack ``` -2. Execute the pipeline: +2. Run the pipeline: ```bash python run.py --training-pipeline ``` ### Additional Resources -For further details on ZenML functions or classes, refer to the [SDK Docs](https://sdkdocs.zenml.io/). +For more information on ZenML functions and classes, refer to the [SDK Docs](https://sdkdocs.zenml.io/). ================================================== @@ -18328,55 +18258,64 @@ For further details on ZenML functions or classes, refer to the [SDK Docs](https ### Deploying ZenML -Deploying ZenML is essential for moving from local development to production. Initially, ZenML uses a local SQLite database to store metadata (pipelines, models, artifacts). For production, the ZenML server must be deployed centrally to facilitate collaboration and interaction among infrastructure components. +Deploying ZenML is essential for moving from local development to production. Initially, ZenML runs on a local architecture using an SQLite database to store metadata (pipelines, models, artifacts, etc.). 
For production, the ZenML server must be deployed centrally to facilitate collaboration and interaction among infrastructure components. #### Deployment Options 1. **ZenML Pro Trial**: - - A managed SaaS solution with one-click deployment. - - To connect to a trial instance, run: + - Sign up for a free trial of [ZenML Pro](https://zenml.io/pro), a managed SaaS solution offering one-click deployment. + - Connect to the trial instance using: ```bash zenml login --pro ``` - - Additional features and a new dashboard are included. You can revert to self-hosting later. + - ZenML Pro includes additional features and a new dashboard. -2. **Self-hosting on Cloud Provider**: - - ZenML is open source and can be self-hosted in a Kubernetes cluster. - - Create a Kubernetes cluster using your cloud provider's documentation: +2. **Self-hosting**: + - ZenML can be self-hosted in a Kubernetes cluster. If you don’t have a cluster, create one using your cloud provider’s documentation: - [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) - [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) - [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) -#### Connecting to Deployed ZenML +#### Connecting to a Deployed ZenML -To connect your local ZenML client to the ZenML Server, use: +To connect your local ZenML client to the ZenML Server, use the command: ```bash zenml login <server-url> ``` -This command initiates a browser-based validation process. Once connected, all metadata will be centrally tracked. You can revert to the local experience with `zenml logout`. +This command initiates a browser-based validation process. Once connected, all metadata will be centrally tracked. To revert to local usage, use: +```bash +zenml logout +``` #### Further Resources - [Deploying ZenML](../../getting-started/deploying-zenml/README.md): Overview of deployment options and architecture. -- [Full how-to guides](../../getting-started/deploying-zenml/README.md): Instructions for deploying ZenML on various platforms (Docker, Hugging Face Spaces, Kubernetes). +- [Full how-to guides](../../getting-started/deploying-zenml/README.md): Instructions for deploying ZenML on various platforms (Docker, Hugging Face Spaces, Kubernetes, etc.). ================================================== === File: docs/book/user-guide/production-guide/end-to-end.md === -# End-to-End MLOps Project with ZenML +### End-to-End MLOps Project with ZenML -## Overview -This documentation outlines the steps to create an end-to-end MLOps project using ZenML, integrating advanced concepts such as deployment, infrastructure abstraction, remote storage, cloud orchestration, and pipeline configuration. +This documentation outlines the creation of an end-to-end MLOps project using ZenML, integrating various advanced MLOps concepts. -## Getting Started +#### Key Concepts Covered: +- Deploying ZenML +- Abstracting infrastructure with stacks +- Connecting remote storage +- Cloud orchestration +- Configuring scalable pipelines +- Integrating with a Git repository + +#### Getting Started 1. **Set Up Environment**: Create a fresh virtual environment and install dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` -2. **Initialize Project**: Use ZenML templates to set up your project: +2. 
**Initialize Project**: Use ZenML templates to set up the project: ```bash mkdir zenml_batch_e2e cd zenml_batch_e2e @@ -18392,52 +18331,42 @@ This documentation outlines the steps to create an end-to-end MLOps project usin zenml init ``` -## Learning Objectives -The e2e project template covers major ZenML use cases, including: -- Steps and pipelines for supervised ML with batch predictions. -- A simple CLI for project management. -- Advanced concepts built on the starter project. - -As you progress, practice running pipelines on a remote cloud stack and a tracked git repository. +#### Learning Outcomes +The e2e project template demonstrates core ZenML concepts for supervised ML with batch predictions, building on the starter project with advanced features. Users are encouraged to run pipelines on a remote cloud stack and a tracked Git repository. -## Conclusion -You are now equipped to develop your own pipelines and stacks using ZenML. For further learning, explore the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md). Good luck with your MLOps journey! +#### Conclusion +This guide equips you with the knowledge to create an end-to-end MLOps project using ZenML. For further learning on advanced topics, refer to the how-to section on pipeline development. Good luck with your MLOps endeavors! ================================================== === File: docs/book/user-guide/production-guide/ci-cd.md === -### Managing ZenML Pipeline Lifecycle with CI/CD +### Managing the Lifecycle of a ZenML Pipeline with CI/CD -**Overview**: This documentation outlines how to manage ZenML pipelines using Continuous Integration (CI) and Continuous Delivery (CD) through GitHub Actions. It enables data scientists to test and validate code changes automatically before deploying to production. - -#### Setting Up CI/CD +#### Overview +To enhance ZenML pipeline management in production, it's beneficial to integrate a CI/CD workflow using a central workflow engine. This allows for local experimentation by data scientists, followed by automated testing and deployment of validated changes. -1. **GitHub Repository**: Use GitHub Actions to create a CI/CD workflow. For a practical example, refer to the [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/). +#### Setting Up CI/CD with GitHub Actions +1. **GitHub Repository**: Use the [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/) as a template for automating CI/CD with continuous model training and deployment. -2. **API Key Configuration**: - - Create an API key in ZenML for machine-to-machine connections: +2. **API Key Configuration**: Create an API key in ZenML for machine-to-machine connections. ```bash zenml service-account create github_action_api_key ``` - - Store the returned API key securely as it will not be shown again. + Store the generated API key securely as it will not be shown again. -3. **GitHub Secrets**: - - Store the `ZENML_API_KEY` in GitHub Secrets for use in your repository's actions. +3. **GitHub Secrets**: Store the `ZENML_API_KEY` in GitHub secrets for use in GitHub Actions. -4. **Optional Staging and Production Stacks**: - - You can configure different stacks for staging and production environments, allowing for different resources and configurations. +4. **(Optional) Staging and Production Stacks**: You can configure different stacks for staging and production environments, allowing for parameterization of pipelines and different resource settings. -5. 
**Triggering Pipelines on Pull Requests**: - - Set up a GitHub Action to run the pipeline upon pull requests: +5. **Triggering Pipelines on Pull Requests**: Set up a GitHub Action workflow to run your pipeline on pull requests. Configure the workflow to trigger on specific branches: ```yaml on: pull_request: branches: [ staging, main ] ``` -6. **Environment Variables**: - - Define essential environment variables in your workflow: +6. **Workflow Configuration**: Set environment variables and run the pipeline in the workflow: ```yaml jobs: run-staging-workflow: @@ -18451,7 +18380,6 @@ You are now equipped to develop your own pipelines and stacks using ZenML. For f ``` 7. **Pipeline Execution Steps**: - - Include steps to check out code, set up Python, install requirements, connect to ZenML, set the stack, and run the pipeline: ```yaml steps: - name: Check out repository code @@ -18474,34 +18402,29 @@ You are now equipped to develop your own pipelines and stacks using ZenML. For f run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} ``` -8. **Optional PR Metrics Reporting**: - - Configure your workflow to leave a report on the pull request based on the pipeline results. +8. **(Optional) Commenting Metrics on PRs**: Configure the workflow to leave a report on the pull request based on the pipeline execution. -This setup ensures that only validated code is deployed to production, enhancing the reliability of the CI/CD process for ZenML pipelines. +This setup ensures that only fully tested code is deployed to production, enhancing the reliability of the CI/CD process for ZenML pipelines. ================================================== === File: docs/book/user-guide/production-guide/connect-code-repository.md === -### ZenML Git Repository Integration +### ZenML Git Integration Documentation Summary -**Overview:** -Connect a Git repository to ZenML to enhance collaboration and optimize MLOps pipeline execution by avoiding redundant Docker builds. +**Overview**: Connect a Git repository to ZenML to streamline code changes and enhance collaboration in MLOps projects. This integration optimizes Docker builds by reusing existing images based on Git commit hashes. -**Pipeline Execution Flow:** +**Pipeline Execution Flow**: 1. Trigger a pipeline run locally. -2. ZenML parses the `@pipeline` function for steps. -3. Local client requests stack info from ZenML server. -4. If a code repository is detected, it checks for an existing Docker image using the current Git commit hash. +2. ZenML parses the `@pipeline` function. +3. Local client requests stack info from the ZenML server. +4. If a Git repository is detected, the client checks for reusable Docker images. 5. The orchestrator sets up the execution environment in the cloud. -6. Code is downloaded from the Git repository, and the existing Docker image is used to run the pipeline. -7. Artifacts are stored in a cloud-based artifact store. -8. Pipeline status and metadata are reported back to the ZenML server. +6. Code is downloaded from the Git repository, and the existing Docker image is used. +7. Pipeline steps execute, storing artifacts in the cloud. +8. Execution status and metadata are reported back to the ZenML server. -**Benefits:** -- Reduces redundant builds. -- Facilitates simultaneous code collaboration. -- Ensures correct code versioning for each run. 
+**Benefits**: Avoid redundant builds, improve efficiency, and enable simultaneous code collaboration with version tracking. ### Creating a GitHub Repository 1. Sign in to [GitHub](https://github.com/). @@ -18509,7 +18432,7 @@ Connect a Git repository to ZenML to enhance collaboration and optimize MLOps pi 3. Name the repository, set visibility, and optionally add a README or .gitignore. 4. Click "Create repository." -**Push Local Code to GitHub:** +**Push Local Code to GitHub**: ```sh git init git add . @@ -18517,15 +18440,15 @@ git commit -m "Initial commit" git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git git push -u origin master ``` -*Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` accordingly.* +*Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` with your details.* ### Linking to ZenML -1. Obtain a GitHub Personal Access Token (PAT): - - Go to GitHub settings > Developer settings > Personal access tokens. - - Click "Generate new token," name it, and set `contents` to read-only for the specific repository. - - Generate and copy the token. +1. **Generate a GitHub Personal Access Token (PAT)**: + - Go to GitHub settings > Developer settings > Personal access tokens > Generate new token. + - Name the token and select `contents` read-only access for the specific repository. + - Copy the generated token. -2. Install GitHub integration and register the repository: +2. **Install GitHub Integration and Register Repository**: ```sh zenml integration install github zenml code-repository register <REPO_NAME> --type=github \ @@ -18544,43 +18467,41 @@ python run.py --training-pipeline python run.py --training-pipeline ``` -For more details, refer to the [ZenML Git Integration documentation](../../how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md). +For more details, refer to the [ZenML Git Integration guide](../../how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md). ================================================== === File: docs/book/user-guide/llmops-guide/README.md === -# LLMOps Guide Summary +# ZenML LLMOps Guide Summary -The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps workflows. It is aimed at ML practitioners and MLOps engineers seeking to enhance their pipelines with LLM capabilities while ensuring scalability and robustness. +The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps workflows. It targets ML practitioners and MLOps engineers aiming to utilize LLMs while ensuring workflow robustness and scalability. ## Key Topics Covered: -- **RAG with ZenML**: Introduction to Retrieval-Augmented Generation (RAG) and its implementation. +- **RAG with ZenML**: Overview of Retrieval-Augmented Generation (RAG) and its implementation. +- **Code Examples**: + - RAG in 85 lines of code. + - Evaluation in 65 lines of code. + - Finetuning LLMs in 100 lines of code. - **Data Handling**: - Data ingestion and preprocessing. - Generating and storing embeddings in a vector database. -- **Inference Pipeline**: Basic RAG inference setup. - **Evaluation Metrics**: - - Evaluation methods for retrieval and generation. - - Practical evaluation strategies. -- **Reranking**: - - Understanding and implementing reranking to improve retrieval performance. - - Evaluating reranking effectiveness. -- **Embedding Fine-tuning**: - - Techniques for improving retrieval through embedding fine-tuning. 
- - Synthetic data generation and using Sentence Transformers for fine-tuning. -- **LLM Fine-tuning**: - - Methods for fine-tuning LLMs, including using 🤗 Accelerate. - - Evaluation and deployment of fine-tuned models. + - Retrieval and generation evaluation. + - Reranking for improved retrieval performance. +- **Finetuning**: + - Techniques for finetuning embeddings and LLMs. + - Using Sentence Transformers for embedding finetuning. + - Deployment of finetuned models. ## Practical Application: -The guide includes a practical example of building a question-answering system for ZenML, progressing from a simple RAG pipeline to advanced techniques like embedding fine-tuning and reranking. +The guide includes a practical example of building a question-answering system for ZenML, illustrating the transition from a basic RAG pipeline to advanced techniques like embedding finetuning and document reranking. -## Prerequisites: +## Requirements: - Python environment with ZenML installed. -- Familiarity with concepts from the Starter and Production Guides. +- Familiarity with concepts from the Starter and Production Guides is recommended. -By the end of the guide, users will understand how to effectively utilize LLMs in their MLOps workflows, enabling the development of scalable and maintainable LLM-powered applications. +By the end of the guide, users will understand how to effectively leverage LLMs in their MLOps workflows, enabling the development of scalable and maintainable applications. ================================================== @@ -18588,14 +18509,14 @@ By the end of the guide, users will understand how to effectively utilize LLMs i ### Generating Embeddings for Retrieval -This section details the process of generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data, allowing for improved retrieval of relevant information based on similarity rather than mere keyword matching. +This section covers generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data, facilitating the identification of relevant information based on similarity rather than simple keyword matching. -**Key Points:** -- **Embeddings**: High-dimensional vectors that represent data semantically, generated using models like those from the [`sentence-transformers`](https://www.sbert.net/) library. The model used here is `sentence-transformers/all-MiniLM-L12-v2`, which produces 384-dimensional embeddings. -- **Purpose**: To quickly identify relevant data chunks during inference, improving the robustness of retrieval, especially for complex queries. -- **Dimensionality Reduction**: Techniques like UMAP and t-SNE can visualize embeddings in 2D, helping to identify patterns and relationships in the data. +#### Key Points: +- **Embeddings**: High-dimensional vectors representing data, generated using models like `sentence-transformers`. They are essential for improving retrieval accuracy in NLP tasks. +- **Purpose**: To quickly find relevant data chunks during inference, capturing semantic meaning and context. +- **Model Used**: The `sentence-transformers/all-MiniLM-L12-v2` model, which produces 384-dimensional embeddings. 
-**Code for Generating Embeddings:**
+#### Code for Generating Embeddings:
```python
from typing import Annotated, List
import numpy as np
from sentence_transformers import SentenceTransformer
from zenml import ArtifactConfig, log_artifact_metadata, step

@step
@@ -18615,24 +18536,24 @@ def generate_embeddings(split_documents: List[Document]) -> Annotated[List[Docum
    return split_documents
```

+- **Document Model Update**: The `Document` model is updated to include an `embedding` attribute for storing generated embeddings.
+
+#### Dimensionality Reduction and Visualization:
+To visualize embeddings, dimensionality reduction techniques like UMAP and t-SNE can be applied. This helps in understanding the clustering of similar data based on semantic meaning.

-**Visualization Code:**
+#### Visualization Code:
```python
-from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE
import umap
from zenml.client import Client

-artifact = Client().get_artifact_version('EMBEDDINGS_ARTIFACT_UUID_GOES_HERE')
+artifact = Client().get_artifact_version('EMBEDDINGS_ARTIFACT_UUID')
+documents = artifact.load()  # load the list of Document objects from the artifact store
embeddings = np.array([doc.embedding for doc in documents])
parent_sections = [doc.parent_section for doc in documents]
unique_parent_sections = list(set(parent_sections))

-# Color mapping
-tol_colors = ["#4477AA", "#EE6677", "#228833", "#CCBB44", "#66CCEE", "#AA3377", "#BBBBBB"]
-section_color_dict = dict(zip(unique_parent_sections, tol_colors[:len(unique_parent_sections)]))
-
def visualize(embeddings, parent_sections, method='tsne'):
    if method == 'tsne':
        embeddings_2d = TSNE(n_components=2, random_state=42).fit_transform(embeddings)
@@ -18642,33 +18563,35 @@
    plt.figure(figsize=(8, 8))
    for section in unique_parent_sections:
        mask = [section == ps for ps in parent_sections]
-        plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], c=section_color_dict[section], label=section)
+        plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], label=section)
+    plt.title(f"{method.upper()} Visualization")
    plt.legend()
    plt.show()
```

-**Conclusion**: The embeddings are stored as artifacts in the ZenML artifact store, allowing for modularity in the pipeline. Future steps will involve storing these embeddings in a vector database for efficient retrieval during inference. For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).
+#### Conclusion:
+Embeddings are generated and stored in a ZenML artifact store, allowing for modularity in the pipeline. Future steps will involve storing these embeddings in a vector database for efficient retrieval. For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).

==================================================

=== File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md ===

-### RAG Pipelines with ZenML
+### Summary of RAG Pipelines with ZenML

-**Overview**: Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models, enhancing the capabilities of Large Language Models (LLMs). This guide outlines setting up RAG pipelines using ZenML, focusing on data ingestion, index management, and tracking artifacts.
+**Retrieval-Augmented Generation (RAG)** is a technique that integrates retrieval-based and generation-based models, enhancing the capabilities of Large Language Models (LLMs).
This guide outlines the setup of RAG pipelines using ZenML, focusing on: -**Key Points**: -- **LLM Limitations**: LLMs can generate human-like responses but may produce incorrect outputs, especially with ambiguous prompts. Most LLMs handle fewer tokens than advanced models like Google's Gemini 1.5 Pro, which can manage up to 1 million tokens. - -- **Components of RAG**: - - **Purpose of RAG**: Addresses the limitations of LLMs by integrating retrieval mechanisms. - - **Data Ingestion**: Techniques for preprocessing data for RAG pipelines. - - **Embeddings**: Use of embeddings to represent data, forming the basis for retrieval. - - **Vector Database**: Storage of embeddings in a vector database for efficient retrieval. - - **Artifact Tracking**: Utilizing ZenML to track artifacts associated with RAG processes. +1. **Purpose of RAG**: RAG addresses the limitations of LLMs, which can generate incorrect responses, especially with ambiguous prompts, and are restricted in text handling capacity. While some LLMs, like Google's Gemini 1.5 Pro, can manage up to 1 million tokens, most open-source models handle significantly less. -**Conclusion**: The guide culminates in demonstrating how these components work together for basic RAG inference. +2. **Key Components**: + - **Data Ingestion and Preprocessing**: Preparing data for the RAG pipeline. + - **Embeddings**: Utilizing embeddings to represent data for retrieval. + - **Vector Database**: Storing embeddings for efficient access. + - **Artifact Tracking**: Managing RAG-related artifacts with ZenML. + +3. **Final Integration**: The guide culminates in demonstrating how these components work together for basic RAG inference. + +This overview provides a concise framework for understanding and implementing RAG pipelines with ZenML. ================================================== @@ -18677,33 +18600,33 @@ def visualize(embeddings, parent_sections, method='tsne'): ### Summary of Retrieval-Augmented Generation (RAG) **Overview:** -Retrieval-Augmented Generation (RAG) enhances the capabilities of Large Language Models (LLMs) by integrating a retrieval mechanism that fetches relevant documents from a large corpus to generate more accurate and contextually grounded responses. This technique addresses LLM limitations, such as incorrect responses and token processing constraints. +Retrieval-Augmented Generation (RAG) enhances the capabilities of Large Language Models (LLMs) by integrating a retrieval mechanism to fetch relevant documents from a large corpus, which aids in generating more accurate and contextually grounded responses. This technique addresses LLM limitations, such as generating incorrect responses and handling extensive text inputs. **RAG Pipeline:** -1. **Retriever**: Identifies relevant documents from a corpus. +1. **Retriever**: Identifies relevant documents from a large corpus. 2. **Generator**: Produces responses based on the retrieved documents. - -This combination is effective for tasks requiring contextual understanding, such as question answering, summarization, and dialogue generation. RAG improves response accuracy and reduces the computational load by focusing on a smaller set of documents. + - Useful for tasks like question answering, summarization, and dialogue generation. + - Reduces the likelihood of incorrect responses by grounding them in relevant information. + - More cost-effective than pure generation models, especially in resource-constrained environments. 
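+
+To make the two-stage flow concrete, here is a schematic sketch in plain Python (not a ZenML API; `retrieve` and `generate` are illustrative placeholders for the retriever and generator components):
+
+```python
+from typing import Callable, List
+
+def rag_answer(
+    query: str,
+    retrieve: Callable[[str], List[str]],       # retriever: query -> relevant documents
+    generate: Callable[[str, List[str]], str],  # generator: query + documents -> answer
+) -> str:
+    docs = retrieve(query)        # 1. fetch relevant context from the corpus
+    return generate(query, docs)  # 2. produce a response grounded in that context
+```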
**When to Use RAG:** -- Ideal for generating long-form responses that need contextual grounding. -- Suitable for tasks like question answering and summarization. -- A good starting point for exploring LLMs due to lower data and resource requirements compared to other methods. +- Ideal for generating long-form responses requiring contextual understanding. +- Effective when working with large document corpora. +- Recommended for initial experiments with LLMs due to lower data and computational resource requirements. **Integration with ZenML:** -ZenML facilitates the creation of RAG pipelines, offering tools for: -- Data ingestion and index management. -- Tracking RAG artifacts (hyperparameters, model weights, etc.). -- Scaling to more complex setups, including fine-tuning embeddings and LLMs. +- ZenML facilitates the creation of RAG pipelines, combining retrieval and generation strengths. +- Offers tools for data ingestion, index management, and tracking RAG artifacts. +- Supports scalability and complexity, allowing for finetuning of embeddings and models. -**Advantages of ZenML:** +**Key Advantages of ZenML:** - **Reproducibility**: Rerun pipelines to update documents or parameters while preserving previous versions. -- **Scalability**: Deploy on cloud providers for larger document handling. -- **Artifact Tracking**: Monitor generated artifacts and their metadata through the ZenML dashboard. -- **Maintainability**: Modular pipeline structure allows easy updates and experimentation. -- **Collaboration**: Share pipelines and insights with team members for collaborative development. +- **Scalability**: Easily scale to larger corpora using cloud deployment and scalable vector stores. +- **Artifact Tracking**: Monitor and associate artifacts with metadata for performance insights. +- **Maintainability**: Modular pipeline format simplifies updates and experimentation. +- **Collaboration**: Share pipelines and insights with team members via the ZenML dashboard. -ZenML provides a structured approach to building RAG pipelines, making it an effective tool for leveraging LLMs in MLOps workflows. Future sections will explore advanced topics like document reranking and LLM fine-tuning. +This summary provides a concise understanding of RAG and its implementation within the ZenML ecosystem, highlighting its benefits and practical applications in MLOps workflows. ================================================== @@ -18711,49 +18634,51 @@ ZenML provides a structured approach to building RAG pipelines, making it an eff ### Storing Embeddings in a Vector Database -To efficiently retrieve documents based on their embeddings, we store them in a vector database, specifically PostgreSQL, which is scalable for high-dimensional vectors. Other vector databases can also be used. For PostgreSQL setup instructions, refer to the [repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). +To efficiently retrieve documents based on similarity, embeddings can be stored in a vector database. This guide uses PostgreSQL, a scalable choice for high-dimensional vectors, but other vector databases can also be utilized. -#### Key Steps in the Process: -1. **Connect to the Database**: Use the `psycopg2` package for database interactions. -2. **Create Extensions and Tables**: - - Enable the `vector` extension. - - Create an `embeddings` table if it doesn't exist. -3. **Insert Data**: Only insert new embeddings if they don't already exist. -4. 
**Index Creation**: Calculate optimal index parameters and create an index using `ivfflat` for cosine similarity.
+**Setup Information:**
+- For PostgreSQL setup, refer to the [repository instructions](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).
+- Use the `psycopg2` package for database connection and raw SQL for interaction.

-#### Code Snippet:
+**Key Code Functionality:**
```python
+import math
+from typing import List
+
+from pgvector.psycopg2 import register_vector
from zenml import step
-from typing import List

@step
def index_generator(documents: List[Document]) -> None:
-    conn = get_db_conn()
    try:
+        conn = get_db_conn()
        with conn.cursor() as cur:
            cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
+            conn.commit()
+
            cur.execute(f"""
-            CREATE TABLE IF NOT EXISTS embeddings (
-                id SERIAL PRIMARY KEY,
-                content TEXT,
-                token_count INTEGER,
-                embedding VECTOR({EMBEDDING_DIMENSIONALITY}),
-                filename TEXT,
-                parent_section TEXT,
-                url TEXT
-            );
+                CREATE TABLE IF NOT EXISTS embeddings (
+                    id SERIAL PRIMARY KEY,
+                    content TEXT,
+                    token_count INTEGER,
+                    embedding VECTOR({EMBEDDING_DIMENSIONALITY}),
+                    filename TEXT,
+                    parent_section TEXT,
+                    url TEXT
+                );
            """)
            conn.commit()
+
+            register_vector(conn)
+
            for doc in documents:
-                if cur.execute("SELECT COUNT(*) FROM embeddings WHERE content = %s", (doc.page_content,)).fetchone()[0] == 0:
+                cur.execute("SELECT COUNT(*) FROM embeddings WHERE content = %s", (doc.page_content,))
+                if cur.fetchone()[0] == 0:
                    cur.execute("""
-                    INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url)
-                    VALUES (%s, %s, %s, %s, %s, %s)
-                    """, (doc.page_content, doc.token_count, doc.embedding.tolist(), doc.filename, doc.parent_section, doc.url))
+                        INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url)
+                        VALUES (%s, %s, %s, %s, %s, %s)""",
+                        (doc.page_content, doc.token_count, doc.embedding.tolist(), doc.filename, doc.parent_section, doc.url))

            conn.commit()

-            num_records = cur.execute("SELECT COUNT(*) FROM embeddings;").fetchone()[0]
-            num_lists = max(num_records / 1000, 10) if num_records <= 1000000 else math.sqrt(num_records)
+            cur.execute("SELECT COUNT(*) FROM embeddings;")
+            num_records = cur.fetchone()[0]
+            logger.info(f"Number of vector records in table: {num_records}")
+
+            num_lists = int(max(num_records / 1000, 10)) if num_records <= 1000000 else int(math.sqrt(num_records))
            cur.execute(f"CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists});")
            conn.commit()
@@ -18766,80 +18691,96 @@ def index_generator(documents: List[Document]) -> None:
            conn.close()
```

-#### Important Considerations:
-- **Embedding Updates**: Decide when to update embeddings based on data changes.
-- **Performance**: For large datasets, consider running on a GPU-enabled machine for efficiency.
-- **Index Tuning**: Experiment with index parameters for optimal performance in similarity searches.
+**Process Overview:**
+1. Connect to the database.
+2. Create the `vector` extension for PostgreSQL.
+3. Create the `embeddings` table if it doesn't exist.
+4. Insert new documents and their embeddings.
+5. Calculate index parameters based on the number of records.
+6. Create an index on embeddings for efficient similarity search using cosine distance.

-This setup allows for rapid retrieval of relevant documents based on query similarity, enhancing the efficiency of a question-answering system. For the complete code and further details, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide).
+**Considerations:**
+- Update strategy for embeddings should depend on data change frequency.
+- For large datasets, consider running on a GPU-enabled machine for performance.
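+
+As a preview of what this index enables, retrieval can ask PostgreSQL to order rows by cosine distance (the same pattern used in the inference section later in this guide). A sketch, assuming an open psycopg2 connection `conn` and a `query_embedding` list produced by the same embedding model:
+
+```python
+import numpy as np
+from pgvector.psycopg2 import register_vector
+
+register_vector(conn)  # assumes `conn` is an open psycopg2 connection
+cur = conn.cursor()
+# `<=>` is pgvector's cosine-distance operator; the ivfflat index created
+# above is what keeps this ORDER BY ... LIMIT query fast.
+cur.execute(
+    "SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT 5",
+    (np.array(query_embedding),),
+)
+top_docs = cur.fetchall()
+```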
+ +**Next Steps:** +Once embeddings are stored, the next step is to retrieve relevant documents based on queries, enhancing the question-answering capabilities of the system. + +For complete code and additional details, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md === -### Summary of RAG Inference Documentation +### Summary of Simple RAG Inference Documentation -**Overview**: This documentation describes how to implement a Retrieval-Augmented Generation (RAG) inference pipeline using an index store to generate responses to user queries without requiring external libraries beyond the LLM interface. +This documentation outlines the process of using Retrieval-Augmented Generation (RAG) components to generate responses based on indexed documents. It provides a simple inference setup without requiring external libraries beyond the interface to the index store and the LLM. -#### Simple RAG Inference +#### Running Inference +To run a query against the indexed documents, use the following command: -1. **Running the Inference**: - To execute a query, use the following command: - ```bash - python run.py --rag-query "your_query_here" --model=gpt4 - ``` +```bash +python run.py --rag-query "how do I use a custom materializer inside my own zenml steps? i.e. how do I set it? inside the @step decorator?" --model=gpt4 +``` -2. **Inference Function**: - The inference process is encapsulated in the `process_input_with_retrieval` function: - ```python - def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: - related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) - system_message = """You are a friendly chatbot. You can answer questions about ZenML, its features and its use cases. You respond in a concise, technically credible tone. You ONLY use the context from the ZenML documentation to provide relevant answers. If unsure, say so.""" - messages = [ - {"role": "system", "content": system_message}, - {"role": "user", "content": f"```{input}```"}, - {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, - ] - return get_completion_from_messages(messages, model=model) - ``` +#### Inference Pipeline Code +The inference pipeline consists of the following key function: -3. **Document Retrieval**: - The `get_topn_similar_docs` function retrieves similar documents based on query embeddings: - ```python - def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: - embedding_array = np.array(query_embedding) - register_vector(conn) - cur = conn.cursor() - cur.execute(f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,)) - return cur.fetchall() - ``` +```python +def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: + delimiter = "```" + related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) + + system_message = """You are a friendly chatbot. You can answer questions about ZenML, its features, and its use cases. You respond in a concise, technically credible tone. You ONLY use the context from the ZenML documentation to provide relevant answers. 
If unsure, just say so.""" + + messages = [ + {"role": "system", "content": system_message}, + {"role": "user", "content": f"{delimiter}{input}{delimiter}"}, + {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, + ] + + return get_completion_from_messages(messages, model=model) +``` -4. **Completion Generation**: - The `get_completion_from_messages` function generates a response using the specified model: - ```python - def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000): - completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens) - return completion_response.choices[0].message.content - ``` +#### Document Retrieval +The `get_topn_similar_docs` function retrieves the most similar documents based on the query embedding: -#### Key Points +```python +def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: + embedding_array = np.array(query_embedding) + register_vector(conn) + cur = conn.cursor() + cur.execute(f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,)) + return cur.fetchall() +``` + +This function utilizes the `pgvector` plugin for PostgreSQL to efficiently order documents by similarity. + +#### Generating Responses +The `get_completion_from_messages` function generates a response using the specified model: + +```python +def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000): + model = MODEL_NAME_MAP.get(model, model) + completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens) + return completion_response.choices[0].message.content +``` -- **Database Efficiency**: The use of `pgvector` allows efficient similarity searches in PostgreSQL with the query `ORDER BY embedding <=> %s`. -- **Flexibility with LLMs**: The `litellm` library provides a universal interface for different LLMs, facilitating experimentation with new models without code rewrites. -- **Foundation for Improvement**: This basic RAG pipeline serves as a foundation for more complex implementations and enhancements, such as fine-tuning embeddings for better retrieval performance. +The `litellm` library serves as a universal interface for various LLMs, allowing flexibility in model selection without code rewrites. -For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [`llm_utils.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py). +### Conclusion +This documentation provides a foundational understanding of a basic RAG inference pipeline, focusing on document retrieval and response generation. Future sections will address improving retrieval performance through fine-tuning embeddings, especially in large and diverse document sets. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [`llm_utils.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py). 
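As a usage sketch, the pieces above can be wired together as below. The `MODEL_NAME_MAP` entries here are illustrative assumptions about how CLI-friendly names (such as `gpt4`) might map to litellm model identifiers; only the mapping pattern itself comes from the code above:

```python
# Hypothetical mapping from CLI-friendly names to litellm model identifiers.
MODEL_NAME_MAP = {
    "gpt4": "gpt-4",
    "gpt35": "gpt-3.5-turbo",
}

if __name__ == "__main__":
    answer = process_input_with_retrieval(
        "How do I configure a custom materializer in the @step decorator?",
        model="gpt4",
        n_items_retrieved=5,
    )
    print(answer)
```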
================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md === -### Summary: Ingesting and Preprocessing Data for RAG Pipelines with ZenML +### Summary of Data Ingestion and Preprocessing for RAG Pipelines with ZenML -To set up a Retrieval-Augmented Generation (RAG) pipeline, the first step is to ingest relevant data, including documents and metadata for training retriever and generator models. ZenML facilitates data ingestion by integrating with various tools for downloading, preprocessing, and indexing documents. +This documentation outlines the process of ingesting and preprocessing data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. The initial step involves gathering a corpus of documents and relevant metadata for training retriever and generator models. ZenML facilitates integration with various tools for managing data ingestion, preprocessing, and indexing. #### URL Scraping Step -You can create a ZenML step to scrape URLs from documentation. The following code snippet demonstrates how to implement a URL scraper: +A ZenML step is defined to scrape URLs from ZenML documentation: ```python from typing import List @@ -18859,10 +18800,10 @@ def url_scraper( return docs_urls ``` -The `get_all_pages` function crawls the documentation site to retrieve a unique set of URLs, focusing on the latest releases to ensure relevant information is ingested. +The `get_all_pages` function retrieves unique URLs from the documentation, ensuring only the most recent content is ingested. The count of URLs is logged for visibility. -#### Document Loading -Once URLs are obtained, the `unstructured` library is used to load and parse the pages: +#### Document Loading Step +Next, the `unstructured` library is used to load and parse the scraped URLs: ```python from typing import List @@ -18872,13 +18813,13 @@ from zenml import step @step def web_url_loader(urls: List[str]) -> List[str]: """Loads documents from a list of URLs.""" - return ["\n\n".join(map(str, partition_html(url))) for url in urls] + return ["\n\n".join([str(el) for el in partition_html(url)]) for url in urls] ``` -This function extracts text from HTML without dealing with the markup, ensuring efficiency in LLM processing. +This step simplifies text extraction from HTML, making it easier to process without dealing with HTML complexities. -#### Data Preprocessing -After loading documents, they need to be preprocessed into manageable chunks. The following code demonstrates how to split documents into chunks: +#### Data Preprocessing Step +After loading documents, they are preprocessed into manageable chunks: ```python import logging @@ -18890,17 +18831,15 @@ logging.basicConfig(level=logging.INFO) @step(enable_cache=False) def preprocess_documents(documents: List[str]) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]: - """Preprocesses a list of documents by splitting them into chunks.""" + """Preprocesses documents by splitting them into chunks.""" log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50}) return split_documents(documents, chunk_size=500, chunk_overlap=50) ``` -A chunk size of 500 characters with a 50-character overlap is recommended for documentation to ensure important information is retained. - -#### Additional Considerations -Understanding your data is crucial for determining the appropriate chunk size. 
Depending on the structure of your data, you may need to adjust the chunk size or perform additional preprocessing steps, such as cleaning text or extracting metadata. +The documents are split into chunks of 500 characters with a 50-character overlap, balancing retrieval efficiency and LLM processing capabilities. Adjustments may be made based on the data's structure and use case. -For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and the specific [steps code](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/). +#### Additional Notes +For more complex ingestion and preprocessing, additional logic can be implemented, such as filtering URLs or cleaning text. The complete code and further details can be found in the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) repository. ================================================== @@ -18908,118 +18847,98 @@ For complete code and further details, refer to the [Complete Guide](https://git ### Summary of RAG Pipeline Implementation -This documentation describes how to implement a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code. The pipeline performs the following tasks: +This documentation outlines a simple implementation of a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code. The pipeline performs the following tasks: -1. **Data Loading**: Utilizes a fictional dataset about 'ZenML World' as the corpus. +1. **Data Loading**: Uses a fictional dataset about 'ZenML World' as the corpus. 2. **Text Processing**: Splits text into chunks and tokenizes it (converts text into words). -3. **Query Handling**: Accepts a query and retrieves the most relevant text chunks from the corpus. -4. **Response Generation**: Uses OpenAI's GPT-3.5 model to generate answers based on the relevant chunks. +3. **Query Handling**: Takes a user query and retrieves the most relevant text chunks from the corpus. +4. **Answer Generation**: Utilizes OpenAI's GPT-3.5 model to generate answers based on the retrieved chunks. + +### Key Functions -#### Key Functions: +- **`preprocess_text(text)`**: Normalizes text by converting to lowercase, removing punctuation, and trimming whitespace. + +- **`tokenize(text)`**: Tokenizes preprocessed text into words. -- **`preprocess_text(text)`**: Cleans and normalizes the input text. -- **`tokenize(text)`**: Tokenizes the preprocessed text into words. - **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: - - Computes Jaccard similarity between the query and corpus chunks. + - Tokenizes the query and computes Jaccard similarity with each chunk in the corpus. - Returns the top `n` relevant chunks based on similarity. -- **`answer_question(query, corpus, top_n=2)`**: - - Retrieves relevant chunks and generates an answer using the OpenAI API. -#### Example Corpus: -The corpus includes descriptions of various fictional entities in 'ZenML World', such as: -- Plasma Phoenixes -- Crystalline Crabs -- Telepathic Treants +- **`answer_question(query, corpus, top_n=2)`**: + - Retrieves relevant chunks and formats them into a context for the model. + - Calls OpenAI's API to generate an answer based on the context. -#### Example Queries: -1. "What are Plasma Phoenixes?" -2. "What kinds of creatures live on the prismatic shores of ZenML World?" -3. "What is the capital of Panglossia?" 
+### Example Usage -#### Output: -The program generates answers based on the provided context. If the context does not contain relevant information, it indicates insufficient data. +A sample corpus about "ZenML World" is provided, and two questions are answered using the implemented functions: -#### Code Snippet: ```python -import os -import re -import string -from openai import OpenAI - -def preprocess_text(text): - return re.sub(r"\s+", " ", text.lower().translate(str.maketrans("", "", string.punctuation))).strip() - -def tokenize(text): - return preprocess_text(text).split() - -def retrieve_relevant_chunks(query, corpus, top_n=2): - query_tokens = set(tokenize(query)) - similarities = [(chunk, len(query_tokens.intersection(set(tokenize(chunk)))) / len(query_tokens.union(set(tokenize(chunk))))) for chunk in corpus] - return [chunk for chunk, _ in sorted(similarities, key=lambda x: x[1], reverse=True)[:top_n]] - -def answer_question(query, corpus, top_n=2): - relevant_chunks = retrieve_relevant_chunks(query, corpus, top_n) - if not relevant_chunks: - return "I don't have enough information to answer the question." - context = "\n".join(relevant_chunks) - client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) - return client.chat.completions.create(messages=[{"role": "system", "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}"}, {"role": "user", "content": query}], model="gpt-3.5-turbo").choices[0].message.content.strip() +corpus = [ + "The luminescent forests of ZenML World are inhabited by glowing Zenbots...", + "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully...", + # Additional sentences omitted for brevity +] + +corpus = [preprocess_text(sentence) for sentence in corpus] + +question1 = "What are Plasma Phoenixes?" +answer1 = answer_question(question1, corpus) -corpus = [preprocess_text(sentence) for sentence in [ - "The luminescent forests of ZenML World...", - "In the neon skies of ZenML World...", - # Other sentences... -]] +question2 = "What kinds of creatures live on the prismatic shores of ZenML World?" +answer2 = answer_question(question2, corpus) -# Example usage -print(answer_question("What are Plasma Phoenixes?", corpus)) +irrelevant_question_3 = "What is the capital of Panglossia?" +answer3 = answer_question(irrelevant_question_3, corpus) ``` -#### Additional Notes: -- The similarity calculation is basic and can be improved with more advanced techniques like embeddings. -- The implementation is designed for educational purposes, demonstrating the core components of a RAG pipeline. +### Output + +The implementation produces answers based on the context provided in the corpus, demonstrating the basic functionality of the RAG pipeline. + +### Technical Notes + +- The similarity measurement is based on the Jaccard coefficient, which is a basic method for comparing sets. +- While the implementation is straightforward, it is not optimized for performance. More advanced techniques, such as embeddings, can enhance similarity measurement and retrieval efficiency. + +This guide serves as an introductory example of a RAG pipeline, with the potential for more sophisticated implementations to be explored later. 
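As a minimal sketch of the embedding-based upgrade mentioned in the technical notes, the Jaccard-based retrieval could be swapped for cosine similarity over embeddings. The OpenAI embedding model named here is an assumption, not part of the original example:

```python
import os

import numpy as np
from openai import OpenAI

def embed(texts: list) -> np.ndarray:
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    # Model choice is an assumption; any embedding model would do.
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def retrieve_relevant_chunks_embedded(query: str, corpus: list, top_n: int = 2) -> list:
    vectors = embed([query] + list(corpus))
    query_vec, chunk_vecs = vectors[0], vectors[1:]
    # Cosine similarity replaces Jaccard word overlap.
    sims = chunk_vecs @ query_vec / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec))
    return [corpus[i] for i in np.argsort(sims)[::-1][:top_n]]
```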
================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md === -### Summary of RAG System Evaluation Documentation +### Summary: Evaluating RAG System Performance -This documentation outlines the evaluation process for a Retrieval-Augmented Generation (RAG) system, emphasizing the separation of embedding generation and evaluation into distinct pipelines. +This documentation outlines the evaluation of Retrieval-Augmented Generation (RAG) systems, emphasizing the separation of embedding generation and evaluation processes. #### Key Points: 1. **Evaluation Pipeline**: - - The evaluation is a separate pipeline that can run after the main embedding generation pipeline. This separation allows for focused evaluation without interfering with the embedding generation process. - - Evaluations can be integrated into the main pipeline as a gating mechanism to assess the quality of embeddings for production readiness. + - The evaluation is structured as a separate pipeline that runs after the main embedding generation. This separation allows for focused evaluation without interfering with embedding creation. + - Depending on the use case, evaluations can also be integrated into the main pipeline to assess embedding quality for production readiness. -2. **LLM Judge**: - - For development, consider using a local LLM judge to expedite evaluations. Full evaluations can then be performed using cloud-based LLMs like Anthropic's Claude or OpenAI's GPT-3.5/4, which may incur higher costs. +2. **Local vs. Cloud LLM Judges**: + - For development, using a local LLM judge can expedite evaluations. Full evaluations can be conducted later using cloud LLMs (e.g., Anthropic's Claude, OpenAI's GPT-3.5/4) to manage costs and speed. -3. **Human Oversight**: - - Automated evaluations are beneficial but do not replace the need for human review. The LLM judge is costly and time-consuming, necessitating human oversight to ensure the RAG system performs as expected. +3. **Human Review**: + - Automated evaluations are beneficial but do not replace the necessity for human oversight. The LLM judge is costly and slow, necessitating human validation of results to ensure the RAG system's performance. 4. **Evaluation Frequency**: - - The evaluation frequency and depth should align with project constraints. Quick and inexpensive tests (e.g., retrieval system tests) can be run more frequently, while more complex evaluations (e.g., LLM judge) should be scheduled less often. + - The evaluation frequency and depth should align with project constraints. Quick tests (e.g., retrieval system tests) can be run frequently, while more complex evaluations (e.g., LLM judge) should be less frequent to balance cost and iteration speed. -5. **Next Steps**: - - After establishing the evaluation system, the next step involves adding a reranker to enhance retrieval performance without retraining embeddings. +5. **Next Steps**: + - Future improvements include adding a reranker to enhance retrieval performance without retraining embeddings. #### Running the Evaluation Pipeline: -To run the evaluation pipeline, clone the project repository and follow the instructions in the `README.md` file. Ensure the main pipeline has been executed to generate embeddings first. 
-**Clone the repository**:
-```bash
-git clone https://github.com/zenml-io/zenml-projects.git
-```
+To run the evaluation pipeline, clone the project repository and execute the evaluation command after generating embeddings:

-**Navigate and run the evaluation**:
```bash
git clone https://github.com/zenml-io/zenml-projects.git
cd zenml-projects/llm-complete-guide
python run.py --evaluation
```

-This command executes the evaluation pipeline and displays results in the console, allowing for progress and log inspection via the dashboard.
+Results will be displayed in the console, with logs and progress available in the dashboard.

==================================================

@@ -19027,29 +18946,26 @@ This command executes the evaluation pipeline and displays results in the consol

### Evaluation and Metrics for RAG Pipeline

-This section covers how to evaluate the performance of your Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating a RAG pipeline is essential for understanding its effectiveness and identifying improvement areas. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to their subjective nature. Thus, a holistic evaluation approach is necessary.
+This section outlines how to evaluate the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating the RAG pipeline is essential for understanding its effectiveness and identifying areas for improvement. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to the subjective nature of text generation.

#### Key Evaluation Areas
-
1. **Retrieval Evaluation**: Assessing the relevance of retrieved documents or document chunks to the query.
2. **Generation Evaluation**: Evaluating the coherence and helpfulness of the generated text for the specific use case.

-#### Considerations for Evaluation
-
-When evaluating your RAG pipeline, consider your specific use case and acceptable error levels. For example, in a user-facing chatbot, you might evaluate:
-- Relevance of retrieved documents.
-- Coherence and helpfulness of generated answers.
-- Presence of harmful language (e.g., hate speech).
-
-Generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the system's final output.
-
-#### Baseline Comparison
+#### Evaluation Considerations
+- The evaluation criteria depend on the specific use case and acceptable error levels. For example, in a user-facing chatbot, consider:
+  - Relevance of retrieved documents.
+  - Coherence and helpfulness of generated answers.
+  - Presence of toxic language in responses.

-In production settings, it is advisable to evaluate a raw LLM model (without RAG components) as a baseline to compare against the RAG pipeline's performance, helping to quantify the added value of retrieval and generation components.
+#### End-to-End Evaluation
+The generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the system's final output.

-#### Next Steps
+#### Practical Guidance
+A high-level code example is provided to illustrate the two main evaluation areas, as sketched below. Subsequent sections will delve deeper into these evaluations and offer practical advice on their execution and interpretation of results.
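A hedged sketch of what that high-level example might look like; `retrieve_documents` (assumed to return `(content, url)` pairs) and `llm_judge` (wrapping a judge-model call) are hypothetical stand-ins for the project's own helpers:

```python
def evaluate_retrieval(question: str, expected_url_ending: str) -> bool:
    """Did the retriever surface the document we expected?"""
    urls = [url for _, url in retrieve_documents(question)]  # hypothetical helper
    return any(expected_url_ending in url for url in urls)

def evaluate_generation(question: str, generated_answer: str) -> bool:
    """Ask a judge model whether the answer is coherent and helpful."""
    verdict = llm_judge(  # hypothetical helper wrapping an LLM call
        f"Question: {question}\nAnswer: {generated_answer}\n"
        "Is this answer coherent and helpful? Reply YES or NO."
    )
    return verdict.strip().lower().startswith("yes")
```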
-A high-level code example demonstrating the two main evaluation areas is provided, followed by detailed sections on each evaluation area and practical guidance on evaluation timing and result interpretation. +#### Additional Note +In production settings, it's advisable to establish a baseline by evaluating a raw LLM model (without RAG components) and comparing it to the RAG pipeline's performance to gauge the added value of retrieval and generation components. ================================================== @@ -19057,22 +18973,20 @@ A high-level code example demonstrating the two main evaluation areas is provide ### Retrieval Evaluation in RAG Pipeline -The retrieval component in a Retrieval-Augmented Generation (RAG) pipeline is essential for finding relevant documents based on incoming queries. This evaluation focuses on assessing the accuracy of semantic search and the relevance of retrieved documents. +The retrieval component of the RAG (Retrieval-Augmented Generation) pipeline is crucial for finding relevant documents to support the generation component. This section outlines how to evaluate the retrieval component's performance, focusing on the accuracy of semantic search. #### Manual Evaluation with Handcrafted Queries -Manual evaluation involves creating specific queries to check if the retrieval component can fetch the correct documents. This process, while time-consuming, helps identify edge cases and areas for improvement. Example queries include: +Manual evaluation involves creating specific queries to check if the retrieval component can fetch the expected documents. This method, while time-consuming, helps identify edge cases and areas for improvement. Example queries include: | Question | URL Ending | |----------|------------| -| How do I get going with the Label Studio integration? What are the first steps? | stacks-and-components/component-guide/annotators/label-studio | -| How can I write my own custom materializer? | user-guide/advanced-guide/data-management/handle-custom-data-types | -| How do I generate embeddings as part of a RAG pipeline when using ZenML? | user-guide/llmops-guide/rag-with-zenml/embeddings-generation | -| How do I use failure hooks in my ZenML pipeline? | user-guide/advanced-guide/pipelining-features/use-failure-success-hooks | -| Can I deploy ZenML self-hosted with Helm? How do I do it? | deploying-zenml/zenml-self-hosted/deploy-with-helm | +| How do I get going with the Label Studio integration? What are the first steps? | component-guide/annotators/label-studio | +| How can I write my own custom materializer? | advanced-guide/data-management/handle-custom-data-types | +| How do I generate embeddings as part of a RAG pipeline when using ZenML? | llmops-guide/rag-with-zenml/embeddings-generation | -The retrieval process involves encoding the query into a vector and querying a PostgreSQL database for similar vectors. +To evaluate, encode the query as a vector and search a PostgreSQL database for similar vectors. Check if the expected URL appears in the top results. 
-**Code Snippet for Manual Evaluation:**
+#### Code for Manual Evaluation
```python
def query_similar_docs(question: str, url_ending: str) -> tuple:
    embedded_question = get_embeddings(question)
@@ -19080,17 +18994,22 @@ def query_similar_docs(question: str, url_ending: str) -> tuple:
    return (question, url_ending, [url[0] for url in top_similar_docs_urls])

def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float:
    # A query fails when its expected URL ending appears in none of the returned URLs.
    failures = sum(
        1
        for pair in question_doc_pairs
        if all(pair["url_ending"] not in url for url in query_similar_docs(pair["question"], pair["url_ending"])[2])
    )
    return round((failures / len(question_doc_pairs)) * 100, 2)
```
+Logging can be added to track failures during testing.

#### Automated Evaluation with Synthetic Queries
-For broader evaluation, synthetic queries can be generated using an LLM based on document chunks. This allows for testing a larger number of queries efficiently.
+For broader evaluation, use an LLM to generate synthetic queries based on document chunks. This allows for testing the retrieval component's performance across a larger dataset.

-**Code Snippet for Generating Questions:**
+#### Code for Generating Questions
```python
from typing import List
from litellm import completion
from zenml import step

def generate_question(chunk: str, local: bool = False) -> str:
    model = "ollama/mixtral" if local else "gpt-3.5-turbo"
    response = completion(model=model, messages=[{"content": f"Generate a question about this text: `{chunk}`", "role": "user"}])
    return response.choices[0].message.content

@@ -19101,19 +19020,26 @@ def generate_questions_from_chunks(docs_with_embeddings: List[Document], local: bool = False) -> List[Document]:
    return docs_with_embeddings
```

-#### Evaluation Results
-Initial tests showed a 20% failure rate with handcrafted queries and a 16% failure rate with synthetic queries, indicating room for improvement.
+#### Full Evaluation Pipeline
+Load the generated questions and evaluate their retrieval performance:
```python
from datasets import load_dataset

@step
def retrieval_evaluation_full(sample_size: int = 50) -> float:
    dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size))
    # Same pattern as above: the filename stem must be absent from every returned URL to count as a failure.
    failures = sum(
        1
        for item in dataset
        if all(item["filename"].split("/")[-1] not in url for url in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2])
    )
    return round((failures / len(dataset)) * 100, 2)
```

-#### Suggestions for Improvement
-- **Diverse Question Generation**: Experiment with various prompts to generate a wider range of question types.
-- **Semantic Similarity Metrics**: Use metrics like cosine similarity to evaluate retrieval performance beyond binary success.
-- **Comparative Evaluation**: Test different retrieval methods and compare their effectiveness.
-- **Error Analysis**: Investigate failure cases to identify patterns for targeted improvements.
+#### Improvement Strategies
+- **Diverse Question Generation**: Experiment with prompts to create varied question types.
+- **Semantic Similarity Metrics**: Use metrics like cosine similarity for a nuanced performance view (see the sketch after this list).
+- **Comparative Evaluation**: Test different retrieval techniques and models.
+- **Error Analysis**: Investigate failure cases to identify patterns and improve the retrieval component.
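A minimal sketch of the semantic-similarity strategy above, assuming the same `get_embeddings` helper used elsewhere in the guide: instead of a binary hit/miss, each query is scored by the cosine similarity between its embedding and the best retrieved chunk:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_chunk_similarity(question: str, retrieved_chunks: list) -> float:
    """Score retrieval quality on a 0-1 scale rather than pass/fail."""
    query_vec = np.array(get_embeddings(question))  # assumed project helper
    return max(
        cosine_similarity(query_vec, np.array(get_embeddings(chunk)))
        for chunk in retrieved_chunks
    )
```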
#### Conclusion -The evaluation process, from manual checks to automated testing, provides insights into the retrieval component's performance. Continuous improvement is essential as the RAG pipeline evolves to handle more complex queries. Future evaluations will also focus on the generation component to ensure the quality of final answers produced. +The evaluation process, including both manual and automated methods, provides insights into the retrieval component's performance, with initial failure rates of 20% and 16% indicating areas for improvement. Future work should focus on refining the evaluation approach and enhancing the retrieval system's capabilities. -For full code access, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. +For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py). ================================================== @@ -19121,42 +19047,36 @@ For full code access, refer to the [Complete Guide](https://github.com/zenml-io/ ### Evaluation in 65 Lines of Code -This section demonstrates how to evaluate the performance of a Retrieval-Augmented Generation (RAG) pipeline in 65 lines of code, building on a previous example. The complete code can be found in the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The evaluation code requires prior RAG pipeline functions. +This section demonstrates how to evaluate the performance of a Retrieval-Augmented Generation (RAG) pipeline in 65 lines of code, building on a previous example. For the complete code, refer to the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The following code requires prior RAG pipeline functions. #### Evaluation Data -The evaluation data consists of questions and their expected answers: - ```python eval_data = [ {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."}, - {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds."}, + {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds in the melodic caverns of ZenML World."}, {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."}, ] ``` #### Evaluation Functions -1. **Retrieval Evaluation**: Checks if any words from the expected answer are present in the retrieved chunks. 
- ```python - def evaluate_retrieval(question, expected_answer, corpus, top_n=2): - relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) - return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) - ``` +```python +def evaluate_retrieval(question, expected_answer, corpus, top_n=2): + relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) + return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) -2. **Generation Evaluation**: Uses OpenAI's API to assess the generated answer's relevance and accuracy. - ```python - def evaluate_generation(question, expected_answer, generated_answer): - client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) - chat_completion = client.chat.completions.create( - messages=[{"role": "system", "content": "You are an evaluation judge. Determine if the generated answer is relevant and accurate."}, - {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}"}], - model="gpt-3.5-turbo", - ) - return chat_completion.choices[0].message.content.strip().lower() == "yes" - ``` +def evaluate_generation(question, expected_answer, generated_answer): + client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) + chat_completion = client.chat.completions.create( + messages=[ + {"role": "system", "content": "You are an evaluation judge. Given a question, an expected answer, and a generated answer, determine if the generated answer is relevant and accurate. Respond with 'YES' or 'NO'."}, + {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?"} + ], + model="gpt-3.5-turbo", + ) + return chat_completion.choices[0].message.content.strip().lower() == "yes" +``` #### Evaluation Process -The evaluation process iterates through the `eval_data`, calculating retrieval and generation scores: - ```python retrieval_scores = [] generation_scores = [] @@ -19173,142 +19093,160 @@ print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") print(f"Generation Accuracy: {generation_accuracy:.2f}") ``` -The example demonstrates achieving 100% accuracy for both retrieval and generation. Future sections will cover more advanced RAG evaluation techniques. +### Summary +The code includes two evaluation functions: `evaluate_retrieval` checks if retrieved chunks contain words from the expected answer, while `evaluate_generation` uses OpenAI's API to assess the quality of generated answers. The evaluation process iterates through predefined questions and expected answers, calculating and printing the accuracy for both retrieval and generation components. The example demonstrates achieving 100% accuracy for both components, providing a foundational understanding of RAG evaluation. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/generation.md === -### Summary of Generation Evaluation in RAG Pipeline +### Summary of RAG Pipeline Generation Evaluation Documentation #### Overview -The generation component of a Retrieval-Augmented Generation (RAG) pipeline generates answers based on retrieved context. Evaluating this component is subjective and requires careful metrics to assess the quality of generated answers. 
+The generation component of a Retrieval-Augmented Generation (RAG) pipeline is evaluated to assess the quality of answers generated based on retrieved context. This evaluation is more subjective than retrieval evaluation, making it challenging to establish precise metrics. #### Handcrafted Evaluation Tests -1. **Example Creation**: Create tests based on known outputs. For instance, when querying about supported orchestrators, ensure terms like "Airflow" and "Kubeflow" are included, while "Flyte" and "Prefect" are excluded. -2. **Test Tables**: - - **`bad_answers`**: Questions with incorrect terms. - - **`bad_immediate_responses`**: Questions that should not yield certain responses. - - **`good_responses`**: Questions expected to yield specific correct terms. +- Create examples to verify that generated outputs include or exclude specific terms based on known correct or incorrect information. +- Example tests: + - **Good Responses**: Should include terms like "Airflow" and "Kubeflow". + - **Bad Responses**: Should not include unsupported terms like "Flyte" and "Prefect". + +**Tables for Evaluation:** +- `bad_answers`: Lists questions and associated bad words. +- `bad_immediate_responses`: Lists questions with incorrect immediate responses. +- `good_responses`: Lists questions expected to yield correct answers. #### Testing Code -- **Test Result Model**: - ```python - class TestResult(BaseModel): - success: bool - question: str - keyword: str = "" - response: str - ``` +1. **Test Result Class**: + ```python + class TestResult(BaseModel): + success: bool + question: str + keyword: str = "" + response: str + ``` -- **Function to Test for Bad Words**: - ```python - def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestResult: - response = process_input_with_retrieval(item["question"], n_items_retrieved) - for word in item["bad_words"]: - if word in response: - return TestResult(success=False, question=item["question"], keyword=word, response=response) - return TestResult(success=True, question=item["question"], response=response) - ``` +2. **Function to Test for Bad Words**: + ```python + def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestResult: + question = item["question"] + bad_words = item["bad_words"] + response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) + for word in bad_words: + if word in response: + return TestResult(success=False, question=question, keyword=word, response=response) + return TestResult(success=True, question=question, response=response) + ``` -- **Test Runner**: - ```python - def run_tests(test_data: list, test_function: Callable) -> float: - failures = sum(1 for item in test_data if not test_function(item).success) - failure_rate = (failures / len(test_data)) * 100 - return round(failure_rate, 2) - ``` +3. 
**Run Tests Function**: + ```python + def run_tests(test_data: list, test_function: Callable) -> float: + failures = sum(1 for item in test_data if not test_function(item).success) + failure_rate = (failures / len(test_data)) * 100 + return round(failure_rate, 2) + ``` -- **End-to-End Evaluation**: - ```python - @step - def e2e_evaluation() -> Tuple[float, float, float]: - failure_rate_bad_answers = run_tests(bad_answers, test_content_for_bad_words) - failure_rate_bad_immediate_responses = run_tests(bad_immediate_responses, test_response_starts_with_bad_words) - failure_rate_good_responses = run_tests(good_responses, test_content_contains_good_words) - return failure_rate_bad_answers, failure_rate_bad_immediate_responses, failure_rate_good_responses - ``` +4. **End-to-End Evaluation**: + ```python + @step + def e2e_evaluation() -> Tuple[float, float, float]: + failure_rate_bad_answers = run_tests(bad_answers, test_content_for_bad_words) + failure_rate_bad_immediate_responses = run_tests(bad_immediate_responses, test_response_starts_with_bad_words) + failure_rate_good_responses = run_tests(good_responses, test_content_contains_good_words) + return failure_rate_bad_answers, failure_rate_bad_immediate_responses, failure_rate_good_responses + ``` #### Automated Evaluation Using Another LLM -1. **Setup**: Use another LLM to grade the output on a scale of 1 to 5 for categories like toxicity, faithfulness, helpfulness, and relevance. -2. **Pydantic Model**: - ```python - class LLMJudgedTestResult(BaseModel): - toxicity: conint(ge=1, le=5) - faithfulness: conint(ge=1, le=5) - helpfulness: conint(ge=1, le=5) - relevance: conint(ge=1, le=5) - ``` +- Use a second LLM to grade the output of the primary LLM on a scale of 1 to 5 across categories: toxicity, faithfulness, helpfulness, and relevance. + +**Pydantic Model for Results**: +```python +class LLMJudgedTestResult(BaseModel): + toxicity: conint(ge=1, le=5) + faithfulness: conint(ge=1, le=5) + helpfulness: conint(ge=1, le=5) + relevance: conint(ge=1, le=5) +``` -3. **LLM Judged Test Function**: - ```python - def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: - response = process_input_with_retrieval(question, n_items_retrieved) - prompt = f"Analyze the text and context to provide scores for toxicity, faithfulness, helpfulness, and relevance." - response = completion(model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}]) - return LLMJudgedTestResult(**json.loads(response["choices"][0]["message"]["content"].strip())) - ``` +**LLM Judged Test Function**: +```python +def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: + response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) + # Construct prompt and call LLM for evaluation... +``` -4. **Run LLM Judged Tests**: - ```python - def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[float, float, float, float]: - dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) - results = [test_function(item["generated_questions"][0], item["page_content"]) for item in dataset] - return (sum(r.toxicity for r in results) / len(results), ...) 
- ``` +**Run LLM Judged Tests**: +```python +def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[float, float, float, float]: + dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) + # Evaluate and calculate average scores... +``` -#### Recommendations for Improvement -- Implement retries for JSON output errors. -- Use OpenAI's JSON mode for consistent output formatting. -- Explore batch processing for efficiency. -- Increase sample size for accuracy. -- Test with multiple LLMs for diverse evaluations. -- Consider human feedback for qualitative insights. +#### Considerations for Improvement +- Implement retry mechanisms for JSON output errors. +- Utilize OpenAI's JSON mode for consistent output formatting. +- Explore batch processing and increase sample sizes for more robust evaluations. +- Consider using multiple evaluators for more nuanced scoring. -#### Conclusion -This evaluation framework provides a structured approach to assess the generation component of a RAG pipeline, enabling continuous improvement and optimization tailored to specific use cases. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py). +#### Additional Resources +- For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_e2e.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md === -### Summary of Finetuning LLMs Documentation +# Summary of Finetuning LLMs Documentation -This guide provides a structured approach to finetuning large language models (LLMs) for specific tasks. Key steps include selecting a use case, gathering data, choosing a base model, and evaluating success. +## Overview +Finetuning large language models (LLMs) tailors their capabilities to specific tasks and datasets. This guide covers selecting a use case, gathering data, choosing a base model, and evaluating finetuning success. -#### Quick Assessment Questions +## Quick Assessment Questions Before starting, consider: -1. **Success Metrics**: Define success quantitatively (e.g., "95% accuracy in extracting order IDs"). -2. **Data Readiness**: Ensure you have sufficient labeled data (e.g., "1000 labeled support tickets"). -3. **Task Consistency**: Choose tasks with clear, consistent outputs (e.g., "Convert email to 5 specific fields"). -4. **Verification**: Ensure correctness can be verified by humans (e.g., "Check if extracted date matches document"). - -#### Picking a Use Case -Select a small, manageable use case that is not easily solvable by non-LLM methods. Examples include: -- **Good Use Cases**: Structured data extraction, domain-specific classification, standardized response generation. -- **Challenging Use Cases**: Open-ended chat, creative writing, general knowledge QA. - -#### Picking Data -Choose data that closely aligns with your use case to minimize annotation effort. Aim for hundreds to thousands of examples. Examples of good data sources include: -- Customer support responses. +1. **Define Success**: Quantifiable metrics (e.g., "95% accuracy in extracting order IDs"). +2. 
**Data Readiness**: Ensure data is prepared (e.g., "1000 labeled support tickets"). +3. **Task Consistency**: Tasks should be clear and consistent (e.g., "Convert email to 5 specific fields"). +4. **Human Verification**: Ensure correctness can be verified (e.g., "Check if extracted date matches document"). + +## Picking a Use Case +Choose a small, manageable use case that is not easily solved by traditional methods. For example, "triage customer support queries" is better than "answer all customer support emails." Ensure you can evaluate the approach quickly. + +## Picking Data +Select data that closely aligns with your use case to minimize annotation effort. Aim for hundreds to thousands of examples. Examples of good data sources include: +- Customer support email responses. - Manually extracted metadata. -#### Success Indicators -Evaluate your use case based on: -- **Task Scope**: Specific tasks (e.g., "Extract purchase date from receipts"). -- **Output Format**: Structured outputs vs. free-form text. -- **Data Availability**: Sufficient examples ready to use. -- **Evaluation Method**: Clear metrics vs. subjective evaluations. -- **Business Impact**: Tangible benefits vs. vague improvements. - -#### Picking a Base Model -Choose a base model based on your task: -- **Llama 3.1 8B**: Best for structured data extraction and classification. -- **Llama 3.1 70B**: Suitable for complex reasoning and technical content. -- **Mistral 7B**: Good for general text generation and dialogue. -- **Phi-2**: Ideal for lightweight tasks and rapid prototyping. - -#### Model Selection Matrix +### Good vs. Not-So-Good Use Cases +| Good Use Cases | Why It Works | Example | Data Requirements | +|----------------|--------------|---------|-------------------| +| Structured Data Extraction | Clear inputs/outputs | Extracting order details | 500-1000 annotated emails | +| Domain-Specific Classification | Well-defined categories | Categorizing support tickets | 1000+ labeled examples | +| Standardized Response Generation | Consistent format | Generating troubleshooting responses | 500+ query/response pairs | + +### Challenging Use Cases +| Challenging Use Cases | Why It's Tricky | Alternative Approach | +|-----------------------|------------------|---------------------| +| Open-ended Chat | Hard to measure success | Use instruction tuning | +| Creative Writing | Subjective quality | Focus on specific formats | +| General Knowledge QA | Too broad | Narrow down to specific domain | + +## Success Indicators +Evaluate your use case with these indicators: +| Indicator | Good Sign | Warning Sign | +|-----------|-----------|--------------| +| Task Scope | "Extract purchase date" | "Handle all inquiries" | +| Output Format | Structured JSON | Free-form text | +| Data Availability | 500+ examples ready | "We'll need to create examples" | +| Evaluation Method | Field-by-field metrics | "Users will tell us" | +| Business Impact | "Save 10 hours of data entry" | "Make AI more human-like" | + +## Picking a Base Model +Select a base model based on your use case: +- **Llama 3.1-8B**: Good for structured data extraction. +- **Llama 3.1-70B**: Suitable for complex reasoning. +- **Mistral 7B**: General text generation and dialogue. +- **Phi-2**: Lightweight tasks and rapid prototyping. 
+ +### Model Selection Matrix ```mermaid graph TD A[Choose Your Task] --> B{Structured Output?} @@ -19320,40 +19258,38 @@ graph TD F -->|No| H[Mistral-7B] ``` -#### Evaluating Success -Define clear metrics for success to assess the effectiveness of your finetuning efforts. For structured data extraction, consider: +## Evaluating Success +Define success metrics early. For structured data extraction, consider: - Accuracy of extracted fields. - Precision and recall for specific field types. - Processing time per document. - Error rates on edge cases. -#### Next Steps -With a clear understanding of how to scope your project and evaluate results, proceed to the technical implementation, including a practical example of finetuning using the Accelerate library. +## Next Steps +With a clear understanding of scoping, data selection, and evaluation, proceed to the technical implementation in the next section, which covers practical finetuning using the Accelerate library. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md === -# Finetuning an LLM with Accelerate and PEFT +### Summary: Finetuning an LLM with Accelerate and PEFT -This documentation outlines the process of finetuning a language model using the Viggo dataset, which contains over 5,000 pairs of structured meaning representations and their corresponding natural language descriptions for video game dialogues. The goal is to train models to generate fluent responses from structured inputs. +This documentation outlines the process of finetuning a language model (LLM) using the Viggo dataset, which consists of pairs of meaning representations and natural language descriptions for video game dialogues. The goal is to train models to generate natural language responses from structured inputs. -## Finetuning Pipeline - -The finetuning pipeline consists of the following steps: +#### Finetuning Pipeline +The pipeline includes the following steps: 1. **prepare_data**: Load and preprocess the Viggo dataset. 2. **finetune**: Finetune the model on the dataset. 3. **evaluate_base**: Evaluate the base model before finetuning. 4. **evaluate_finetuned**: Evaluate the finetuned model. 5. **promote**: Promote the best model to "staging" in the Model Control Plane. -For initial experiments, it is recommended to use a smaller model (e.g., Llama 3.1 ~8B parameters) to facilitate quick iterations. - -## Implementation Details +For initial experiments, it is recommended to start with smaller models (e.g., Llama 3.1 with ~8B parameters) to facilitate rapid iteration. -The `prepare_data` step is minimal, loading and tokenizing data from the Hugging Face hub. Ensure the input format is correct, especially for instruction-tuned models. Logging inputs and outputs during finetuning is advised. +#### Implementation Details +The `prepare_data` step is minimal, focusing on loading and tokenizing the dataset. Care must be taken with input data formatting, especially for instruction-tuned models. Logging inputs and outputs is advised. -Finetuning utilizes the `accelerate` library for multi-GPU support. The key code for the finetuning step is as follows: +The finetuning process utilizes the `accelerate` library for multi-GPU support. 
The core code for finetuning is as follows: ```python model = load_base_model(base_model_id, use_accelerate=use_accelerate) @@ -19376,62 +19312,31 @@ trainer = transformers.Trainer( ) ``` -### Evaluation Metrics +Key points: +- `ZenMLCallback` logs metrics to ZenML. +- The `evaluate` library computes ROUGE scores for evaluation. -The evaluation uses the `evaluate` library to compute ROUGE scores, which measure the overlap of n-grams and the longest common subsequence between generated and reference texts. Key ROUGE metrics include: -- **ROUGE-N**: n-gram overlap. -- **ROUGE-L**: Longest Common Subsequence. -- **ROUGE-W**: Weighted Longest Common Subsequence. -- **ROUGE-S**: Skip-bigram co-occurrence. - -### ZenML Accelerate Decorator - -ZenML provides a `@run_with_accelerate` decorator for streamlined distributed training: +#### Using ZenML Accelerate Decorator +ZenML offers a `@run_with_accelerate` decorator for simplified distributed training: ```python from zenml.integrations.huggingface.steps import run_with_accelerate -@run_with_accelerate(num_processes=4, multi_gpu=True, mixed_precision='bf16') +@run_with_accelerate(num_processes=4, multi_gpu=True) @step -def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id: str, output_dir: str): +def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id, output_dir): model = load_base_model(base_model_id, use_accelerate=True) - - trainer = transformers.Trainer( - # ... trainer setup as shown above - ) - + trainer = transformers.Trainer(...) # Setup as shown above trainer.train() return trainer.model ``` -### Docker Configuration - -Ensure your Docker environment is set up with CUDA support: - -```python -from zenml import pipeline -from zenml.config import DockerSettings - -docker_settings = DockerSettings( - parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", - requirements=["accelerate", "torchvision"] -) - -@pipeline(settings={"docker": docker_settings}) -def finetuning_pipeline(...): - # Your pipeline steps here -``` - -## Data Iteration and Evaluation +This decorator separates distributed training configuration from model logic and requires a properly configured Docker environment. -Careful attention to input data is crucial. Poorly formatted data can lead to suboptimal model performance. Inspect data at all stages and consider augmenting or synthetically generating data if necessary. Establish basic evaluations to measure model performance and refine hyperparameters. +#### Dataset Iteration +Careful attention to input data is crucial. Poorly formatted data can lead to subpar model performance. Inspecting data at all stages is essential. Consider augmenting data or generating synthetic data if necessary. Evaluations should be established early to measure model performance and guide further refinements. -As you progress, consider: -- Better evaluation methods. -- Model serving and inference strategies. -- Integration within existing production architectures. - -Aim for a model size that meets your use case needs while ensuring performance and efficiency. +Overall, the documentation emphasizes the importance of data quality, evaluation strategies, and iterative experimentation in the finetuning process. 
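As a concrete illustration of the ROUGE scoring mentioned in this section, here is a minimal sketch using the Hugging Face `evaluate` library; the prediction and reference strings are toy stand-ins:

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the model generates a short natural language response"],
    references=["the model should generate a short response in natural language"],
)
# Returns F-measures such as rouge1, rouge2, rougeL, and rougeLsum, each between 0 and 1.
print(scores)
```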
================================================== @@ -19439,7 +19344,7 @@ Aim for a model size that meets your use case needs while ensuring performance a ### Summary: Fine-tuning an LLM in 100 Lines of Code -This documentation provides a concise guide for implementing a fine-tuning pipeline for a language model (LLM) using TinyLlama (1.1B parameters). The example demonstrates how to load a model, prepare a dataset, fine-tune the model, and generate responses. +This documentation outlines a concise implementation of a fine-tuning pipeline for a language model (LLM) using the TinyLlama model. The example demonstrates loading the model, preparing a dataset, fine-tuning, and generating responses. #### Key Components: @@ -19466,7 +19371,7 @@ This documentation provides a concise guide for implementing a fine-tuning pipel return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) ``` -4. **Model Fine-tuning**: The model is fine-tuned with specified training parameters: +4. **Model Fine-tuning**: The model is fine-tuned with specified training arguments: ```python def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0") -> Tuple[AutoModelForCausalLM, AutoTokenizer]: tokenizer = AutoTokenizer.from_pretrained(base_model) @@ -19484,7 +19389,6 @@ This documentation provides a concise guide for implementing a fine-tuning pipel logging_steps=10, save_total_limit=2, ) - trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_dataset, data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)) trainer.train() return model, tokenizer @@ -19493,114 +19397,120 @@ This documentation provides a concise guide for implementing a fine-tuning pipel 5. **Response Generation**: The fine-tuned model generates responses based on prompts: ```python def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer, max_length: int = 128) -> str: - inputs = tokenizer(f"### Instruction: {prompt}\n### Response:", return_tensors="pt").to(model.device) + formatted_prompt = f"### Instruction: {prompt}\n### Response:" + inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=max_length, temperature=0.7, num_return_sequences=1) return tokenizer.decode(outputs[0], skip_special_tokens=True) ``` +6. **Execution**: The main function fine-tunes the model and tests it with sample prompts. + #### Limitations: -- **Dataset Size**: The example uses only three training examples, which is insufficient for robust performance. -- **Model Size**: Larger models may yield better results but require more resources. -- **Training Time**: Minimal epochs and a simple learning rate are used for demonstration. -- **Evaluation**: Proper evaluation metrics and validation data are necessary for production systems. +- Small dataset size (3 examples). +- Limited training epochs and simple learning rate. +- No evaluation metrics included. #### Next Steps: -The guide suggests exploring advanced topics such as: -- Larger models and datasets -- Evaluation metrics -- Parameter-efficient fine-tuning (PEFT) -- Experiment tracking and model management -- Deployment of fine-tuned models +Future sections will cover: +- Larger models and datasets. +- Evaluation metrics. +- Parameter-efficient fine-tuning (PEFT). +- Experiment tracking and model management. +- Deployment of fine-tuned models. 
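The execution step described in point 6 might look like this minimal sketch, reusing the functions summarized above:

```python
if __name__ == "__main__":
    # Fine-tune, then sanity-check the model on a sample prompt.
    model, tokenizer = fine_tune_model("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    print(generate_response("What is ZenML?", model, tokenizer))
```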
-This summary captures the essential technical details and key points for implementing the LLM fine-tuning pipeline. +This implementation serves as a foundational example for understanding LLM fine-tuning concepts. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === -### Summary: Finetuning LLMs - -This documentation focuses on finetuning Large Language Models (LLMs) to enhance performance and reduce costs. Key areas covered include: - -1. **Previous Learnings**: - - Utilization of RAG with ZenML. - - Evaluation of RAG systems. - - Improvement of retrieval through reranking. - - Finetuning embeddings for RAG systems. - -2. **Purpose of Finetuning**: - - Finetuning is beneficial in scenarios where specific tasks or domain knowledge are required, such as: - - Generating responses in a specific format. - - Understanding domain-specific terminology. - - Reducing prompt length for consistent outputs. - - Following specific patterns or protocols. - - Optimizing for latency by minimizing context window size. +### Summary of LLM Finetuning Documentation -3. **Guide Structure**: - - **Finetuning in 100 lines of code**: A concise code example for finetuning. - - **Why and when to finetune LLMs**: Guidelines on the necessity of finetuning. - - **Starter choices with finetuning**: Initial options for finetuning approaches. - - **Finetuning with 🤗 Accelerate**: Utilizing the Accelerate library for efficient finetuning. - - **Evaluation for finetuning**: Methods to assess finetuning effectiveness. - - **Deploying finetuned models**: Steps for deploying models post-finetuning. - - **Next steps**: Guidance on further actions after finetuning. +**Overview:** +This documentation focuses on finetuning Large Language Models (LLMs) for specific tasks or to enhance performance and cost-effectiveness. While previous sections covered RAG (Retrieval-Augmented Generation) systems, this section delves into the scenarios and methodologies for finetuning LLMs on custom data. + +**Key Scenarios for Finetuning:** +- Improve response generation in specific formats. +- Enhance understanding of domain-specific terminology. +- Reduce prompt length for consistent outputs. +- Follow specific patterns or protocols. +- Optimize latency by minimizing context window requirements. + +**Guide Structure:** +1. [Finetuning in 100 lines of code](finetuning-100-loc.md) +2. [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) +3. [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) +4. [Finetuning with 🤗 Accelerate](finetuning-with-accelerate.md) +5. [Evaluation for finetuning](evaluation-for-finetuning.md) +6. [Deploying finetuned models](deploying-finetuned-models.md) +7. [Next steps](next-steps.md) -4. **Execution**: - - The finetuning process is straightforward, emphasizing the importance of understanding when to finetune, evaluating performance, and selecting appropriate data. - - For practical implementation, refer to the `llm-lora-finetuning` repository [here](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning), which contains the full code. This code can be executed locally (with a GPU) or on cloud compute. +**Important Notes:** +- The guide does not follow a specific use case but emphasizes understanding the need for finetuning, performance evaluation, and data selection. 
+- For practical implementation, refer to the [`llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) for complete code, which can be executed locally (with GPU) or on cloud platforms.
 
-This guide does not adhere to a specific use case but provides a comprehensive overview of the finetuning process for LLMs.
+This summary encapsulates the essential information on LLM finetuning, giving a clear picture of its purpose, typical scenarios, and the structure of the guide.
 
==================================================
 
=== File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md ===
 
-# Summary of Evaluation for LLM Finetuning
+# Evaluation for LLM Finetuning
 
-Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help catch issues early, track progress, and ensure models behave as expected. An incremental approach to building evaluations is recommended to avoid paralysis in the development process.
+Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help ensure models behave as expected, catch issues early, and track progress over time. An incremental approach to building evaluations is recommended to facilitate early implementation and avoid paralysis by analysis.
 
## Motivation and Benefits
+
Key motivations for implementing evals include:
1. **Prevent Regressions**: Ensure new changes do not harm existing functionality.
-2. **Track Improvements**: Quantify model enhancements over iterations.
+2. **Track Improvements**: Quantify and visualize model enhancements over iterations.
3. **Ensure Safety and Robustness**: Identify and mitigate risks, biases, or unexpected behaviors.
 
-A robust evaluation strategy leads to more reliable and performant LLMs while providing insights into model capabilities and limitations.
+A robust evaluation strategy leads to more reliable and performant finetuned LLMs.
 
## Types of Evaluations
+
-While generic evaluation frameworks are common, custom evaluations tailored to specific use cases are crucial. They can be categorized into:
-1. **Success Modes**: Focus on desired outputs (e.g., correct formatting, appropriate responses).
-2. **Failure Modes**: Target undesired outputs (e.g., hallucinations, incorrect formats).
+While generic evaluation frameworks are common, custom evals tailored to specific use cases are also important. Custom evaluations can be categorized into:
+
+1. **Success Modes**: Focus on desired outputs, such as:
+   - Correct formatting
+   - Appropriate responses to prompts
+   - Desired behavior in edge cases
+
+2.
**Failure Modes**: Target undesired outputs, including: + - Hallucinations + - Incorrect formats + - Biased or incoherent responses ### Example Code for Custom Evals + ```python from my_library import query_llm good_responses = { - "what are the best salads available at the food court?": ["caesar", "italian"], - "how late is the shopping center open until?": ["10pm", "22:00", "ten"] + "what are the best salads?": ["caesar", "italian"], + "how late is the shopping center open?": ["10pm", "22:00"] } for question, answers in good_responses.items(): - llm_response = query_llm(question) - assert any(answer in llm_response for answer in answers) + assert any(answer in query_llm(question) for answer in answers) bad_responses = { - "who is the manager of the shopping center?": ["tom hanks", "spiderman"] + "who is the manager?": ["tom hanks", "spiderman"] } for question, answers in bad_responses.items(): - llm_response = query_llm(question) - assert not any(answer in llm_response for answer in answers) + assert not any(answer in query_llm(question) for answer in answers) ``` ## Generalized Evals and Frameworks + Generalized evals provide structured evaluation approaches, including: -- Organization of evals -- Standardized metrics -- Insights into overall performance +- Organization and structuring of evals +- Standardized metrics for common tasks +- Insights into overall model performance -However, they should be complemented with custom evals. Recommended frameworks include: +Consider using frameworks like: - [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate) - [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html) - [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html) @@ -19608,14 +19518,15 @@ However, they should be complemented with custom evals. Recommended frameworks i - [nervaluate](https://github.com/MantisAI/nervaluate) ## Data and Tracking -Regular analysis of inference data is vital for identifying patterns and guiding improvements. Implement comprehensive logging early to track model behavior. Suggested frameworks for data collection include: + +Regular analysis of inference data is crucial for identifying patterns and guiding improvements. Implement comprehensive logging early to track model behavior. Consider using frameworks for structured data collection and analysis, such as: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) - [langsmith](https://smith.langchain.com/) - [langfuse](https://langfuse.com/) - [braintrust](https://www.braintrust.dev/) -Creating simple dashboards to visualize key performance metrics is also recommended for monitoring progress and assessing the impact of changes. +Creating simple dashboards to visualize core performance metrics can help monitor progress and assess the impact of changes effectively. Prioritize simplicity over perfection in your dashboard implementation. ================================================== @@ -19625,65 +19536,64 @@ Creating simple dashboards to visualize key performance metrics is also recommen This guide provides an overview of finetuning large language models (LLMs) on custom data. Key points include: -- **Finetuning Limitations**: It is not a universal solution and may not achieve desired accuracy. It introduces technical debt. -- **Use Cases Beyond Chatbots**: LLMs can be applied in various contexts, often with lower failure rates than chatbots. 
-- **Experimentation First**: Finetuning should be a final step after exploring simpler alternatives like smaller models or Retrieval-Augmented Generation (RAG).
+- **Finetuning Limitations**: It's not a universal solution and may not achieve desired accuracy. It incurs technical debt and should be considered after exploring other methods.
+- **Use Cases Beyond Chatbots**: LLMs can be applied in various contexts beyond chat interfaces, often with lower failure rates than chatbots.
 
-### Scenarios for Finetuning
+### When to Finetune an LLM
 
-Finetuning is beneficial in the following situations:
+Finetuning is beneficial in specific scenarios:
 
-1. **Domain-Specific Knowledge**: For deep understanding in specialized fields (e.g., medical, legal).
-2. **Consistent Style/Format**: When specific output formats are required, such as in code generation.
-3. **Task Accuracy**: For improved performance on critical tasks.
-4. **Proprietary Information**: When handling sensitive data that cannot be sent to external APIs.
-5. **Custom Instructions**: To embed frequently used prompts directly into the model.
-6. **Efficiency**: To enhance performance with shorter prompts, reducing costs and latency.
+1. **Domain-Specific Knowledge**: Necessary for deep understanding in fields like medical or legal, especially if the base model lacks relevant training data.
+2. **Consistent Style/Format**: Required for specific outputs, such as code generation.
+3. **Improved Accuracy**: Needed for critical tasks.
+4. **Proprietary Information**: Essential for handling confidential data without external API use.
+5. **Custom Instructions**: Repeated prompts can be integrated into the model to save on latency and costs.
+6. **Efficiency**: Can enhance performance with shorter prompts.
 
### Decision Flowchart
 
```mermaid
flowchart TD
    A[Should I finetune an LLM?] --> B{Is prompt engineering<br/>sufficient?}
-    B -->|Yes| C[Use prompt engineering]
-    B -->|No| D{Is it a knowledge retrieval<br/>problem?}
+    B -->|Yes| C[Use prompt engineering<br/>No finetuning needed]
+    B -->|No| D{Is it primarily a<br/>knowledge retrieval<br/>problem?}
    D -->|Yes| E{Is real-time data<br/>access needed?}
-    E -->|Yes| F[Use RAG]
-    E -->|No| G{Is data volume<br/>very large?}
+    E -->|Yes| F[Use RAG<br/>No finetuning needed]
+    E -->|No| G{Is data volume<br/>very large?}
    G -->|Yes| H[Consider hybrid:<br/>RAG + Finetuning]
    G -->|No| F
    D -->|No| I{Is it a narrow,<br/>specific task?}
    I -->|Yes| J{Can a smaller<br/>specialized model<br/>handle it?}
-    J -->|Yes| K[Use smaller model]
+    J -->|Yes| K[Use smaller model<br/>No finetuning needed]
    J -->|No| L[Consider finetuning]
-    I -->|No| M{Do you need<br/>consistent style?<br/>or format?}
+    I -->|No| M{Do you need<br/>consistent style<br/>or format?}
    M -->|Yes| L
    M -->|No| N{Is deep domain<br/>expertise required?}
    N -->|Yes| O{Is the domain<br/>well-represented in<br/>base model?}
-    O -->|Yes| P[Use base model]
+    O -->|Yes| P[Use base model<br/>No finetuning needed]
    O -->|No| L
    N -->|No| Q{Is data<br/>proprietary/sensitive?}
    Q -->|Yes| R{Can you use<br/>API solutions?}
-    R -->|Yes| S[Use API solutions]
+    R -->|Yes| S[Use API solutions<br/>No finetuning needed]
    R -->|No| L
    Q -->|No| S
```
 
-### Alternatives to Finetuning
+### Alternatives to Consider
 
Before finetuning, consider:
 
- **Prompt Engineering**: Often effective without finetuning.
-- **RAG**: More effective for specific knowledge bases.
-- **Smaller Models**: Task-specific models may outperform finetuned LLMs.
-- **API Solutions**: Simpler and cost-effective for non-sensitive data. +- **Retrieval-Augmented Generation (RAG)**: More effective for specific knowledge bases. +- **Smaller Task-Specific Models**: May outperform finetuned LLMs for narrow tasks. +- **API Solutions**: Simpler and cost-effective for non-sensitive data applications. -Finetuning can be powerful but should only be pursued after simpler solutions have been exhausted and a clear need for its benefits is established. The next section will cover practical considerations for finetuning LLMs. +Finetuning should be a last resort after exploring simpler solutions, ensuring it aligns with specific needs. The next section will cover practical considerations for finetuning LLMs. ================================================== @@ -19691,29 +19601,28 @@ Finetuning can be powerful but should only be pursued after simpler solutions ha # Next Steps -After iterating on your finetuned model, assess the following key areas: +After iterating to improve your finetuned model, consider the following key areas: -- Factors improving model performance -- Factors degrading model performance -- Minimum viable model size -- Alignment with company processes (iteration time vs. hardware limitations) -- Effectiveness of the model in addressing the business use case +- Identify factors that enhance or degrade model performance. +- Determine the minimum viable model size. +- Assess the feasibility of iteration time within your hardware constraints. +- Ensure the finetuned model effectively addresses your business use case. -These insights will guide your next steps, which may include: +Next steps may include: -- Scaling for more users or real-time scenarios -- Meeting critical accuracy requirements, possibly necessitating a larger model -- Integrating LLM finetuning into your production systems, including monitoring and evaluation +- Scaling for more users or real-time scenarios. +- Meeting critical accuracy requirements, potentially necessitating a larger model. +- Integrating the LLM finetuning component into your production system, including monitoring, logging, and ongoing evaluation. -While it may be tempting to switch to larger models, enhancing your data quality is often more impactful, especially if starting with limited examples. Consider adding data through a [flywheel approach](https://www.sh-reya.com/blog/ai-engineering-flywheel/) or generating synthetic data before upgrading to a more powerful model. +While it may be tempting to switch to larger models, focus on enhancing your data quality first, especially if you started with limited examples. Expanding your dataset, either through a flywheel approach or synthetic data generation, is crucial before upgrading to a more powerful model. ## Resources -Recommended resources for further learning on LLM finetuning: +Recommended resources for LLM finetuning: -- [Mastering LLMs Course](https://parlance-labs.com/education/) - Video course by Hamel Husain and Dan Becker -- [Phil Schmid's blog](https://www.philschmid.de/) - Examples of LLM finetuning techniques -- [Sam Witteveen's YouTube channel](https://www.youtube.com/@samwitteveenai) - Videos on finetuning and prompt engineering +- [Mastering LLMs Course](https://parlance-labs.com/education/): Video course by Hamel Husain and Dan Becker. +- [Phil Schmid's blog](https://www.philschmid.de/): Offers worked examples of LLM finetuning. 
+- [Sam Witteveen's YouTube channel](https://www.youtube.com/@samwitteveenai): Covers finetuning, prompt engineering, and base model explorations.
 
==================================================
 
=== File: docs/book/user-guide/llmops-guide/finetuning-llms/deploying-finetuned-models.md ===
 
# Deployment Options for Finetuned LLMs
 
-Deploying a finetuned LLM is essential for integrating your custom model into real-world applications. This process requires careful planning to ensure performance, reliability, and cost-effectiveness.
+Deploying a finetuned LLM is essential for integrating it into real-world applications. Key aspects include performance, reliability, and cost-effectiveness.
 
## Deployment Considerations
-
-Key factors influencing deployment include:
-
-- **Resource Requirements**: LLMs need substantial RAM, processing power, and specialized hardware. Balancing performance and cost is crucial.
-- **Real-Time Needs**: Consider failover scenarios, conduct benchmarks, and model expected user loads. Choose between streaming and non-streaming approaches based on latency and resource use.
-- **Optimization Techniques**: Techniques like quantization can reduce resource usage but may require thorough evaluation to avoid performance degradation.
+- **Resource Requirements**: LLMs need substantial RAM, processing power, and specialized hardware. Balance performance and cost based on your use case.
+- **Real-Time Needs**: Plan for immediate responses and failover scenarios, and conduct benchmarks and load testing.
+- **Streaming vs. Non-Streaming**: Choose based on latency and resource utilization.
+- **Optimization Techniques**: Use quantization to reduce resource footprint, but evaluate impacts on performance.
 
## Deployment Options and Trade-offs
-
-1. **Roll Your Own**: Set up and manage your own infrastructure for maximum control, typically using Docker (e.g., FastAPI).
-2. **Serverless Options**: Scalable and cost-efficient, but may face latency issues due to cold starts.
-3. **Always-On Options**: Minimizes latency but incurs costs during idle times.
+1. **Roll Your Own**: Set up and manage your own infrastructure (e.g., Docker, FastAPI). Offers control but requires expertise.
+2. **Serverless Options**: Scalable and cost-efficient, but may have latency issues due to "cold starts."
+3. **Always-On Options**: Minimizes latency but incurs costs even during idle periods.
 4. **Fully Managed Solutions**: Simplifies deployment but may limit flexibility and increase costs.
 
-Consider your team's expertise, budget, load patterns, and specific requirements when choosing an option.
+Consider team expertise, budget, load patterns, and specific requirements when choosing a deployment option.
 
## Deployment with vLLM and ZenML
+[vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML integrates with vLLM for easy deployment.
 
-[vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML integrates with vLLM, allowing easy deployment of finetuned models.
- -### Example Code: ```python from zenml import pipeline -from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() -def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "my_finetuned_llm"]: +def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> VLLMDeploymentService: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` -The `model` argument can be a local model path or a Hugging Face Hub ID. +The `model` argument can be a local path or a Hugging Face Hub ID. The `VLLMDeploymentService` allows batch inference using an OpenAI-compatible API. ## Cloud-Specific Deployment Options - -- **AWS**: Use Amazon SageMaker for managed LLM deployment, or AWS Lambda with API Gateway for serverless options. Amazon ECS or EKS with Fargate offers more control. -- **GCP**: Google Cloud AI Platform provides managed services similar to SageMaker. For serverless, use Cloud Run; for more control, consider Google Kubernetes Engine (GKE). +- **AWS**: Use Amazon SageMaker for managed ML services, or AWS Lambda with API Gateway for serverless. Amazon ECS/EKS with Fargate offers container orchestration. +- **GCP**: Google Cloud AI Platform provides managed services similar to SageMaker. Cloud Run is a serverless option, while GKE offers fine-grained control. ## Architectures for Real-Time Engagement - -To engage users in real-time, deploy models behind a load balancer with auto-scaling. Implement caching (e.g., Redis) for frequent responses and use asynchronous architectures with message queues (e.g., Amazon SQS) for complex queries. For global deployments, consider edge computing services like AWS Lambda@Edge. +Deploy models behind a load balancer with auto-scaling for responsiveness. Implement caching (e.g., Redis) to enhance performance and use asynchronous architectures with message queues for complex queries. For global deployments, consider edge computing services like AWS Lambda@Edge or CloudFront Functions. ## Reducing Latency and Increasing Throughput - -Optimize for low latency and high throughput using: - -- **Model Optimization**: Techniques like quantization and distillation. -- **Hardware Acceleration**: Use GPU instances for faster inference. -- **Request Batching**: Process multiple inputs simultaneously. -- **Monitoring**: Use profiling tools to identify bottlenecks and continuously refine your deployment. +- **Model Optimization**: Use quantization and distillation to improve efficiency. +- **Hardware Acceleration**: Leverage GPU instances for faster inference. +- **Request Batching**: Process multiple inputs simultaneously to increase throughput. +- **Monitoring**: Use profiling tools to identify bottlenecks and continuously optimize deployment. ## Monitoring and Maintenance - -Post-deployment, monitor: - +Ongoing monitoring is crucial: 1. **Evaluation Failures**: Regularly assess model performance. -2. **Latency Metrics**: Ensure response times meet requirements. -3. **Load Patterns**: Analyze user interactions for scaling and optimization. -4. **Data Analysis**: Identify trends and biases in model inputs/outputs. +2. **Latency Metrics**: Track response times. +3. **Load Patterns**: Monitor user interactions for scaling insights. +4. **Data Analysis**: Analyze inputs/outputs for trends and biases. -Ensure compliance with data protection regulations when logging responses. 
By considering these deployment options and maintaining vigilant monitoring, you can ensure optimal performance of your finetuned LLM.
+Ensure compliance with data protection regulations during logging. By considering these deployment options and maintaining monitoring practices, you can ensure optimal performance of your finetuned LLM.
 
==================================================
 
=== File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md ===
 
### Evaluating Reranking Performance with ZenML
 
-This documentation outlines how to evaluate the performance of a reranking model using ZenML, focusing on comparing retrieval performance before and after reranking.
+This documentation outlines how to evaluate the performance of a reranking model using ZenML. The evaluation involves comparing retrieval performance before and after applying reranking, utilizing established metrics.
 
-#### Key Steps for Evaluation
+#### Key Steps in Evaluation
 
-1. **Retrieval Evaluation Function**: The core function `perform_retrieval_evaluation` assesses retrieval performance based on generated questions and relevant documents. It checks if the expected URL is present in the retrieved results and calculates the failure rate.
+1. **Retrieval Evaluation Function**:
+   The core function `perform_retrieval_evaluation` evaluates retrieval performance based on generated questions and relevant documents. It checks whether the expected URL ending appears among the retrieved results and calculates the failure rate.
 
```python
def perform_retrieval_evaluation(sample_size: int, use_reranking: bool) -> float:
    dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train")
    sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size))
-    failures = sum(
-        1 for item in sampled_dataset if not any(item["filename"].split("/")[-1] in url for url in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1], use_reranking)[2])
-    )
-    return round((failures / len(sampled_dataset)) * 100, 2)
+    total_tests, failures = len(sampled_dataset), 0
+
+    for item in sampled_dataset:
+        question = item["generated_questions"][0]
+        url_ending = item["filename"].split("/")[-1]
+        _, _, urls = query_similar_docs(question, url_ending, use_reranking)
+
+        # A test fails when no retrieved URL contains the expected ending.
+        if not any(url_ending in url for url in urls):
+            logging.error(f"Failed for question: {question}. Expected: {url_ending}. Got: {urls}")
+            failures += 1
+
+    failure_rate = (failures / total_tests) * 100
+    return round(failure_rate, 2)
```
 
-2. **Evaluation Steps**: Two steps are defined to execute the retrieval evaluation: one without reranking and one with reranking.
+2. **Evaluation Steps**:
+   Two steps are defined to execute evaluations with and without reranking, returning the respective failure rates.
 
```python
@step
@@ -19821,25 +19728,27 @@ This documentation outlines how to evaluate the performance of a reranking model
    return perform_retrieval_evaluation(sample_size, use_reranking=True)
```
 
-3. **Logging and Insights**: The evaluation steps log the failure rates, allowing for inspection of specific failures in the dashboard or terminal.
-
-4. **Visualization**: The `visualize_evaluation_results` function creates a bar chart to visualize various evaluation metrics, including failure rates and scores for different aspects of the model's performance.
+3. **Visualizing Results**:
+   The `visualize_evaluation_results` function creates a bar chart to visualize evaluation metrics, including failure rates and various scores.
```python
@step(enable_cache=False)
def visualize_evaluation_results(...):
-    scores = [score / 20 for score in [...]]  # Normalization
+    scores = [small_retrieval_eval_failure_rate, small_retrieval_eval_failure_rate_reranking, ...]
    labels = ["Small Retrieval Eval Failure Rate", ...]
-    plt.barh(range(len(labels)), scores)
-    plt.xlabel("Score")
-    plt.title(f"Evaluation Metrics for {step_context.pipeline_run.name}")
+    fig, ax = plt.subplots(figsize=(10, 6))
+    ax.barh(range(len(labels)), scores)
+    ax.set_yticks(range(len(labels)))
+    ax.set_yticklabels(labels)
    plt.tight_layout()
-    return Image.open(io.BytesIO(buf.getvalue()))
+    buf = io.BytesIO()
+    plt.savefig(buf, format="png")
+    buf.seek(0)  # rewind the buffer so PIL reads the image from the start
+    return Image.open(buf)
```
 
#### Running the Evaluation Pipeline
 
-To evaluate the reranking model, clone the project repository and run the evaluation pipeline:
+To run the evaluation pipeline, clone the project repository and execute the evaluation command:
 
```bash
git clone https://github.com/zenml-io/zenml-projects.git
@@ -19847,11 +19756,11 @@ cd llm-complete-guide
python run.py --evaluation
```
 
-This will execute the evaluation pipeline and display results in the dashboard.
+This will output results to the dashboard, allowing for inspection of progress and logs.
 
-#### Conclusion
+### Conclusion
 
-The evaluation process allows for a clear comparison of retrieval performance with and without reranking, providing insights into the effectiveness of the reranking model and guiding further improvements to the retrieval system.
+The documentation provides a structured approach to evaluate and visualize the performance of a reranking model in ZenML, highlighting the importance of analyzing failure rates and overall retrieval performance.
 
==================================================
 
=== File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md ===
 
### Reranking Overview
 
-**Definition**: Reranking refines the initial ranking of documents retrieved by a system, enhancing the relevance and quality of documents used in Retrieval-Augmented Generation (RAG). The initial retrieval often employs sparse methods like BM25 or TF-IDF, which may not fully capture semantic meaning.
+**Definition**: Reranking refines the initial ranking of documents retrieved by a system, enhancing the relevance and quality of documents used in Retrieval-Augmented Generation (RAG). The initial retrieval often employs sparse methods like BM25 or TF-IDF, which focus on lexical matching but may miss semantic context.
 
-**Purpose**: Rerankers reorder documents based on features such as semantic similarity and relevance scores, ensuring that the most relevant documents are prioritized for generating coherent responses.
+**Function**: Rerankers reorder retrieved documents by evaluating additional features such as semantic similarity and relevance scores, ensuring the most informative documents are prioritized for LLMs.
 
### Types of Rerankers
 
@@ -19874,57 +19783,60 @@ The evaluation process allows for a clear comparison of retrieval performance wi
 
2. **Bi-Encoders**:
   - Input: Separate encoders for query and document.
-   - Process: Generates independent embeddings and computes similarity.
+   - Process: Generate independent embeddings and compute similarity.
   - Pros: More efficient.
   - Cons: Weaker interaction capture.
 
3. **Lightweight Models**:
   - Examples: Distilled models or small transformer variants.
-   - Pros: Faster and smaller, suitable for real-time applications.
+ - Pros: Fast and smaller footprint, suitable for real-time applications. ### Benefits of Reranking in RAG -1. **Improved Relevance**: Identifies the most relevant documents, enhancing the LLM's context for accurate responses. -2. **Semantic Understanding**: Captures semantic meaning, retrieving documents that are contextually relevant even without exact keyword matches. -3. **Domain Adaptation**: Can be fine-tuned on specific data to incorporate domain knowledge. +1. **Improved Relevance**: Identifies the most relevant documents for queries, enhancing LLM context. +2. **Semantic Understanding**: Captures semantic meaning beyond keyword matching, retrieving documents that are contextually relevant. +3. **Domain Adaptation**: Fine-tuning on domain-specific data incorporates relevant knowledge for improved performance. 4. **Personalization**: Tailors document retrieval based on user preferences and historical interactions. -### Next Steps - -The next section will cover implementing reranking in ZenML and integrating it into the RAG inference pipeline. +### Implementation Note +The next section will cover how to implement reranking in ZenML and integrate it into the RAG inference pipeline. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/README.md === -### Summary: Adding Reranking to RAG Inference in ZenML +**Reranking in RAG Inference with ZenML** -Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. +Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section outlines how to integrate a reranker into your RAG inference pipeline in ZenML. **Key Points:** -- Reranking is an optional enhancement to the existing workflow, which includes data ingestion, preprocessing, embeddings generation, and retrieval. -- It aims to improve the relevance and quality of retrieved documents, leading to better responses from the LLM. -- Basic evaluation metrics have been established to assess retrieval performance. +- Rerankers are optional but can significantly boost the relevance and quality of retrieved documents, leading to improved LLM responses. +- The overall workflow includes data ingestion, preprocessing, embeddings generation, and retrieval, with evaluation metrics set to assess performance. -**Visual Reference:** -- A diagram illustrating the reranking workflow is included, showing its position in the overall system. +**Workflow Overview:** +1. **Setup**: Establish the retrieval system workflow. +2. **Integration**: Add the reranker to reorder documents based on additional scoring. -By incorporating a reranker, users can achieve better retrieval performance in their applications. +Reranking is a valuable enhancement to optimize retrieval performance in your system. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md === -### Implementing Reranking in ZenML +## Implementing Reranking in ZenML + +This documentation outlines how to integrate a reranker into an existing RAG (Retrieval-Augmented Generation) pipeline using the `rerankers` package. The reranker reorders retrieved documents based on their relevance to a given query. 
+ +### Adding Reranking -This documentation explains how to integrate a reranker into an existing RAG (Retrieval-Augmented Generation) pipeline using the `rerankers` package. The reranker reorders retrieved documents based on their relevance to a query. +1. **Dependency**: Use the `rerankers` package, which provides a `Reranker` abstract class for defining custom rerankers or using existing implementations. -#### Adding Reranking +2. **Functionality**: The reranker takes a query and a list of documents, returning a reordered list based on reranking scores. -1. **Dependency**: Use the `rerankers` package, which provides an abstract `Reranker` class for defining custom rerankers or using existing implementations. -2. **Input/Output**: The reranker takes a query and a list of documents, returning a reordered list based on reranking scores. +### Example Code + +#### Basic Reranking -**Example Code**: ```python from rerankers import Reranker @@ -19941,12 +19853,15 @@ texts = [ results = ranker.rank(query="What's your favorite sport?", docs=texts) ``` -**Sample Output**: -``` +#### Output Format + +The results will include ranked documents with their scores: + +```python RankedResults( results=[ - Result(doc_id=5, text='I like to play basketball', score=-0.4653, rank=1), - Result(doc_id=0, text='I like to play soccer', score=-0.7354, rank=2), + Result(doc_id=5, text='I like to play basketball', score=-0.465, rank=1), + Result(doc_id=0, text='I like to play soccer', score=-0.735, rank=2), ... ], query="What's your favorite sport?", @@ -19954,9 +19869,9 @@ RankedResults( ) ``` -#### Custom Reranking Function +### Custom Reranking Function -A helper function can be created to rerank documents based on a query: +A helper function to rerank documents based on a query: ```python def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = "flashrank") -> List[Tuple[str, str]]: @@ -19967,11 +19882,9 @@ def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = " return [(results.results[i].text, documents[results.results[i].doc_id][1]) for i in range(len(results.results))] ``` -This function returns a list of tuples with reranked document text and original URLs. +### Querying Similar Documents -#### Querying Similar Documents - -The reranked documents can be integrated into a query function: +Function to query and optionally rerank documents: ```python def query_similar_docs(question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5) -> Tuple[str, str, List[str]]: @@ -19989,25 +19902,27 @@ def query_similar_docs(question: str, url_ending: str, use_reranking: bool = Fal return (question, url_ending, urls) ``` -This function retrieves similar documents, optionally reranks them, and returns the top URLs. - -#### Evaluation +### Conclusion -To assess the performance of the reranker, refer to the complete code in the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) repository, specifically the [eval_retrieval.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py). +This integration allows for improved document retrieval quality by reordering results based on relevance scores. 
For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/reranking.md === -**Summary: Adding Reranking to RAG Inference in ZenML** +### Summary: Adding Reranking to RAG Inference in ZenML -Rerankers enhance retrieval systems using LLMs by reordering retrieved documents based on additional features or scores, improving their quality. This section outlines how to integrate a reranker into your RAG inference pipeline in ZenML, following the established workflow of data ingestion, preprocessing, embeddings generation, and retrieval. +Rerankers enhance retrieval systems that utilize LLMs by reordering retrieved documents based on additional features or scores, improving their quality. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. -While reranking is optional, it can significantly boost the relevance and quality of retrieved documents, leading to improved LLM responses. +**Key Points:** +- Rerankers are optional but can significantly enhance the relevance of retrieved documents, leading to better LLM responses. +- The overall workflow includes data ingestion, preprocessing, embeddings generation, retrieval, and evaluation metrics. +- Reranking is an additional step that can be implemented after setting up the initial retrieval system. - +**Visual Reference:** +- A workflow diagram illustrates the reranking process within the existing setup (not included here). -In summary, incorporating a reranker can optimize your retrieval performance in ZenML. +By incorporating reranking, you can optimize the performance of your retrieval system in ZenML. ================================================== @@ -20015,21 +19930,23 @@ In summary, incorporating a reranker can optimize your retrieval performance in ### Summary of Documentation for Synthetic Data Generation with Distilabel -**Objective:** Generate synthetic data using `distilabel` to fine-tune embeddings based on an existing dataset of technical documentation. +**Objective**: Generate synthetic data to fine-tune embeddings using an existing dataset of technical documentation. -**Dataset:** The dataset used is available on Hugging Face [here](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0) and consists of `page_content` and source URLs. The goal is to pair `page_content` with generated questions. +**Dataset**: The dataset can be accessed [here](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0), consisting of `page_content` and source URLs. The goal is to pair `page_content` with generated questions. -**Pipeline Overview:** +**Pipeline Overview**: 1. Load the Hugging Face dataset. 2. Use `distilabel` to generate synthetic data. 3. Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. -**Synthetic Data Generation:** -- `distilabel` enables scalable knowledge distillation from LLMs. -- The pipeline uses `gpt-4o` as the LLM but supports other models. -- The `GenerateSentencePair` step creates appropriate and inappropriate queries for each documentation chunk. +**Synthetic Data Generation**: +- **Tool**: `distilabel` generates synthetic data and provides AI feedback. 
+- **LLM**: Using `gpt-4o` (other LLMs supported).
+- **Pipeline Steps**:
+  - Load dataset and map `page_content` to `anchor`.
+  - Generate queries using `GenerateSentencePair`, producing both positive and negative queries.
 
-**Key Code Snippet:**
+**Code Snippet**:
```python
import os
from typing import Annotated, Tuple
 
@@ -20043,7 +19960,6 @@ from zenml import step
 
@step
def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> Tuple[Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"]]:
    llm = OpenAILLM(model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY"))
-
    with distilabel.pipeline.Pipeline(name="generate_embedding_queries") as pipeline:
        load_dataset = LoadDataFromHub(output_mappings={"page_content": "anchor"})
        generate_sentence_pair = GenerateSentencePair(triplet=True, action="query", llm=llm, input_batch_size=10, context=synthetic_generation_context)
@@ -20055,30 +19971,31 @@ def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) ->
    return train_distiset["default"]["train"], test_distiset["default"]["train"]
```
 
-**Data Annotation with Argilla:**
+**Data Annotation with Argilla**:
- After generating synthetic data, it is pushed to Argilla for inspection.
- Metadata added includes:
  - `parent_section`: Source section of documentation.
  - `token_count`: Number of tokens in the chunk.
  - Similarity metrics between queries.
 
-**Key Code Snippet for Formatting Data:**
+**Embedding Generation Code**:
```python
+# Imports assumed to make the snippet self-contained:
+import torch
+from sentence_transformers import SentenceTransformer
+from sklearn.metrics.pairwise import cosine_similarity
+
def format_data(batch):
    model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu")
+    batch["anchor-vector"] = model.encode(batch["anchor"]).tolist()
+    batch["positive-vector"] = model.encode(batch["positive"]).tolist()
+    batch["negative-vector"] = model.encode(batch["negative"]).tolist()
 
-    def get_embeddings(batch_column):
-        return [vector.tolist() for vector in model.encode(batch_column)]
+    def get_similarities(a, b):
+        # Pairwise cosine similarity between two lists of vectors.
+        return [cosine_similarity([vec_a], [vec_b])[0][0] for vec_a, vec_b in zip(a, b)]
 
-    batch["anchor-vector"] = get_embeddings(batch["anchor"])
-    # Similarity calculations omitted for brevity
+    batch["similarity-positive-negative"] = get_similarities(batch["positive-vector"], batch["negative-vector"])
+    batch["similarity-anchor-positive"] = get_similarities(batch["anchor-vector"], batch["positive-vector"])
+    batch["similarity-anchor-negative"] = get_similarities(batch["anchor-vector"], batch["negative-vector"])
    return batch
```
 
-**Next Steps:**
-- After data exploration and annotation in Argilla, proceed to fine-tune embeddings using the generated dataset, assuming quality is sufficient.
-
-This summary captures the essential technical details and steps involved in generating and annotating synthetic data using `distilabel` and ZenML.
+**Next Steps**: After exploring and annotating the data in Argilla (annotation is optional), proceed to fine-tune embeddings using the generated dataset, assuming its quality is sufficient.
 
==================================================
 
@@ -20086,56 +20003,71 @@ This summary captures the essential technical details and steps involved in gene
 
### Summary: Finetuning Embeddings with Sentence Transformers
 
-This documentation details the process of finetuning embeddings using the Sentence Transformers library within a ZenML pipeline. The pipeline involves the following steps:
+This documentation outlines the process of finetuning embeddings using the Sentence Transformers library. The key steps in the pipeline include:
 
1.
**Data Loading**: - - Load data from Hugging Face or Argilla (using the `--argilla` flag). - ```bash - python run.py --embeddings --argilla - ``` + - Load data from a Hugging Face dataset or Argilla using the `--argilla` flag: + ```bash + python run.py --embeddings --argilla + ``` 2. **Finetuning Process**: - - **Model Loading**: Load the base model (`EMBEDDINGS_MODEL_ID_BASELINE`) using Sentence Transformers with SDPA for efficient training. - - **Loss Function**: Utilize `MatryoshkaLoss`, a wrapper around `MultipleNegativesRankingLoss`, allowing simultaneous training with different embedding dimensions. - - **Dataset Preparation**: Load the training dataset from a specified path and save it as a temporary JSON file. - - **Evaluator**: Create an evaluator using `get_evaluator` to assess model performance. - - **Training Arguments**: Configure training parameters (epochs, batch size, learning rate, etc.) using `SentenceTransformerTrainingArguments`. - - **Trainer Initialization**: Initialize `SentenceTransformerTrainer` with the model, arguments, dataset, and loss function, then call `trainer.train()` to start training. - - **Model Saving**: After training, push the finetuned model to the Hugging Face Hub using `trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED)`. - - **Metadata Logging**: Log training metadata (parameters, hardware, etc.). - - **Model Rehydration**: Save and reload the trained model to handle materialization errors. - -3. **Code Snippet**: - ```python - model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) - train_loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model)) - train_dataset = load_dataset("json", data_files=train_dataset_path) - args = SentenceTransformerTrainingArguments(...) - trainer = SentenceTransformerTrainer(model, args, train_dataset, train_loss) - trainer.train() - trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) - ``` + - **Model Loading**: Load the base model (`EMBEDDINGS_MODEL_ID_BASELINE`) with efficient training using Flash Attention 2. + - **Loss Function**: Use a custom `MatryoshkaLoss`, which wraps `MultipleNegativesRankingLoss`, allowing the model to learn embeddings at various granularities. + - **Dataset Preparation**: Load the training dataset from a specified path and save it temporarily in JSON format. + - **Evaluator**: Create an evaluator to assess model performance during training. + - **Training Arguments**: Set hyperparameters (epochs, batch size, learning rate, etc.) using `SentenceTransformerTrainingArguments`. + - **Trainer Initialization**: Initialize `SentenceTransformerTrainer` with the model, training arguments, dataset, and loss function, then call `trainer.train()` to start training. + - **Model Saving**: After training, save the finetuned model to the Hugging Face Hub: + ```python + trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) + ``` + - **Metadata Logging**: Log training parameters and hardware information. + - **Model Rehydration**: Save the trained model to a temporary file, reload it into a new instance, and return the rehydrated model. + +### Example Code Snippet: +```python +# Load the base model +model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) +# Define the loss function +train_loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model)) +# Prepare the training dataset +train_dataset = load_dataset("json", data_files=train_dataset_path) +# Set up the training arguments +args = SentenceTransformerTrainingArguments(...) 
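+# NOTE: The ellipsis above elides the training hyperparameters. Hypothetical,
+# purely illustrative values might look like:
+#   num_train_epochs=4, per_device_train_batch_size=32, learning_rate=2e-5
+# (the actual configuration lives in the project repository).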
+# Create the trainer (keyword arguments: the fourth positional parameter
+# of SentenceTransformerTrainer is eval_dataset, not the loss)
+trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=train_loss)
+# Start training
+trainer.train()
+# Save the finetuned model
+trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED)
+```
 
-The finetuning process enhances model performance across various embedding sizes, with the model versioned and tracked in ZenML for observability. The pipeline concludes with an evaluation of both base and finetuned embeddings, along with visualization of results.
+### Conclusion
+The finetuning process enhances the model's performance across various embedding sizes and ensures the model is versioned and tracked within ZenML for observability. After finetuning, the pipeline evaluates and visualizes the results of both base and finetuned embeddings.
 
==================================================
 
=== File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md ===
 
-**Summary: Finetuning Embeddings on Synthetic Data for Improved Retrieval Performance**
+**Summary: Finetuning Embeddings on Synthetic Data**
 
-This documentation outlines the process of optimizing embedding models using synthetic data generation and human feedback to enhance retrieval performance in a RAG (Retrieval-Augmented Generation) pipeline. The pipeline retrieves relevant documents from a vector database and generates responses using a language model.
+This documentation outlines the process of optimizing embedding models for improved retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. The focus is on finetuning embeddings using custom synthetic data and human feedback, moving beyond off-the-shelf embeddings for better domain-specific performance.
 
**Key Steps:**
1. **Generate Synthetic Data**: Use `distilabel` for synthetic data generation.
-2. **Finetune Embeddings**: Utilize Sentence Transformers for embedding finetuning.
-3. **Evaluate Finetuned Embeddings**: Leverage ZenML's model control plane for systematic evaluation.
+2. **Finetune Embeddings**: Employ Sentence Transformers for embedding finetuning.
+3. **Evaluate Finetuned Embeddings**: Utilize ZenML's model control plane for systematic evaluation.
 
**Libraries Used:**
-- **`distilabel`**: Generates synthetic data and provides AI feedback for knowledge distillation.
+- **`distilabel`**: Generates synthetic data and provides AI feedback to enhance model outputs.
- **`argilla`**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration.
 
-Both libraries can be used independently but are more effective together within ZenML pipelines. For practical implementation, refer to the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) for full code examples. The finetuning process can be executed locally or on cloud compute.
+Both libraries can function independently but are more effective when used together within ZenML pipelines.
+
+**Implementation**: Follow the example in the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) for complete code, which can be executed locally or on cloud compute.
+
+This process aims to enhance the retrieval step and overall performance of the RAG pipeline by leveraging domain-specific synthetic data.
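+
+As a rough sketch of how these pieces fit together, a single ZenML pipeline wiring up the three key steps might look like the following. The step names and bodies are hypothetical placeholders, not the repository's actual code:
+
+```python
+from zenml import pipeline, step
+
+
+@step
+def generate_synthetic_data() -> dict:
+    # Placeholder for distilabel-based query generation.
+    return {"anchor": [], "positive": [], "negative": []}
+
+
+@step
+def finetune_embeddings(dataset: dict) -> str:
+    # Placeholder for Sentence Transformers finetuning; returns a model ID.
+    return "finetuned-embeddings"
+
+
+@step
+def evaluate_embeddings(model_id: str) -> float:
+    # Placeholder for retrieval evaluation tracked in the Model Control Plane.
+    return 0.0
+
+
+@pipeline
+def finetune_embeddings_pipeline():
+    dataset = generate_synthetic_data()
+    model_id = finetune_embeddings(dataset)
+    evaluate_embeddings(model_id)
+
+
+if __name__ == "__main__":
+    finetune_embeddings_pipeline()
+```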
================================================== @@ -20143,41 +20075,42 @@ Both libraries can be used independently but are more effective together within ### Summary of Documentation on Evaluating Finetuned Embeddings -This documentation outlines the process of evaluating finetuned embeddings against original base embeddings using the MatryoshkaLoss function. The evaluation steps are straightforward, as demonstrated in the provided code snippet. +This documentation outlines the process of evaluating finetuned embeddings and comparing them to original base embeddings using the MatryoshkaLoss function. The evaluation steps are straightforward, as demonstrated in the provided code. -#### Key Code Components +#### Key Code Snippet +```python +from zenml import log_model_metadata, step -1. **Model Evaluation Function**: - ```python - def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: - evaluator = get_evaluator(dataset=dataset, model=model) - return evaluator(model) - ``` +def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: + evaluator = get_evaluator(dataset=dataset, model=model) + return evaluator(model) -2. **Base Model Evaluation Step**: - ```python - @step - def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: - model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") - results = evaluate_model(dataset=dataset, model=model) - - base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} - log_model_metadata(metadata={"base_model_eval": base_model_eval}) - - return results - ``` +@step +def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: + model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") + results = evaluate_model(dataset=dataset, model=model) + + base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} + log_model_metadata(metadata={"base_model_eval": base_model_eval}) + + return results +``` -#### Logging and Visualization -- Evaluation results are logged as model metadata in ZenML, allowing inspection via the Model Control Plane. -- Results are stored as a dictionary of string keys and float values, versioned and saved in the artifact store. -- Visualization of results can be done using `PIL.Image` and `matplotlib`, comparing base and finetuned model evaluations. +#### Evaluation and Logging +- The results are logged as model metadata in ZenML, allowing inspection via the Model Control Plane. +- Results are stored as a dictionary with string keys and float values, which are versioned and tracked. -#### Insights and Next Steps -- The finetuned embeddings show improved recall but may require better training data for significant enhancements. -- The Model Control Plane provides a unified interface for inspecting artifacts, models, and metadata. -- Future steps involve integrating finetuned embeddings into the original RAG pipeline and exploring LLM finetuning and deployment. +#### Visualization +- Results can be visualized using `PIL.Image` or `matplotlib`, comparing base and finetuned model evaluations. 
The visualization shows improvements in recall across dimensions but indicates room for further enhancement, particularly in the quality of training data. -For further exploration, users can refer to additional resources on LLM finetuning with ZenML, including a LoRA project and a blog post on finetuning Llama 3.1. +#### Model Control Plane +- The Model Control Plane provides a unified interface for inspecting artifacts, models, logged metadata, and associated pipeline runs. It allows users to view the latest versions and compare evaluation metrics. + +#### Next Steps +- After evaluation, finetuned embeddings can be integrated into the original RAG pipeline to generate new embeddings and rerun retrieval evaluations. +- The next section will focus on LLM finetuning and deployment, with resources available for practical implementation. + +For further exploration, users can refer to additional guides and repositories provided in the documentation. ================================================== @@ -20185,16 +20118,20 @@ For further exploration, users can refer to additional resources on LLM finetuni ### Cloud Guide Summary -This section provides guides for connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is the combination of tools and infrastructure that your pipelines utilize. ZenML acts as a translation layer, enabling code execution across different stacks. +This section provides guides for connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is the set of tools and infrastructure that your pipelines utilize. When executing a pipeline, ZenML adapts its actions based on the configured stack. **Key Points:** -- **Stack Registration:** The focus is on registering a stack, assuming that the necessary resources for running pipelines have already been provisioned. -- **Provisioning Infrastructure:** This can be done manually or through: - - The in-browser stack deployment wizard - - The stack registration wizard +- The guide focuses on **registering** a stack, assuming the necessary resources for pipeline execution are already provisioned. +- Infrastructure provisioning can be done through: + - Manual setup + - In-browser stack deployment wizard + - Stack registration wizard - ZenML Terraform modules - +**Visual Aid:** +- ZenML acts as a translation layer, enabling code execution across different stacks. + +This guide is essential for ensuring that your ZenML workflows can leverage cloud resources effectively. ==================================================