=== File: docs/book/introduction.md === # ZenML Documentation Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines, facilitating collaboration by decoupling infrastructure from code. ## For MLOps Platform Engineers - **ZenML Pro**: Offers a managed control plane with features like CI/CD and RBAC. - **Self-hosted Deployment**: Deploy on any cloud provider using Terraform utilities. ```bash zenml stack register --provider aws zenml stack deploy --provider gcp ``` - **Standardization**: Register environments as ZenML stacks for consistent ML workflows. ```bash zenml orchestrator register kfp_orchestrator -f kubeflow zenml stack register production --orchestrator kubeflow ... ``` - **No Vendor Lock-In**: Easily switch between cloud providers. ```bash zenml stack set gcp python run.py # Run in GCP zenml stack set aws python run.py # Now run in AWS ``` ## For Data Scientists - **Local Development**: Develop models locally and switch to production seamlessly. ```bash python run.py # Local development zenml stack set production python run.py # Run in production ``` - **Pythonic SDK**: Use decorators to create ZenML pipelines. ```python from zenml import pipeline, step @step def step_1() -> str: return "world" @step def step_2(input_one: str, input_two: str) -> None: print(f"{input_one} {input_two}") @pipeline def my_pipeline(): step_2(input_one="hello", input_two=step_1()) my_pipeline() ``` - **Automatic Metadata Tracking**: Tracks metadata and versions datasets and models. ## For ML Engineers - **ML Lifecycle Management**: Manage ML workflows and infrastructures easily. ```bash zenml stack set staging python run.py # Test on staging zenml stack set production python run.py # Run in production ``` - **Reproducibility**: Automatically tracks and versions all components. - **Automated Deployments**: Define workflows as ZenML pipelines for easy deployment. ```python from zenml.integrations.seldon.steps import seldon_model_deployer_step @pipeline def my_pipeline(): data = data_loader_step() model = model_trainer_step(data) seldon_model_deployer_step(model) ``` ### Additional Resources - Links to guides on production setup, core concepts, and FAQs are available for further exploration. ZenML provides a comprehensive solution for managing the ML lifecycle, ensuring reproducibility, and facilitating collaboration among teams. ================================================== === File: docs/book/component-guide/README.md === # Overview of ZenML MLOps Components and Integrations ZenML categorizes MLOps tools into stack components to streamline the understanding and implementation of MLOps pipelines. Each stack component serves a specific function in the MLOps process and is realized as a base abstraction for standardizing workflows. Users can implement these abstractions or utilize built-in integrations. 
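Stacks and their components can also be inspected programmatically. A minimal sketch, assuming a ZenML client is already initialized, that looks up the active stack and its two required components using the same `Client` API used elsewhere in these docs:

```python
from zenml.client import Client

# Look up the active stack and its two required components.
stack = Client().active_stack

print(f"Active stack:   {stack.name}")
print(f"Orchestrator:   {stack.orchestrator.name}")
print(f"Artifact store: {stack.artifact_store.name}")
```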
## Supported Stack Components | **Type** | **Description** | |-------------------------|----------------------------------------------------------| | [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | | [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | | [Container Registry](container-registries/container-registries.md) | Stores container images | | [Data Validator](data-validators/data-validators.md) | Validates data and models | | [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | | [Model Deployer](model-deployers/model-deployers.md) | Online model serving platforms | | [Step Operator](step-operators/step-operators.md) | Executes individual steps in specific environments | | [Alerter](alerters/alerters.md) | Sends alerts through specified channels | | [Image Builder](image-builders/image-builders.md) | Builds container images | | [Annotator](annotators/annotators.md) | Labels and annotates data | | [Model Registry](model-registries/model-registries.md) | Manages ML models | | [Feature Store](feature-stores/feature-stores.md) | Manages data/features | Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional. ## Custom Component Flavors Users can create custom components by writing their own component `flavors`. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for component types. ## Integrations ZenML enhances MLOps processes by integrating with various tools, allowing users to orchestrate workflows with tools like [Airflow](orchestrators/airflow.md) or [Kubeflow](orchestrators/kubeflow.md), track experiments with [MLflow](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md), and deploy models using [Seldon Core](model-deployers/seldon.md). This integration flexibility prevents vendor lock-in and allows easy switching of tools. ### Available Integrations A comprehensive list of ZenML integrations can be found on the [integrations webpage](https://zenml.io/integrations) or in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). ### Installing Integrations To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs the preferred versions of the integrations via pip. ### Upgrade Integrations To upgrade integrations, use: ```bash zenml integration upgrade mlflow pytorch -y ``` If no integrations are specified, all installed integrations will be upgraded. ### Community Contributions ZenML welcomes community contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more information. ================================================== === File: docs/book/component-guide/integration-overview.md === ### Overview of ZenML Integrations ZenML enhances MLOps pipelines by integrating with various tools, allowing users to orchestrate workflows, track experiments, and deploy models seamlessly. This flexibility prevents vendor lock-in, enabling easy tool switching as requirements evolve. 
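As an illustration of that tool switching, the same pipeline code can be run against different registered stacks. The sketch below assumes your ZenML version exposes `Client.activate_stack` (the Python counterpart of the `zenml stack set` CLI command); the stack names and the pipeline import are placeholders:

```python
from zenml.client import Client

from pipelines.training import training_pipeline  # placeholder import

client = Client()

# Run the same pipeline against two different stacks, e.g. a local default
# stack and a cloud stack assembled from other integrations.
for stack_name in ["default", "gcp_production"]:  # placeholder stack names
    client.activate_stack(stack_name)
    training_pipeline()
```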
#### Available Integrations A comprehensive list of supported ZenML integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) or in the [integrations directory on GitHub](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). #### Installing ZenML Integrations To install integrations, use the command: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs preferred versions via pip: ```bash pip install kubeflow== mlflow== seldon== ``` The `-y` flag automatically confirms installation prompts. For a complete list of CLI commands, run `zenml integration --help`. #### Using `uv` for Package Installation You can utilize [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: ```bash zenml integration install --uv ``` Ensure `uv` is installed, as this is an experimental feature. #### Upgrading ZenML Integrations Upgrade all integrations to their latest versions with: ```bash zenml integration upgrade mlflow pytorch -y ``` The `-y` flag confirms upgrades without prompts. If no integrations are specified, all installed integrations will be upgraded. #### Community Contributions ZenML encourages community contributions for new integrations. Refer to the [roadmap](https://zenml.io/roadmap) for prioritized tools and check the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for details on contributing. ================================================== === File: docs/book/component-guide/component-guide.md === ### Overview of MLOps Components in ZenML MLOps can be overwhelming due to the multitude of tools available. ZenML categorizes these tools into **Stacks and Stack Components** to clarify their roles in MLOps pipelines. Stack components are standardized abstractions that streamline workflows, allowing users to implement custom components or utilize built-in integrations. #### Supported Stack Components: | **Component Type** | **Description** | |--------------------------|--------------------------------------------------------| | [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | | [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts generated by pipelines | | [Container Registry](./container-registries/container-registries.md) | Stores container images | | [Step Operator](./step-operators/step-operators.md) | Executes individual steps in specific environments | | [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | | [Feature Store](./feature-stores/feature-stores.md) | Manages data and features | | [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | | [Alerter](./alerters/alerters.md) | Sends alerts through designated channels | | [Annotator](./annotators/annotators.md) | Labels and annotates data | | [Data Validator](./data-validators/data-validators.md) | Validates data and models | | [Image Builder](./image-builders/image-builders.md) | Builds container images | | [Model Registry](./model-registries/model-registries.md) | Manages ML models | Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, with other components being optional based on the pipeline's maturity. #### Custom Component Flavors Users can create custom component **flavors** to tailor ZenML's behavior. 
For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specialized guides for specific components, such as the [custom orchestrator guide](orchestrators/custom.md). ================================================== === File: docs/book/component-guide/model-registries/custom.md === ### Develop a Custom Model Registry #### Overview To create a custom model registry in ZenML, it is essential to understand the general concepts of writing custom component flavors. The `BaseModelRegistry` class serves as the abstract base for custom model registries, providing a basic interface for model registration and retrieval. #### Base Abstraction The `BaseModelRegistry` class includes several abstract methods for model management: ```python from abc import ABC, abstractmethod from typing import Any, Dict, List, Optional from zenml.stack import StackComponent, StackComponentConfig class BaseModelRegistryConfig(StackComponentConfig): """Base config for model registries.""" class BaseModelRegistry(StackComponent, ABC): @abstractmethod def register_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: """Registers a model in the model registry.""" @abstractmethod def delete_model(self, name: str) -> None: """Deletes a registered model from the model registry.""" @abstractmethod def update_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: """Updates a registered model in the model registry.""" @abstractmethod def get_model(self, name: str) -> RegisteredModel: """Gets a registered model from the model registry.""" @abstractmethod def list_models(self, name: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> List[RegisteredModel]: """Lists all registered models in the model registry.""" # Model Version Methods @abstractmethod def register_model_version(self, name: str, version: Optional[str] = None, **kwargs: Any) -> RegistryModelVersion: """Registers a model version in the model registry.""" @abstractmethod def delete_model_version(self, name: str, version: str) -> None: """Deletes a model version from the model registry.""" @abstractmethod def update_model_version(self, name: str, version: str, **kwargs: Any) -> RegistryModelVersion: """Updates a model version in the model registry.""" @abstractmethod def list_model_versions(self, name: Optional[str] = None, **kwargs: Any) -> List[RegistryModelVersion]: """Lists all model versions for a registered model.""" @abstractmethod def get_model_version(self, name: str, version: str) -> RegistryModelVersion: """Gets a model version for a registered model.""" @abstractmethod def load_model_version(self, name: str, version: str, **kwargs: Any) -> Any: """Loads a model version from the model registry.""" ``` #### Building a Custom Model Registry To create a custom flavor: 1. Understand core concepts of model registries. 2. Inherit from `BaseModelRegistry` and implement the abstract methods. 3. Create a `ModelRegistryConfig` class inheriting from `BaseModelRegistryConfig` for additional parameters. 4. Combine implementation and configuration by inheriting from `BaseModelRegistryFlavor`. Register your custom model registry using the CLI: ```shell zenml model-registry flavor register ``` #### Important Notes - The **CustomModelRegistryFlavor** is used during flavor creation. 
- The **CustomModelRegistryConfig** validates user input during registration. - The **CustomModelRegistry** is utilized when the component is in use, allowing separation of configuration from implementation. For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/model-registries/mlflow.md === # MLflow Model Registry Overview **MLflow** is a tool for tracking experiments, managing models, and deploying them. ZenML integrates with MLflow to provide an **Experiment Tracker** and a **Model Deployer**. The MLflow model registry manages and tracks ML models and artifacts, offering a user interface for browsing. ## Use Cases The MLflow model registry is beneficial for: - Tracking different model versions during development and deployment. - Managing model deployments across various environments. - Monitoring and comparing model performance over time. - Simplifying model deployment to production or staging environments. ## Deployment Steps To deploy the MLflow model registry, install the MLflow integration: ```shell zenml integration install mlflow -y ``` Then, register the model registry component: ```shell zenml model-registry register mlflow_model_registry --flavor=mlflow zenml stack register custom_stack -r mlflow_model_registry ... --set ``` **Note:** The model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version **2.2.1** or higher due to a critical vulnerability in older versions. ## Usage You can use the MLflow model registry in ZenML pipelines or via the CLI. ### Register Models in a Pipeline Use the `mlflow_register_model_step` to register a model: ```python from zenml import pipeline from zenml.integrations.mlflow.steps.mlflow_registry import mlflow_register_model_step @pipeline def mlflow_registry_training_pipeline(): model = ... mlflow_register_model_step( model=model, name="tensorflow-mnist-model", ) ``` **Parameters:** - `name`: Required model name. - `version`: Model version. - `trained_model_name`: Name of the model artifact. - `model_source_uri`: Path to the model. - `description`: Model version description. - `metadata`: Metadata associated with the model version. ### Register Models via CLI To manually register a model version: ```shell zenml model-registry models register-version Tensorflow-model \ --description="New version with accuracy 98.88%" \ -v 1 \ --model-uri="file:///.../mlruns/.../artifacts/model" \ -m key1 value1 -m key2 value2 \ --zenml-pipeline-name="mlflow_training_pipeline" \ --zenml-step-name="trainer" ``` ### Deploy a Registered Model After registration, deploy the model as a prediction service. Refer to the MLflow model deployer documentation for details. 
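A sketch of such a deployment pipeline, assuming the MLflow integration provides `mlflow_model_registry_deployer_step`; the exact parameter names may differ slightly between ZenML versions:

```python
from zenml import pipeline
from zenml.integrations.mlflow.steps.mlflow_deployer import (
    mlflow_model_registry_deployer_step,
)

@pipeline
def mlflow_registry_deployment_pipeline():
    # Serve version 1 of the model registered above as a local
    # MLflow prediction service.
    mlflow_model_registry_deployer_step(
        registry_model_name="tensorflow-mnist-model",
        registry_model_version="1",
    )
```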
### Interact with Registered Models List registered models: ```shell zenml model-registry models list ``` List versions of a specific model: ```shell zenml model-registry models list-versions tensorflow-mnist-model ``` Get details of a specific model version: ```shell zenml model-registry models get-version tensorflow-mnist-model -v 1 ``` ### Deleting Models To delete a model or a specific version: ```shell zenml model-registry models delete REGISTERED_MODEL_NAME zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION ``` For more details, consult the [ZenML SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/model-registries/model-registries.md === ### Model Registries in ZenML **Overview**: Model registries are centralized solutions for managing and tracking machine learning models throughout their lifecycle. They store metadata (version, configuration, metrics) to facilitate reproducibility and streamline model management. #### Key Concepts: - **RegisteredModel**: A logical grouping of models tracking different versions. It includes the model's name, description, and tags. - **RegistryModelVersion**: A specific version of a model with a unique identifier, containing metadata and references to the model artifact, pipeline name, run ID, and step name. - **ModelVersionStage**: Indicates the state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`, tracking the model's lifecycle. #### When to Use: Model registries provide a visual interface for managing model metadata, making them ideal for centralized state management and easy retrieval of models, especially when using a remote orchestrator. They complement ZenML's mandatory Artifact Store. #### Integration with ZenML Stack: Model registries are optional components that work alongside an experiment tracker. They can be integrated using various flavors, such as: - **MLflow**: Add MLflow as a model registry. - **Custom Implementation**: Implement a custom registry. To view available flavors, use: ```shell zenml model-registry flavor list ``` #### Usage: 1. Register a model registry in your stack, matching the flavor of your experiment tracker. 2. Register models using: - Built-in pipeline step - ZenML CLI - Model registry UI 3. Retrieve and load models for deployment or experimentation. For more details on fetching runs, refer to the [documentation on fetching pipelines](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md). ================================================== === File: docs/book/component-guide/model-deployers/custom.md === ### Develop a Custom Model Deployer ZenML provides a `Model Deployer` stack component for deploying and managing trained machine-learning models. It interacts with deployment tools and can serve as a model registry, allowing users to list, suspend, resume, or delete models. #### Base Abstraction The model deployer is built on three main criteria: 1. **Efficient Deployment**: It manages model deployment according to the serving infrastructure's requirements, holding necessary configuration attributes. 2. **Continuous Deployment Logic**: It updates existing model servers instead of creating new ones for each model version, using the `deploy_model` method. This can be used in ZenML pipeline steps or for ad-hoc deployments. 3. 
**BaseService Registry**: It acts as a registry for `BaseService` instances, allowing for the recreation of model server configurations, such as those in Kubernetes. The model deployer includes lifecycle management methods: `stop_model_server`, `start_model_server`, and `delete_model_server`. #### Interface Code ```python from abc import ABC, abstractmethod from typing import Dict, Optional, Type from uuid import UUID from zenml.enums import StackComponentType from zenml.services import BaseService, ServiceConfig from zenml.stack import StackComponent, StackComponentConfig, Flavor DEFAULT_DEPLOYMENT_TIMEOUT = 300 class BaseModelDeployerConfig(StackComponentConfig): """Base class for model deployer configurations.""" class BaseModelDeployer(StackComponent, ABC): @abstractmethod def perform_deploy_model(self, id: UUID, config: ServiceConfig, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT) -> BaseService: """Deploy a model.""" @abstractmethod def perform_stop_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT, force: bool = False) -> BaseService: """Stop a model server.""" @abstractmethod def perform_start_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT) -> BaseService: """Start a model server.""" @abstractmethod def perform_delete_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT, force: bool = False) -> None: """Delete a model server.""" class BaseModelDeployerFlavor(Flavor): @property @abstractmethod def name(self): """Returns the flavor name.""" @property def type(self) -> StackComponentType: return StackComponentType.MODEL_DEPLOYER @property def config_class(self) -> Type[BaseModelDeployerConfig]: return BaseModelDeployerConfig @property @abstractmethod def implementation_class(self) -> Type[BaseModelDeployer]: """Returns the class implementing the model deployer.""" ``` #### Building Custom Model Deployers To create a custom model deployer flavor: 1. Inherit from `BaseModelDeployer` and implement the abstract methods. 2. Create a configuration class inheriting from `BaseModelDeployerConfig`. 3. Combine both by inheriting from `BaseModelDeployerFlavor`, providing a name. 4. Implement a service class inheriting from `BaseService`. Register the flavor using the CLI: ```shell zenml model-deployer flavor register ``` Example registration: ```shell zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor ``` Ensure ZenML is initialized at the root of your repository for proper flavor resolution. After registration, list available flavors: ```shell zenml model-deployer flavor list ``` #### Important Notes - The `CustomModelDeployerFlavor` is utilized during flavor creation. - The `CustomModelDeployerConfig` is used for validation during component registration. - The `CustomModelDeployer` is invoked when the component is in use, allowing separation of configuration from implementation. This structure allows for flexible and modular development of custom model deployers in ZenML. ================================================== === File: docs/book/component-guide/model-deployers/model-deployers.md === # Model Deployers Model deployment involves making machine learning models available for predictions on real-world data. There are two main types of predictions: **batch predictions** (for large datasets) and **real-time predictions** (for individual data points). Model deployers are components that serve models either in real-time or batch mode. 
## Key Concepts - **Online Serving**: Hosting models as a managed web service accessible via an API (HTTP/GRPC), allowing for low-latency inference requests. - **Batch Inference**: Making predictions on a batch of observations, typically storing results in files or databases. ## Usage Model deployers are optional in the ZenML stack, primarily used for real-time inference in development or production environments (local, Kubernetes, or cloud). They enable continuous training and deployment pipelines. ### Model Deployer Flavors ZenML offers several model deployers: | Model Deployer | Flavor | Integration | Notes | |----------------|--------|-------------|-------| | MLflow | mlflow | mlflow | Local deployment | | BentoML | bentoml| bentoml | Local or production deployment | | Seldon Core | seldon | Seldon Core | Kubernetes-based production deployment | | Hugging Face | huggingface | huggingface | Deploys on Hugging Face Inference Endpoints | | Databricks | databricks | databricks | Deploys to Databricks Inference Endpoints | | vLLM | vllm | vllm | Local deployment of LLMs | | Custom | custom | | Custom implementation | ### Configuration Example To configure model deployers: ```shell # Configure MLflow zenml model-deployer register mlflow --flavor=mlflow # Configure Seldon Core zenml model-deployer register seldon --flavor=seldon \ --kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \ --base_url=http:// ``` ### Role in ZenML Stack Model deployers facilitate efficient model deployment across various environments, managing configuration attributes for interaction with serving tools. Core methods include: - `deploy_model`: Deploys a model and returns a Service object. - `find_model_server`: Retrieves deployed model servers. - `stop_model_server`, `start_model_server`, `delete_model_server`: Manage server lifecycle. The **Service object** represents a deployed model server, containing `config` (deployment attributes) and `status` (operational status). ### Interaction with Deployed Models After deployment, interact with model servers via CLI: ```shell # List deployed models zenml model-deployer models list # Describe a specific model zenml model-deployer models describe # Get prediction URL zenml model-deployer models get-url # Delete a model zenml model-deployer models delete ``` In Python, retrieve the prediction URL: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") deployer_step = pipeline_run.steps[""] deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value ``` ### Continuous Deployment Workflow ZenML integrations provide standard pipeline steps for continuous model deployment, managing the deployment process and saving configurations in the Artifact Store for future use. ================================================== === File: docs/book/component-guide/model-deployers/bentoml.md === ### Summary of Deploying Models Locally with BentoML **BentoML Overview** - BentoML is an open-source framework for serving machine learning models, supporting local, cloud, and Kubernetes deployments. - The BentoML Model Deployer allows for local deployment of models via an HTTP server. - It integrates with ZenML for model management and deployment. **Use Cases** - Standardize model deployment within organizations. - Simplify the transition from development to production. - For Kubernetes-based deployments, consider other Model Deployer flavors. **Deployment Paths** 1. **Local HTTP Server**: For development and testing. 2. 
**Containerized Service**: For production-grade settings, using tools like Yatai (for Kubernetes) or the deprecated `bentoctl`. **Getting Started** 1. Install the required integration: ```bash zenml integration install bentoml -y ``` 2. Register the BentoML model deployer: ```bash zenml model-deployer register bentoml_deployer --flavor=bentoml ``` **Creating a BentoML Service** - Define a service to serve your model. Example for a PyTorch model: ```python import bentoml from bentoml.validators import DType, Shape import numpy as np import torch @bentoml.service(name=SERVICE_NAME) class MNISTService: def __init__(self): self.model = bentoml.pytorch.load_model(MODEL_NAME) self.model.eval() @bentoml.api() async def predict_ndarray(self, inp: Annotated[np.ndarray, DType("float32"), Shape((28, 28))]) -> np.ndarray: inp = np.expand_dims(inp, (0, 1)) return to_numpy(await self.model(torch.tensor(inp))) @bentoml.api() async def predict_image(self, f: PILImage) -> np.ndarray: arr = np.array(f) / 255.0 arr = np.expand_dims(arr, (0, 1)).astype("float32") return to_numpy(await self.model(torch.tensor(arr))) ``` **Building a Bento** - Use the `bento_builder_step` or create a custom builder function to package your model: ```python from zenml import step @step def my_bento_builder(model) -> bento.Bento: model = load_artifact_from_response(model) bentoml.pytorch.save_model(model_name, model) return bentos.build(service=service, models=[model_name]) ``` **ZenML Pipeline for Deployment** - Define a pipeline to build and deploy the bento: ```python from zenml import pipeline @pipeline def bento_deployer_pipeline(): bento = ... deployed_model = bentoml_model_deployer_step( bento=bento, model_name="pytorch_mnist", port=3001, ) ``` **Local vs. Containerized Deployment** - **Local Deployment**: Deploy to a local HTTP server. - **Containerized Deployment**: Use Docker to build and run the image: ```python deployed_model = bentoml_model_deployer_step( bento=bento, model_name="pytorch_mnist", deployment_type="container", image="my-custom-image", ) ``` **Predicting with Deployed Model** - Use the BentoML client to send requests: ```python @step def predictor(inference_data: Dict[str, List], service: BentoMLDeploymentService) -> None: service.start(timeout=10) for img, data in inference_data.items(): prediction = service.predict("predict_ndarray", np.array(data)) print(f"Prediction for {img} is {to_labels(prediction[0])}") ``` **From Local to Cloud with `bentoctl`** - Note: `bentoctl` is deprecated but was used for deploying models to cloud services like AWS Lambda, SageMaker, and Google Cloud. For detailed usage and configuration, refer to the [BentoML documentation](https://docs.bentoml.org/en/latest/guides/model-store.html#manage-models). ================================================== === File: docs/book/component-guide/model-deployers/seldon.md === ### Summary of Seldon Core Documentation for Kubernetes Model Deployment **Seldon Core Overview**: Seldon Core is a production-grade, source-available model serving platform designed for deploying machine learning models as REST/GRPC microservices. It includes features like monitoring, logging, model explainers, outlier detection, and advanced deployment strategies (A/B testing, canary deployments). It supports standard ML model packaging formats, simplifying real-time inference. **Important Note**: The Seldon Core model deployer is **not supported on MacOS**. **When to Use Seldon Core**: - For advanced Kubernetes deployments. 
- To manage model lifecycle with zero downtime (updates, scaling, monitoring). - To utilize advanced API endpoints (REST/GRPC). - For complex deployment processes with custom transformers and routers. **Deployment Prerequisites**: 1. Access to a Kubernetes cluster (recommended to use a Service Connector). 2. Seldon Core must be pre-installed in the cluster. 3. Models must be stored in persistent shared storage accessible from the Kubernetes cluster (e.g., AWS S3, GCS). **Installation Steps for Seldon Core on EKS**: 1. Configure EKS cluster access: ```bash aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks ``` 2. Install Istio: ```bash curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh - cd istio-1.5.0/ bin/istioctl manifest apply --set profile=demo ``` 3. Set up Istio gateway: ```bash curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f - ``` 4. Install Seldon Core: ```bash helm install seldon-core seldon-core-operator \ --repo https://storage.googleapis.com/seldon-charts \ --set usageMetrics.enabled=true \ --set istio.enabled=true \ --namespace seldon-system ``` 5. Test installation: ```bash kubectl apply -f iris.yaml ``` Example `iris.yaml`: ```yaml apiVersion: machinelearning.seldon.io/v1 kind: SeldonDeployment metadata: name: iris-model namespace: default spec: name: iris predictors: - graph: implementation: SKLEARN_SERVER modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris name: classifier name: default replicas: 1 ``` **Service Connector Setup**: Use Service Connectors for authentication to Kubernetes clusters. Options include AWS, GCP, Azure, or a generic Kubernetes connector. Register a Service Connector with: ```bash zenml service-connector register --type aws --resource-type kubernetes-cluster --resource-name --auto-configure ``` **Model Deployer Registration**: Register the Seldon Core Model Deployer: ```bash zenml model-deployer register --flavor=seldon \ --kubernetes_namespace= \ --base_url=http://$INGRESS_HOST ``` **Authentication Management**: The Seldon Core Model Deployer requires access to persistent storage where models are located. Configure explicit credentials for the artifact store if Seldon Core and the artifact store are not in the same cloud environment. **Advanced Custom Code Deployment**: Define a custom prediction function to deploy pre- and post-processing code with the model. Example: ```python def custom_predict(model: Any, request: Array_Like) -> Array_Like: # Custom prediction logic ... ``` Register the custom function with: ```python seldon_custom_model_deployer_step( model=model, predict_function="", service_config=SeldonDeploymentConfig( model_name="", replicas=1, implementation="custom", resources=SeldonResourceRequirements( limits={"cpu": "200m", "memory": "250Mi"} ), serviceAccountName="kubernetes-service-account", ), ) ``` For more detailed configurations and examples, refer to the [official Seldon Core documentation](https://github.com/SeldonIO/seldon-core). ================================================== === File: docs/book/component-guide/model-deployers/mlflow.md === ### Summary: Deploying Models Locally with MLflow **MLflow Overview** MLflow is an open-source platform for managing the machine learning lifecycle. The MLflow Model Deployer allows for local deployment and management of MLflow models on a running MLflow server. **Note:** Currently, it is not production-ready and is intended for local development only. 
**Use Cases** Use the MLflow Model Deployer if you want: - Easy local model deployment for real-time predictions. - A straightforward deployment without complex infrastructure like Kubernetes. **Installation** To deploy models, install the MLflow integration with ZenML: ```bash zenml integration install mlflow -y ``` Register the MLflow model deployer: ```bash zenml model-deployer register mlflow_deployer --flavor=mlflow ``` **Deployment Process** 1. **Deploying a Logged Model** If you know the model URI: ```python from zenml import step, get_step_context from zenml.client import Client @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", description="Example of deploying a model", pipeline_name=get_step_context().pipeline_name, pipeline_step_name=get_step_context().step_name, model_uri="runs://model" or "models://", model_name="model", workers=1, mlserver=False, timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` 2. **Deploying Without Known URI** If the model URI is unknown: ```python from zenml import step, get_step_context from zenml.client import Client from mlflow.tracking import MlflowClient, artifact_utils @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer experiment_tracker = zenml_client.active_stack.experiment_tracker mlflow_run_id = experiment_tracker.get_run_id( experiment_name=get_step_context().pipeline_name, run_name=get_step_context().run_name, ) client = MlflowClient() model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path="model") mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", description="Example of deploying a model", pipeline_name=get_step_context().pipeline_name, pipeline_step_name=get_step_context().step_name, model_uri=model_uri, model_name="model", workers=1, mlserver=False, timeout=300, ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` **Configuration Options** Within `MLFlowDeploymentService`, you can configure: - `name`, `description`, `pipeline_name`, `pipeline_step_name` - `model_name`, `model_version` - `silent_daemon`, `blocking` - `model_uri`, `workers`, `mlserver`, `timeout` **Running Inference** 1. **Load Prediction Service from Another Pipeline**: ```python import json import requests from zenml import step from zenml.integrations.mlflow.services import MLFlowDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, model_name: str = "model") -> None: model_deployer = MLFlowModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name) if not existing_services: raise RuntimeError("No running service found.") service = existing_services[0] payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}, "params": {"temperature": 0.5, "max_tokens": 20}}) response = requests.post(url=service.get_prediction_url(), data=payload, headers={"Content-Type": "application/json"}) return response.json() ``` 2. 
**Use Service in Same Pipeline**: ```python from typing_extensions import Annotated import numpy as np from zenml import step from zenml.integrations.mlflow.services import MLFlowDeploymentService @step def predictor(service: MLFlowDeploymentService, data: np.ndarray) -> Annotated[np.ndarray, "predictions"]: prediction = service.predict(data).argmax(axis=-1) return prediction ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_deployers). ================================================== === File: docs/book/component-guide/model-deployers/huggingface.md === ### Summary: Deploying Models to Hugging Face Inference Endpoints **Hugging Face Inference Endpoints** is a managed service for deploying `transformers`, `sentence-transformers`, and `diffusers` models on secure, autoscaling infrastructure without the need for container or GPU management. #### When to Use - Deploy models on dedicated infrastructure. - Prefer a fully-managed production solution. - Aim for production-ready APIs with minimal MLOps. - Require cost-effectiveness, paying only for used compute resources. - Need enterprise security with offline endpoints connected to Virtual Private Clouds (VPCs). #### Deployment Steps 1. **Install Hugging Face ZenML Integration**: ```bash zenml integration install huggingface -y ``` 2. **Register the Model Deployer**: ```bash zenml model-deployer register --flavor=huggingface --token= --namespace= ``` - `token`: Hugging Face authentication token. - `namespace`: Username or organization name for endpoint creation. 3. **Update ZenML Stack**: ```bash zenml stack update --model-deployer= ``` #### Using the Model Deployer - **Deploy a Model**: Use `huggingface_model_deployer_step` in your pipeline. - **Run Batch Inference**: Utilize `HuggingFaceDeploymentService` for inference on deployed models. ##### Example Deployment Pipeline ```python from zenml import pipeline from zenml.config import DockerSettings from zenml.integrations.huggingface.services import HuggingFaceServiceConfig from zenml.integrations.huggingface.steps import huggingface_model_deployer_step @pipeline(enable_cache=True) def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): service_config = HuggingFaceServiceConfig(model_name=model_name) huggingface_model_deployer_step(service_config=service_config, timeout=timeout) ``` #### Configurable Attributes in `HuggingFaceServiceConfig` - `model_name`: Name of the model. - `endpoint_name`: Inference endpoint name. - `repository`: User or organization repository name. - `framework`: ML framework (e.g., `"pytorch"`). - `accelerator`: Hardware for inference (e.g., `"gpu"`). - `instance_size`: Size of the hosting instance (e.g., `"large"`). - `region`: Cloud region for the endpoint (e.g., `"us-east-1"`). - `vendor`: Cloud provider (e.g., `"aws"`). - `token`: Authentication token. - `min_replica`/`max_replica`: Scaling configuration. - `task`: Supported ML task (e.g., `"text-classification"`). 
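A fuller configuration sketch built from the attributes listed above; all values are illustrative and the repository name is a placeholder:

```python
from zenml import pipeline
from zenml.integrations.huggingface.services import HuggingFaceServiceConfig
from zenml.integrations.huggingface.steps import huggingface_model_deployer_step

@pipeline
def huggingface_gpu_deployment_pipeline():
    service_config = HuggingFaceServiceConfig(
        model_name="sentiment-classifier",       # illustrative values
        repository="my-org/distilbert-sst2",     # placeholder repository
        framework="pytorch",
        task="text-classification",
        accelerator="gpu",
        instance_size="large",
        region="us-east-1",
        vendor="aws",
        min_replica=0,
        max_replica=1,
    )
    huggingface_model_deployer_step(service_config=service_config, timeout=1200)
```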
#### Running Inference Example code to run inference on a provisioned endpoint: ```python from zenml import step, pipeline from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer from zenml.integrations.huggingface.services import HuggingFaceDeploymentService @step def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, model_name: str = "default") -> HuggingFaceDeploymentService: model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name) if not existing_services: raise RuntimeError("No running service found.") return existing_services[0] @step def predictor(service: HuggingFaceDeploymentService, data: str) -> str: return service.predict(data) @pipeline def huggingface_deployment_inference_pipeline(pipeline_name: str): inference_data = ... model_service = prediction_service_loader(pipeline_name=pipeline_name) predictions = predictor(model_service, inference_data) ``` For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and the Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957). ================================================== === File: docs/book/component-guide/model-deployers/databricks.md === ### Summary: Deploying Models to Databricks Inference Endpoints **Overview**: Databricks Model Serving provides a unified interface for deploying, governing, and querying AI models as REST APIs. It offers managed, autoscaling infrastructure, eliminating the need for users to manage containers or GPUs. **When to Use Databricks Model Deployer**: - You are using Databricks for data and ML workloads. - You want to deploy AI models without managing infrastructure. - You require enterprise security with offline endpoints. - You aim to create production-ready APIs with minimal MLOps involvement. **Installation**: To deploy models, install the Databricks ZenML integration: ```bash zenml integration install databricks -y ``` **Registering the Model Deployer**: Register the Databricks model deployer: ```bash zenml model-deployer register --flavor=databricks --host= --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` *Note: Create a Databricks service account for necessary permissions and authentication.* **Updating the Stack**: Update your ZenML stack to use the model deployer: ```bash zenml stack update --model-deployer= ``` **Configuration Options** (`DatabricksServiceConfig`): - `model_name`: Name of the model in the Databricks Model Registry. - `model_version`: Version of the model. - `workload_size`: Size of the workload (`Small`, `Medium`, `Large`). - `scale_to_zero_enabled`: Enable/disable scale to zero feature. - `env_vars`: Environment variables for the model serving container. - `workload_type`: Type of workload (`CPU`, `GPU_LARGE`, etc.). - `endpoint_secret_name`: Name of the secret for endpoint security. 
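A deployment sketch assembled from the attributes above. The import paths and the `databricks_model_deployer_step` signature are assumptions inferred from the inference example that follows; consult the SDK docs for the exact API:

```python
from zenml import pipeline
from zenml.integrations.databricks.services import DatabricksServiceConfig
from zenml.integrations.databricks.steps import databricks_model_deployer_step

@pipeline
def databricks_deployment_pipeline():
    # Illustrative values; the model must already exist in the
    # Databricks Model Registry.
    service_config = DatabricksServiceConfig(
        model_name="my-classifier",
        model_version="2",
        workload_size="Small",
        workload_type="CPU",
        scale_to_zero_enabled=True,
        env_vars={"LOG_LEVEL": "INFO"},
    )
    databricks_model_deployer_step(service_config=service_config)
```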
**Running Inference**: Example code to run inference on a provisioned endpoint: ```python from zenml import step, pipeline from zenml.integrations.databricks.model_deployers import DatabricksModelDeployer from zenml.integrations.databricks.services import DatabricksDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> DatabricksDeploymentService: model_deployer = DatabricksModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running) if not existing_services: raise RuntimeError(f"No running inference endpoint found.") return existing_services[0] @step def predictor(service: DatabricksDeploymentService, data: str) -> str: return service.predict(data) @pipeline def databricks_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "databricks_model_deployer_step"): inference_data = ... model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers). ================================================== === File: docs/book/component-guide/model-deployers/vllm.md === ### vLLM Documentation Summary **vLLM** is a library designed for fast and efficient Large Language Model (LLM) inference and serving. #### When to Use vLLM - Deploy LLMs with high serving throughput and OpenAI-compatible API. - Supports continuous batching of requests. - Offers quantization options: GPTQ, AWQ, INT4, INT8, FP8. - Features include PagedAttention, Speculative decoding, and Chunked pre-fill. #### Deployment Steps 1. **Install vLLM ZenML Integration**: ```bash zenml integration install vllm -y ``` 2. **Register the vLLM Model Deployer**: ```bash zenml model-deployer register vllm_deployer --flavor=vllm ``` This sets up a local vLLM server running as a daemon. #### Usage For practical implementation, refer to the [deployment pipeline example](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25). ##### Deploying an LLM Use the `vllm_model_deployer_step` to create a deployment service: ```python from zenml import pipeline from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "GPT2"]: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` Refer to the [example of running a GPT-2 model](https://github.com/zenml-io/zenml-projects/tree/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer). #### Configuration Options Within `VLLMDeploymentService`, you can configure: - `model`: Hugging Face model name or path. - `tokenizer`: Hugging Face tokenizer name or path (defaults to model name). - `served_model_name`: API model name (defaults to model name). - `trust_remote_code`: Trust code from Hugging Face. - `tokenizer_mode`: Options are ['auto', 'slow', 'mistral']. 
- `dtype`: Data type for weights and activations (options: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']). - `revision`: Specific model version (branch, tag, or commit id; defaults to the latest). ================================================== === File: docs/book/component-guide/alerters/custom.md === ### Develop a Custom Alerter #### Overview To develop a custom alerter in ZenML, it's recommended to first review the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction The base class for alerters, `BaseAlerter`, defines two essential abstract methods: - `post(message: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message to a chat service and returns `True` if successful. - `ask(question: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message and waits for approval, returning `True` only if approved. **Base Alerter Implementation:** ```python class BaseAlerter(StackComponent, ABC): def post(self, message: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True def ask(self, question: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True ``` #### Creating a Custom Alerter Follow these steps to create a custom alerter: 1. **Inherit from `BaseAlerter`** and implement the `post()` and `ask()` methods: ```python from typing import Optional from zenml.alerter import BaseAlerter, BaseAlerterStepParameters class MyAlerter(BaseAlerter): def post(self, message: str, config: Optional[BaseAlerterStepParameters]) -> bool: ... return True def ask(self, question: str, config: Optional[BaseAlerterStepParameters]) -> bool: ... return True ``` 2. **Implement a configuration object** if needed: ```python from zenml.alerter.base_alerter import BaseAlerterConfig class MyAlerterConfig(BaseAlerterConfig): my_param: str ``` 3. **Create a flavor object** to combine implementation and configuration: ```python from typing import Type, TYPE_CHECKING from zenml.alerter import BaseAlerterFlavor if TYPE_CHECKING: from zenml.stack import StackComponent, StackComponentConfig class MyAlerterFlavor(BaseAlerterFlavor): @property def name(self) -> str: return "my_alerter" @property def config_class(self) -> Type[StackComponentConfig]: from my_alerter_config import MyAlerterConfig return MyAlerterConfig @property def implementation_class(self) -> Type[StackComponent]: from my_alerter import MyAlerter return MyAlerter ``` #### Registering the Custom Alerter Register your new flavor using the CLI: ```shell zenml alerter flavor register ``` Example: ```shell zenml alerter flavor register flavors.my_flavor.MyAlerterFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - After registration, verify the new flavor is listed: ```shell zenml alerter flavor list ``` #### Workflow Integration - The `MyAlerterFlavor` is used during flavor creation. - The `MyAlerterConfig` is utilized during stack component registration for validation. - The `MyAlerter` is invoked when the component is in use, allowing for separation of configuration and implementation. This enables registration even if dependencies are not locally installed. ================================================== === File: docs/book/component-guide/alerters/discord.md === ### Discord Alerter Overview The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. 
It includes two key steps: 1. **`discord_alerter_post_step`**: Sends a message to a Discord channel and returns success status. 2. **`discord_alerter_ask_step`**: Sends a message and waits for user feedback, returning `True` only if the user approves the operation. #### Use Cases - **Immediate Notifications**: Use `discord_alerter_post_step` for alerts on failures (e.g., model performance issues). - **Human-in-the-Loop**: Use `discord_alerter_ask_step` for critical decision points, like model deployments. ### Requirements Install the Discord integration with: ```shell zenml integration install discord -y ``` ### Setting Up a Discord Bot 1. Create a Discord workspace and channel. 2. [Create a Discord App with a bot](https://discordpy.readthedocs.io/en/latest/discord.html). 3. Ensure the bot has permissions to send and receive messages. ### Registering a Discord Alerter Register the `discord` alerter with: ```shell zenml alerter register discord_alerter \ --flavor=discord \ --discord_token= \ --default_discord_channel_id= ``` Add it to your stack: ```shell zenml stack register ... -al discord_alerter ``` #### Parameters - **DISCORD_CHANNEL_ID**: Obtain by right-clicking the channel and selecting 'Copy Channel ID'. Enable "Developer Mode" in settings if not visible. - **DISCORD_TOKEN**: Found during bot setup. Ensure the bot has permissions to: - Read Messages/View Channels - Send Messages ### Using the Discord Alerter Import the steps and use them in your pipeline. A typical implementation might look like this: ```python from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step from zenml import step, pipeline @step def my_formatter_step(artifact) -> str: return f"Here is my artifact {artifact}!" @pipeline def my_pipeline(...): ... artifact = ... message = my_formatter_step(artifact) approved = discord_alerter_ask_step(message) ... # Behavior based on `approved` if __name__ == "__main__": my_pipeline() ``` For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). ================================================== === File: docs/book/component-guide/alerters/slack.md === # Slack Alerter Documentation Summary ## Overview The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. ## Setup Instructions ### Create a Slack App 1. Set up a Slack workspace and create a Slack App with a bot. 2. Grant the following permissions under `OAuth & Permissions`: - `chat:write` - `channels:read` - `channels:history` 3. Invite the app to your desired channel using `/invite` or channel settings. ### Registering Slack Alerter in ZenML 1. Install the Slack integration: ```shell zenml integration install slack -y ``` 2. Create a secret and register the alerter: ```shell zenml secret create slack_token --oauth_token= zenml alerter register slack_alerter \ --flavor=slack \ --slack_token={{slack_token.oauth_token}} \ --slack_channel_id= ``` ### Required Parameters - ``: Found in channel details (starts with `C...`). - ``: Found in Slack app settings under `OAuth & Permissions`. ### Add Alerter to Stack ```shell zenml stack register ... 
-al slack_alerter --set ``` ## Usage ### Direct Methods: `post()` and `ask()` Use the active alerter in your pipeline: ```python from zenml import pipeline, step from zenml.client import Client @step def post_statement() -> None: Client().active_stack.alerter.post("Step finished!") @step def ask_question() -> bool: return Client().active_stack.alerter.ask("Should I continue?") @pipeline(enable_cache=False) def my_pipeline(): post_statement() ask_question() if __name__ == "__main__": my_pipeline() ``` *Note: `ask()` defaults to `False` on error.* ### Custom Settings You can specify channel ID during runtime: ```python @step(settings={"alerter": {"slack_channel_id": }}) def post_statement() -> None: Client().active_stack.alerter.post("Posting to another channel!") ``` ### Advanced Message Formatting Utilize `SlackAlerterParameters` and `SlackAlerterPayload` for enhanced messages: ```python from zenml import pipeline, step, get_step_context from zenml.client import Client from zenml.integrations.slack.alerters.slack_alerter import ( SlackAlerterParameters, SlackAlerterPayload ) @step def post_statement() -> None: params = SlackAlerterParameters( payload=SlackAlerterPayload( pipeline_name=get_step_context().pipeline.name, step_name=get_step_context().step_run.name, stack_name=Client().active_stack.name, ), ) Client().active_stack.alerter.post( message="Message with additional pipeline info.", params=params ) ``` ### Predefined Steps For simpler usage, use built-in steps: ```python from zenml import pipeline from zenml.integrations.slack.steps.slack_alerter_post_step import slack_alerter_post_step from zenml.integrations.slack.steps.slack_alerter_ask_step import slack_alerter_ask_step @pipeline(enable_cache=False) def my_pipeline(): slack_alerter_post_step("Posting a statement.") slack_alerter_ask_step("Asking a question. Should I continue?") if __name__ == "__main__": my_pipeline() ``` ## Additional Information For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). ================================================== === File: docs/book/component-guide/alerters/alerters.md === ### Alerters Overview **Alerters** enable sending messages to chat services (e.g., Slack, Discord) from pipelines, facilitating immediate notifications for failures, monitoring, and human-in-the-loop ML. #### Available Alerter Integrations - **SlackAlerter**: Integrates with Slack channels. - **DiscordAlerter**: Integrates with Discord channels. - **Custom Implementation**: Allows building custom alerters for other chat services. | Alerter | Flavor | Integration | Notes | |---------|---------|-------------|---------------------------------------------| | Slack | `slack` | `slack` | Interacts with a Slack channel | | Discord | `discord`| `discord` | Interacts with a Discord channel | | Custom | _custom_| | Extend the alerter abstraction | To view available alerter flavors, use: ```shell zenml alerter flavor list ``` #### Using Alerters with ZenML 1. **Register an Alerter Component**: ```shell zenml alerter register ... ``` 2. **Add Alerter to Your Stack**: ```shell zenml stack register ... -al ``` 3. **Import and Use Standard Steps**: After registration, import the standard steps from the integration and use them in your pipelines. 
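A minimal human-in-the-loop sketch that ties these steps together, using the generic `post()`/`ask()` methods shown in the Slack section; the metric value and the deployment logic are placeholders:

```python
from zenml import pipeline, step
from zenml.client import Client

@step
def evaluate_model() -> float:
    ...  # placeholder: compute a metric for the newly trained model
    return 0.93

@step
def approve_deployment(accuracy: float) -> bool:
    # Ask a human in the configured chat channel before promoting the model.
    return Client().active_stack.alerter.ask(
        f"New model reached {accuracy:.2%} accuracy. Deploy to production?"
    )

@step
def deploy_if_approved(approved: bool) -> None:
    if approved:
        ...  # placeholder: call your deployment step or pipeline here

@pipeline(enable_cache=False)
def gated_deployment_pipeline():
    accuracy = evaluate_model()
    approved = approve_deployment(accuracy)
    deploy_if_approved(approved)
```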
================================================== === File: docs/book/component-guide/container-registries/azure.md === ### Azure Container Registry Overview The Azure Container Registry (ACR) is a built-in container registry option in ZenML, utilizing Azure's infrastructure for storing container images. #### When to Use ACR Use ACR if: - Your stack components require pulling or pushing container images. - You have access to Azure. #### Deployment Steps 1. Go to the [Azure Portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry). 2. Select your subscription, resource group, location, and registry name. 3. Click `Review + Create` to create the registry. #### Finding the Registry URI The URI format is: ```shell .azurecr.io ``` To find your URI: - Search for `container registries` in the Azure portal. - Use the registry name to construct the URI. #### Using ACR Prerequisites: - Docker installed and running. - Registry URI obtained from the previous section. To register the container registry: ```shell zenml container-registry register --flavor=azure --uri= zenml stack update -c ``` #### Authentication Methods Authentication is required to use ACR in pipelines. Options include: 1. **Local Authentication** (Quick Setup): - Requires Azure CLI installed. - Log in to the registry: ```shell az acr login --name= ``` **Note**: This method is not portable across environments. 2. **Azure Service Connector** (Recommended): - Provides auto-configuration and better security. - Register a service connector: ```sh zenml service-connector register --type azure -i ``` - For a non-interactive setup: ```sh zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type docker-registry --resource-id ``` #### Connecting ACR to ZenML After setting up a service connector: 1. Register the ACR: ```sh zenml container-registry register -f azure --uri= ``` 2. Connect it via the service connector: ```sh zenml container-registry connect --connector ``` #### Using ACR in a ZenML Stack To register and set a stack with the new container registry: ```sh zenml stack register -c ... --set ``` #### Local Docker Client Login If you need to interact with the remote registry: ```sh zenml service-connector login --resource-type docker-registry --resource-id ``` For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/custom.md === ### Develop a Custom Container Registry #### Overview To create a custom container registry in ZenML, it's essential to understand the base abstraction, which includes a `uri` and a `prepare_image_push` method for validation. 
#### Base Abstraction The base classes for a container registry are defined as follows: ```python from abc import abstractmethod from typing import Type from zenml.enums import StackComponentType from zenml.stack import Flavor from zenml.stack.authentication_mixin import AuthenticationConfigMixin, AuthenticationMixin from zenml.utils import docker_utils class BaseContainerRegistryConfig(AuthenticationConfigMixin): uri: str class BaseContainerRegistry(AuthenticationMixin): def prepare_image_push(self, image_name: str) -> None: pass def push_image(self, image_name: str) -> str: if not image_name.startswith(self.config.uri): raise ValueError(f"Image `{image_name}` does not belong to registry `{self.config.uri}`.") self.prepare_image_push(image_name) return docker_utils.push_image(image_name) class BaseContainerRegistryFlavor(Flavor): @property @abstractmethod def name(self) -> str: pass @property def type(self) -> StackComponentType: return StackComponentType.CONTAINER_REGISTRY @property def config_class(self) -> Type[BaseContainerRegistryConfig]: return BaseContainerRegistryConfig @property def implementation_class(self) -> Type[BaseContainerRegistry]: return BaseContainerRegistry ``` #### Building Your Own Container Registry To create a custom flavor: 1. **Implement the Registry**: Inherit from `BaseContainerRegistry` and define any checks in `prepare_image_push`. 2. **Create Configuration**: Inherit from `BaseContainerRegistryConfig` for additional configuration. 3. **Combine Classes**: Inherit from `BaseContainerRegistryFlavor` to unify implementation and configuration. **Registering the Flavor**: Use the CLI to register your flavor: ```shell zenml container-registry flavor register ``` For example: ```shell zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor ``` **Note**: Ensure ZenML is initialized at the root of your repository for proper resolution. #### Listing Available Flavors After registration, list available flavors: ```shell zenml container-registry flavor list ``` #### Important Considerations - **Class Usage**: - `CustomContainerRegistryFlavor` is used during flavor creation. - `CustomContainerRegistryConfig` is involved in stack component registration and validation. - `CustomContainerRegistry` is utilized when the component is in use. - This design separates configuration from implementation, allowing for registration without local dependencies. For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.base_container_registry.BaseContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/dockerhub.md === ### DockerHub Container Registry in ZenML **Overview**: DockerHub is a built-in container registry in ZenML for storing container images. #### When to Use - Use DockerHub if: - Your stack components need to pull or push container images. - You have a DockerHub account. #### Deployment Steps 1. **Create a DockerHub Account**: Required to use the DockerHub registry. 2. **Repository Type**: - By default, images are published to a **public** repository. - For a **private** repository, create one on DockerHub before running the pipeline. #### Registry URI Format The DockerHub container registry URI can be in one of the following formats: ```shell # or docker.io/ ``` **Examples**: - `zenml` - `my-username` - `docker.io/zenml` - `docker.io/my-username` #### Using DockerHub 1. 
Ensure **Docker** is installed and running. 2. Obtain the registry URI as described above. 3. Register the container registry in your active stack: ```shell zenml container-registry register \ --flavor=dockerhub \ --uri= zenml stack update -c ``` 4. Log in to DockerHub for image operations: ```shell docker login ``` - Use your DockerHub account name and password or a [personal access token](https://docs.docker.com/docker-hub/access-tokens/). #### Additional Information For a complete list of configurable attributes for the DockerHub container registry, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/default.md === ### Summary of Default Container Registry Documentation **Default Container Registry Overview** The Default Container Registry in ZenML supports local and remote container registries with any URI format. **When to Use** Use the Default Container Registry for local storage or unsupported remote registries. **Local Registry URI Format** Specify a local registry URI as follows: ```shell localhost: # Examples: localhost:5000 localhost:8000 localhost:9999 ``` **Usage Requirements** - Docker must be installed and running. - Provide the registry URI. **Registering the Container Registry** To register and use the Default Container Registry: ```shell zenml container-registry register --flavor=default --uri= zenml stack update -c ``` **Authentication Methods** For private registries, configure authentication. Local Authentication is quick for local setups, while Docker Service Connector is recommended for remote registries. **Local Authentication** Utilizes Docker client credentials from the local environment. Log in using: ```shell docker login --username --password-stdin ``` *Note: Local authentication is not portable across environments.* **Docker Service Connector (Recommended)** To set up authentication: ```sh zenml service-connector register --type docker -i # Non-interactive zenml service-connector register --type docker --username= --password= ``` **Listing Resources** Check accessible resources: ```sh zenml service-connector list-resources --connector-type docker --resource-id ``` **Connecting the Container Registry** Register and connect the Default Container Registry: ```sh zenml container-registry register -f default --uri= zenml container-registry connect -i # Non-interactive zenml container-registry connect --connector ``` **Using in ZenML Stack** To use the Default Container Registry in a ZenML Stack: ```sh zenml stack register -c ... --set ``` **Temporary Local Login** To temporarily authenticate the local Docker client: ```sh zenml service-connector login ``` For further details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.default_container_registry.DefaultContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/gcp.md === ### Google Cloud Container Registry Overview The Google Cloud Container Registry is integrated with ZenML and utilizes the Google Artifact Registry. **Important**: Google Container Registry will be deprecated in favor of Artifact Registry, effective May 15, 2024. After March 18, 2025, Container Registry will be shut down. 
### When to Use Use the GCP container registry if: - Your stack components require pulling/pushing container images. - You have access to GCP. ### Deployment Steps 1. **Enable Artifact Registry**: [Enable here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com). 2. **Create a Docker Repository**: [Create here](https://console.cloud.google.com/artifacts). ### Registry URI Format The URI format for the GCP container registry is: ```shell -docker.pkg.dev// ``` **Examples**: - `europe-west1-docker.pkg.dev/zenml/my-repo` - `southamerica-east1-docker.pkg.dev/zenml/zenml-test` ### Using the GCP Container Registry Requirements: - Docker installed and running. - Obtain the registry URI as described above. **Register the Container Registry**: ```shell zenml container-registry register --flavor=gcp --uri= zenml stack update -c ``` ### Authentication Methods Authentication is necessary for using the GCP Container Registry: #### Local Authentication Quick setup using local Docker client credentials: ```shell gcloud auth configure-docker ``` For Artifact Registry: ```shell gcloud auth configure-docker -docker.pkg.dev ``` **Note**: Local authentication is not portable across environments. #### GCP Service Connector (Recommended) Set up a GCP Service Connector for better security and convenience: ```sh zenml service-connector register --type gcp -i ``` Non-interactive setup: ```sh zenml service-connector register --type gcp --resource-type docker-registry --auto-configure ``` ### Connecting the GCP Container Registry After setting up the Service Connector, register and connect the container registry: ```sh zenml container-registry register -f gcp --uri= zenml container-registry connect -i ``` Non-interactive connection: ```sh zenml container-registry connect --connector ``` ### Using the Container Registry in a ZenML Stack To register and set a stack with the new container registry: ```sh zenml stack register -c ... --set ``` For more details on configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/aws.md === ### Amazon Elastic Container Registry (ECR) Overview Amazon ECR is the container registry used with the ZenML `aws` integration for storing container images. Use it if your stack components require pulling or pushing images and you have access to AWS ECR. ### Deployment Steps 1. **Create a Repository**: - Visit the [ECR website](https://console.aws.amazon.com/ecr). - Select the correct region and click `Create repository`. - Name the repository based on your orchestrator or step operator. 2. **URI Format**: The ECR URI format is: ``` .dkr.ecr..amazonaws.com ``` Example URIs: ``` 123456789.dkr.ecr.eu-west-2.amazonaws.com ``` 3. **Obtain URI**: - Get your `Account ID` from the AWS console. - Choose the desired region from [AWS regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints). - Construct the URI using the template. ### Using the AWS Container Registry 1. **Requirements**: - Install ZenML AWS integration: ```shell zenml integration install aws ``` - Ensure Docker is installed and running. - Use the obtained registry URI. 2. 
**Register the Container Registry**: ```shell zenml container-registry register --flavor=aws --uri= zenml stack update -c ``` ### Authentication Methods - **Local Authentication**: Quick setup using local AWS CLI credentials. Requires AWS CLI installation. ```shell aws ecr get-login-password --region | docker login --username AWS --password-stdin ``` *Note: Local authentication is not portable across environments.* - **AWS Service Connector (Recommended)**: Provides auto-configuration and better security. ```shell zenml service-connector register --type aws -i ``` Non-interactive version: ```shell zenml service-connector register --type aws --resource-type docker-registry --auto-configure ``` ### Connecting the AWS Container Registry 1. **Register and Connect**: ```shell zenml container-registry register -f aws --uri= zenml container-registry connect -i ``` Non-interactive: ```shell zenml container-registry connect --connector ``` 2. **Using in a ZenML Stack**: ```shell zenml stack register -c ... --set ``` ### Local Docker Client Authentication To manually interact with the remote registry: ```shell zenml service-connector login --resource-type docker-registry ``` ### Additional Resources For more details on configurable attributes of the AWS container registry, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/github.md === ### GitHub Container Registry Overview The GitHub Container Registry, integrated with ZenML, is used to store container images. #### When to Use Utilize the GitHub Container Registry if: - Your stack components require pulling or pushing container images. - You are using GitHub for your projects. If not using GitHub, explore other container registry options. #### Deployment The GitHub Container Registry is enabled by default upon creating a GitHub account. #### Finding the Registry URI The URI format is: ```shell ghcr.io/ ``` Examples: - `ghcr.io/zenml` - `ghcr.io/my-username` - `ghcr.io/my-organization` To find your URI, replace `` with your GitHub username or organization name. #### Usage Requirements To use the GitHub Container Registry: - Install and run [Docker](https://www.docker.com). - Obtain the registry URI as described above. - Configure your Docker client for pulling and pushing images by creating a personal access token and logging in. Follow [this guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry). #### Registering the Container Registry To register and update your active stack, use: ```shell zenml container-registry register --flavor=github --uri= zenml stack update -c ``` For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/container-registries.md === # Container Registries Container registries are crucial for storing Docker images used in remote MLOps stacks, enabling the containerization of machine learning pipeline code for isolated execution. 
### Usage A container registry is necessary when components of your stack need to push or pull container images, applicable to most of ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation for specific components to determine if a container registry is required. ### Container Registry Flavors ZenML supports several container registry flavors: - **Default flavor**: Accepts any URI without validation, suitable for local or unsupported remote registries. - **Specific flavors**: Validate URIs and perform checks for push permissions. **Recommendation**: Use specific container registry flavors for additional URI validation. | Container Registry | Flavor | Integration | URI Example | |--------------------|---------|--------------|-------------------------------------------| | [DefaultContainerRegistry](default.md) | `default` | _built-in_ | - | | [DockerHubContainerRegistry](dockerhub.md) | `dockerhub` | _built-in_ | docker.io/zenml | | [GCPContainerRegistry](gcp.md) | `gcp` | _built-in_ | gcr.io/zenml | | [AzureContainerRegistry](azure.md) | `azure` | _built-in_ | zenml.azurecr.io | | [GitHubContainerRegistry](github.md) | `github` | _built-in_ | ghcr.io/zenml | | [AWSContainerRegistry](aws.md) | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com | To view available container registry flavors, use the command: ```shell zenml container-registry flavor list ``` ================================================== === File: docs/book/component-guide/feature-stores/custom.md === ### Develop a Custom Feature Store Before creating a custom feature store, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) to understand ZenML's component flavor concepts. **Feature Store Overview:** - Feature stores enable data teams to serve data through: - An offline store - An online low-latency store - They maintain synchronization between the two and provide a centralized registry for features and feature schemas for team or organizational use. **Important Note:** - The base abstraction for feature stores is currently in progress, and extension is not available. For immediate use, refer to the list of existing feature stores. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/component-guide/feature-stores/feature-stores.md === ### Feature Stores Feature stores enable data teams to manage data through an offline store and an online low-latency store, ensuring synchronization between the two. They provide a centralized registry for features and feature schemas, addressing the challenge of train-serve skew where training and serving data diverge. #### When to Use Feature stores are optional components in the ZenML Stack, suitable for: - Productionalizing new features - Reusing existing features across pipelines and models - Ensuring consistency between training and serving data - Providing a central registry of features and feature schemas #### Available Feature Stores ZenML features an integration with Feast for production use cases. 
The following feature stores are available:

| Feature Store | Flavor | Integration | Notes |
|------------------------------|---------|-------------|-------------------------------------|
| [FeastFeatureStore](feast.md)| `feast` | `feast` | Connects ZenML with existing Feast |
| [Custom Implementation](custom.md) | _custom_ | | Extend the feature store abstraction |

To view available feature store flavors, use:
```shell
zenml feature-store flavor list
```

#### How to Use
The feature store implementation is based on the Feast integration. For usage details, refer to the [Feast documentation](feast.md#how-do-you-use-it).

==================================================

=== File: docs/book/component-guide/feature-stores/feast.md ===

### Summary of Feast Feature Store Documentation

**Feast Overview**
Feast (Feature Store) is a system for managing and serving machine learning features for production models. It supports both low-latency online stores for real-time predictions and offline stores for batch scoring or model training.

**Use Cases**
Feast enables:
- Access to offline/batch data for training.
- Access to online data during inference.

**Deployment**
To deploy Feast with ZenML:
1. Ensure you have a Feast feature store. If not, follow the [Feast Documentation](https://docs.feast.dev/how-to-guides/feast-snowflake-gcp-aws/deploy-a-feature-store).
2. Install the Feast integration in ZenML:
   ```shell
   zenml integration install feast
   ```
3. Register the feature store:
   ```shell
   zenml feature-store register feast_store --flavor=feast --feast_repo=""
   zenml stack register ... -f feast_store
   ```

**Usage**
To retrieve features from a registered feature store, create a step that interfaces with it:

```python
from datetime import datetime
from typing import Any, Dict, List, Union

import pandas as pd
from zenml import pipeline, step
from zenml.client import Client
from zenml.exceptions import DoesNotExistException


@step
def get_historical_features(
    entity_dict: Union[Dict[str, Any], str],
    features: List[str],
    full_feature_names: bool = False,
) -> pd.DataFrame:
    """Fetch historical features from Feast."""
    feature_store = Client().active_stack.feature_store
    if not feature_store:
        raise DoesNotExistException("Feast feature store not available.")
    entity_dict["event_timestamp"] = [
        datetime.fromisoformat(val) for val in entity_dict["event_timestamp"]
    ]
    entity_df = pd.DataFrame.from_dict(entity_dict)
    return feature_store.get_historical_features(
        entity_df=entity_df,
        features=features,
        full_feature_names=full_feature_names,
    )


entity_dict = {
    "driver_id": [1001, 1002, 1003],
    "label_driver_reported_satisfaction": [1, 5, 3],
    "event_timestamp": [
        datetime(2021, 4, 12, 10, 59, 42).isoformat(),
        datetime(2021, 4, 12, 8, 12, 10).isoformat(),
        datetime(2021, 4, 12, 16, 40, 26).isoformat(),
    ],
}

features = [
    "driver_hourly_stats:conv_rate",
    "driver_hourly_stats:acc_rate",
    "driver_hourly_stats:avg_daily_trips",
    "transformed_conv_rate:conv_rate_plus_val1",
    "transformed_conv_rate:conv_rate_plus_val2",
]


@pipeline
def my_pipeline():
    my_features = get_historical_features(entity_dict, features)
    ...
```

**Important Notes**
- Online data retrieval is supported locally but not in deployed models.
- ZenML's use of Pydantic limits data types to basic types; conversions are necessary for complex types like `DataFrame` or `datetime`.

For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore).
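For the online-store side of the same pattern (local runs only, as noted above), a rough sketch is shown below; it assumes the ZenML Feast feature store exposes a `get_online_features(entity_rows=..., features=...)` method mirroring Feast's own API, and the step name and entity rows are hypothetical:

```python
from typing import Any, Dict, List

from zenml import step
from zenml.client import Client


@step
def get_online_features(
    entity_rows: List[Dict[str, Any]], features: List[str]
) -> Dict[str, List[Any]]:
    """Fetch low-latency online features for local inference."""
    feature_store = Client().active_stack.feature_store
    if not feature_store:
        raise RuntimeError("The active stack has no feature store configured.")
    # Assumed to mirror Feast's online-retrieval API; verify against your ZenML/Feast versions.
    return feature_store.get_online_features(entity_rows=entity_rows, features=features)


# Hypothetical entity rows matching the historical-features example above.
entity_rows = [{"driver_id": 1001}, {"driver_id": 1002}]
features = ["driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate"]
```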
================================================== === File: docs/book/component-guide/step-operators/step-operators.md === # Step Operators The step operator allows execution of individual pipeline steps in specialized environments optimized for specific workloads, such as those requiring GPUs or distributed processing frameworks like [Spark](https://spark.apache.org/). ### Comparison to Orchestrators The orchestrator is a mandatory component that executes all pipeline steps in order and manages scheduling. In contrast, the step operator is used for executing individual steps in separate environments when the orchestrator's environment is inadequate. ### When to Use Utilize a step operator when pipeline steps need resources unavailable in the orchestrator's runtime environment. For example, if a step requires GPU resources for training a model but the orchestrator runs on a cluster without GPUs, a step operator like [SageMaker](sagemaker.md), [Vertex](vertex.md), or [AzureML](azureml.md) should be used. ### Step Operator Flavors ZenML provides the following integrations for executing steps on major cloud providers: | Step Operator | Flavor | Integration | Notes | |---------------|-------------|-------------|-----------------------------------------| | [AzureML](azureml.md) | `azureml` | `azure` | Executes steps using AzureML | | [Kubernetes](kubernetes.md) | `kubernetes` | `kubernetes` | Executes steps using Kubernetes Pods | | [Modal](modal.md) | `modal` | `modal` | Executes steps using Modal | | [SageMaker](sagemaker.md) | `sagemaker` | `aws` | Executes steps using SageMaker | | [Spark](spark-kubernetes.md) | `spark` | `spark` | Executes steps in a distributed manner using Spark on Kubernetes | | [Vertex](vertex.md) | `vertex` | `gcp` | Executes steps using Vertex AI | | [Custom Implementation](custom.md) | _custom_ | | Allows custom step operator implementation | To view available flavors, use: ```shell zenml step-operator flavor list ``` ### How to Use You don't need to directly interact with ZenML step operators in your code. Specify the desired step operator in the `@step` decorator of your step, as shown below: ```python from zenml import step @step(step_operator=) def my_step(...) -> ...: ... ``` #### Specifying Per-Step Resources For additional hardware resources, specify them in your steps as detailed [here](../../how-to/pipeline-development/training-with-gpus/README.md). #### Enabling CUDA for GPU Hardware To run steps on a GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full GPU acceleration. ================================================== === File: docs/book/component-guide/step-operators/custom.md === ### Developing a Custom Step Operator in ZenML #### Overview To develop a custom step operator in ZenML, it's essential to understand the component flavor concepts outlined in the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction The `BaseStepOperator` is an abstract class that must be subclassed to execute pipeline steps in a separate environment. 
It provides a basic interface: ```python from abc import ABC, abstractmethod from typing import List, Type from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig, Flavor from zenml.config.step_run_info import StepRunInfo class BaseStepOperatorConfig(StackComponentConfig): """Base config for step operators.""" class BaseStepOperator(StackComponent, ABC): """Base class for ZenML step operators.""" @abstractmethod def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None: """Executes a step synchronously.""" class BaseStepOperatorFlavor(Flavor): """Base class for ZenML step operator flavors.""" @property @abstractmethod def name(self) -> str: """Returns the flavor name.""" @property def type(self) -> StackComponentType: return StackComponentType.STEP_OPERATOR @property def config_class(self) -> Type[BaseStepOperatorConfig]: return BaseStepOperatorConfig @property @abstractmethod def implementation_class(self) -> Type[BaseStepOperator]: """Returns the implementation class for this flavor.""" ``` #### Steps to Create a Custom Step Operator 1. **Subclass `BaseStepOperator`**: Implement the `launch` method to prepare the execution environment and run the entrypoint command. - Use `info.pipeline.docker_settings` for Docker dependencies. - Ensure source code is available in the execution environment. 2. **Handle Resources**: If applicable, manage resources defined in `info.config.resource_settings`. 3. **Create Configuration Class**: Inherit from `BaseStepOperatorConfig` to add custom parameters. 4. **Combine Implementation and Configuration**: Inherit from `BaseStepOperatorFlavor`, providing a name for the flavor. 5. **Register the Flavor**: Use the CLI to register the flavor: ```shell zenml step-operator flavor register ``` Example: ```shell zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor ``` #### Important Considerations - Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - After registration, list available flavors: ```shell zenml step-operator flavor list ``` #### Additional Notes - The `CustomStepOperatorFlavor` is used during flavor creation, while `CustomStepOperatorConfig` is utilized during registration for validation. - The `CustomStepOperator` is invoked when the component is in use, allowing for separation of configuration and implementation. #### Enabling GPU Support For GPU execution, follow the instructions on [enabling CUDA](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure full acceleration capabilities. ================================================== === File: docs/book/component-guide/step-operators/spark-kubernetes.md === ### Summary: Executing Individual Steps on Spark #### Overview The `spark` integration in ZenML introduces two key step operators: - **SparkStepOperator**: Base class for Spark-related step operators. - **KubernetesSparkStepOperator**: Launches ZenML steps as Spark applications using Kubernetes. #### SparkStepOperator **Configuration Class: `SparkStepOperatorConfig`** - **Attributes**: - `master`: Master URL for the Spark cluster (supports Kubernetes, YARN, Mesos). - `deploy_mode`: 'cluster' (default) or 'client', indicating driver node location. - `submit_kwargs`: JSON string for additional Spark parameters. **Implementation Class: `SparkStepOperator`** - **Methods**: - `_resource_configuration`: Configures Spark resources. - `_backend_configuration`: Configures backend settings for cluster managers. 
- `_io_configuration`: Configures input/output sources. - `_additional_configuration`: Appends user-defined parameters. - `_launch_spark_job`: Executes a Spark job using `spark-submit`. - `launch`: Initiates the step on Spark. **Important Notes**: - `_io_configuration` is effective with `S3ArtifactStore` requiring additional configuration for other stores. #### KubernetesSparkStepOperator **Configuration Class: `KubernetesSparkStepOperatorConfig`** - **Attributes**: - `namespace`: Kubernetes namespace for driver/executor pods. - `service_account`: Service account for Spark components. **Implementation Class: `KubernetesSparkStepOperator`** - Inherits from `SparkStepOperator`. - Overrides `_backend_configuration` to set up Kubernetes-specific configurations. #### When to Use - For large datasets or when leveraging distributed computing for efficiency. #### Deployment Steps 1. **Remote ZenML Server**: Follow the deployment guide. 2. **Kubernetes Cluster**: Set up using cloud providers or custom infrastructure (e.g., AWS EKS). - **EKS Setup**: Create IAM roles, configure the EKS cluster, and set up node groups. - **Docker Image**: Use Spark's Docker images or build custom images with required dependencies. - **RBAC Configuration**: Create necessary Kubernetes resources for Spark access. #### Usage - Install the Spark integration: ```bash zenml integration install spark ``` - Register the step operator: ```bash zenml step-operator register spark_step_operator \ --flavor=spark-kubernetes \ --master=k8s://$EKS_API_SERVER_ENDPOINT \ --namespace= \ --service_account= ``` - Register the stack: ```bash zenml stack register spark_stack \ -o default \ -s spark_step_operator \ -a spark_artifact_store \ -c spark_container_registry \ -i local_builder \ --set ``` - Define a step using the operator: ```python from zenml import step @step(step_operator=) def step_on_spark(...) -> ...: ... ``` #### Additional Configuration For more configurations, use `SparkStepOperatorSettings` when defining or running pipelines. Refer to the SDK documentation for available attributes. ================================================== === File: docs/book/component-guide/step-operators/sagemaker.md === ### Amazon SageMaker Step Operator Overview **Amazon SageMaker** provides specialized compute instances for training jobs and a UI for model management. The **ZenML SageMaker step operator** enables submission of individual steps to run on SageMaker instances. ### When to Use Use the SageMaker step operator if: - Your pipeline steps require resources (CPU, GPU, memory) not available in your orchestrator. - You have access to SageMaker. For other cloud providers, refer to the **Vertex** or **AzureML** step operators. ### Deployment Requirements 1. **IAM Role**: Create a role in the IAM console with `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. [Setup Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-create-execution-role). 2. **ZenML AWS Integration**: Install with: ```shell zenml integration install aws ``` 3. **Docker**: Must be installed and running. 4. **AWS Container Registry**: Required for your stack. [Setup Guide](../container-registries/aws.md#how-to-deploy-it). 5. **Remote Artifact Store**: Needed for reading/writing artifacts. Refer to the respective documentation for setup. 6. **Instance Type**: Choose an instance type for executing steps. [Available Types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). 7. 
**(Optional) Experiment**: To group SageMaker runs. [Create Experiment Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-create.html). ### Authentication Methods #### 1. **Service Connector (Recommended)** Register a service connector with AWS permissions: ```shell zenml service-connector register --type aws -i zenml step-operator register \ --flavor=sagemaker \ --role= \ --instance_type= \ # --experiment_name= zenml step-operator connect --connector zenml stack register -s ... --set ``` #### 2. **Implicit Authentication** - **Local Orchestrator**: Uses the `default` AWS profile. Ensure it has SageMaker permissions. - **Remote Orchestrator**: Must authenticate to AWS and assume the specified IAM role. ```shell zenml step-operator register \ --flavor=sagemaker \ --role= \ --instance_type= \ # --experiment_name= zenml stack register -s ... --set python run.py # Authenticates with `default` profile ``` ### Using the Step Operator To execute steps in SageMaker, specify the operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ### Additional Configuration For further configuration, pass `SagemakerStepOperatorSettings` when defining/running your pipeline. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). ### Enabling CUDA for GPU To run steps on GPU, follow the [GPU training instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration. ================================================== === File: docs/book/component-guide/step-operators/modal.md === ### Modal Step Operator Overview **Modal** is a cloud infrastructure platform optimized for fast execution times, particularly for Docker image building and hardware provisioning. The **ZenML Modal step operator** allows you to run individual steps on Modal compute instances. #### When to Use Use the Modal step operator if: - You require fast execution for resource-intensive steps (CPU, GPU, memory). - You want precise hardware specifications for each step. - You have access to Modal. #### Deployment Steps 1. **Sign Up**: Create a Modal account [here](https://modal.com/signup). 2. **Install CLI**: Run: ```shell pip install modal modal setup ``` #### Usage Requirements - Install the ZenML Modal integration: ```shell zenml integration install modal ``` - Ensure Docker is installed and running. - Set up a cloud artifact store and a cloud container registry supported by ZenML. #### Registering the Step Operator Register the step operator with: ```shell zenml step-operator register --flavor=modal zenml stack update -s ... ``` #### Executing Steps To execute a step using the Modal operator, use the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML builds a Docker image for your code to run in Modal. 
#### Additional Configuration Specify hardware requirements using `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml.integrations.modal.flavors import ModalStepOperatorSettings modal_settings = ModalStepOperatorSettings(gpu="A100") resource_settings = ResourceSettings(cpu=2, memory="32GB") @step( step_operator="modal", settings={ "step_operator": modal_settings, "resources": resource_settings } ) def my_modal_step(): ... ``` - The `cpu` parameter in `ResourceSettings` accepts a single integer, indicating a soft minimum limit. - Example cost for 2 CPUs and 32GB memory: ~$1.03/hour. This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu). #### Important Notes - Settings for region and cloud provider are available for Modal Enterprise and Team plans only. - Use looser settings to avoid execution failures; Modal provides detailed error messages for troubleshooting. - For more on region selection, see the [Modal docs](https://modal.com/docs/guide/region-selection). ================================================== === File: docs/book/component-guide/step-operators/azureml.md === ### Summary of AzureML Step Operator Documentation **Overview**: AzureML provides compute instances for training jobs and a UI for model management. ZenML's AzureML step operator enables submission of individual steps to AzureML compute instances. **When to Use**: - Use the AzureML step operator when pipeline steps require resources not provided by your orchestrator. - Access to AzureML is necessary; for other cloud providers, consider SageMaker or Vertex step operators. **Deployment Steps**: 1. Create an Azure Machine Learning workspace, including a container registry and storage account. 2. (Optional) Create a compute instance or cluster in AzureML. 3. (Optional) Create a Service Principal for authentication if using a service connector. **Usage Requirements**: - Install ZenML Azure integration: ```shell zenml integration install azure ``` - Docker must be installed and running. - Set up an Azure container registry and artifact store. - Ensure an AzureML workspace and optional compute cluster are available. **Authentication Methods**: 1. **Service Connector** (recommended): - Register a service connector and connect it to the step operator. - Example commands: ```shell zenml service-connector register --type azure -i zenml step-operator register --flavor=azureml --subscription_id= --resource_group= --workspace_name= zenml step-operator connect --connector zenml stack register -s ... --set ``` 2. **Implicit Authentication**: - For local orchestrators, ZenML uses Azure CLI configuration. - For remote orchestrators, ensure they can authenticate to Azure. **Executing Steps**: - Specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` **Docker Image**: ZenML builds a Docker image for the pipeline, which can be customized. **Configuration**: - Use `AzureMLStepOperatorSettings` to configure compute resources: - **Serverless Compute** (default): Set `mode` to `serverless`. - **Compute Instance**: Set `mode` to `compute-instance` and specify `compute_name` and optional parameters. - **Compute Cluster**: Set `mode` to `compute-cluster` and specify `compute_name`. 
Example of defining a compute instance: ```python from zenml.integrations.azure.flavors import AzureMLStepOperatorSettings azureml_settings = AzureMLStepOperatorSettings( mode="compute-instance", compute_name="MyComputeInstance", compute_size="Standard_NC6s_v3", ) @step(settings={"step_operator": azureml_settings}) def my_azureml_step(): # YOUR STEP CODE ... ``` **CUDA for GPU**: Follow specific instructions to enable CUDA for GPU acceleration when using the step operator. For more details, refer to the AzureMLStepOperatorSettings SDK docs and runtime configuration documentation. ================================================== === File: docs/book/component-guide/step-operators/kubernetes.md === ### Kubernetes Step Operator Overview ZenML's Kubernetes step operator enables the execution of individual pipeline steps on Kubernetes pods, ideal for scenarios requiring additional computing resources beyond those provided by the orchestrator. #### When to Use - When pipeline steps need more CPU, GPU, or memory resources. - When a Kubernetes cluster is accessible. #### Deployment Requirements 1. **Kubernetes Cluster**: Must be deployed (refer to the cloud guide for options). 2. **ZenML Kubernetes Integration**: Install with: ```shell zenml integration install kubernetes ``` 3. **Docker**: Either installed locally or use a remote image builder. 4. **Remote Artifact Store**: Required for reading/writing artifacts. **Recommendation**: Set up a Service Connector for connecting to the Kubernetes cluster, especially for cloud-managed clusters (AWS, GCP, Azure). #### Usage Steps 1. **Register the Step Operator**: - Using a Service Connector: ```shell zenml step-operator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml step-operator connect --connector ``` - Using `kubectl`: ```shell zenml step-operator register --flavor=kubernetes --kubernetes_context= ``` 2. **Update Active Stack**: ```shell zenml stack update -s ``` 3. **Define Steps**: Use the registered step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` #### Interacting with Pods For debugging, you can interact with Kubernetes pods using `kubectl`. Pods are labeled with: - `run`: ZenML run name - `pipeline`: ZenML pipeline name Example to delete pods for a specific pipeline: ```shell kubectl delete pod -n zenml -l pipeline=kubernetes_example_pipeline ``` #### Additional Configuration Customize the Kubernetes step operator using `KubernetesStepOperatorSettings` for attributes like: - **Pod Settings**: Node selectors, labels, affinity, tolerations, and image pull secrets. - **Service Account Name**: Specify the service account for the pods. Example configuration: ```python from zenml.integrations.kubernetes.flavors import KubernetesStepOperatorSettings kubernetes_settings = KubernetesStepOperatorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "affinity": {...}, "tolerations": [...], "resources": {...}, "annotations": {...}, "volumes": [...], "volume_mounts": [...], "host_ipc": True, "image_pull_secrets": ["regcred"], "labels": {...} }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) @step(settings={"step_operator": kubernetes_settings}) def my_kubernetes_step(): ... ``` #### GPU Configuration To run steps on GPU, follow specific instructions to enable CUDA for full acceleration. 
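As a rough illustration, GPU hardware for such a step is usually requested through ZenML's generic `ResourceSettings`; the sketch below assumes a step operator registered under the hypothetical name `kubernetes`, and enabling CUDA in the container image remains a separate concern:

```python
from zenml import step
from zenml.config import ResourceSettings


@step(
    step_operator="kubernetes",  # hypothetical name of the registered step operator
    settings={"resources": ResourceSettings(gpu_count=1, memory="16GB")},
)
def gpu_trainer() -> None:
    # Training code that expects a CUDA-capable device goes here.
    ...
```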
For more details, refer to the SDK documentation for a complete list of attributes and configuration options. ================================================== === File: docs/book/component-guide/step-operators/vertex.md === ### Google Cloud Vertex AI Documentation Summary **Overview**: Google Cloud Vertex AI provides specialized compute instances for training jobs and a UI for managing models and logs. ZenML's Vertex AI step operator allows submission of pipeline steps to Vertex AI compute instances. **When to Use**: - Use the Vertex step operator when pipeline steps require additional computing resources not provided by your orchestrator. - Requires access to Vertex AI; for other cloud providers, consider SageMaker or AzureML step operators. **Deployment Steps**: 1. **Enable Vertex AI**: Activate it in the Google Cloud Console. 2. **Create a Service Account**: Grant permissions for creating Vertex AI jobs (`roles/aiplatform.admin`) and pushing to the container registry (`roles/storage.admin`). **Usage Requirements**: - Install ZenML GCP integration: ```shell zenml integration install gcp ``` - Ensure Docker is installed and running. - Enable Vertex AI and have a service account file. - Set up a GCR container registry. - (Optional) Specify a machine type (default: `n1-standard-4`). - Configure a remote artifact store for read/write access. **Authentication Options**: 1. **Using `gcloud` CLI**: ```shell gcloud auth login zenml step-operator register --flavor=vertex --project= --region= ``` 2. **Service Account Key File**: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account_path= ``` 3. **GCP Service Connector** (recommended): ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@ zenml step-operator register --flavor=vertex --region= zenml step-operator connect --connector ``` **Using the Step Operator**: - Add the step operator to the active stack: ```shell zenml stack update -s ``` - Define a step using the operator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` **Additional Configuration**: - Specify service account, network, and reserved IP ranges during registration: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account= --network= --reserved_ip_ranges= ``` **Custom Settings**: - Use `VertexStepOperatorSettings` for additional configurations: ```python from zenml import step from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings @step(step_operator=, settings={"step_operator": VertexStepOperatorSettings( accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1, machine_type="n1-standard-2", disk_type="pd-ssd", disk_size_gb=100, )}) def trainer(...) -> ...: """Train a model.""" ``` **CUDA for GPU**: Follow specific instructions to enable CUDA for GPU acceleration. **Persistent Resources**: To speed up development: 1. Create a persistent resource in GCP. 2. Ensure the step operator is configured with a service account that has access to it: ```bash zenml step-operator register -f vertex --service_account= ``` 3. Use the persistent resource in your code: ```python @step(step_operator=, settings={"step_operator": VertexStepOperatorSettings( persistent_resource_id="my-persistent-resource", machine_type="n1-standard-4", accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1, )}) def trainer(...) 
-> ...: """Train a model.""" ``` **Cost Management**: Monitor persistent resource usage as they incur costs while running. ================================================== === File: docs/book/component-guide/experiment-trackers/custom.md === ### Develop a Custom Experiment Tracker #### Overview To create a custom experiment tracker in ZenML, refer to the [general guide on writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that a base abstraction for the Experiment Tracker is under development, and extensions are currently not recommended. You can use existing flavors or implement your own, but be prepared for potential refactoring once the base abstraction is released. #### Steps to Build a Custom Experiment Tracker 1. **Create a Class**: Inherit from `BaseExperimentTracker` and implement the abstract methods. 2. **Configuration Class**: If needed, create a class inheriting from `BaseExperimentTrackerConfig` for configuration parameters. 3. **Combine Classes**: Inherit from `BaseExperimentTrackerFlavor` to integrate the implementation and configuration. #### Registering Your Flavor Use the CLI to register your custom flavor with the following command, ensuring to use dot notation for the flavor class: ```shell zenml experiment-tracker flavor register ``` For example, if `MyExperimentTrackerFlavor` is in `flavors/my_flavor.py`, register it as: ```shell zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor ``` #### Best Practices - Initialize ZenML at the root of your repository using `zenml init` to avoid resolution issues. - After registration, verify your flavor is available with: ```shell zenml experiment-tracker flavor list ``` #### Important Notes - The **CustomExperimentTrackerFlavor** class is used during flavor creation via CLI. - The **CustomExperimentTrackerConfig** class is utilized when registering/updating a stack component, validating user-provided values. - The **CustomExperimentTracker** is engaged when the component is in use, allowing separation of flavor configuration from implementation. This design enables registration of flavors and components without needing all dependencies installed locally, provided the flavor and config are in a separate module from the actual tracker implementation. ================================================== === File: docs/book/component-guide/experiment-trackers/vertexai.md === ### Vertex AI Experiment Tracker Summary The **Vertex AI Experiment Tracker** is a component of the ZenML integration that utilizes the Google Cloud Vertex AI tracking service to log and visualize pipeline step data, such as models, parameters, and metrics. #### Use Cases - Ideal for iterative ML experimentation and transitioning to production workflows. - Recommended if already using Vertex AI or seeking a managed solution within the Google Cloud ecosystem. - Consider other Experiment Tracker flavors if unfamiliar with Vertex AI or using non-GCP cloud providers. #### Configuration To configure the Vertex AI Experiment Tracker, install the GCP ZenML integration: ```shell zenml integration install gcp -y ``` **Main Configuration Options:** - `project`: GCP project name (default inferred). - `location`: GCP location (default is us-central1). - `staging_bucket`: GCS bucket for staging artifacts (format: gs://...). - `service_account_path`: Path to service account JSON for authentication. 
**Registering the Tracker:** ```shell zenml experiment-tracker register vertex_experiment_tracker \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// zenml stack register custom_stack -e vertex_experiment_tracker ... --set ``` #### Authentication Methods 1. **Implicit Authentication**: Quick local setup using `gcloud auth login`. Not recommended for production. 2. **GCP Service Connector** (recommended): Provides better security and configuration management. ```sh zenml service-connector register --type gcp -i ``` 3. **GCP Credentials**: Use a service account key stored in a ZenML secret for authentication. ```shell zenml experiment-tracker register \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// \ --service_account_path=path/to/service_account_key.json ``` #### Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator. **Example 1: Logging Metrics** ```python from google.cloud import aiplatform class VertexAICallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): metrics = {key: value for key, value in (logs or {}).items() if isinstance(value, (int, float))} aiplatform.log_time_series_metrics(metrics=metrics, step=epoch) @step(experiment_tracker="") def train_model(config, x_train, y_train): aiplatform.autolog() model.fit(x_train, y_train, callbacks=[VertexAICallback()]) aiplatform.log_metrics(...) aiplatform.log_params(...) ``` **Example 2: Uploading TensorBoard Logs** ```python @step(experiment_tracker="") def train_model(config, gcs_path, x_train, y_train): tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=gcs_path) aiplatform.start_upload_tb_log(tensorboard_experiment_name="experiment_name", logdir=gcs_path) model.fit(x_train, y_train, callbacks=[tensorboard_callback]) aiplatform.end_upload_tb_log() aiplatform.log_metrics(...) aiplatform.log_params(...) ``` #### Accessing the Experiment Tracker UI To find the URL of the Vertex AI experiment linked to a ZenML run: ```python from zenml.client import Client client = Client() tracking_url = client.get_pipeline("").last_run.steps.get("").run_metadata["experiment_tracker_url"].value print(tracking_url) ``` #### Additional Configuration You can specify additional settings using `VertexExperimentTrackerSettings` for experiment names or TensorBoard instances: ```python from zenml.integrations.gcp.flavors.vertex_experiment_tracker_flavor import VertexExperimentTrackerSettings vertexai_settings = VertexExperimentTrackerSettings(experiment="") @step(experiment_tracker="", settings={"experiment_tracker": vertexai_settings}) def step_one(data): ... ``` For more detailed configurations, refer to the ZenML documentation. ================================================== === File: docs/book/component-guide/experiment-trackers/neptune.md === # Neptune Experiment Tracker with ZenML The Neptune Experiment Tracker integrates with [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from ZenML pipeline steps, such as models, parameters, and metrics. ## Use Cases You should use the Neptune Experiment Tracker if: - You are already using neptune.ai for tracking experiment results and want to continue as you adopt MLOps practices with ZenML. - You prefer a visually interactive way to navigate results from ZenML pipeline runs. - You want to share logged artifacts and metrics with your team or stakeholders. 
Consider other [Experiment Tracker flavors](./experiment-trackers.md#experiment-tracker-flavors) if you are unfamiliar with neptune.ai. ## Deployment To deploy the Neptune Experiment Tracker, install the integration: ```shell zenml integration install neptune -y ``` ### Authentication Methods You need to configure the following credentials: - `api_token`: Your Neptune account API token. If left blank, it will attempt to retrieve it from environment variables. - `project`: The project name in the format "workspace-name/project-name". #### ZenML Secret (Recommended) Store credentials securely using a ZenML secret: ```shell zenml secret create neptune_secret --api_token= ``` Configure the Experiment Tracker: ```shell zenml experiment-tracker register neptune_experiment_tracker \ --flavor=neptune \ --project= \ --api_token={{neptune_secret.api_token}} zenml stack register neptune_stack -e neptune_experiment_tracker ... --set ``` #### Basic Authentication (Not Recommended for Production) Directly configure credentials in stack attributes: ```shell zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \ --project= --api_token= zenml stack register neptune_stack -e neptune_experiment_tracker ... --set ``` ## Usage To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and fetch the Neptune run object: ```python from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run from zenml import get_step_context, step from sklearn.model_selection import train_test_split from sklearn.svm import SVC from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def train_model() -> SVC: iris = load_iris() X_train, _, y_train, _ = train_test_split(iris.data, iris.target, test_size=0.2) params = {"kernel": "rbf", "C": 1.0} model = SVC(**params).fit(X_train, y_train) neptune_run = get_neptune_run() neptune_run["parameters"] = params return model ``` ### Logging Metadata Use `get_step_context` to log ZenML metadata: ```python @step(experiment_tracker="neptune_tracker") def my_step(): neptune_run = get_neptune_run() context = get_step_context() neptune_run["pipeline_metadata"] = stringify_unsupported(context.pipeline_run.get_metadata().dict()) neptune_run[f"step_metadata/{context.step_name}"] = stringify_unsupported(context.step_run.get_metadata().dict()) ``` ### Adding Tags Pass tags using `NeptuneExperimentTrackerSettings`: ```python neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"}) @step(experiment_tracker="", settings={"experiment_tracker": neptune_settings}) def my_step(...): neptune_run = get_neptune_run() ``` ## Neptune UI Neptune provides a web-based UI to inspect tracked experiments. Each ZenML pipeline run is logged as a separate experiment in Neptune, accessible via the console or the dashboard. 
## Full Code Example

Here’s an end-to-end example integrating ZenML with Neptune:

```python
from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run
from zenml import get_step_context, pipeline, step
from zenml.client import Client
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

experiment_tracker = Client().active_stack.experiment_tracker


@step(experiment_tracker=experiment_tracker.name)
def train_model() -> SVC:
    iris = load_iris()
    X_train, _, y_train, _ = train_test_split(iris.data, iris.target, test_size=0.2)
    params = {"kernel": "rbf", "C": 1.0}
    model = SVC(**params).fit(X_train, y_train)
    neptune_run = get_neptune_run()
    neptune_run["parameters"] = params
    return model


@step(experiment_tracker=experiment_tracker.name)
def evaluate_model(model: SVC) -> float:
    iris = load_iris()
    _, X_test, _, y_test = train_test_split(iris.data, iris.target, test_size=0.2)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    neptune_run = get_neptune_run()
    context = get_step_context()
    neptune_run["metrics/accuracy"] = accuracy
    return accuracy


@pipeline
def ml_pipeline():
    model = train_model()
    evaluate_model(model)


if __name__ == "__main__":
    from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings

    neptune_settings = NeptuneExperimentTrackerSettings(tags={"regression", "sklearn"})
    ml_pipeline.with_options(settings={"experiment_tracker": neptune_settings})()
```

## Further Reading

For more information, check [Neptune's docs](https://docs.neptune.ai/integrations/zenml/).
``` ## Usage To log information from a ZenML pipeline step, use the `@step` decorator and MLflow's logging capabilities: ```python import mlflow @step(experiment_tracker="") def tf_trainer(x_train, y_train): mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) return model ``` ### Dynamic Tracker Reference Instead of hardcoding, dynamically reference the experiment tracker: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def tf_trainer(...): ... ``` ### MLflow UI Access the MLflow UI for detailed experiment tracking. Use the following to find the tracking URL: ```python from zenml.client import Client last_run = client.get_pipeline("").last_run trainer_step = last_run.get_step("") tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` To start the local MLflow UI: ```bash mlflow ui --backend-store-uri ``` ### Additional Configuration For nested runs or tags, pass `MLFlowExperimentTrackerSettings`: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step(experiment_tracker="", settings={"experiment_tracker": mlflow_settings}) def step_one(data): ... ``` For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.experiment_trackers.mlflow_experiment_tracker). ================================================== === File: docs/book/component-guide/experiment-trackers/experiment-trackers.md === ### Experiment Trackers in ZenML **Overview**: Experiment Trackers enable logging and visualization of ML experiments, linking pipeline runs to experiments. They provide a user-friendly interface for browsing and comparing experiment data. **Key Points**: - **Integration**: Experiment Trackers are optional stack components in ZenML, requiring registration as part of a ZenML Stack. - **Artifact Store**: ZenML automatically tracks pipeline artifacts via the mandatory Artifact Store, but Experiment Trackers enhance usability with visual interfaces. - **Usage**: Add an Experiment Tracker to your ZenML stack for improved visual features. **Architecture**: Experiment Trackers fit into the ZenML stack architecture, allowing seamless integration with various tools. **Available Flavors**: | Experiment Tracker | Flavor | Integration | Notes | |--------------------|--------|-------------|-------| | [Comet](comet.md) | `comet` | `comet` | Adds Comet tracking capabilities | | [MLflow](mlflow.md) | `mlflow` | `mlflow` | Adds MLflow tracking capabilities | | [Neptune](neptune.md) | `neptune` | `neptune` | Adds Neptune tracking capabilities | | [Weights & Biases](wandb.md) | `wandb` | `wandb` | Adds Weights & Biases tracking capabilities | | [Custom Implementation](custom.md) | _custom_ | | _custom_ | Custom tracking options available | **Command to List Flavors**: ```shell zenml experiment-tracker flavor list ``` **Usage Steps**: 1. Configure and add an Experiment Tracker to your ZenML stack. 2. Enable the Experiment Tracker for specific pipeline steps using a decorator. 3. Log information (models, metrics, etc.) explicitly in your pipeline steps. 4. Access the Experiment Tracker UI to visualize logged data. 
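A generic sketch of steps 2 and 3 follows; the logging call is a placeholder, and each flavor's page shows the exact client API to use inside the step:

```python
from zenml import step
from zenml.client import Client

# Step 1: the experiment tracker comes from the active ZenML stack.
experiment_tracker = Client().active_stack.experiment_tracker


# Step 2: enable the tracker for this step via the decorator.
@step(experiment_tracker=experiment_tracker.name)
def train() -> float:
    accuracy = 0.92  # stand-in for real training logic
    # Step 3: log explicitly with the tracker's own library here,
    # e.g. mlflow.log_metric(...), neptune_run[...] = ..., wandb.log(...).
    return accuracy
```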
**Accessing Experiment Tracker UI**: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") step = pipeline_run.steps[""] experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value ``` **Note**: Experiment trackers will mark runs as failed if the corresponding ZenML pipeline step fails. For detailed usage, refer to the documentation for the specific Experiment Tracker flavor in use. ================================================== === File: docs/book/component-guide/experiment-trackers/wandb.md === ### Weights & Biases Experiment Tracker with ZenML The Weights & Biases (W&B) Experiment Tracker integrates with ZenML to log and visualize pipeline information, such as models, parameters, and metrics. It is particularly useful for tracking iterative ML experiments and can also be adapted for automated pipeline runs. #### When to Use - If you are already using W&B for experiment tracking and want to continue as you adopt MLOps practices. - For a visually interactive way to navigate ZenML pipeline results. - To share artifacts and metrics with teams or stakeholders. #### Deployment To deploy the W&B Experiment Tracker, install the integration: ```shell zenml integration install wandb -y ``` ##### Authentication Methods You need to configure the following credentials: - `api_key`: Mandatory API key for your W&B account. - `project_name`: Name of the project for logging runs. - `entity`: Username or team name for sending runs. **Basic Authentication (Not Recommended for Production)** ```shell zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \ --entity= --project_name= --api_key= zenml stack register custom_stack -e wandb_experiment_tracker ... --set ``` **ZenML Secret (Recommended)** Create a ZenML secret to securely store credentials: ```shell zenml secret create wandb_secret \ --entity= \ --project_name= \ --api_key= ``` Then, register the experiment tracker: ```shell zenml experiment-tracker register wandb_tracker \ --flavor=wandb \ --entity={{wandb_secret.entity}} \ --project_name={{wandb_secret.project_name}} \ --api_key={{wandb_secret.api_key}} ``` #### Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator: ```python import wandb from wandb.integration.keras import WandbCallback @step(experiment_tracker="") def tf_trainer(...): model.fit(..., callbacks=[WandbCallback(log_evaluation=True)]) wandb.log({"": metric}) ``` Alternatively, use the Client to dynamically reference the experiment tracker: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def tf_trainer(...): ... ``` #### Weights & Biases UI Each ZenML step using W&B creates a separate experiment run, accessible via the W&B UI. You can find the tracking URL in the step metadata: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` #### Additional Configuration You can pass `WandbExperimentTrackerSettings` to customize settings or add tags: ```python from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings wandb_settings = WandbExperimentTrackerSettings(tags=["some_tag"]) @step(experiment_tracker="", settings={"experiment_tracker": wandb_settings}) def my_step(...): ... 
``` #### Full Code Example Here’s a complete example demonstrating the integration: ```python from zenml import pipeline, step from zenml.client import Client from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments from datasets import load_dataset import wandb experiment_tracker = Client().active_stack.experiment_tracker @step def prepare_data(): dataset = load_dataset("imdb") ... return train_dataset, eval_dataset @step(experiment_tracker=experiment_tracker.name) def train_model(train_dataset, eval_dataset): model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2) training_args = TrainingArguments(...) trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train() wandb.log({"final_evaluation": eval_results}) return model @pipeline(enable_cache=False) def fine_tuning_pipeline(): train_dataset, eval_dataset = prepare_data() model = train_model(train_dataset, eval_dataset) if __name__ == "__main__": wandb_settings = WandbExperimentTrackerSettings(tags=["distilbert", "imdb"]) fine_tuning_pipeline.with_options(settings={"experiment_tracker": wandb_settings})() ``` For more details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.experiment_trackers.wandb_experiment_tracker). ================================================== === File: docs/book/component-guide/experiment-trackers/comet.md === # Comet Experiment Tracker Summary The Comet Experiment Tracker, integrated with ZenML, allows logging and visualization of pipeline step data (models, parameters, metrics) using the Comet platform. It is particularly useful during the iterative ML experimentation phase and can also track automated pipeline results. ## When to Use Comet - If you already use Comet for tracking experiment results and want to integrate it with ZenML. - For a visually interactive way to navigate results from ZenML pipeline runs. - To share artifacts and metrics with your team or stakeholders. Consider other Experiment Tracker flavors if you're unfamiliar with Comet. ## Deployment To deploy the Comet Experiment Tracker, install the integration: ```bash zenml integration install comet -y ``` ### Authentication Methods 1. **ZenML Secret (Recommended)**: Store credentials securely. ```bash zenml secret create comet_secret \ --workspace= \ --project_name= \ --api_key= ``` Configure the tracker: ```bash zenml experiment-tracker register comet_tracker \ --flavor=comet \ --workspace={{comet_secret.workspace}} \ --project_name={{comet_secret.project_name}} \ --api_key={{comet_secret.api_key}} zenml stack register custom_stack -e comet_experiment_tracker ... --set ``` 2. **Basic Authentication**: Directly configure credentials (not recommended for production). ```bash zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ --workspace= --project_name= --api_key= zenml stack register custom_stack -e comet_experiment_tracker ... 
--set
```

## Usage

To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator:

```python
from zenml import step
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step(experiment_tracker=experiment_tracker.name)
def my_step():
    experiment_tracker.log_metrics({"my_metric": 42})
    experiment_tracker.log_params({"my_param": "hello"})
```

### Comet UI

Each ZenML step using Comet creates a separate experiment viewable in the Comet UI. Access the experiment URL via step metadata:

```python
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)
```

## Full Code Example

Here’s a concise example of using the Comet Experiment Tracker in a ZenML pipeline:

```python
from comet_ml.integration.sklearn import log_model
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from zenml import pipeline, step
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker

@step
def load_data():
    return load_iris().data, load_iris().target

@step
def preprocess_data(X, y):
    return train_test_split(X, y, test_size=0.2, random_state=42)

@step(experiment_tracker=experiment_tracker.name)
def train_model(X_train, y_train):
    model = SVC().fit(X_train, y_train)
    log_model(experiment=experiment_tracker.experiment, model_name="SVC", model=model)
    return model

@step(experiment_tracker=experiment_tracker.name)
def evaluate_model(model, X_test, y_test):
    accuracy = accuracy_score(y_test, model.predict(X_test))
    experiment_tracker.log_metrics({"accuracy": accuracy})
    return accuracy

@pipeline
def iris_classification_pipeline():
    X, y = load_data()
    X_train, X_test, y_train, y_test = preprocess_data(X, y)
    model = train_model(X_train, y_train)
    evaluate_model(model, X_test, y_test)

if __name__ == "__main__":
    iris_classification_pipeline()
```

### Additional Configuration

You can pass `CometExperimentTrackerSettings` for extra tags and configurations:

```python
from zenml.integrations.comet.flavors.comet_experiment_tracker_flavor import CometExperimentTrackerSettings

comet_settings = CometExperimentTrackerSettings(tags=["some_tag"])

@step(experiment_tracker="", settings={"experiment_tracker": comet_settings})
def my_step():
    ...
```

Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings) for more attributes and configurations.

==================================================

=== File: docs/book/component-guide/annotators/annotators.md ===

# Annotators in ZenML

## Overview

Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They enable users to launch annotation processes, configure datasets, and track labeled tasks. Data annotation is crucial in MLOps, and ZenML aims to support iterative annotation workflows that integrate labeling into the ML lifecycle.

## Key Use Cases

1. **Initial Data Labeling**: Start labeling data to bootstrap models, iterating with model predictions to improve labeling efficiency.
2. **Ongoing Data Labeling**: Regularly update labels as new data arrives, maintaining contact with raw data for accurate insights.
3. **Inference Samples**: Store and label data from model predictions to compare with actual labels, aiding in drift detection and model retraining.
4.
**Ad Hoc Interventions**: Identify and correct bad labels or address class imbalances through targeted annotation. ## Core Features - Seamless integration of labels in training steps. - Versioning of annotation data. - Conversion of annotation data to/from custom formats. - Generation of UI config files for web annotation interfaces. ## Available Annotators ZenML integrates with several annotation tools: | Annotator | Flavor | Integration | Notes | |-------------------------|----------------|---------------|-----------------------------------------| | ArgillaAnnotator | `argilla` | `argilla` | Connects ZenML with Argilla | | LabelStudioAnnotator | `label_studio` | `label_studio`| Connects ZenML with Label Studio | | PigeonAnnotator | `pigeon` | `pigeon` | Limited to Jupyter notebooks for image/text classification | | ProdigyAnnotator | `prodigy` | `prodigy` | Connects ZenML with Prodigy | | Custom Implementation | _custom_ | | Extend the annotator abstraction | To view available annotator flavors, use: ```shell zenml annotator flavor list ``` ## Usage The annotator implementation is built on the Label Studio integration. For detailed usage, refer to the [Label Studio documentation](label-studio.md#how-do-you-use-it). Note that Pigeon has limited functionality. ## Naming Conventions ZenML standardizes terminology for annotations: - **Project** (Label Studio) is referred to as **Dataset** in ZenML. - The combination of an **annotation** and **source data** is called a **task** in ZenML. This documentation provides a concise overview of the annotators in ZenML, emphasizing their role in the ML lifecycle and the integration with various annotation tools. ================================================== === File: docs/book/component-guide/annotators/custom.md === ### Develop a Custom Annotator Before creating a custom annotator, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavor concepts. **Overview**: Annotators are stack components that facilitate data annotation within ZenML stacks and pipelines. You can utilize the CLI to initiate annotation, configure datasets, and retrieve statistics on labeled tasks. **Important Note**: The base abstraction for annotators is currently under development, and extensions are not available at this time. For immediate use, refer to the list of existing feature stores. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/component-guide/annotators/argilla.md === ### Summary of Argilla Documentation **Argilla Overview** Argilla is a collaborative tool designed for AI engineers and domain experts to create high-quality datasets for machine learning projects. It supports the entire MLOps cycle, from data labeling to model monitoring, emphasizing human-in-the-loop approaches. **Use Cases** Argilla is ideal for labeling textual data within ML workflows. It integrates with ZenML, allowing users to incorporate it into their data curation processes. **Deployment** To deploy Argilla, install the ZenML Argilla integration: ```shell zenml integration install argilla ``` You can register the annotator with an API key directly or as a secret for better security. 
To register a secret: ```shell zenml secret create argilla_secrets --api_key="" ``` Then, register the annotator: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --port=6900 ``` For a deployed instance, specify the URL without a trailing `/` and include headers for private Hugging Face Spaces: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --headers='{"Authorization": "Bearer {[your_hugging_face_token]}"}' ``` After registration, add components to a stack: ```shell zenml stack copy default annotation zenml stack update annotation -an zenml stack set annotation ``` Verify the setup with: ```shell zenml annotator dataset list ``` **Usage** Access data and annotations via the CLI or ZenML SDK. Common commands include: - List datasets: `zenml annotator dataset list` - Annotate a dataset: `zenml annotator dataset annotate ` **Argilla Annotator Component** The Argilla annotator inherits from `BaseAnnotator`, with methods for dataset registration, annotation export, and daemon process management. **Argilla Annotator SDK** To use the SDK in Python: ```python from zenml.client import Client client = Client() annotator = client.active_stack.annotator # List dataset names dataset_names = annotator.get_dataset_names() # Get a specific dataset dataset = annotator.get_dataset("dataset_name") # Get annotations for a dataset annotations = annotator.get_labeled_data(dataset_name="dataset_name") ``` For further details, refer to the [Argilla documentation](https://docs.argilla.io/en/latest/). ================================================== === File: docs/book/component-guide/annotators/pigeon.md === ### Pigeon Overview **Pigeon** is a lightweight, open-source annotation tool for labeling data within Jupyter notebooks. It supports: - Text Classification - Image Classification - Text Captioning ### Use Cases Pigeon is ideal for: - Labeling small to medium datasets in ML workflows - Quick labeling tasks - Iterative labeling during exploratory phases - Collaborative labeling in Jupyter notebooks ### Deployment Steps 1. **Install ZenML Pigeon Integration:** ```shell zenml integration install pigeon ``` 2. **Register the Pigeon Annotator:** ```shell zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir" ``` (The `output_dir` is relative to the repository or notebook root.) 3. 
**Update Your Stack:** ```shell zenml stack update --annotator pigeon ``` ### Usage After registration, access the Pigeon annotator in your Jupyter notebook: **For Text Classification:** ```python from zenml.client import Client annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['I love this movie', 'I was really disappointed by the book'], options=['positive', 'negative'] ) ``` **For Image Classification:** ```python from zenml.client import Client from IPython.display import display, Image annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['/path/to/image1.png', '/path/to/image2.png'], options=['cat', 'dog'], display_fn=lambda filename: display(Image(filename)) ) ``` ### Annotation Management Use the following commands to manage datasets: - `zenml annotator dataset list` - List datasets - `zenml annotator dataset delete ` - Delete a dataset - `zenml annotator dataset stats ` - Get dataset statistics Annotation files are saved as JSON in the specified output directory, with filenames as dataset names. ### Acknowledgements Pigeon was created by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License and has been updated for compatibility with recent `ipywidgets` versions. ================================================== === File: docs/book/component-guide/annotators/label-studio.md === ### Label Studio Integration with ZenML **Overview**: Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types, including: - **Computer Vision**: Image classification, object detection, semantic segmentation - **Audio & Speech**: Classification, speaker diarization, emotion recognition, transcription - **Text/NLP**: Classification, NER, question answering, sentiment analysis - **Time Series**: Classification, segmentation, event recognition - **Multi-Modal**: Dialogue processing, OCR, time series with reference **Use Case**: Incorporate Label Studio into your ML workflow for data labeling. It integrates with cloud artifact stores (AWS S3, GCP/GCS, Azure Blob Storage) but does not support purely local stacks. ### Deployment Steps 1. **Install the Integration**: ```shell zenml integration install label_studio ``` 2. **Set Up Label Studio**: Clone the repository and start the local instance: ```shell git clone https://github.com/HumanSignal/label-studio.git cd label-studio docker-compose up -d ``` Access it at [http://localhost:8080/](http://localhost:8080/) to obtain your API key from the account settings. 3. **Register the API Key**: ```shell zenml secret create label_studio_secrets --api_key="" ``` 4. **Register the Annotator**: ```shell zenml annotator register label_studio --flavor label_studio --authentication_secret="label_studio_secrets" --port=8080 ``` 5. **Set Up the Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -a zenml stack update annotation -an zenml stack set annotation ``` ### Usage - Use the CLI command to list datasets: ```shell zenml annotator dataset list ``` - To annotate a dataset: ```shell zenml annotator dataset annotate ``` ### Key Components - **Label Studio Annotator**: Inherits from `BaseAnnotator`, enabling dataset registration, annotation export, and starting the annotator daemon. 
- **Standard Steps**: - `LabelStudioDatasetRegistrationConfig`: For dataset registration. - `LabelStudioDatasetSyncConfig`: For syncing new data. - `get_or_create_dataset`: Registers or retrieves a dataset. - `get_labeled_data`: Retrieves labeled data in Label Studio format. - `sync_new_data_to_label_studio`: Syncs annotations with the cloud artifact store. - **Helper Functions**: ZenML provides functions to generate 'label config' strings for object detection, image classification, and OCR. For more details, refer to the [Label Studio documentation](https://labelstud.io/guide/tasks.html) and the [ZenML GitHub repository](https://github.com/zenml-io/zenml). ================================================== === File: docs/book/component-guide/annotators/prodigy.md === ### Prodigy Documentation Summary **Prodigy Overview** Prodigy is a paid annotation tool for creating training and evaluation data for machine learning models. It aids in data inspection, cleaning, error analysis, and developing rule-based systems. The tool features a web application optimized for efficient annotation. **Usage Context** Prodigy is beneficial when labeling data in your ML workflow, making it a suitable addition to your ZenML stack. **Deployment Steps** 1. **Install Prodigy**: A license is required. Refer to the [Prodigy installation guide](https://prodi.gy/docs/install) for details. Ensure `urllib3<2` is also installed. 2. **Register Prodigy with ZenML**: ```shell zenml integration export-requirements --output-file prodigy-requirements.txt prodigy zenml annotator register prodigy --flavor prodigy ``` Optionally, specify a custom config path. 3. **Update ZenML Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -an prodigy zenml stack set annotation ``` **Using Prodigy** Prodigy does not require pre-starting the annotator. Use it as per the [Prodigy documentation](https://prodi.gy). Access your data and annotations via the CLI: - List datasets: ```shell zenml annotator dataset list ``` - Annotate a dataset: ```shell zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" ``` **Importing Annotations in ZenML** To import annotations in a ZenML step: ```python from typing import List, Dict, Any from zenml import step from zenml.client import Client @step def import_annotations() -> List[Dict[str, Any]]: zenml_client = Client() annotations = zenml_client.active_stack.annotator.get_labeled_data(dataset_name="my_dataset") return annotations ``` **Prodigy Annotator Component** The Prodigy annotator component extends the `BaseAnnotator` class, implementing core methods for dataset registration and annotation export. It includes additional methods specific to Prodigy for enhanced functionality. ================================================== === File: docs/book/component-guide/image-builders/local.md === ### Local Image Builder The Local Image Builder is a built-in feature of ZenML that utilizes the local Docker installation on your machine to build container images. It employs the official Docker Python library, which accesses authentication credentials from the default location: `$HOME/.docker/config.json`. To use a different configuration directory, set the `DOCKER_CONFIG` environment variable: ```shell export DOCKER_CONFIG=/path/to/config_dir ``` Ensure the specified directory contains a `config.json` file. 
#### When to Use Use the Local Image Builder if: - You can install and run Docker on your machine. - You want to use remote components requiring containerization without additional infrastructure setup. #### Deployment The Local Image Builder is included with ZenML and requires no extra setup. #### Usage To use the Local Image Builder: 1. Ensure Docker is installed and running. 2. Authenticate the Docker client to push to your desired container registry. Register the image builder and create a new stack with the following commands: ```shell zenml image-builder register --flavor=local zenml stack register -i ... --set ``` For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). ================================================== === File: docs/book/component-guide/image-builders/custom.md === ### Develop a Custom Image Builder #### Overview To create a custom image builder in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction The `BaseImageBuilder` is the abstract class for building Docker images. It provides a basic interface: ```python from abc import ABC, abstractmethod from typing import Any, Dict, Optional, Type from zenml.container_registries import BaseContainerRegistry from zenml.image_builders import BuildContext from zenml.stack import StackComponent class BaseImageBuilder(StackComponent, ABC): """Base class for ZenML image builders.""" @property def build_context_class(self) -> Type["BuildContext"]: """Returns the build context class (default: BuildContext).""" return BuildContext @abstractmethod def build( self, image_name: str, build_context: "BuildContext", docker_build_options: Dict[str, Any], container_registry: Optional["BaseContainerRegistry"] = None, ) -> str: """Builds a Docker image and optionally pushes it to a registry.""" ``` #### Creating a Custom Image Builder To create a custom image builder: 1. **Subclass `BaseImageBuilder`:** Implement the `build` method to create a Docker image. Handle optional registry pushing. 2. **Configuration Class:** Inherit from `BaseImageBuilderConfig` for any custom parameters. 3. **Flavor Class:** Inherit from `BaseImageBuilderFlavor` and define a `name` property. Register the flavor via CLI: ```shell zenml image-builder flavor register ``` For example: ```shell zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - List available flavors using: ```shell zenml image-builder flavor list ``` #### Class Interactions - **CustomImageBuilderFlavor:** Used during flavor creation. - **CustomImageBuilderConfig:** Validates user input during registration. - **CustomImageBuilder:** Engaged when the component is in use. This separation allows for independent registration of flavors and components. 
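To illustrate how the three classes described above fit together, here is a minimal, hypothetical flavor. The names (`MyImageBuilder`, `my_flavor`, the remote build service) and the exact import paths for the base classes are assumptions for this sketch, not part of ZenML:

```python
from typing import Any, Dict, Optional, Type

from zenml.container_registries import BaseContainerRegistry
from zenml.image_builders import (  # import paths assumed
    BaseImageBuilder,
    BaseImageBuilderConfig,
    BaseImageBuilderFlavor,
    BuildContext,
)


class MyImageBuilderConfig(BaseImageBuilderConfig):
    """Hypothetical config: where the remote build service lives."""

    build_service_url: str = "http://localhost:9000"


class MyImageBuilder(BaseImageBuilder):
    """Sketch of an image builder that delegates builds to a remote service."""

    def build(
        self,
        image_name: str,
        build_context: "BuildContext",
        docker_build_options: Dict[str, Any],
        container_registry: Optional["BaseContainerRegistry"] = None,
    ) -> str:
        # 1. Ship `build_context` to the build backend.
        # 2. Trigger a build of `image_name` with `docker_build_options`.
        # 3. Push the result to `container_registry` if one is given.
        # 4. Return the fully qualified image name or digest.
        raise NotImplementedError("Sketch only: plug in your build backend here.")


class MyImageBuilderFlavor(BaseImageBuilderFlavor):
    """Ties the config and implementation together under a flavor name."""

    @property
    def name(self) -> str:
        return "my_flavor"

    @property
    def config_class(self) -> Type[MyImageBuilderConfig]:
        return MyImageBuilderConfig

    @property
    def implementation_class(self) -> Type[MyImageBuilder]:
        return MyImageBuilder
```

Registering it then follows the CLI command shown above, e.g. `zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor`.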
#### Custom Build Context If a different build context is needed, subclass `BuildContext` and override the `build_context_class` property in your image builder: ```python class MyCustomBuildContext(BuildContext): pass class MyImageBuilder(BaseImageBuilder): @property def build_context_class(self) -> Type["BuildContext"]: return MyCustomBuildContext ``` This customization allows flexibility in the build context used by your image builder. ================================================== === File: docs/book/component-guide/image-builders/gcp.md === ### Google Cloud Image Builder Overview The Google Cloud Image Builder is part of the ZenML `gcp` integration, utilizing [Google Cloud Build](https://cloud.google.com/build) to create container images. #### When to Use - You cannot install or use [Docker](https://www.docker.com) locally. - You are already using Google Cloud Platform (GCP). - Your stack includes other GCP components like the [GCS Artifact Store](../artifact-stores/gcp.md) or the [Vertex Orchestrator](../orchestrators/vertex.md). #### Deployment Requirements 1. Enable Google Cloud Build APIs in your GCP project. 2. Install the ZenML `gcp` integration: ```shell zenml integration install gcp ``` 3. Set up a GCP Artifact Store for build context and a GCP container registry for the built image. 4. Optionally specify: - GCP project ID and service account for permissions. - Custom Docker image for builds (default: `'gcr.io/cloud-builders/docker'`). - Network and build timeout settings. #### Registering the Image Builder To register the image builder: ```shell zenml image-builder register \ --flavor=gcp \ --cloud_builder_image= \ --network= \ --build_timeout= zenml stack register -i ... --set ``` #### Authentication Methods Authentication is required to use the GCP Image Builder: 1. **Local Authentication**: Quick setup using local GCP CLI credentials. Not portable across environments. - Requires Google Cloud CLI installation. 2. **GCP Service Connector (Recommended)**: Provides better security and reusability of credentials. - Register a service connector: ```shell zenml service-connector register --type gcp -i ``` - Auto-configure for GCP Cloud Build: ```shell zenml service-connector register --type gcp --resource-type gcp-generic --resource-name --auto-configure ``` 3. **GCP Credentials**: Use a service account key for authentication, less secure than the service connector. ```shell zenml image-builder register \ --flavor=gcp \ --project= \ --service_account_path= \ --cloud_builder_image= \ --network= \ --build_timeout= ``` #### Caveats - Google Cloud Build uses a default network (`cloudbuild`) for executing build steps, which provides Application Default Credentials (ADC). - For private dependencies, use a custom base image with the `keyrings.google-artifactregistry-auth` package: ```dockerfile FROM zenmldocker/zenml:latest RUN pip install keyrings.google-artifactregistry-auth ``` **Note**: Specify the ZenML version in the base image tag for stability. ================================================== === File: docs/book/component-guide/image-builders/kaniko.md === ### Kaniko Image Builder Overview The Kaniko image builder is part of the ZenML `kaniko` integration, utilizing [Kaniko](https://github.com/GoogleContainerTools/kaniko) for building container images. #### When to Use - Use Kaniko if you cannot install Docker on your machine. - Familiarity with Kubernetes is required. #### Deployment Requirements - A deployed Kubernetes cluster is necessary. #### Usage Steps 1. 
Install the ZenML `kaniko` integration: ```shell zenml integration install kaniko ``` 2. Install [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl). 3. Set up a [remote container registry](../container-registries/container-registries.md) in your stack. 4. Optionally, configure the Kaniko image builder to store build context in an artifact store by setting `store_context_in_artifact_store=True` and ensuring a [remote artifact store](../artifact-stores/artifact-stores.md) is part of your stack. 5. Register the image builder: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= [ --pod_running_timeout= ] zenml stack register -i ... --set ``` #### Authentication The Kaniko build pod must authenticate to: - Push to the container registry. - Pull from private registries for parent images. - Read from the artifact store if configured. **AWS Setup:** - Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to the EKS node IAM role. - Register image builder with required environment variables: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]' ``` **GCP Setup:** - Enable workload identity and create necessary service accounts. - Register image builder with namespace and service account: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --kubernetes_namespace= \ --service_account_name= ``` **Azure Setup:** - Create a Kubernetes `configmap` for Docker config: ```shell kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }' ``` - Register image builder with the mounted configmap: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \ --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]' ``` #### Additional Parameters You can pass parameters to the Kaniko build using the `executor_args` attribute: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --executor_args='["--label", "key=value"]' ``` **Common Flags:** - `--cache`: Disable caching (`false`). - `--cache-dir`: Directory for cached layers. - `--cache-repo`: Repository for cached layers. - `--cache-ttl`: Cache expiration time (default `24h`). - `--cleanup`: Disable cleanup of the working directory (`false`). - `--compressed-caching`: Disable compressed caching (`false`). For a complete list of flags, refer to the [Kaniko additional flags](https://github.com/GoogleContainerTools/kaniko#additional-flags). ================================================== === File: docs/book/component-guide/image-builders/aws.md === ### AWS Image Builder Overview The AWS Image Builder, part of the ZenML `aws` integration, utilizes [AWS CodeBuild](https://aws.amazon.com/codebuild) to create container images. It is ideal for users who cannot install Docker locally, are already using AWS, or have a stack primarily composed of AWS components like the [S3 Artifact Store](../artifact-stores/s3.md) or the [SageMaker Orchestrator](../orchestrators/sagemaker.md). ### Deployment and Usage To deploy the AWS Image Builder: 1. **Install the ZenML AWS Integration**: ```shell zenml integration install aws ``` 2. **Set Up Requirements**: - An [S3 Artifact Store](../artifact-stores/s3.md) for build context. 
- Optionally, an [AWS container registry](../container-registries/aws.md) for image storage. - Create an [AWS CodeBuild project](https://aws.amazon.com/codebuild) in the desired AWS region. 3. **Configure CodeBuild Project**: - **Source Type**: `Amazon S3` - **Bucket**: Same as the S3 Artifact Store. - **Environment Type**: `Linux Container` - **Environment Image**: `bentolor/docker-dind-awscli` - **Privileged Mode**: `false` 4. **Service Role Permissions**: Ensure the service role for CodeBuild has permissions for S3 and ECR (if applicable): ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::/*" }, { "Effect": "Allow", "Action": ["ecr:*"], "Resource": "arn:aws:ecr:::repository/" }, { "Effect": "Allow", "Action": ["ecr:GetAuthorizationToken"], "Resource": "*" } ] } ``` 5. **Register the Image Builder**: ```shell zenml image-builder register \ --flavor=aws \ --code_build_project= ``` 6. **Set Up Authentication**: Recommended to use an [AWS Service Connector](../../how-to/infrastructure-deployment/auth-management/aws-service-connector.md) for authentication. You can register it as follows: ```shell zenml service-connector register --type aws -i ``` ### Authentication Methods - **Implicit Authentication**: Uses AWS CLI credentials from the local environment. Quick but not portable. - **AWS Service Connector**: Recommended for better security and reusability across components. ### Customizing AWS CodeBuild Builds You can customize the AWS Image Builder with additional options during registration: - `build_image`: Default is `bentolor/docker-dind-awscli`. - `compute_type`: Default is `BUILD_GENERAL1_SMALL`. - `custom_env_vars`: Custom environment variables for the build. - `implicit_container_registry_auth`: Controls authentication method for the container registry (default is `true`). ### Example Commands - **Register and Activate a Stack**: ```shell zenml stack register -i ... --set ``` - **Connect Image Builder to Service Connector**: ```shell zenml image-builder connect --connector ``` This summary provides a concise overview of deploying and using the AWS Image Builder with ZenML, including setup, configuration, authentication, and customization options. ================================================== === File: docs/book/component-guide/image-builders/image-builders.md === ### Image Builders in ZenML **Overview**: The image builder is crucial for building container images in remote MLOps environments, enabling the execution of machine-learning pipelines. **When to Use**: Utilize the image builder when components of your ZenML stack, such as orchestrators, step operators, or model deployers, need to build Docker images. **Image Builder Flavors**: ZenML includes the following image builders: | Image Builder | Flavor | Integration | Notes | |-----------------------|----------|-------------|-----------------------------------------| | [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. | | [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. | | [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build for images. | | [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS Code Build for images. 
| | [Custom Implementation](custom.md) | _custom_ | | Allows for custom image builder creation.| **List Available Flavors**: To view available image builder flavors, use: ```shell zenml image-builder flavor list ``` **Usage**: Direct interaction with the image builder is unnecessary. The active image builder in your ZenML stack will be automatically used by any component that requires container image building. ================================================== === File: docs/book/component-guide/artifact-stores/azure.md === ### Azure Blob Storage Artifact Store The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage to store artifacts. It is ideal for projects requiring shared storage, remote components, or production-grade MLOps. #### Use Cases Consider using the Azure Artifact Store if: - You need to share pipeline results with team members or stakeholders. - Your stack includes remote components (e.g., Kubeflow or Kubernetes). - You require more storage than your local machine can provide. - You are running pipelines at scale. #### Deployment Steps 1. **Install Azure Integration**: ```shell zenml integration install azure -y ``` 2. **Register the Azure Artifact Store**: The mandatory configuration parameter is the root path URI, formatted as `az://container-name` or `abfs://container-name`. ```shell zenml artifact-store register az_store -f azure --path=az://container-name zenml stack register custom_stack -a az_store ... --set ``` #### Authentication Methods Authentication is necessary for integrating the Azure Artifact Store. Options include: - **Implicit Authentication** (quick but limited): - Set environment variables for Azure credentials (account key, connection string, or service principal). - **Azure Service Connector** (recommended): - Register a service connector for better security and management: ```shell zenml service-connector register --type azure -i ``` - Non-interactive example: ```shell zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type blob-container --resource-id ``` #### Connecting the Artifact Store After setting up the service connector, connect it to the Azure Artifact Store: ```shell zenml artifact-store connect -i ``` Or non-interactive: ```shell zenml artifact-store connect --connector ``` #### Using ZenML Secrets You can store Azure credentials in a ZenML Secret for better management: ```shell zenml secret create az_secret --account_name='' --account_key='' ``` Register the artifact store with the secret: ```shell zenml artifact-store register az_store -f azure --path='az://your-container' --authentication_secret=az_secret ``` #### Summary Using the Azure Artifact Store is similar to other artifact stores in ZenML. For detailed guidance, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). ================================================== === File: docs/book/component-guide/artifact-stores/s3.md === ### Summary: Storing Artifacts in an AWS S3 Bucket #### Overview The S3 Artifact Store is a ZenML integration that utilizes AWS S3 or compatible services (e.g., MinIO, Ceph RGW) for artifact storage. It is ideal for projects requiring shared storage, remote components, or production-grade MLOps. #### Use Cases Consider using the S3 Artifact Store when: - You need to share pipeline results. - Your stack components run remotely (e.g., on Kubernetes). 
- Local storage is insufficient. - You require scalable artifact management. #### Deployment Steps 1. **Install S3 Integration**: ```shell zenml integration install s3 -y ``` 2. **Register S3 Artifact Store**: - Required parameter: `--path=s3://bucket-name`. - Example: ```shell zenml artifact-store register s3_store -f s3 --path=s3://bucket-name zenml stack register custom_stack -a s3_store ... --set ``` #### Authentication Methods - **Implicit Authentication**: Quick local setup using AWS CLI credentials. Note that some functionalities may be limited. - **AWS Service Connector**: Recommended for better security and access control. Register using: ```sh zenml service-connector register --type aws -i ``` For a specific S3 bucket: ```sh zenml service-connector register --type aws --resource-type s3-bucket --resource-name --auto-configure ``` #### Connecting Artifact Store To connect the S3 Artifact Store to an AWS Service Connector: ```sh zenml artifact-store connect -i ``` Or non-interactively: ```sh zenml artifact-store connect --connector ``` #### Using ZenML Secrets For enhanced security, store AWS access keys in a ZenML secret: ```shell zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret ``` #### Advanced Configuration Customize the S3 Artifact Store with advanced options: - `client_kwargs`: For connection parameters (e.g., `endpoint_url`). - `config_kwargs`: For botocore client configurations. - `s3_additional_kwargs`: For S3 API parameters (e.g., `ServerSideEncryption`). Example: ```shell zenml artifact-store register minio_store -f s3 --path='s3://minio_bucket' --authentication_secret=s3_secret --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}' ``` #### Usage Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. For detailed usage, refer to the [Artifact Store documentation](./artifact-stores.md#how-to-use-it). ================================================== === File: docs/book/component-guide/artifact-stores/local.md === ### Local Artifact Store The Local Artifact Store in ZenML is a built-in option that stores artifacts on your local filesystem. #### Use Cases - Ideal for beginners or those in the experimental phase of using ZenML. - No need for external resources or managed object-store services like Amazon S3 or Google Cloud Storage. - Not suitable for production due to limitations in sharing, high availability, scalability, and backup features. #### Limitations - Artifacts cannot be accessed from other machines. - Artifact visualizations are unavailable when using a local Artifact Store with a cloud-deployed ZenML instance. - Compatible only with local orchestrators (e.g., local Orchestrator, local Kubeflow, local Kubernetes) and local model deployers (e.g., MLflow). - Step Operators cannot be used with a local Artifact Store. #### Deployment The default ZenML stack includes a local Artifact Store. To view the current configuration: ```shell $ zenml stack list $ zenml artifact-store describe ``` Artifacts are stored in a default path, which can be customized during registration, but using the default is recommended to avoid issues. 
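If you want to check where that default path points on your machine, you can query the active stack from Python (a small sketch using the same client API shown in the Artifact Stores overview):

```python
from zenml.client import Client

# Print the root directory the active (local) artifact store writes to.
print(Client().active_stack.artifact_store.path)
```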
To register a custom local Artifact Store: ```shell # Register the local artifact store zenml artifact-store register custom_local --flavor local # Register and set a stack with the new artifact store zenml stack register custom_stack -o default -a custom_local --set ``` #### Usage Using the local Artifact Store is similar to using any other Artifact Store flavor. For detailed implementation and configuration, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/custom.md === ### Summary: Developing a Custom Artifact Store in ZenML **Overview**: ZenML provides built-in Artifact Store implementations for local and cloud storage. To use a different object storage service, you can create a custom Artifact Store by extending ZenML. #### Base Abstraction The `BaseArtifactStore` class is central to the ZenML stack. Key points include: 1. **Configuration**: Requires a `path` parameter for the root of the artifact store. 2. **Supported Schemes**: The `SUPPORTED_SCHEMES` class variable must be defined in subclasses to indicate supported file path schemes (e.g., `{"abfs://", "az://"}` for Azure). 3. **Abstract Methods**: Subclasses must implement the following methods: - `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`. **Example Code**: ```python from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig from typing import Union, List, Any, Iterable, Tuple, ClassVar, Type PathType = Union[bytes, str] class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: ClassVar[Set[str]] class BaseArtifactStore(StackComponent): @abstractmethod def open(self, name: PathType, mode: str = "r") -> Any: ... @abstractmethod def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ... @abstractmethod def exists(self, path: PathType) -> bool: ... @abstractmethod def glob(self, pattern: PathType) -> List[PathType]: ... @abstractmethod def isdir(self, path: PathType) -> bool: ... @abstractmethod def listdir(self, path: PathType) -> List[PathType]: ... @abstractmethod def makedirs(self, path: PathType) -> None: ... @abstractmethod def mkdir(self, path: PathType) -> None: ... @abstractmethod def remove(self, path: PathType) -> None: ... @abstractmethod def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: ... @abstractmethod def rmtree(self, path: PathType) -> None: ... @abstractmethod def stat(self, path: PathType) -> Any: ... @abstractmethod def walk(self, top: PathType, topdown: bool = True, onerror: Optional[Callable[..., None]] = None) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]: ... class BaseArtifactStoreFlavor(Flavor): @property @abstractmethod def name(self) -> Type["BaseArtifactStore"]: ... @property def type(self) -> StackComponentType: return StackComponentType.ARTIFACT_STORE @property def config_class(self) -> Type[StackComponentConfig]: return BaseArtifactStoreConfig @property @abstractmethod def implementation_class(self) -> Type["BaseArtifactStore"]: ... ``` #### Integration with ZenML When an Artifact Store is instantiated and added to a stack, it creates a filesystem for ZenML pipelines, allowing methods like `fileio.open(...)` to utilize the defined `open(...)` method. 
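For example, once a custom store with its own URI scheme is part of the active stack, a sketch of that dispatch looks like this (the `myfs://` scheme and path are purely hypothetical):

```python
from zenml.io import fileio

# Any URI matching one of the store's SUPPORTED_SCHEMES is routed to the
# `open(...)` method implemented on the custom artifact store.
with fileio.open("myfs://my-bucket/example.txt", "w") as f:
    f.write("routed through the custom artifact store")
```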
#### Steps to Build a Custom Artifact Store 1. Inherit from `BaseArtifactStore` and implement the abstract methods. 2. Inherit from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`. 3. Inherit from `BaseArtifactStoreFlavor` to combine both classes. **Registration Command**: ```shell zenml artifact-store flavor register ``` Example: ```shell zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor ``` #### Important Considerations - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - After registration, verify with: ```shell zenml artifact-store flavor list ``` - **Visualization Support**: Custom Artifact Stores must authenticate to back-ends without relying on local environments. Install necessary dependencies in your deployment environment. For more detailed implementation, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). ================================================== === File: docs/book/component-guide/artifact-stores/gcp.md === ### Google Cloud Storage (GCS) Artifact Store The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage (GCS) for storing ZenML artifacts in a GCP bucket. #### Use Cases Consider using GCS when: - You need to share pipeline results with team members or stakeholders. - Your stack includes remote components (e.g., Kubeflow, Kubernetes). - Local storage is insufficient for your needs. - You require a production-grade MLOps solution. #### Deployment To deploy the GCS Artifact Store, install the GCP integration: ```shell zenml integration install gcp -y ``` Register the GCS Artifact Store with the root path URI pointing to your GCS bucket: ```shell zenml artifact-store register gs_store -f gcp --path=gs://bucket-name zenml stack register custom_stack -a gs_store ... --set ``` #### Authentication Methods Authentication is essential for using the GCS Artifact Store: 1. **Implicit Authentication**: Quick setup using local Google Cloud CLI credentials. Requires installation of the Google Cloud CLI. Note that some dashboard functionalities may be limited. 2. **GCP Service Connector (Recommended)**: Provides better security and configuration management. Register a connector: ```sh zenml service-connector register --type gcp -i ``` For a specific GCS bucket: ```sh zenml service-connector register --type gcp --resource-type gcs-bucket --resource-name --auto-configure ``` Ensure your GCP credentials have permissions to access the GCS bucket. #### Connecting the GCS Artifact Store After setting up a GCP Service Connector, connect it to the GCS Artifact Store: ```sh zenml artifact-store register -f gcp --path='gs://your-bucket' zenml artifact-store connect -i ``` For non-interactive connection: ```sh zenml artifact-store connect --connector ``` #### Using GCP Credentials Alternatively, use a GCP Service Account Key stored in a ZenML Secret: ```shell zenml secret create gcp_secret --token=@path/to/service_account_key.json zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret zenml stack register custom_stack -a gs_store ... --set ``` #### Usage Using the GCS Artifact Store is similar to other Artifact Store flavors in ZenML. For detailed information, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store). 
================================================== === File: docs/book/component-guide/artifact-stores/artifact-stores.md === # Artifact Stores ## Overview The Artifact Store is a crucial component in the MLOps stack, serving as a persistent storage layer for artifacts generated by machine learning pipelines, such as datasets and models. ZenML automatically serializes and saves these artifacts, enabling features like caching, provenance tracking, and reproducibility. ### Key Points - Not all pipeline outputs are stored in the Artifact Store; storage behavior is defined by the Materializer associated with the artifact type. - Custom Materializers can be created for specific storage needs (e.g., external model registries). - The Artifact Store can also support other stack components, such as the Great Expectations Data Validator. ### Configuration - The Artifact Store must be registered in your ZenML stack. - ZenML provides several built-in and integration-based Artifact Store flavors: | Artifact Store | Flavor | Integration | URI Schema(s) | Notes | |----------------|--------|-------------|----------------|-------| | Local | `local`| Built-in | None | Default store for local filesystem. | | Amazon S3 | `s3` | `s3` | `s3://` | Uses AWS S3 for storage. | | Google Cloud Storage | `gcp` | `gcp` | `gs://` | Uses GCP for storage. | | Azure | `azure`| `azure` | `abfs://`, `az://` | Uses Azure Blob Storage. | | Custom | _custom_| | _custom_ | User-defined implementation. | To list available Artifact Store flavors, use: ```shell zenml artifact-store flavor list ``` ### Registration Example Each Artifact Store requires a `path` attribute during registration: ```shell zenml artifact-store register s3_store -f s3 --path s3://my_bucket ``` ## Usage Typically, users interact with the Artifact Store indirectly through higher-level APIs. Artifacts can be returned from pipeline steps or retrieved post-execution. ### Low-Level API For custom Materializers or specific storage needs, the low-level Artifact Store API can be accessed via: - `zenml.io.fileio`: For operations like `open`, `copy`, `rename`, etc. - `zenml.utils.io_utils`: For higher-level utilities to manage object transfers. ### Example Code To write an artifact: ```python import os from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") fileio.makedirs(os.path.dirname(artifact_uri)) with fileio.open(artifact_uri, "w") as f: f.write("example artifact") ``` To read an artifact: ```python import os from zenml.client import Client from zenml.utils import io_utils root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") artifact_contents = io_utils.read_file_contents_as_string(artifact_uri) ``` ### Temporary File Handling For serialization with external libraries: ```python import os import tempfile from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json") with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=True) as f: # Save to temporary file and copy to artifact store fileio.copy(f.name, artifact_uri) ``` This summary encapsulates the essential aspects of setting up and using the Artifact Store in ZenML, ensuring that critical information is preserved while maintaining conciseness. 
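To round out the usage section above: in day-to-day work you rarely call the low-level API at all, because anything returned from a step is serialized by its Materializer and written to the configured Artifact Store automatically. A minimal sketch (step and pipeline names are illustrative):

```python
from zenml import pipeline, step


@step
def produce_message() -> str:
    # The return value is materialized and stored in the active
    # Artifact Store without any explicit fileio calls.
    return "example artifact"


@pipeline
def storage_demo_pipeline():
    produce_message()


if __name__ == "__main__":
    storage_demo_pipeline()
```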
================================================== === File: docs/book/component-guide/data-validators/custom.md === ### Developing a Custom Data Validator in ZenML **Overview**: ZenML allows for the creation of custom Data Validators, which can integrate various data logging and validation libraries. Familiarity with ZenML's component flavor concepts is recommended before proceeding. **Important Notes**: - The base abstraction for Data Validators is under development; avoid extending them for now. - Existing Data Validator flavors can be used, but custom implementations may require future refactoring. ### Steps to Build a Custom Data Validator 1. **Create a Class**: Inherit from `BaseDataValidator` and override necessary abstract methods based on the library/service you want to integrate. 2. **Configuration Class**: If needed, create a class inheriting from `BaseDataValidatorConfig`. 3. **Combine Classes**: Inherit from `BaseDataValidatorFlavor` to bring the validator and config together. 4. **Standard Steps**: Optionally, provide standard steps for easy integration into pipelines. ### Registration Register your custom Data Validator flavor using the CLI with dot notation: ```shell zenml data-validator flavor register ``` For example, if your flavor class is in `flavors/my_flavor.py`: ```shell zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor ``` **Best Practices**: Initialize ZenML at the root of your repository to ensure proper resolution of the flavor class. ### Verification After registration, confirm the new flavor is available: ```shell zenml data-validator flavor list ``` ### Key Interactions - **CustomDataValidatorFlavor**: Used during flavor creation via CLI. - **CustomDataValidatorConfig**: Validates user-provided values during stack component registration. - **CustomDataValidator**: Engaged when the component is in use, allowing separation of flavor configuration from implementation. This structured approach enables the integration of custom data validation while maintaining compatibility with ZenML's evolving architecture. ================================================== === File: docs/book/component-guide/data-validators/great-expectations.md === ### Great Expectations with ZenML **Overview**: Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows users to run data quality tests on `pandas.DataFrame` within pipelines, automate corrective actions, and generate documentation of results. #### Use Cases - **Data Profiling**: Automatically generates validation rules (Expectations) from dataset properties. - **Data Quality**: Validates datasets against predefined or inferred Expectations. - **Data Docs**: Maintains human-readable documentation of validation rules and results. #### Deployment To use the Great Expectations Data Validator in ZenML, install the integration: ```shell zenml integration install great_expectations -y ``` **Registering the Data Validator**: 1. **Automatic Management**: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations zenml stack register custom_stack -dv ge_data_validator ... --set ``` 2. **Using Existing Configuration**: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_root_dir=/path/to/my/great_expectations zenml stack register custom_stack -dv ge_data_validator ... --set ``` 3. 
**Migrating Configuration**: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml zenml stack register custom_stack -dv ge_data_validator ... --set ``` **Advanced Configuration**: - `configure_zenml_stores`: Automatically updates configuration to use ZenML's Artifact Store. - `configure_local_docs`: Sets up a local Data Docs site for visualization. #### Usage in Pipelines **Data Profiler Step**: Automatically generates an Expectation Suite from a `pandas.DataFrame`. ```python from zenml.integrations.great_expectations.steps import great_expectations_profiler_step ge_profiler_step = great_expectations_profiler_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` **Pipeline Example**: ```python @pipeline def profiling_pipeline(): dataset, _ = importer() ge_profiler_step(dataset) profiling_pipeline() ``` **Data Validator Step**: Validates a dataset against an existing Expectation Suite. ```python from zenml.integrations.great_expectations.steps import great_expectations_validator_step ge_validator_step = great_expectations_validator_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` **Validation Pipeline Example**: ```python @pipeline def validation_pipeline(): dataset, condition = importer() results = ge_validator_step(dataset, condition) message = checker(results) validation_pipeline() ``` #### Direct Interaction with Great Expectations You can directly use the Great Expectations library in custom steps while leveraging ZenML's serialization and versioning. ```python import great_expectations as ge from zenml.integrations.great_expectations.data_validators import GreatExpectationsDataValidator @step def create_custom_expectation_suite() -> ExpectationSuite: context = GreatExpectationsDataValidator.get_data_context() suite = context.create_expectation_suite(expectation_suite_name="custom_suite") # Add expectations... context.save_expectation_suite(suite) context.build_data_docs() return suite ``` #### Visualization Results can be visualized in the ZenML dashboard or within Jupyter notebooks using: ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline_name) last_run = pipeline.last_run validation_step = last_run.steps[step_name] validation_step.visualize() visualize_results("validation_pipeline", "profiler") ``` This summary encapsulates the essential aspects of using Great Expectations with ZenML, including deployment, configuration, usage in pipelines, and visualization of results. ================================================== === File: docs/book/component-guide/data-validators/deepchecks.md === ### Summary of Deepchecks Integration with ZenML **Overview**: Deepchecks is an open-source library integrated with ZenML for validating data and models in pipelines. It provides tests for data integrity, data drift, model drift, and model performance, facilitating automated corrective actions and visual interpretations. **Use Cases**: - **Data Integrity Checks**: Identify issues like missing values and conflicting labels in datasets. - **Data Drift Checks**: Compare datasets to detect feature and label drift. - **Model Performance Checks**: Evaluate model performance using metrics like confusion matrices. 
- **Multi-Model Performance Reports**: Summarize performance scores across multiple models. **Deployment**: To install the Deepchecks integration: ```shell zenml integration install deepchecks -y ``` Register the Deepchecks Data Validator: ```shell zenml data-validator register deepchecks_data_validator --flavor=deepchecks zenml stack register custom_stack -dv deepchecks_data_validator ... --set ``` **Usage**: Deepchecks validation checks are categorized based on input requirements: 1. **Data Integrity Checks**: Single dataset input. 2. **Data Drift Checks**: Two datasets (target and reference). 3. **Model Validation Checks**: Single dataset and model input. 4. **Model Drift Checks**: Two datasets and a model input. You can use Deepchecks in ZenML pipelines through: - Standard Deepchecks steps. - Custom step implementations using the Deepchecks Data Validator. - Direct calls to the Deepchecks library. **Warning**: For remote orchestrators (e.g., Kubeflow, Vertex), extend the base Docker image to include necessary binaries for `opencv2`. Create a Dockerfile: ```shell ARG ZENML_VERSION=0.20.0 FROM zenmldocker/zenml:${ZENML_VERSION} AS base RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y ``` Use it in your pipeline definition: ```python docker_settings = DockerSettings(dockerfile="deepchecks-zenml.Dockerfile") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **Standard Steps**: 1. **Data Integrity Check**: ```python from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step data_validator = deepchecks_data_integrity_check_step.with_options( parameters=dict(dataset_kwargs=dict(label="target", cat_features=[])) ) ``` 2. **Example Pipeline**: ```python @pipeline(settings={"docker": docker_settings}) def data_validation_pipeline(): df_train, df_test = data_loader() data_validator(dataset=df_train) ``` 3. **Custom Data Integrity Check**: ```python @step def data_integrity_check(dataset: pd.DataFrame) -> SuiteResult: data_validator = DeepchecksDataValidator.get_active_data_validator() suite = data_validator.data_validation( dataset=dataset, check_list=[DeepchecksDataIntegrityCheck.TABULAR_OUTLIER_SAMPLE_DETECTION] ) return suite ``` **Visualizing Results**: Use the ZenML dashboard or Jupyter notebooks to visualize results: ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline=pipeline_name) last_run = pipeline.last_run step = last_run.steps[step_name] step.visualize() ``` This summary captures the essential details of using Deepchecks with ZenML, including deployment, usage, and visualization of results. ================================================== === File: docs/book/component-guide/data-validators/data-validators.md === # Data Validators Overview Data Validators are essential tools in machine learning (ML) that ensure data quality and monitor model performance throughout the ML lifecycle. They help prevent issues stemming from poor data, which can lead to unreliable model outcomes. ## Key Features - **Data Profiling**: Analyzes data characteristics. - **Data Integrity Testing**: Ensures data consistency and accuracy. - **Drift Detection**: Identifies data and model drift during various pipeline stages (data ingestion, training, inference). Data Validators generate data profiles and quality reports, which are versioned and stored in the Artifact Store for later retrieval and visualization. 
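Because these reports are stored like any other artifact, they can be fetched and rendered after a run completes. A minimal sketch using the same visualization pattern shown in the validator-specific pages above (the pipeline and step names are illustrative):

```python
from zenml.client import Client

# Look up the latest run of a hypothetical validation pipeline and render
# the report produced by its validation step.
pipeline = Client().get_pipeline("data_validation_pipeline")
validation_step = pipeline.last_run.steps["data_validator"]
validation_step.visualize()
```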
## Use Cases Employ Data Validators in the following scenarios: - Early logging of data quality and model performance. - Regular integrity checks for pipelines ingesting new data. - Continuous training pipelines comparing new and reference data. - Batch inference or online inference pipelines to detect drift and skew. ## Data Validator Flavors Data Validators are optional components in ZenML, with various integrations available: | Data Validator | Features | Data Types | Model Types | Notes | Flavor/Integration | |----------------------|---------------------------------------------------|------------------------------------|---------------------------------------|-------------------------------------------------|---------------------------| | **Deepchecks** | Data quality, drift, performance | `pandas.DataFrame`, `DataLoader` | `ClassifierMixin`, `torch.nn.Module` | Integrate validation tests into pipelines | `deepchecks` | | **Evidently** | Data quality, drift, performance | `pandas.DataFrame` | N/A | Generate reports and visualizations | `evidently` | | **Great Expectations** | Data profiling, quality | `pandas.DataFrame` | N/A | Data testing and profiling | `great_expectations` | | **Whylogs/WhyLabs** | Data drift | `pandas.DataFrame` | N/A | Generate data profiles for WhyLabs | `whylogs` | To view available Data Validator flavors, use: ```shell zenml data-validator flavor list ``` ## Usage Steps 1. **Configure**: Add a Data Validator to your ZenML stack. 2. **Integrate**: Use built-in validation steps in your pipelines or directly in custom steps. 3. **Access Artifacts**: Retrieve validation results (data profiles, test reports) for further processing or visualization. For detailed usage, refer to the specific Data Validator flavor documentation. ================================================== === File: docs/book/component-guide/data-validators/evidently.md === ### Summary of Evidently Data Validator Documentation **Overview**: The Evidently Data Validator, integrated with ZenML, utilizes the Evidently library to analyze data quality, data drift, model drift, and model performance. It generates reports and runs checks that can be automated or visualized for further interpretation. **Use Cases**: Evidently is beneficial for monitoring and debugging machine learning models by providing: - **Data Quality Reports**: Analyze feature statistics and compare datasets. - **Data Drift Reports**: Detect changes in feature distributions. - **Target Drift Reports**: Identify changes in target functions or model predictions. - **Performance Reports**: Evaluate model performance against past or alternative models. **Deployment**: 1. Install the Evidently integration: ```shell zenml integration install evidently -y ``` 2. Register the Data Validator: ```shell zenml data-validator register evidently_data_validator --flavor=evidently zenml stack register custom_stack -dv evidently_data_validator ... --set ``` **Usage**: - **Data Profiling**: Generate reports using `pandas.DataFrame` or datasets. Requires target and prediction columns for certain analyses. 
- **Evidently Report Step**: Simplifies report generation in pipelines: ```python from zenml.integrations.evidently.steps import evidently_report_step text_data_report = evidently_report_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping(target="Rating", ...), metrics=[EvidentlyMetricConfig.metric("DataQualityPreset"), ...], download_nltk_data=True, ), ) ``` - **Data Validation**: Run automated tests using the `evidently_test_step`: ```python from zenml.integrations.evidently.steps import evidently_test_step text_data_test = evidently_test_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping(target="Rating", ...), tests=[EvidentlyTestConfig.test("DataQualityTestPreset"), ...], download_nltk_data=True, ), ) ``` **Direct Usage**: You can directly use the Evidently library in custom steps: ```python from evidently.report import Report @step def data_profiler(dataset: pd.DataFrame): report = Report(metrics=[metric_preset.DataQualityPreset()]) report.run(current_data=dataset, reference_data=dataset) return report.json(), HTMLString(report.show(mode="inline").data) ``` **Visualization**: Reports can be visualized in the ZenML dashboard or Jupyter notebooks: ```python def visualize_results(pipeline_name: str, step_name: str): pipeline = Client().get_pipeline(pipeline=pipeline_name) evidently_step = pipeline.last_run.steps[step_name] evidently_step.visualize() ``` **Important Links**: - [Evidently Documentation](https://docs.evidentlyai.com/) - [ZenML Documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-evidently/) This summary captures the essential technical details and usage instructions for the Evidently Data Validator within ZenML, ensuring that critical information is retained for further inquiries. ================================================== === File: docs/book/component-guide/data-validators/whylogs.md === ### Summary of Whylogs/WhyLabs Profiling Documentation #### Overview The **Whylogs/WhyLabs Data Validator** integrates with ZenML to generate and track data profiles using the **whylogs** library. These profiles provide descriptive statistics of data, enabling automated corrective actions and interactive visualizations. #### Use Cases Whylogs is useful for: - **Data Quality**: Validate model input data quality. - **Data Drift**: Detect shifts in model input features. - **Model Drift**: Identify training-serving skew and performance degradation. Currently, it supports tabular data in `pandas.DataFrame` format. #### Deployment To deploy the Whylogs Data Validator, install the integration: ```shell zenml integration install whylogs -y ``` Register the Data Validator: ```shell zenml data-validator register whylogs_data_validator --flavor=whylogs zenml stack register custom_stack -dv whylogs_data_validator ... --set ``` For WhyLabs logging, create a ZenML Secret to store authentication details: ```shell zenml secret create whylabs_secret \ --whylabs_default_org_id= \ --whylabs_api_key= zenml data-validator register whylogs_data_validator --flavor=whylogs \ --authentication_secret=whylabs_secret ``` Enable logging for custom pipeline steps by setting `upload_to_whylabs=True`. #### Usage Whylogs profiling functions generate a `DatasetProfileView` from a `pandas.DataFrame`. Three usage methods: 1. **Standard Step**: Use `WhylogsProfilerStep` for ease of use. 2. **Custom Step**: Call validation methods in custom implementations. 3. **Direct Library Use**: Utilize the whylogs library directly for full flexibility. 
Example of a standard step: ```python from zenml.integrations.whylogs.steps import get_whylogs_profiler_step train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2") ``` Example of a custom step: ```python @step def data_profiler(dataset: pd.DataFrame) -> DatasetProfileView: data_validator = WhylogsDataValidator.get_active_data_validator() profile = data_validator.data_profiling(dataset) data_validator.upload_profile_view(profile) return profile ``` #### Visualizing Profiles Profiles can be visualized in the ZenML dashboard or using Jupyter notebooks: ```python def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None) -> None: pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") whylogs_step = pipe.last_run.steps[step_name] whylogs_step.visualize() ``` #### Additional Resources For more details, refer to the official Whylogs documentation and the complete list of configuration parameters in the ZenML SDK docs. ================================================== === File: docs/book/component-guide/orchestrators/local.md === ### Local Orchestrator The local orchestrator is a built-in component of ZenML that allows you to run pipelines locally without additional setup. #### When to Use It - Ideal for beginners starting with ZenML. - Useful for quickly experimenting and debugging new pipelines. #### Deployment No extra setup is required; it comes with ZenML. #### Usage To register and activate the local orchestrator in your stack: ```shell zenml orchestrator register --flavor=local zenml stack register -o ... --set ``` Run any ZenML pipeline using the local orchestrator: ```shell python file_that_runs_a_zenml_pipeline.py ``` For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/custom.md === # Develop a Custom Orchestrator ## Overview To create a custom orchestrator in ZenML, it's essential to understand the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). ## Base Implementation ZenML's `BaseOrchestrator` provides a simplified interface for orchestration tools, abstracting ZenML-specific details. ### Key Classes - **BaseOrchestratorConfig**: Base class for orchestrator configurations. - **BaseOrchestrator**: Abstract class requiring implementation of: - `prepare_or_run_pipeline(deployment, stack, environment)`: Prepares and runs a pipeline. - `get_orchestrator_run_id()`: Returns a unique run ID for the active orchestrator run. - **BaseOrchestratorFlavor**: Abstract class defining: - `name`: Flavor name. - `type`: Returns `StackComponentType.ORCHESTRATOR`. - `config_class`: Returns `BaseOrchestratorConfig`. - `implementation_class`: Returns the orchestrator implementation class. ## Creating a Custom Orchestrator 1. **Inherit from `BaseOrchestrator`** and implement the required methods. 2. **Create a configuration class** inheriting from `BaseOrchestratorConfig`. 3. **Combine both** by inheriting from `BaseOrchestratorFlavor`, providing a name. 
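Put together, the three inheritance steps above might look roughly like the following skeleton. It is illustrative only: the `MyOrchestrator*` names and the `my_tool_endpoint` option are hypothetical, and it assumes the base classes are exported by `zenml.orchestrators`; the `prepare_or_run_pipeline` body would be filled in as shown in the code sample further below.

```python
from typing import Type

# Assumed import location for the base classes described above.
from zenml.orchestrators import (
    BaseOrchestrator,
    BaseOrchestratorConfig,
    BaseOrchestratorFlavor,
)


class MyOrchestratorConfig(BaseOrchestratorConfig):
    """Options exposed to users of this hypothetical flavor."""

    my_tool_endpoint: str = "http://localhost:8080"


class MyOrchestrator(BaseOrchestrator):
    """Translates a ZenML deployment into runs on the external tool."""

    def prepare_or_run_pipeline(self, deployment, stack, environment) -> None:
        ...  # submit each step to the orchestration tool in dependency order

    def get_orchestrator_run_id(self) -> str:
        ...  # return an ID that is unique per pipeline run


class MyOrchestratorFlavor(BaseOrchestratorFlavor):
    @property
    def name(self) -> str:
        return "my_orchestrator"

    @property
    def config_class(self) -> Type[BaseOrchestratorConfig]:
        return MyOrchestratorConfig

    @property
    def implementation_class(self) -> Type[BaseOrchestrator]:
        return MyOrchestrator
```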
### Registering the Flavor Use the CLI to register your flavor: ```shell zenml orchestrator flavor register ``` Example: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` ### Important Notes - Ensure ZenML is initialized at the root of your repository. - After registration, list available flavors: ```shell zenml orchestrator flavor list ``` ## Implementation Guide 1. **Create Orchestrator Class**: Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker. 2. **Implement `prepare_or_run_pipeline(...)`**: Convert the pipeline for your orchestration tool and manage execution order. 3. **Implement `get_orchestrator_run_id()`**: Return a unique ID for each pipeline run. ### Optional Features - **Scheduling**: Handle `deployment.schedule` if supported. - **Resource Specification**: Manage CPU, GPU, or memory settings from `step.config.resource_settings`. ### Code Sample ```python from zenml.entrypoints import StepEntrypointConfiguration from zenml.models import PipelineDeploymentResponseModel from zenml.orchestrators import ContainerizedOrchestrator from zenml.stack import Stack class MyOrchestrator(ContainerizedOrchestrator): def get_orchestrator_run_id(self) -> str: ... def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> None: if deployment.schedule: ... for step_name, step in deployment.step_configurations.items(): image = self.get_image(deployment, step_name) command = StepEntrypointConfiguration.get_entrypoint_command() arguments = StepEntrypointConfiguration.get_entrypoint_arguments(step_name, deployment.id) ... ``` ## Enabling CUDA for GPU To run steps on a GPU, follow the [instructions for enabling CUDA](../../how-to/pipeline-development/training-with-gpus/README.md). ================================================== === File: docs/book/component-guide/orchestrators/hyperai.md === # HyperAI Orchestrator Summary The **HyperAI Orchestrator** is a component of the HyperAI cloud compute platform that facilitates the deployment of AI pipelines on HyperAI instances. It is specifically designed for remote ZenML deployments, and using it with local deployments may cause issues. ## When to Use - For a managed solution to run pipelines. - If you are a HyperAI customer. ## Prerequisites 1. A running HyperAI instance accessible via the internet with SSH key-based access. 2. A recent version of Docker installed, including Docker Compose. 3. The appropriate [NVIDIA Driver](https://www.nvidia.com/en-us/drivers/unix/) installed on the HyperAI instance. 4. The [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) installed and configured (optional for GPU use). ## Functionality The orchestrator utilizes Docker Compose to create and execute a Docker Compose file for each ZenML pipeline step, ensuring that steps only run if upstream steps complete successfully. It can connect to the container registry for smooth Docker image transfers. ### Scheduled Pipelines The orchestrator supports: - **Cron expressions** for periodic runs (requires `crontab`). - **Scheduled runs** for one-time executions at a specified time (requires `at`). ## Deployment Steps 1. **Configure HyperAI Service Connector**: ```shell zenml service-connector register --type=hyperai --auth-method=rsa-key --base64_ssh_key= --hostnames=, --username= ``` 2. 
**Register the Orchestrator**: ```shell zenml orchestrator register --flavor=hyperai zenml stack register -o ... --set ``` 3. **Run a ZenML Pipeline**: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Enabling CUDA for GPU Use To utilize GPU acceleration, follow the instructions provided in the relevant documentation to enable CUDA settings. This summary retains essential technical details and instructions for using the HyperAI Orchestrator effectively. ================================================== === File: docs/book/component-guide/orchestrators/orchestrators.md === # Orchestrators in ZenML ## Overview The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps run only when all required inputs are available. ### Key Features - **Environment Setup**: Prepares the environment for pipeline execution. - **Artifact Storage**: Stores all artifacts produced by pipeline runs. - **Mandatory Component**: Must be configured in all ZenML stacks. ### Orchestrator Flavors ZenML offers several orchestrator flavors, including: | Orchestrator | Flavor | Integration | Notes | |-----------------------------|-----------------|------------------|-------------------------------------| | LocalOrchestrator | `local` | _built-in_ | Runs pipelines locally. | | LocalDockerOrchestrator | `local_docker` | _built-in_ | Runs pipelines locally using Docker.| | KubernetesOrchestrator | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes. | | KubeflowOrchestrator | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | | VertexOrchestrator | `vertex` | `gcp` | Runs pipelines in Vertex AI. | | SagemakerOrchestrator | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | | AzureMLOrchestrator | `azureml` | `azure` | Runs pipelines in AzureML. | | TektonOrchestrator | `tekton` | `tekton` | Runs pipelines using Tekton. | | AirflowOrchestrator | `airflow` | `airflow` | Runs pipelines using Airflow. | | SkypilotAWSOrchestrator | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs. | | SkypilotGCPOrchestrator | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs. | | SkypilotAzureOrchestrator | `vm_azure` | `skypilot[azure]`| Runs pipelines in Azure VMs. | | HyperAIOrchestrator | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai. | | Custom Implementation | _custom_ | | Extend the orchestrator abstraction. | To view available orchestrator flavors, use: ```shell zenml orchestrator flavor list ``` ### Usage You don't need to interact directly with the orchestrator in your code. Simply ensure the desired orchestrator is part of your active ZenML stack and execute your pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Inspecting Runs To get the URL of the orchestrator UI for a specific pipeline run: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Specifying Resources You can specify hardware requirements for steps in your pipeline. Refer to the documentation on [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for details. If unsupported, consider using [step operators](../step-operators/step-operators.md). 
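For example, requesting hardware for a single step is typically done through `ResourceSettings`. A minimal sketch (whether the request is honored depends on the active orchestrator, as noted above):

```python
from zenml import step
from zenml.config import ResourceSettings


@step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=1, memory="16GB")})
def train_model() -> None:
    # Runs on hardware matching the requested resources, provided the
    # orchestrator supports per-step resource requests.
    ...
```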
================================================== === File: docs/book/component-guide/orchestrators/local-docker.md === ### Local Docker Orchestrator The Local Docker Orchestrator is a built-in orchestrator in ZenML that runs pipelines locally using Docker. #### When to Use - For running pipeline steps in isolated local environments. - For debugging pipeline issues without incurring costs for remote infrastructure. #### Deployment Ensure Docker is installed and running. #### Usage To register and activate the local Docker orchestrator in your stack: ```shell zenml orchestrator register --flavor=local_docker zenml stack register -o ... --set ``` Run any ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### Additional Configuration You can customize the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. For details on attributes, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) and the [runtime configuration guide](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md). Example of specifying CPU count (Windows only): ```python from zenml import step, pipeline from zenml.orchestrators.local_docker.local_docker_orchestrator import LocalDockerOrchestratorSettings @step def return_one() -> int: return 1 settings = { "orchestrator": LocalDockerOrchestratorSettings(run_args={"cpu_count": 3}) } @pipeline(settings=settings) def simple_pipeline(): return_one() ``` #### Enabling CUDA for GPU To run steps on a GPU, follow the instructions [here](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full acceleration. ================================================== === File: docs/book/component-guide/orchestrators/skypilot-vm.md === ### SkyPilot VM Orchestrator Overview The SkyPilot VM Orchestrator, integrated with ZenML, allows provisioning and management of virtual machines (VMs) across supported cloud providers via the SkyPilot framework. It simplifies running machine learning workloads on the cloud, offering cost efficiency, high GPU availability, and managed execution. It is recommended for users needing GPU access without the complexities of cloud infrastructure management. #### Usage Recommendations Use the SkyPilot VM Orchestrator if: - You want to leverage cost savings with spot VMs and auto-selection of the cheapest resources. - You require high GPU availability across multiple regions. - You prefer not to maintain Kubernetes or pay for managed solutions. **Warning:** This component is intended for remote ZenML deployments only. #### Functionality The orchestrator automates VM provisioning and scaling, supporting both on-demand and managed spot VMs. It includes: - An optimizer for selecting the cheapest VM options. - An autostop feature to clean up idle clusters. **Note:** The orchestrator does not support pipeline scheduling. **Info:** All ZenML pipeline runs execute in Docker containers on provisioned VMs. Configure Docker settings with `docker_run_args=["--gpus=all"]` for GPU support. #### Deployment No special steps are needed for deployment. Ensure you have permissions to provision VMs on your chosen cloud provider and configure the SkyPilot orchestrator using service connectors. **Supported Platforms:** AWS, GCP, Azure. 
#### Installation To use the SkyPilot VM Orchestrator, install the relevant SkyPilot integration: ```shell # AWS pip install "zenml[connectors-aws]" zenml integration install aws skypilot_aws # GCP pip install "zenml[connectors-gcp]" zenml integration install gcp skypilot_gcp # Azure pip install "zenml[connectors-azure]" zenml integration install azure skypilot_azure ``` #### Configuration for Cloud Providers 1. **AWS**: Register the AWS service connector and orchestrator. ```shell zenml service-connector register aws-skypilot-vm --type aws --auto-configure zenml orchestrator register --flavor vm_aws zenml orchestrator connect --connector aws-skypilot-vm ``` 2. **GCP**: Register the GCP service connector and orchestrator. ```shell gcloud auth application-default login zenml service-connector register gcp-skypilot-vm -t gcp --auth-method user-account --auto-configure zenml orchestrator register --flavor vm_gcp zenml orchestrator connect --connector gcp-skypilot-vm ``` 3. **Azure**: Register the Azure service connector and orchestrator. ```shell zenml service-connector register azure-skypilot-vm -t azure --auth-method access-token --auto-configure zenml orchestrator register --flavor vm_azure zenml orchestrator connect --connector azure-skypilot-vm ``` 4. **Lambda Labs**: Directly use API keys without a service connector. ```shell zenml integration install skypilot_lambda zenml secret create lambda_api_key --scope user --api_key= zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} ``` 5. **Kubernetes**: Configure the Kubernetes service connector and orchestrator. ```shell zenml integration install skypilot_kubernetes zenml service-connector register kubernetes-skypilot --type kubernetes -i zenml orchestrator register --flavor sky_kubernetes zenml orchestrator connect --connector kubernetes-skypilot ``` #### Additional Configuration You can customize the orchestrator settings based on the cloud provider, including: - `instance_type`, `cpus`, `memory`, `accelerators`, `region`, `zone`, etc. - For example, AWS settings can be configured as follows: ```python from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region="us-west-1", cluster_name="my_cluster", idle_minutes_to_autostop=60, docker_run_args=["--gpus=all"] ) @pipeline(settings={"orchestrator": skypilot_settings}) ``` #### Step-Specific Resources You can configure resources for individual steps in a pipeline, allowing for tailored resource allocation. If no specific settings are provided, the orchestrator defaults to the overall settings. To disable step-based settings, use: ```shell zenml orchestrator update --disable_step_based_settings=True ``` This flexibility optimizes performance and cost for each pipeline step. For more details on configuration, refer to the SDK documentation. ================================================== === File: docs/book/component-guide/orchestrators/sagemaker.md === # AWS Sagemaker Orchestrator Summary ## Overview The AWS Sagemaker Orchestrator, part of ZenML, is a serverless ML workflow tool designed for running machine learning pipelines on AWS Sagemaker. It provides a production-ready, repeatable cloud orchestrator with minimal setup. **Warning**: This component is intended for remote ZenML deployments only; local deployments may cause issues. 
## When to Use Use the Sagemaker orchestrator if: - You are using AWS. - You need a production-grade orchestrator with a UI for tracking pipeline runs. - You prefer a managed and serverless solution. ## Functionality The Sagemaker orchestrator utilizes Sagemaker Pipelines to create `PipelineStep` for each ZenML pipeline step, which can include Sagemaker Processing or Training jobs. ## Deployment Requirements 1. Deploy ZenML to the cloud, ideally in the same region as Sagemaker. 2. Ensure connection to the remote ZenML server. 3. Enable necessary IAM permissions for your role. ## Usage Prerequisites - Install ZenML AWS and S3 integrations: ```shell zenml integration install aws s3 ``` - Install Docker. - Set up a remote artifact store and container registry. - Assign an IAM role with `AmazonSageMakerFullAccess` and `sagemaker.amazonaws.com` as a Principal Service. ### Authentication Methods 1. **Service Connector** (recommended): ```shell zenml service-connector register --type aws -i zenml orchestrator register --flavor=sagemaker --execution_role= zenml orchestrator connect --connector ``` 2. **Explicit Authentication**: ```shell zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... ``` 3. **Implicit Authentication**: ```shell zenml orchestrator register --flavor=sagemaker --execution_role= python run.py # Uses default AWS profile ``` ## Running Pipelines To run a ZenML pipeline with the Sagemaker orchestrator: ```shell python run.py ``` Output indicates the status of the pipeline run. ## Sagemaker UI Access the Sagemaker UI via Sagemaker Studio to view pipeline runs and logs. ## Debugging If a pipeline fails before starting, check the Sagemaker UI for error messages and logs. ## Configuration You can customize configurations at the pipeline or step level using `SagemakerOrchestratorSettings`. For example: ```python from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import SagemakerOrchestratorSettings sagemaker_orchestrator_settings = SagemakerOrchestratorSettings( instance_type="ml.m5.large", volume_size_in_gb=30, environment={"MY_ENV_VAR": "my_value"} ) @step(settings={"orchestrator": sagemaker_orchestrator_settings}) def my_step() -> None: pass ``` ## Warm Pools Enable Warm Pools to reduce startup time: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings( keep_alive_period_in_seconds=300 # 5 minutes ) ``` ## S3 Data Access Configure S3 data access for jobs: - **Import Data**: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings( input_data_s3_mode="File", input_data_s3_uri="s3://bucket/path" ) ``` - **Export Data**: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings( output_data_s3_mode="EndOfJob", output_data_s3_uri="s3://bucket/results" ) ``` ## Tagging Add tags to pipeline executions and jobs: ```python pipeline_settings = SagemakerOrchestratorSettings( pipeline_tags={"project": "my-ml-project"} ) ``` ## Scheduling Pipelines Schedule pipelines using cron expressions or fixed intervals: ```python @pipeline def my_scheduled_pipeline(): pass my_scheduled_pipeline.with_options( schedule=Schedule(cron_expression="0/5 * * * ? *") )() ``` ## IAM Permissions Ensure the IAM role has permissions for scheduling and managing Sagemaker jobs. Define a `scheduler_role` if needed. 
### Example IAM Policy ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "scheduler:ListSchedules", "scheduler:GetSchedule", "scheduler:CreateSchedule", "scheduler:UpdateSchedule", "scheduler:DeleteSchedule" ], "Resource": "*" } ] } ``` This summary captures the essential technical details of using the AWS Sagemaker Orchestrator with ZenML while omitting redundancy and verbose explanations. ================================================== === File: docs/book/component-guide/orchestrators/kubeflow.md === # Kubeflow Orchestrator Overview The Kubeflow orchestrator is part of the ZenML `kubeflow` integration, leveraging [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) for pipeline execution. It is designed for remote ZenML deployments and should not be used with local setups. ## When to Use Use the Kubeflow orchestrator if you need: - A production-grade orchestrator. - A UI for tracking pipeline runs. - Familiarity with Kubernetes or willingness to manage a Kubernetes cluster. - Deployment and maintenance of Kubeflow Pipelines. ## Deployment Steps To run ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. Below are deployment instructions for various cloud providers: ### AWS 1. Set up an [EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html). 2. Configure AWS CLI and run: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. (Optional) Set up an AWS Service Connector for secure access. ### GCP 1. Set up a [GKE cluster](https://cloud.google.com/kubernetes-engine/docs/quickstart). 2. Configure Google Cloud CLI and run: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. (Optional) Set up a GCP Service Connector. ### Azure 1. Set up an [AKS cluster](https://azure.microsoft.com/en-in/services/kubernetes-service/#documentation). 2. Configure `az` CLI and run: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` 3. Install Kubeflow Pipelines. 4. **Note**: Change the container runtime to `k8sapi` if using `containerd`. ### Other Kubernetes 1. Set up a Kubernetes cluster. 2. Install Kubeflow Pipelines. 3. (Optional) Set up a Kubernetes Service Connector for remote access. ## Usage To use the Kubeflow orchestrator: 1. Ensure a Kubernetes cluster with Kubeflow Pipelines is installed. 2. Deploy a remote ZenML server. 3. Install the ZenML `kubeflow` integration: ```shell zenml integration install kubeflow ``` 4. Install Docker (unless using a remote Image Builder). 5. (Optional) Install `kubectl`. ### Registering the Orchestrator You can register the orchestrator in two ways: 1. **With Service Connector**: ```shell zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator register --flavor kubeflow --connector --resource-id zenml stack register -o -a -c ``` 2. 
**Without Service Connector**: ```shell zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack register -o -a -c ``` ### Running Pipelines To execute a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Accessing Kubeflow UI To get the Kubeflow UI URL: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` ### Additional Configuration You can configure the Kubeflow orchestrator using `KubeflowOrchestratorSettings` for attributes like: - `client_args` - `user_namespace` - `pod_settings` Example: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={"affinity": {...}, "tolerations": [...]} ) @pipeline(settings={"orchestrator": kubeflow_settings}) def my_pipeline(): ... ``` ### Multi-Tenancy Deployment For multi-tenancy, set the `kubeflow_hostname` during registration: ```shell zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Use the appropriate settings for namespace and authentication: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="{{kubeflow_secret.username}}", client_password="{{kubeflow_secret.password}}", user_namespace="namespace_name" ) ``` ### Using Secrets Create secrets for sensitive information: ```shell zenml secret create kubeflow_secret --username=admin --password=abc123 ``` ### Important Notes - Ensure the Kubernetes service is named `ml-pipeline` for ZenML connectivity. - For GPU support, follow specific instructions to enable CUDA. - Refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator) for a complete list of attributes and configurations. ================================================== === File: docs/book/component-guide/orchestrators/lightning.md === # Lightning AI Orchestrator Overview ## Description The Lightning AI Orchestrator, integrated with ZenML, enables the execution of machine learning pipelines on Lightning AI's infrastructure, utilizing its scalable compute resources. It is designed for remote ZenML deployments only. ## Use Cases - Fast execution of pipelines on GPU instances. - Integration with existing Lightning AI projects. - Simplified deployment and scaling of ML workflows. - Utilization of Lightning AI's optimizations for ML workloads. ## Deployment Requirements - A Lightning AI account and credentials. - No additional infrastructure deployment needed; it uses Lightning AI's managed resources. ## Functionality 1. **Pipeline Execution**: Archives the ZenML repository and uploads it to Lightning AI Studio. 2. **Environment Setup**: Uses `lightning-sdk` to create a studio and run commands (e.g., installing dependencies). 3. **Machine Type Support**: Supports both CPU and GPU instances, configurable in `LightningOrchestratorSettings`. ## Setup Instructions 1. Install the Lightning integration: ```shell zenml integration install lightning ``` 2. Ensure a remote artifact store is part of your stack. 3. Obtain Lightning AI credentials: - `LIGHTNING_USER_ID` - `LIGHTNING_API_KEY` - Optional: `LIGHTNING_USERNAME`, `LIGHTNING_TEAMSPACE`, `LIGHTNING_ORG` 4. 
Register the orchestrator: ```shell zenml orchestrator register lightning_orchestrator \ --flavor=lightning \ --user_id= \ --api_key= \ --username= \ # optional --teamspace= \ # optional --organization= # optional ``` 5. Activate the stack: ```bash zenml stack register lightning_stack -o lightning_orchestrator ... --set ``` ## Pipeline Configuration Configure the orchestrator at the pipeline level: ```python from zenml.integrations.lightning.flavors.lightning_orchestrator_flavor import LightningOrchestratorSettings lightning_settings = LightningOrchestratorSettings( main_studio_name="my_studio", machine_type="cpu", async_mode=True, custom_commands=["pip install -r requirements.txt"] ) @pipeline(settings={"orchestrator.lightning": lightning_settings}) def my_pipeline(): ... ``` ## Running Pipelines Execute the pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` ## Monitoring and Management Use the Lightning AI UI to monitor running applications. Retrieve the UI URL for a pipeline run: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ## Additional Configuration You can specify settings at both pipeline and step levels. For GPU usage, set the machine type accordingly: ```python lightning_settings = LightningOrchestratorSettings( machine_type="gpu" # or specific types like `A10G` ) ``` Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-lightning/#zenml.integrations.lightning.flavors.lightning_orchestrator_flavor.LightningOrchestratorSettings) for a complete list of attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) details. ================================================== === File: docs/book/component-guide/orchestrators/azureml.md === # AzureML Orchestrator Summary **Overview**: AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire ML lifecycle, from data preparation to monitoring. ## When to Use AzureML Orchestrator - If you are using Azure. - For a production-grade orchestrator. - To track pipeline runs via a UI. - For a managed solution to run pipelines. ## Functionality The ZenML AzureML orchestrator uses the AzureML Python SDK v2 to build ML pipelines, creating AzureML CommandComponents for each ZenML step. ## Deployment To use the AzureML orchestrator: 1. Deploy ZenML to the cloud (preferably in the same region as AzureML). 2. Ensure connection to the remote ZenML server. ## Installation Requirements - Install the ZenML Azure integration: ```shell zenml integration install azure ``` - Docker installed and running or a remote image builder. - A remote artifact store and container registry. - An Azure resource group with an AzureML workspace. ## Authentication Methods 1. **Default Authentication**: Simplifies the process using Azure credentials. 2. **Service Principal Authentication (recommended)**: Requires creating a service principal on Azure and registering a ZenML Azure Service Connector: ```bash zenml service-connector register --type azure -i zenml orchestrator connect -c ``` ## Docker Integration ZenML builds a Docker image for each pipeline run, named `/zenml:`. ## AzureML UI Each AzureML workspace includes a Machine Learning studio for managing and debugging pipelines. 
## Configuration Settings The `AzureMLOrchestratorSettings` class configures compute resources with three modes: 1. **Serverless Compute (Default)**: ```python azureml_settings = AzureMLOrchestratorSettings(mode="serverless") ``` 2. **Compute Instance**: ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-instance", compute_name="my-gpu-instance", size="Standard_NC6s_v3", idle_time_before_shutdown_minutes=20, ) ``` 3. **Compute Cluster**: ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-cluster", compute_name="my-gpu-cluster", size="Standard_NC6s_v3", tier="Dedicated", min_instances=2, max_instances=10, idle_time_before_scaledown_down=60, ) ``` ## Scheduling Pipelines AzureML supports scheduling pipelines using JobSchedules with cron expressions or intervals: ```python @pipeline def my_pipeline(): ... my_pipeline = my_pipeline.with_options( schedule=Schedule(cron_expression="*/5 * * * *") ) my_pipeline() ``` Users must manage the lifecycle of the schedule via the Azure UI. For more details on compute sizes, refer to the [AzureML documentation](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2#supported-vm-series-and-sizes). ================================================== === File: docs/book/component-guide/orchestrators/kubernetes.md === ### Kubernetes Orchestrator Overview The ZenML `kubernetes` integration allows you to orchestrate and scale ML pipelines on a Kubernetes cluster without writing Kubernetes code. It is a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, running each pipeline step in separate Kubernetes pods, managed by a master pod using topological sorting. This orchestrator is faster and simpler than Kubeflow, making it suitable for teams seeking distributed orchestration without the overhead of managing Kubeflow. **Warning**: This component is intended for remote ZenML deployments only. Using it with local deployments may cause issues. ### When to Use the Kubernetes Orchestrator - If you want a lightweight solution for running pipelines on Kubernetes. - If you prefer not to maintain Kubeflow Pipelines. - If you want to avoid costs associated with managed solutions like Vertex. ### Deployment Requirements - A Kubernetes cluster (check the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for deployment options). - A remote ZenML server connected to the Kubernetes cluster. ### Setup Instructions 1. **Install the ZenML Kubernetes Integration**: ```shell zenml integration install kubernetes ``` 2. **Prerequisites**: - Docker and kubectl installed. - A remote artifact store and container registry as part of your stack. - Optionally, configure a Service Connector for better portability. 3. **Register the Orchestrator**: - **With Service Connector**: ```shell zenml orchestrator register --flavor kubernetes zenml orchestrator connect --connector zenml stack register -o ... --set ``` - **Without Service Connector**: ```shell zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... 
--set ``` ### Running a Pipeline To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` You should see logs for all Kubernetes pods and can verify pod creation with: ```shell kubectl get pods -n zenml ``` ### Interacting with Pods You can manage pods using labels: ```shell kubectl delete pod -n zenml -l pipeline= ``` ### Additional Configuration - Default namespace: `zenml`, with a service account `zenml-service-account`. - Custom settings can be configured for: - `kubernetes_namespace` - `service_account_name` - `pod_settings` (node selectors, tolerations, resources, etc.) Example of custom settings: ```python from zenml.integrations.kubernetes.flavors.kubernetes_orchestrator_flavor import KubernetesOrchestratorSettings kubernetes_settings = KubernetesOrchestratorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": {"requests": {"cpu": "2", "memory": "4Gi"}}, }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) ``` ### Step-Level Configuration You can define settings at the step level to override pipeline settings: ```python @step(settings={"orchestrator": k8s_settings}) def train_model(data: dict) -> None: ... ``` ### GPU Configuration To run steps on GPU, follow the instructions to enable CUDA and customize settings accordingly. For detailed attributes and additional configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/). ================================================== === File: docs/book/component-guide/orchestrators/databricks.md === # Databricks Orchestrator Overview The Databricks Orchestrator, part of the ZenML integration, allows users to run ML pipelines on Databricks, leveraging its distributed computing capabilities and optimized environment for big data processing. ## When to Use Use the Databricks orchestrator if: - You are already using Databricks for data and ML workloads. - You want to utilize Databricks' distributed computing for ML pipelines. - You seek a managed solution that integrates with Databricks services. ## Prerequisites To use the Databricks orchestrator, you need: - An active Databricks workspace (AWS, Azure, GCP). - A Databricks account or service account with permissions to create and run jobs. ## How It Works 1. **Wheel Package Creation**: ZenML creates a Python wheel package containing your pipeline code and dependencies. 2. **Job Definition**: ZenML uses the Databricks SDK to create a job definition that includes pipeline steps and cluster settings (Spark version, worker count, node type). 3. **Execution**: The job retrieves the wheel package and executes the pipeline in the correct order based on dependencies. 4. **Monitoring**: ZenML retrieves logs and job status for monitoring. ## Usage Instructions 1. **Install Databricks Integration**: ```shell zenml integration install databricks ``` 2. **Register the Orchestrator**: ```shell zenml orchestrator register databricks_orchestrator --flavor=databricks --host="https://xxxxx.x.azuredatabricks.net" --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` 3. **Add to Stack**: ```shell zenml stack register databricks_stack -o databricks_orchestrator ... --set ``` 4. **Run Pipeline**: ```shell python run.py ``` ## Databricks UI Access pipeline run details and logs through the Databricks UI. 
Retrieve the UI URL in Python: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ## Scheduling Pipelines Use the native scheduling capability to run pipelines on a schedule: ```python from zenml.config.schedule import Schedule pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` **Note**: Only `cron_expression` is supported, and Java Timezone IDs must be used. ## Additional Configuration Customize the Databricks orchestrator with `DatabricksOrchestratorSettings`: ```python from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import DatabricksOrchestratorSettings databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-scala2.12", num_workers="3", node_type_id="Standard_D4s_v5", autoscale=(2, 3), schedule_timezone="America/Los_Angeles" ) ``` Specify settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": databricks_settings}) def my_pipeline(): ... ``` ## GPU Support To enable GPU support, adjust `spark_version` and `node_type_id`: ```python databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-gpu-ml-scala2.12", node_type_id="Standard_NC24ads_A100_v4", autoscale=(1, 2) ) ``` For full GPU acceleration, follow instructions to enable CUDA. For a complete list of attributes and configuration options, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.flavors.databricks_orchestrator_flavor.DatabricksOrchestratorSettings). ================================================== === File: docs/book/component-guide/orchestrators/vertex.md === # Google Cloud Vertex AI Orchestrator ## Overview Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP) designed for running production-ready, repeatable pipelines with minimal setup. It is intended for use within a remote ZenML deployment. ## When to Use Use the Vertex orchestrator if: - You are using GCP. - You need a production-grade orchestrator with a UI for tracking pipeline runs. - You prefer a managed, serverless solution. ## Deployment Steps 1. Deploy ZenML to the cloud, ideally in the same GCP project as the Vertex infrastructure. 2. Ensure connection to the remote ZenML server. 3. Enable Vertex-relevant APIs on the GCP project. ## Usage Requirements To use the Vertex orchestrator: - Install the ZenML GCP integration: ```shell zenml integration install gcp ``` - Install and run Docker. - Set up a remote artifact store and container registry. - Obtain GCP credentials with appropriate permissions. ### GCP Credentials and Permissions You can authenticate using: - `gcloud` CLI. - A service account key file. - (Recommended) A GCP Service Connector. ### Vertex AI Pipeline Components 1. **ZenML Client Environment**: Runs ZenML code and needs permissions to create jobs in Vertex Pipelines. 2. **Vertex AI Pipeline Environment**: Runs pipeline steps in GCP, requiring a workload service account with permissions to execute Vertex AI pipelines. ### Configuration Use-Cases 1. **Local `gcloud` CLI with User Account**: ```shell zenml orchestrator register \ --flavor=vertex \ --project= \ --location= \ --synchronous=true ``` 2. 
**GCP Service Connector with Single Service Account**: ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic zenml orchestrator register \ --flavor=vertex \ --location= \ --synchronous=true \ --workload_service_account=@.iam.gserviceaccount.com zenml orchestrator connect --connector ``` 3. **GCP Service Connector with Different Service Accounts**: Requires multiple service accounts for different permissions, following the principle of least privilege. ### Configuring the Stack To register and activate a stack with the orchestrator: ```shell zenml stack register -o ... --set ``` ### Running Pipelines Run any ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Vertex UI Access pipeline run details through the Vertex UI. Retrieve the UI URL in Python: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` ### Scheduling Pipelines Schedule pipelines using: ```python from datetime import datetime, timedelta from zenml import pipeline from zenml.config.schedule import Schedule @pipeline def my_pipeline(): ... my_pipeline = my_pipeline.with_options( schedule=Schedule(cron_expression="*/5 * * * *") ) my_pipeline() ``` **Note**: Only `cron_expression`, `start_time`, and `end_time` are supported. ### Additional Configuration Use `VertexOrchestratorSettings` for job labels and GPU specifications: ```python from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings vertex_settings = VertexOrchestratorSettings(labels={"key": "value"}) ``` Specify resource settings: ```python from zenml.config import ResourceSettings resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` For GPU usage: ```python vertex_settings = VertexOrchestratorSettings( pod_settings={"node_selectors": {"cloud.google.com/gke-accelerator": "NVIDIA_TESLA_A100"}} ) resource_settings = ResourceSettings(gpu_count=1) ``` ### Enabling CUDA for GPU Follow specific instructions to enable CUDA for GPU acceleration. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.orchestrators.vertex_orchestrator.VertexOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/tekton.md === # Tekton Orchestrator **Tekton** is an open-source framework for CI/CD systems, enabling developers to build, test, and deploy applications across cloud and on-premise environments. This component is intended for use in a **remote ZenML deployment** only. ### When to Use Tekton Use the Tekton orchestrator if: - You need a production-grade orchestrator. - You want a UI to track pipeline runs. - You are using or willing to set up a Kubernetes cluster. - You can deploy and maintain Tekton Pipelines. ### Deployment Steps 1. **Set Up Kubernetes Cluster**: Ensure you have a remote ZenML server and a Kubernetes cluster (EKS, GKE, or AKS) set up. 2. **Install `kubectl`**: Download and configure `kubectl` for your cluster. 3. **Install Tekton Pipelines**: Follow the installation guide from Tekton. 
**Example Commands**: - **AWS**: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` - **GCP**: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` - **Azure**: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` **Note**: Ensure Tekton Pipelines version is >=0.38.3. ### Using Tekton 1. Install the ZenML `tekton` integration: ```shell zenml integration install tekton -y ``` 2. Ensure Docker is installed and running. 3. Deploy Tekton pipelines on a remote cluster. 4. Set up a remote artifact store and container registry. **Register Orchestrator**: - With Service Connector: ```shell zenml orchestrator register --flavor tekton zenml orchestrator connect --connector ``` - Without Service Connector: ```shell zenml orchestrator register --flavor=tekton --kubernetes_context= ``` **Run Pipeline**: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Tekton UI Access the Tekton UI for pipeline details and logs: ```bash kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' ``` ### Additional Configuration Configure `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: ```python from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings tekton_settings = TektonOrchestratorSettings( pod_settings={ "affinity": {...}, "tolerations": [...] } ) ``` Specify hardware requirements using `ResourceSettings`: ```python resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` Apply settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_pipeline(): ... @step(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_step(): ... ``` ### Enabling CUDA for GPU For GPU support, follow specific instructions to enable CUDA for full acceleration. For more details on configuration and attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/). ================================================== === File: docs/book/component-guide/orchestrators/airflow.md === ### Airflow Orchestrator for ZenML Pipelines ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. Each ZenML step runs in a separate Docker container managed by Airflow. #### When to Use Airflow - Proven production-grade orchestrator. - Existing use of Airflow. - Local pipeline execution. - Willingness to deploy and maintain Airflow. #### Deployment Options - **Local Deployment**: No additional setup required. - **Remote Deployment**: - Use ZenML GCP Terraform module for Google Cloud Composer. - Managed services like Google Cloud Composer, Amazon MWAA, or Astronomer. - Manual deployment (refer to [Airflow docs](https://airflow.apache.org/docs/apache-airflow/stable/production-deployment.html)). **Required Python Packages** for Airflow server: - `pydantic~=2.7.1` - `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes` #### Usage Steps 1. Install ZenML Airflow integration: ```shell zenml integration install airflow ``` 2. Ensure Docker is installed and running. 3. Register the orchestrator: ```shell zenml orchestrator register --flavor=airflow --local=True zenml stack register -o ... 
--set ``` **Local Setup**: - Create a virtual environment: ```bash python -m venv airflow_server_environment source airflow_server_environment/bin/activate pip install "apache-airflow==2.4.0" "apache-airflow-providers-docker<3.8.0" "pydantic~=2.7.1" ``` - Set environment variables (optional): - `AIRFLOW_HOME`: Default `~/airflow` - `AIRFLOW__CORE__DAGS_FOLDER`: Default `/dags` - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default 30 seconds. **Start Local Airflow**: ```bash airflow standalone ``` Access UI at [http://localhost:8080](http://localhost:8080). **Run ZenML Pipeline**: ```shell python file_that_runs_a_zenml_pipeline.py ``` Copy the generated `.zip` file to the Airflow DAGs directory. #### Remote Deployment Requirements - Remote ZenML server. - Deployed Airflow server. - Remote artifact store and container registry. In remote setups, executing `pipeline.run()` creates a `.zip` file for Airflow, which must be placed in the DAGs directory. #### Scheduling Pipelines Set schedules in the past: ```python from datetime import datetime, timedelta from zenml.pipelines import Schedule scheduled_pipeline = fashion_mnist_pipeline.with_options( schedule=Schedule( start_time=datetime.now() - timedelta(hours=1), end_time=datetime.now() + timedelta(hours=1), interval_second=timedelta(minutes=15), catchup=False, ) ) scheduled_pipeline() ``` #### Airflow UI Access the UI at [http://localhost:8080](http://localhost:8080). Admin credentials are `admin` and the password is found in `/standalone_admin_password.txt`. #### Additional Configuration Use `AirflowOrchestratorSettings` for custom configurations. For GPU usage, follow specific instructions for enabling CUDA. #### Using Different Airflow Operators Specify the operator in `AirflowOrchestratorSettings`: ```python from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings airflow_settings = AirflowOrchestratorSettings( operator="docker", # or "kubernetes_pod" operator_args={} ) ``` **Custom Operators**: Reference any operator class in your Airflow environment. **Custom DAG Generator**: Provide a custom Python module for DAG generation by implementing required classes and constants. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator). ================================================== === File: docs/book/how-to/debug-and-solve-issues.md === # Debugging ZenML Issues Guide This document provides a concise guide for debugging common issues with ZenML, including best practices for seeking help. ## When to Seek Help Before asking for assistance, follow this checklist: - Search Slack using the built-in search function. - Check [GitHub issues](https://github.com/zenml-io/zenml/issues). - Use the search bar on the [documentation site](https://docs.zenml.io). - Review the [common errors](debug-and-solve-issues.md#most-common-errors) section. - Analyze [additional logs](debug-and-solve-issues.md#41-additional-logs) and [client/server logs](debug-and-solve-issues.md#client-and-server-logs). If you still need help, post your question on [Slack](https://zenml.io/slack). ## How to Post on Slack Include the following information in your post: ### 1. System Information Run the command below and attach the output: ```shell zenml info -a -s ``` For specific package issues, use: ```shell zenml info -p ``` ### 2. What Happened? 
Briefly describe: - Your goal - Expected outcome - Actual outcome ### 3. Reproducing the Error Provide step-by-step instructions or a video to reproduce the error. ### 4. Relevant Log Output Attach relevant logs and error tracebacks. Include outputs from: - `zenml status` - `zenml stack describe` If necessary, use services like [Pastebin](https://pastebin.com/) for long tracebacks. #### Additional Logs If default logs are insufficient, change the verbosity level: ```shell export ZENML_LOGGING_VERBOSITY=DEBUG ``` ### Client and Server Logs To view server logs, run: ```shell zenml logs ``` ## Common Errors ### Error Initializing REST Store Occurs as: ```bash RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237'... ``` Solution: Re-run `zenml login --local` after each machine restart. ### Column 'step_configuration' Cannot Be Null Occurs as: ```bash sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null") ``` Solution: Ensure step configurations do not exceed the character limit. ### 'NoneType' Object Has No Attribute 'Name' Occurs when required stack components are missing: ```shell AttributeError: 'NoneType' object has no attribute 'name' ``` Solution: Register the missing component, e.g.: ```shell zenml experiment-tracker register mlflow_tracker --flavor=mlflow zenml stack update -e mlflow_tracker ``` This guide aims to streamline the debugging process for ZenML users, enhancing the efficiency of troubleshooting and support. ================================================== === File: docs/book/how-to/pipeline-development/README.md === # Pipeline Development in ZenML This section outlines the key components and processes involved in developing pipelines using ZenML. ## Key Components: 1. **Pipelines**: A sequence of steps that define the workflow for data processing and machine learning tasks. 2. **Steps**: Individual tasks within a pipeline, which can include data ingestion, preprocessing, model training, and evaluation. 3. **Artifacts**: Outputs generated by steps, such as trained models or processed datasets. ## Development Process: 1. **Define Pipeline**: Use the `@pipeline` decorator to create a pipeline function. ```python from zenml import pipeline @pipeline def my_pipeline(): ... # define step invocations here ``` 2. **Create Steps**: Define each step as a function and decorate it with `@step`. ```python from zenml import step @step def data_ingestion() -> dict: ... # data ingestion logic @step def model_training(data: dict) -> None: ... # model training logic ``` 3. **Run Pipeline**: Execute the pipeline using the ZenML CLI or programmatically by calling the pipeline function. ```python my_pipeline() # executes the pipeline on the active stack ``` ## Configuration: - **Step Configuration**: Customize step parameters using the `@step` decorator. - **Pipeline Configuration**: Set pipeline-level configurations in a YAML file or through environment variables. ## Best Practices: - Modularize steps for reusability. - Use version control for pipeline definitions. - Monitor and log pipeline executions for debugging. This summary encapsulates the essential elements of pipeline development in ZenML, providing a clear framework for creating and managing data workflows. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/README.md === ### Develop Locally This section outlines best practices for developing pipelines locally, enabling faster iteration and cost-effective testing.
Developers often work with a smaller subset of data or synthetic data. ZenML supports local development, allowing users to later push and run pipelines on more powerful remote hardware. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md === ### Summary of ZenML Pipeline Management Documentation **Overview**: This documentation provides guidance on maintaining a clean development environment for ZenML pipelines, focusing on managing pipeline runs, models, and artifacts. #### 1. **Running Locally** To avoid cluttering a shared server, run pipelines locally by disconnecting from the remote server: ```bash zenml login --local ``` Reconnect with: ```bash zenml login ``` #### 2. **Pipeline Runs** - **Unlisted Runs**: Create runs without associating them with a pipeline: ```python pipeline_instance.run(unlisted=True) ``` These runs won't appear on the pipeline's dashboard. - **Deleting Pipeline Runs**: Delete a specific run: ```bash zenml pipeline runs delete ``` To delete all runs from the last 24 hours: ```python import datetime from zenml.client import Client def delete_recent_pipeline_runs(): zc = Client() time_filter = (datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") for run in recent_runs: zc.delete_pipeline_run(run.id) if __name__ == "__main__": delete_recent_pipeline_runs() ``` #### 3. **Pipelines** - **Deleting Pipelines**: Remove unnecessary pipelines: ```bash zenml pipeline delete ``` - **Unique Pipeline Names**: Assign custom names to runs: ```python training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") training_pipeline() ``` #### 4. **Models** Models must be registered when defining a pipeline. To delete a model: ```bash zenml model delete ``` #### 5. **Artifacts** - **Pruning Artifacts**: Remove unreferenced artifacts: ```bash zenml artifact prune ``` Use `--only-artifact` or `--only-metadata` flags to control deletion behavior. #### 6. **Cleaning Your Environment** For a complete reset of the local environment: ```bash zenml clean ``` Use the `--local` flag to delete local files related to the active stack. This command does not affect server data. By following these practices, you can maintain an organized and efficient ZenML pipeline environment. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md === ### Summary: Creating Pipeline Variants for Local Development and Production in ZenML When developing ZenML pipelines, it's useful to create different variants for local development and production. This allows for rapid iteration during development while maintaining a robust setup for production. Variants can be created through: 1. **Configuration Files** 2. **Code Implementation** 3. **Environment Variables** #### 1. Using Configuration Files ZenML supports YAML configuration files for pipeline and step settings. Example configuration for development: ```yaml enable_cache: False parameters: dataset_name: "small_dataset" steps: load_data: enable_cache: False ``` To apply this configuration: ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... 
@pipeline def ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": ml_pipeline.with_options(config_path="path/to/config.yaml")() ``` Separate files can be created for development (`config_dev.yaml`) and production (`config_prod.yaml`). #### 2. Implementing Variants in Code You can directly implement variants in your code: ```python import os from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def ml_pipeline(is_dev: bool = False): dataset = "small_dataset" if is_dev else "full_dataset" load_data(dataset) if __name__ == "__main__": is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" ml_pipeline(is_dev=is_dev) ``` This allows switching between variants using a boolean flag. #### 3. Using Environment Variables Environment variables can dictate which variant to run: ```python import os config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" ml_pipeline.with_options(config_path=config_path)() ``` Run the pipeline using: - `ZENML_ENVIRONMENT=dev python run.py` - `ZENML_ENVIRONMENT=prod python run.py` #### Development Variant Considerations For development variants, optimize for faster iteration by: - Using smaller datasets - Specifying a local execution stack - Reducing training epochs and batch size - Using smaller base models Example configuration: ```yaml parameters: dataset_path: "data/small_dataset.csv" epochs: 1 batch_size: 16 stack: local_stack ``` Or in code: ```python @pipeline def ml_pipeline(is_dev: bool = False): dataset = "data/small_dataset.csv" if is_dev else "data/full_dataset.csv" epochs = 1 if is_dev else 100 batch_size = 16 if is_dev else 64 load_data(dataset) train_model(epochs=epochs, batch_size=batch_size) ``` Creating different pipeline variants facilitates local testing and debugging while ensuring a full-scale production configuration, enhancing the development workflow. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/retrieve-used-configuration-of-a-run.md === To extract the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step within it. ### Code Example: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run() # Access general configuration pipeline_run.config # Access configuration for a specific step pipeline_run.steps[].config ``` This allows you to retrieve both the overall pipeline configuration and the configuration for individual steps. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/README.md === ZenML enables the configuration and execution of pipelines using YAML files at runtime. These configuration files allow users to set parameters, manage caching behavior, and configure stack components. Key resources include: - **What can be configured**: Details on configurable options. - **Configuration hierarchy**: Structure of configuration settings. - **Autogenerate a template YAML file**: Instructions for generating a YAML template. For more information, refer to the linked sections. 
================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md === ### Summary of Documentation #### Autogenerate a Template YAML File To create a configuration file template for your pipeline, use the `.write_run_configuration_template()` method. This generates a YAML file with all options commented out, allowing you to select relevant settings. #### Code Example ```python from zenml import pipeline @pipeline(enable_cache=True) def simple_ml_pipeline(parameter: int): dataset = load_data(parameter=parameter) train_model(dataset) simple_ml_pipeline.write_run_configuration_template(path="") ``` #### Generated YAML Configuration Template The generated YAML template includes various sections with optional and required parameters: - **build**: Pipeline build configuration. - **enable_artifact_metadata**: Optional boolean. - **model**: Contains model metadata fields such as `name`, `version`, and `tags`. - **parameters**: Optional mapping for parameters. - **run_name**: Optional run name. - **schedule**: Configuration for scheduling runs. - **settings**: Docker settings including: - `apt_packages`, `dockerfile`, `environment`, etc. - Resource specifications: `cpu_count`, `gpu_count`, `memory`. - **steps**: Defines each step in the pipeline (e.g., `load_data`, `train_model`), including: - Metadata, output configurations, and settings similar to the main settings. #### Additional Configuration You can configure the pipeline with a specific stack using: ```python simple_ml_pipeline.write_run_configuration_template(stack=) ``` This documentation provides a concise overview of how to generate and customize a YAML configuration for a ZenML pipeline, ensuring all critical information is retained. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/runtime-configuration.md === ### Summary of Runtime Configuration Settings in ZenML **Overview**: ZenML allows configuring runtime settings for stack components and pipelines through a central concept called `BaseSettings`. These settings enable customization of resources, containerization processes, and component-specific configurations. #### Types of Settings 1. **General Settings**: Applicable to all ZenML pipelines. - Examples: - `DockerSettings`: Docker configuration. - `ResourceSettings`: Resource allocation settings. 2. **Stack-Component-Specific Settings**: Provide runtime configurations for specific components, identified by keys like `` or `.`. Settings for inactive components are ignored. - Examples: - `SkypilotAWSOrchestratorSettings` - `KubeflowOrchestratorSettings` - `MLflowExperimentTrackerSettings` - `WandbExperimentTrackerSettings` - `WhylogsDataValidatorSettings` - `SagemakerStepOperatorSettings` - `VertexStepOperatorSettings` - `AzureMLStepOperatorSettings` #### Registration-Time vs. Real-Time Settings - **Registration-Time Settings**: Static configurations set during component registration (e.g., `tracking_url` for MLflow). - **Real-Time Settings**: Dynamic configurations that can change per pipeline run (e.g., `experiment_name`). Default values can be specified during registration, which can be overridden at runtime. #### Key Specification for Settings When defining stack-component-specific settings, use the correct key format: `` or `.`. If only the category is specified, ZenML applies the settings to the corresponding component flavor in the stack. 
#### Example Code Snippets **Python Code**: ```python @step(step_operator="nameofstepoperator", settings={"step_operator": {"estimator_args": {"instance_type": "m7g.medium"}}}) def my_step(): ... @step(step_operator="nameofstepoperator", settings={"step_operator": SagemakerStepOperatorSettings(instance_type="m7g.medium")}) def my_step(): ... ``` **YAML Configuration**: ```yaml steps: my_step: step_operator: "nameofstepoperator" settings: step_operator: estimator_args: instance_type: m7g.medium ``` This concise overview covers the essential aspects of configuring runtime settings in ZenML, including types of settings, their differences, and how to implement them in code. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/configuration-hierarchy.md === ### Configuration Hierarchy in ZenML In ZenML, configurations can be set at both the pipeline and step levels, with specific rules governing their precedence: - Code configurations override YAML file configurations. - Step-level configurations override pipeline-level configurations. - For attributes, dictionaries are merged. #### Example Code ```python from zenml import pipeline, step from zenml.config import ResourceSettings @step def load_data(parameter: int) -> dict: ... @step(settings={"resources": ResourceSettings(gpu_count=1, memory="2GB")}) def train_model(data: dict) -> None: ... @pipeline(settings={"resources": ResourceSettings(cpu_count=2, memory="1GB")}) def simple_ml_pipeline(parameter: int): ... # Configuration results train_model.configuration.settings["resources"] # -> cpu_count: 2, gpu_count=1, memory="2GB" simple_ml_pipeline.configuration.settings["resources"] # -> cpu_count: 2, memory="1GB" ``` ### Key Points - Step configurations take precedence over pipeline configurations. - The example demonstrates how resource settings are merged, with step settings overriding those at the pipeline level. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/how-to-use-config.md === ### Configuration Files in ZenML **Overview**: Configuration can be specified in a YAML file or directly in code, but using a YAML file is recommended for better separation of concerns. **Usage**: To apply a configuration file to a pipeline, use the `with_options(config_path=)` method. **Example YAML Configuration**: ```yaml enable_cache: False parameters: dataset_name: "best_dataset" steps: load_data: enable_cache: False ``` **Example Python Code**: ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def simple_ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": simple_ml_pipeline.with_options(config_path=)() ``` **Functionality**: The above code runs the `simple_ml_pipeline` with caching disabled for the `load_data` step and sets the `dataset_name` parameter to "best_dataset". ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md === ### Configuration Overview This documentation outlines the configuration options available in a YAML file for a ZenML pipeline. Key sections include build settings, enabling flags, model specifications, parameters, run names, scheduling, Docker settings, resource settings, and step-specific configurations. 
#### Sample YAML Configuration ```yaml build: dcd6fafb-c200-4e85-8328-428bef98d804 enable_artifact_metadata: True enable_artifact_visualization: False enable_cache: False enable_step_logs: True extra: any_param: 1 another_random_key: "some_string" model: name: "classification_model" version: production audience: "Data scientists" description: "This classifies hotdogs and not hotdogs" ethics: "No ethical implications" license: "Apache 2.0" limitations: "Only works for hotdogs" tags: ["sklearn", "hotdog", "classification"] parameters: dataset_name: "another_dataset" run_name: "my_great_run" schedule: catchup: true cron_expression: "* * * * *" settings: docker: apt_packages: ["curl"] copy_files: True dockerfile: "Dockerfile" dockerignore: ".dockerignore" environment: ZENML_LOGGING_VERBOSITY: DEBUG parent_image: "zenml-io/zenml-cuda" requirements: ["torch"] skip_build: False resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" steps: train_model: parameters: data_source: "best_dataset" experiment_tracker: "mlflow_production" step_operator: "vertex_gpu" outputs: {} failure_hook_source: {} success_hook_source: {} enable_artifact_metadata: True enable_artifact_visualization: True enable_cache: False enable_step_logs: True extra: {} model: {} settings: docker: {} resources: {} step_operator.sagemaker: estimator_args: instance_type: m7g.medium ``` ### Key Configuration Points - **Enable Flags**: Boolean flags control various behaviors: - `enable_artifact_metadata`: Attach metadata to artifacts. - `enable_artifact_visualization`: Attach visualizations of artifacts. - `enable_cache`: Use caching. - `enable_step_logs`: Enable tracking of step logs. - **Build ID**: Specifies the Docker image to use, skipping the build process if provided. - **Model Configuration**: Defines the model's name, version, description, and tags. - **Parameters**: JSON-serializable values for pipeline and step configurations. Step parameters take precedence over pipeline parameters. - **Run Name**: Unique identifier for the run; should not be static when scheduled. - **Settings**: - **Docker Settings**: Configuration for Docker building, including requirements and environment variables. - **Resource Settings**: Defines CPU, GPU, and memory allocations for the pipeline. - **Step-Specific Configuration**: Includes settings that can only be applied at the step level, such as `experiment_tracker`, `step_operator`, and output configurations. ### Important Notes - Ensure unique `run_name` for each execution. - Parameters are distinct from artifacts; they are used for runtime configuration, while artifacts are inputs/outputs of steps. - For detailed Docker settings and resource configurations, refer to the respective documentation sections. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/README.md === ### Summary: Running Remote Pipelines from Jupyter Notebooks ZenML allows the definition and execution of steps and pipelines directly from Jupyter notebooks. The process involves extracting code from notebook cells and running it as Python modules within Docker containers for remote execution. #### Key Points: - **Execution Environment**: Notebook cells must adhere to specific conditions for ZenML to function correctly. 
- **Documentation Links**: - [Limitations of defining steps in notebook cells](limitations-of-defining-steps-in-notebook-cells.md) - [Run a single step from a notebook](run-a-single-step-from-a-notebook.md) This setup enables seamless integration of Jupyter notebooks with ZenML's remote execution capabilities. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md === # Limitations of Defining Steps in Notebook Cells To run ZenML steps defined in notebook cells remotely (with a remote orchestrator or step operator), the following conditions must be met: - The cell can only contain Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed. - The cell **must not** call code from other notebook cells. However, importing functions or classes from Python files is permitted. - The cell **must not** rely on imports from previous cells; it must perform all necessary imports, including ZenML imports like `from zenml import step`. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md === ### Running a Single Step from a Notebook To execute a single step remotely from a notebook, call the step like a standard Python function. ZenML will create and run a pipeline with that step on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining steps in notebook cells. #### Code Example ```python from zenml import step import pandas as pd from sklearn.base import ClassifierMixin from sklearn.svm import SVC from typing import Tuple, Annotated @step(step_operator="") def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc X_train = pd.DataFrame(...) # Define your training data y_train = pd.Series(...) # Define your training labels # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` This code snippet demonstrates how to define a step for training an SVC classifier and how to call it directly from a notebook. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/README.md === # Configure Python Environments ZenML deployments involve managing multiple environments. This guide outlines how to handle dependencies and configurations effectively. ## Overview of Environments - **Client Environment (Runner Environment)**: Where ZenML pipelines are compiled (e.g., `run.py`). - Types include: - Local development - CI runner in production - ZenML Pro runner - Runner image orchestrated by the ZenML server - Use a package manager (e.g., `pip`, `poetry`) to install ZenML and required integrations. ### Key Steps in Client Environment 1. Compile pipeline representation using `@pipeline`. 2. Create/trigger pipeline and step build environments if running remotely. 3. Trigger a run in the orchestrator. **Note**: The `@pipeline` function is called only in the client environment, focusing on compile time logic. 
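For illustration, a minimal sketch of this compile-time behavior (the pipeline and step names are purely illustrative):

```python
from zenml import pipeline, step

@step
def train_model() -> None:
    ...

@pipeline
def my_pipeline() -> None:
    # This body runs in the client environment at compile time: it only
    # builds the pipeline DAG, it does not execute the step itself.
    train_model()

if __name__ == "__main__":
    # Compiles the pipeline and submits a run to the active stack; the step
    # executes later in the execution environment.
    my_pipeline()
```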
## ZenML Server Environment The ZenML server is a FastAPI application managing pipelines and metadata, including the ZenML Dashboard. Install dependencies during deployment, especially for custom integrations. For more details, refer to [configuring the server environment](./configure-the-server-environment.md). ## Execution Environments Locally, the client, server, and execution environments are the same. Remotely, ZenML transfers code to the orchestrator using Docker images called execution environments. ZenML manages Docker image configuration starting from a base image containing ZenML and Python, adding pipeline dependencies. Refer to the [containerize your pipeline](../../../how-to/customize-docker-builds/README.md) guide for managing Docker configurations. ## Image Builder Environment Execution environments are typically created locally using the Docker client, which requires installation and permissions. ZenML provides image builders, a specialized stack component for building and pushing Docker images in a different environment. If no image builder is configured, ZenML defaults to the local image builder for consistency across builds. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md === ### Handling Conflicting Dependencies in ZenML This documentation addresses common issues with conflicting dependencies when using ZenML alongside other libraries. ZenML is designed to be stack- and integration-agnostic, allowing flexibility in pipeline execution, but this can lead to dependency conflicts. #### Installing Dependencies Use the command: ```bash zenml integration install ... ``` to install dependencies for specific integrations. After installing additional dependencies, verify that all ZenML requirements are met by running: ```bash zenml integration list ``` Look for the green tick symbol next to your desired integrations. #### Suggestions for Resolving Dependency Conflicts 1. **Use `pip-compile` for Reproducibility**: Consider using `pip-compile` from the [pip-tools package](https://pip-tools.readthedocs.io/) to create a static `requirements.txt` file. For an alternative, use `uv pip compile` if applicable. Refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management) for practical examples. 2. **Run `pip check`**: Use `pip check` to verify compatibility of your environment's dependencies. This command will list any conflicts. 3. **Known Dependency Issues**: ZenML has specific version requirements for some packages. For example, it requires `click~=8.0.3` for its CLI, and using higher versions may cause issues. #### Manual Dependency Installation You can bypass ZenML's integration installation and manually install dependencies, though this is not recommended. The command: ```bash zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME ``` or ```bash zenml integration export-requirements INTEGRATION_NAME ``` will help you obtain the required dependencies. Adjust these requirements as needed. If using a remote orchestrator, update the `DockerSettings` object with the new dependency versions to ensure proper functionality. ### Note The `zenml integration install ...` command executes a `pip install ...` in the background, installing the dependencies specified in the integration definition. 
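To tie the manual route together, a hedged sketch of pointing the execution environment at the exported requirements file via `DockerSettings` (the file name matches the export command above; adjust it to your setup):

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Use the manually curated requirements exported with
# `zenml integration export-requirements --output-file integration-requirements.txt ...`
docker_settings = DockerSettings(requirements="integration-requirements.txt")

@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
    ...
```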
================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md === ### Configure the Server Environment The ZenML server environment is set up using environment variables that must be configured prior to deploying your server instance. For a comprehensive list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/configuring-a-pipeline-at-runtime.md === ### Runtime Configuration of a Pipeline To run a pipeline with a different configuration, use the `pipeline.with_options` method. There are two primary ways to configure options: 1. **Explicit Configuration**: ```python with_options(steps={"trainer": {"parameters": {"param1": 1}}}) ``` 2. **YAML File**: ```python with_options(config_file="path_to_yaml_file") ``` For more details on these options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/README.md). **Exception**: When triggering a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. Additional information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/retry-steps.md === ### ZenML Step Retry Configuration ZenML offers a built-in mechanism for automatically retrying steps upon failure, which is particularly useful for handling transient errors, such as resource shortages on GPU-backed hardware. You can configure the following parameters for step retries: - **max_retries:** Maximum number of retry attempts. - **delay:** Initial delay (in seconds) before the first retry. - **backoff:** Multiplier for the delay after each retry. #### Using the @step Decorator You can define the retry configuration directly in your step using the `@step` decorator: ```python from zenml.config.retry_config import StepRetryConfig @step( retry=StepRetryConfig( max_retries=3, delay=10, backoff=2 ) ) def my_step() -> None: raise Exception("This is a test exception") ``` #### Important Note Infinite retries are not supported. Setting `max_retries` to a very high value or omitting it will still enforce an internal limit to prevent infinite loops. It is advisable to choose a reasonable `max_retries` based on your use case. ### Related Documentation - [Failure/Success Hooks](use-failure-success-hooks.md) - [Configure Pipelines](../../pipeline-development/use-configuration-files/how-to-use-config.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/README.md === ### Summary of ZenML Pipeline Documentation **Overview**: Building pipelines in ZenML is straightforward using the `@step` and `@pipeline` decorators. #### Example Code ```python from zenml import pipeline, step @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. 
" f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) simple_ml_pipeline() # Execute the pipeline ``` #### Execution and Logging Upon execution, the pipeline is logged to the ZenML dashboard, which requires a running ZenML server (local or remote). The dashboard displays the Directed Acyclic Graph (DAG) and associated metadata. #### Advanced Features For further customization and interaction with pipelines, refer to the following topics: - Configure pipeline/step parameters - Name and annotate step outputs - Control caching behavior - Customize step invocation IDs - Name pipeline runs - Use failure/success hooks - Hyperparameter tuning - Attach and fetch metadata within steps - Enable/disable log storage - Access secrets in a step For detailed guidance, consult the respective documentation links provided in the original text. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md === ### Summary: Reusing Steps Between Pipelines in ZenML ZenML allows for the composition of pipelines to avoid code duplication by extracting common functionality into separate functions. This is achieved by calling one pipeline from within another. #### Example Code: ```python from zenml import pipeline @pipeline def data_loading_pipeline(mode: str): data = training_data_loader_step() if mode == "train" else test_data_loader_step() return preprocessing_step(data) @pipeline def training_pipeline(): training_data = data_loading_pipeline(mode="train") model = training_step(data=training_data) test_data = data_loading_pipeline(mode="test") evaluation_step(model=model, data=test_data) ``` In this example, `data_loading_pipeline` is invoked within `training_pipeline`, effectively integrating its steps. Only the parent pipeline will be visible in the dashboard. For triggering a pipeline from another, refer to the relevant documentation. #### Additional Resources: - Learn more about orchestrators [here](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/using-a-custom-step-invocation-id.md === ### Summary of Custom Step Invocation ID in ZenML When invoking a ZenML step in a pipeline, each step is assigned a unique **invocation ID**. This ID can be used to define the execution order of pipeline steps or to fetch information post-execution. #### Key Points: - The first invocation of a step uses its name as the invocation ID (e.g., `my_step`). - Subsequent invocations append a suffix (e.g., `my_step_2`, `my_step_3`) to ensure uniqueness. - A custom invocation ID can be assigned by passing it as an argument, but it must be unique across all invocations in the pipeline. #### Example Code: ```python from zenml import pipeline, step @step def my_step() -> None: ... @pipeline def example_pipeline(): my_step() # ID: my_step my_step() # ID: my_step_2 my_step(id="my_custom_invocation_id") # Custom ID ``` This concise overview retains all critical information regarding the use of custom invocation IDs in ZenML pipelines. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/get-past-pipeline-step-runs.md === To retrieve past pipeline or step runs in ZenML, use the `get_pipeline` method along with the `last_run` property or by indexing into the runs. 
Here’s a concise example: ```python from zenml.client import Client client = Client() # Retrieve a pipeline by its name p = client.get_pipeline("mlflow_train_deploy_pipeline") # Get the latest run of this pipeline latest_run = p.last_run # Access the first run by index first_run = p[0] ``` This code demonstrates how to access the latest and first runs of a specified pipeline. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md === ### Summary of Documentation on Parameterization in ZenML Pipelines #### Overview Steps and pipelines in ZenML can be parameterized similarly to Python functions. Parameters can be passed as artifacts (outputs from other steps) or as explicit values. #### Step Parameters - **Artifacts**: Outputs from previous steps, used for data sharing. - **Parameters**: Explicitly provided values that are not dependent on other steps. Only JSON-serializable values (via Pydantic) can be passed as parameters. For non-JSON-serializable objects (e.g., NumPy arrays), use External Artifacts. #### Example Code ```python from zenml import step, pipeline @step def my_step(input_1: int, input_2: int) -> None: pass @pipeline def my_pipeline(): int_artifact = some_other_step() my_step(input_1=int_artifact, input_2=42) ``` #### YAML Configuration Parameters can be defined in a YAML file for flexibility: ```yaml # config.yaml parameters: environment: production steps: my_step: parameters: input_2: 42 ``` #### Pipeline with YAML ```python from zenml import step, pipeline @step def my_step(input_1: int, input_2: int) -> None: ... @pipeline def my_pipeline(environment: str): ... if __name__=="__main__": my_pipeline.with_options(config_path="config.yaml")() ``` #### Conflicting Settings Conflicts may arise if parameters are defined in both the YAML file and the code. An error will be raised in such cases, providing details for resolution. #### Caching Behavior - **Parameters**: A step is cached only if all parameter values match previous executions. - **Artifacts**: A step is cached only if all input artifacts match previous executions. If upstream artifacts are not cached, the step will execute every time. #### Additional Resources - For more on using configuration files: [Use Configuration Files](use-pipeline-step-parameters.md) - For caching behavior: [Control Caching Behavior](control-caching-behavior.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/step-output-typing-and-annotation.md === ### Summary of Step Output Typing and Annotation in ZenML **Step Outputs Storage**: Outputs from steps are stored in an artifact store. Annotate and name them for clarity. #### Type Annotations - Type annotations are optional but beneficial: - **Type Validation**: Ensures correct input types from upstream steps. - **Better Serialization**: With annotations, ZenML selects the appropriate materializer for outputs. Custom materializers can be created if built-in ones are insufficient. **Warning**: The built-in `CloudpickleMaterializer` can handle any object but is not production-ready due to potential compatibility issues across Python versions and security vulnerabilities. 
#### Example Code ```python from typing import Tuple from zenml import step @step def square_root(number: int) -> float: return number ** 0.5 @step def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b ``` To enforce type annotations, set the environment variable `ZENML_ENFORCE_TYPE_ANNOTATIONS` to `True`. #### Tuple vs. Multiple Outputs - A return statement with a tuple literal indicates multiple outputs. Otherwise, it is treated as a single output of type `Tuple`. **Example Code**: ```python @step def my_step() -> Tuple[int, int]: return 0, 1 # Multiple outputs @step def my_step() -> Tuple[int, ...]: return (0, 1) if condition else (0, 1, 2) # Variable length ``` #### Step Output Names - Default naming: `output` for single outputs and `output_0, output_1, ...` for multiple outputs. - Custom names can be assigned using `Annotated` type annotation. **Example Code**: ```python from typing_extensions import Annotated from zenml import step @step def square_root(number: int) -> Annotated[float, "custom_output_name"]: return number ** 0.5 @step def divide(a: int, b: int) -> Tuple[Annotated[int, "quotient"], Annotated[int, "remainder"]]: return a // b, a % b ``` If no custom names are provided, artifacts are named as `{pipeline_name}::{step_name}::output`. ### Additional Resources - For more on output annotation: [return-multiple-outputs-from-a-step.md](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md) - For custom data types: [handle-custom-data-types.md](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/schedule-a-pipeline.md === ### Summary: Scheduling Pipelines in ZenML #### Supported Orchestrators Not all orchestrators support scheduling. The following orchestrators do support it: - AirflowOrchestrator: ✅ - AzureMLOrchestrator: ✅ - DatabricksOrchestrator: ✅ - HyperAIOrchestrator: ✅ - KubeflowOrchestrator: ✅ - KubernetesOrchestrator: ✅ - SagemakerOrchestrator: ✅ - VertexOrchestrator: ✅ The following do **not** support scheduling: - LocalOrchestrator: ⛔️ - LocalDockerOrchestrator: ⛔️ - Skypilot (AWS, Azure, GCP, Lambda): ⛔️ - TektonOrchestrator: ⛔️ #### Setting a Schedule To set a schedule for a pipeline, use the `Schedule` class with either a cron expression or human-readable notations: ```python from zenml.config.schedule import Schedule from zenml import pipeline from datetime import datetime @pipeline() def my_pipeline(...): ... # Using cron expression schedule = Schedule(cron_expression="5 14 * * 3") # Using human-readable notation schedule = Schedule(start_time=datetime.now(), interval_second=1800) my_pipeline = my_pipeline.with_options(schedule=schedule) my_pipeline() ``` For more details on scheduling options, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.schedule.Schedule). #### Pausing/Stopping a Schedule The method to pause or stop a scheduled run depends on the orchestrator. For example, in Kubeflow, this can be done through its UI. Users should consult their orchestrator's documentation for specific instructions. **Important Note:** ZenML schedules the run, but managing the lifecycle of the schedule is the user's responsibility. Running a pipeline with a schedule multiple times creates separate scheduled pipelines with unique names. 
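Because the schedule lifecycle is user-managed, it can help to name schedules explicitly so they are easy to identify and clean up in the orchestrator later. A hedged sketch, assuming the `Schedule` class accepts an optional `name` (check the SDK documentation linked above):

```python
from zenml import pipeline
from zenml.config.schedule import Schedule

@pipeline
def my_pipeline():
    ...

# Assumption: `name` is an optional field of Schedule; an explicit name makes
# the resulting scheduled pipeline easier to find and remove later.
schedule = Schedule(name="nightly-training", cron_expression="0 2 * * *")
my_pipeline.with_options(schedule=schedule)()
```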
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md === ### Summary of Fan-in and Fan-out Patterns in ZenML The fan-out/fan-in pattern is a pipeline architecture where a single step splits into multiple parallel operations (fan-out) and consolidates results back into a single step (fan-in). This pattern is effective for parallel processing, distributed workloads, and data transformations. #### Example Code ```python from zenml import step, get_step_context, pipeline from zenml.client import Client @step def load_step() -> str: return "Hello from ZenML!" @step def process_step(input_data: str) -> str: return input_data @step def combine_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) processed_results = {step_info.name: step_info.outputs[output_name][0].load() for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix)} print(",".join([f"{k}: {v}" for k, v in processed_results.items()])) @pipeline(enable_cache=False) def fan_out_fan_in_pipeline(parallel_count: int) -> None: input_data = load_step() after = [process_step(input_data, id=f"process_{i}") for i in range(parallel_count)] combine_step(step_prefix="process_", output_name="output", after=after) fan_out_fan_in_pipeline(parallel_count=8) ``` #### Key Points - **Fan-out**: Enables parallel processing, enhancing resource utilization. - **Fan-in**: Aggregates results from parallel branches. - **Use Cases**: Suitable for parallel data processing, distributed model training, ensemble methods, batch processing, and data validation. - **Limitations**: 1. Steps may run sequentially if the orchestrator does not support parallel execution. 2. The number of steps must be predefined; dynamic step creation is not supported. #### Important Note When implementing the fan-in step, results from parallel steps must be queried using the ZenML Client, as direct passing of results is not allowed. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md === # Accessing Secrets in ZenML Steps ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. For configuration and creation details, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). You can access secrets within your steps using the ZenML `Client` API, allowing you to securely use API keys without hard-coding them. ## Example Code ```python from zenml import step from zenml.client import Client from somewhere import authenticate_to_some_api @step def secret_loader() -> None: """Load the example secret from the server.""" secret = Client().get_secret("") authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` ### Additional Resources - [Learn how to create and manage secrets](../../interact-with-secrets.md) - [Secrets backend in ZenML](../../../getting-started/deploying-zenml/secret-management.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/delete-a-pipeline.md === ### Deleting Pipelines and Pipeline Runs #### Delete a Pipeline You can delete a pipeline using either the CLI or the Python SDK. 
**CLI Command:** ```shell zenml pipeline delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_pipeline() ``` **Note:** Deleting a pipeline does not remove associated runs or artifacts. To delete multiple pipelines with the same prefix, use the following script: ```python from zenml.client import Client client = Client() pipelines_list = client.list_pipelines(name="startswith:test_pipeline", size=100) target_pipeline_ids = [p.id for p in pipelines_list.items] confirmation = input("Do you really want to delete these pipelines? (y/n): ").lower() if confirmation == 'y': for pid in target_pipeline_ids: client.delete_pipeline(pid) ``` #### Delete a Pipeline Run You can delete a pipeline run using the CLI or the Python SDK. **CLI Command:** ```shell zenml pipeline runs delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_pipeline_run() ``` ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md === ### Naming Pipeline Runs Pipeline run names are automatically generated based on the current date and time, as shown below: ```bash Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s. ``` To customize the run name, use the `run_name` parameter in the `with_options()` method: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name" ) training_pipeline() ``` **Important Notes:** - Run names must be unique. If running pipelines multiple times or on a schedule, compute the run name dynamically or use placeholders. - Placeholders can be set in the `@pipeline` decorator or `pipeline.with_options` function. **Standard Placeholders:** - `{date}`: Current date (e.g., `2024_11_27`) - `{time}`: Current UTC time (e.g., `11_07_09_326492`) Example of using placeholders: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name_{experiment_name}_{date}_{time}" ) training_pipeline() ``` ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-failure-success-hooks.md === ### Summary of ZenML Failure and Success Hooks Documentation #### Overview ZenML allows the use of hooks to perform actions after step execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: - **`on_failure`**: Triggered when a step fails. - **`on_success`**: Triggered when a step succeeds. #### Defining Hooks Hooks are defined as callback functions accessible within the pipeline repository. For failure hooks, you can include a `BaseException` argument to access the specific exception that caused the failure. **Example:** ```python from zenml import step def on_failure(exception: BaseException): print(f"Step failed: {exception}") def on_success(): print("Step succeeded!") @step(on_failure=on_failure) def my_failing_step() -> int: raise ValueError("Error") @step(on_success=on_success) def my_successful_step() -> int: return 1 ``` #### Pipeline-Level Hooks Hooks can also be defined at the pipeline level to apply to all steps, overriding step-level hooks if defined. **Example:** ```python from zenml import pipeline @pipeline(on_failure=on_failure, on_success=on_success) def my_pipeline(...): ... ``` #### Accessing Step Information in Hooks You can access step information using `get_step_context()` within your hook functions. 
**Example:** ```python from zenml import get_step_context def on_failure(exception: BaseException): context = get_step_context() print(context.step_run.name) ``` #### Using the Alerter Component You can integrate the Alerter component to notify users about step success or failure. **Example:** ```python from zenml import get_step_context from zenml.client import Client def on_failure(): step_name = get_step_context().step_run.name Client().active_stack.alerter.post(f"{step_name} just failed!") ``` Standard hooks for alerter notifications can be used as follows: ```python from zenml.hooks import alerter_success_hook, alerter_failure_hook @step(on_failure=alerter_failure_hook, on_success=alerter_success_hook) def my_step(...): ... ``` #### OpenAI ChatGPT Failure Hook This hook generates potential fixes for exceptions using OpenAI's API. Ensure you have the OpenAI integration installed and your API key stored in a ZenML secret. **Installation:** ```shell zenml integration install openai zenml secret create openai --api_key=<API_KEY> ``` **Usage:** ```python from zenml.integrations.openai.hooks import openai_chatgpt_alerter_failure_hook @step(on_failure=openai_chatgpt_alerter_failure_hook) def my_step(...): ... ``` This integration can provide suggestions to help resolve issues in your code. If you have GPT-4 enabled, you can use `openai_gpt4_alerter_failure_hook` for enhanced suggestions. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-an-individual-step.md === # Running Individual Steps in ZenML To execute a single step on your active stack, call the step like a regular Python function. ZenML will create a temporary pipeline for this step, which will be `unlisted` and visible in the "Runs" tab of the dashboard. ## Example Code for Step Execution ```python from zenml import step import pandas as pd from sklearn.base import ClassifierMixin from sklearn.svm import SVC from typing import Tuple, Annotated @step(step_operator="<STEP_OPERATOR_NAME>") def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc # Prepare training data X_train = pd.DataFrame(...) y_train = pd.Series(...) # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` ## Running the Step Function Directly To run the step function without ZenML, use the `entrypoint(...)` method: ```python model, train_acc = svc_trainer.entrypoint(X_train=X_train, y_train=y_train) ``` ### Default Behavior Configuration To make direct function calls the default behavior, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. This will bypass the ZenML stack when calling the step. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/run-pipelines-asynchronously.md === ### Summary: Running Pipelines Asynchronously By default, pipelines run synchronously, with logs streamed to the terminal during execution. To run pipelines asynchronously, configure the orchestrator with `synchronous=False`, either globally on the orchestrator or temporarily at the pipeline configuration level.
**Python Code Example:** ```python from zenml import pipeline @pipeline(settings={"orchestrator": {"synchronous": False}}) def my_pipeline(): ... ``` **YAML Configuration Example:** ```yaml settings: orchestrator.: synchronous: false ``` For more information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-caching-behavior.md === ### Caching Behavior in ZenML Pipelines By default, steps in ZenML pipelines cache results when code and parameters remain unchanged. #### Step and Pipeline Caching Configuration - **Step Level Caching**: - Use `@step(enable_cache=True)` to enable caching. - Use `@step(enable_cache=False)` to disable caching, overriding pipeline settings. - **Pipeline Level Caching**: - Use `@pipeline(enable_cache=True)` to enable caching for the entire pipeline. ```python @step(enable_cache=True) def load_data(parameter: int) -> dict: ... @step(enable_cache=False) def train_model(data: dict) -> None: ... @pipeline(enable_cache=True) def simple_ml_pipeline(parameter: int): ... ``` #### Modifying Cache Settings Caching settings can be modified after initial configuration: ```python my_step.configure(enable_cache=...) my_pipeline.configure(enable_cache=...) ``` {% hint style="info" %} Caching occurs only when code and parameters are unchanged. {% endhint %} For YAML configuration details, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-execution-order-of-steps.md === # Control Execution Order of Steps in ZenML ZenML determines the execution order of pipeline steps based on data dependencies. For instance, in the following pipeline, `step_3` depends on the outputs of `step_1` and `step_2`, allowing both to run in parallel before `step_3` starts. ```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1() step_2_output = step_2() step_3(step_1_output, step_2_output) ``` To enforce specific execution order constraints, you can use non-data dependencies by specifying invocation IDs. For example, to ensure `my_step` runs after `other_step`, use `my_step(after="other_step")`. For multiple dependencies, pass a list: `my_step(after=["other_step", "other_step_2"])`. For more details on invocation IDs, refer to the [documentation here](using-a-custom-step-invocation-id.md). ```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1(after="step_2") step_2_output = step_2() step_3(step_1_output, step_2_output) ``` In this example, `step_1` will only start after `step_2` has completed. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fetching-pipelines.md === ### Summary: Inspecting a Finished Pipeline Run and Its Outputs #### Overview This documentation covers how to inspect a completed pipeline run and its outputs, including accessing artifacts, metadata, and the lineage of pipeline runs. 
#### Pipeline Hierarchy The structure of pipelines consists of: - **Pipelines** (1:N) → **Runs** (1:N) → **Steps** (1:N) → **Artifacts** #### Fetching Pipelines - **Get a Specific Pipeline:** ```python from zenml.client import Client pipeline_model = Client().get_pipeline("first_pipeline") ``` - **List All Pipelines:** - **Python:** ```python pipelines = Client().list_pipelines() ``` - **CLI:** ```shell zenml pipeline list ``` #### Pipeline Runs - **Get All Runs of a Pipeline:** ```python runs = pipeline_model.runs ``` - **Get the Last Run:** ```python last_run = pipeline_model.last_run # OR: pipeline_model.runs[0] ``` - **Execute and Get Latest Run:** ```python run = training_pipeline() # Executes the pipeline ``` - **Fetch a Specific Run:** ```python pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") ``` #### Run Information - **Status:** ```python status = run.status # Possible states: initialized, failed, completed, running, cached ``` - **Configuration:** ```python pipeline_config = run.config pipeline_settings = run.config.settings ``` - **Component-Specific Metadata:** ```python run_metadata = run.run_metadata orchestrator_url = run_metadata["orchestrator_url"].value ``` #### Steps - **Get Steps of a Run:** ```python steps = run.steps step = run.steps["first_step"] ``` #### Artifacts - **Access Output Artifacts:** ```python output = step.outputs["output_name"] # or step.output for single output my_pytorch_model = output.load() ``` - **Fetch Artifacts Directly:** ```python artifact = Client().get_artifact('iris_dataset') output = artifact.versions['2022'] # Get specific version ``` #### Artifact Information - **Metadata:** ```python output_metadata = output.run_metadata storage_size_in_bytes = output_metadata["storage_size"].value ``` - **Visualizations:** ```python output.visualize() # For Jupyter notebooks ``` #### Fetching Information During Run Execution - **Access Previous Runs Within a Step:** ```python from zenml import get_step_context from zenml.client import Client @step def my_step(): current_run_name = get_step_context().pipeline_run.name current_run = Client().get_pipeline_run(current_run_name) previous_run = current_run.pipeline.runs[1] # Index 0 is the current run ``` #### Code Example A comprehensive example demonstrating the loading of a model from a pipeline: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma).fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": last_run = training_pipeline() model = 
last_run.steps["svc_trainer"].outputs["trained_model"].load() ``` This summary encapsulates the essential details for inspecting pipeline runs and their outputs, ensuring clarity and brevity while retaining critical information. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/reference-environment-variables-in-configurations.md === # Reference Environment Variables in Configurations ZenML allows referencing environment variables in configurations using the syntax `${ENV_VARIABLE_NAME}`. ## In-code Example ```python from zenml import step @step(extra={"value_from_environment": "${ENV_VAR}"}) def my_step() -> None: ... ``` ## In a Configuration File Example ```yaml extra: value_from_environment: ${ENV_VAR} combined_value: prefix_${ENV_VAR}_suffix ``` This feature enhances flexibility in both code and configuration files. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/tag-your-pipeline-runs.md === # Tagging Pipeline Runs You can specify tags for your pipeline runs in the following ways: 1. **Configuration File**: ```yaml # config.yaml tags: - tag_in_config_file ``` 2. **Code**: - Using the `@pipeline` decorator: ```python @pipeline(tags=["tag_on_decorator"]) def my_pipeline(): ... ``` - Using the `with_options` method: ```python my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) ``` When you run the pipeline, tags from all specified locations will be merged and applied to the run. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md === ### Summary: Running Hyperparameter Tuning with ZenML This documentation outlines how to perform hyperparameter tuning using ZenML through a simple pipeline that implements a grid search for different learning rates. The process involves two main steps: `train_step` for training models with varying learning rates and `selection_step` for evaluating and selecting the best model based on performance. #### Key Components: 1. **Train Step**: - Accepts a `learning_rate` parameter. - Trains a model and returns it. ```python @step def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: return ... # Train model ``` 2. **Selection Step**: - Retrieves the pipeline run context and collects trained models based on their learning rates. - Evaluates models to determine the best one. ```python @step def selection_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) trained_models_by_lr = { step_info.config.parameters["learning_rate"]: step_info.outputs[output_name][0].load() for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix) } for lr, model in trained_models_by_lr.items(): ... # Evaluate models ``` 3. **Pipeline Definition**: - Constructs the pipeline by iterating through a specified number of steps, invoking `train_step` for each learning rate. - Calls `selection_step` after training to evaluate the models. 
```python @pipeline def my_pipeline(step_count: int) -> None: after = [train_step(learning_rate=i * 0.0001, id=f"train_step_{i}") for i in range(step_count)] selection_step(step_prefix="train_step_", output_name=model_output_name, after=after) my_pipeline(step_count=4) ``` #### Important Notes: - The current limitation is that a variable number of artifacts cannot be passed into a step programmatically; thus, `selection_step` must query all artifacts via the ZenML Client. - Additional resources and examples are available in the ZenML GitHub repository, specifically in the `hp_tuning` folder, which includes: - `hp_tuning_single_search(...)`: For randomized hyperparameter search. - `hp_tuning_select_best_model(...)`: For selecting the best model from previous searches. This concise overview captures the essential technical details necessary for understanding and implementing hyperparameter tuning with ZenML. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/README.md === # Summary of GPU Resource Management in ZenML ## Overview ZenML allows scaling machine learning pipelines to the cloud, utilizing GPU-backed hardware through `ResourceSettings` for resource allocation and container environment adjustments. ## Specifying Resource Requirements To allocate resources for resource-intensive steps, use the following code: ```python from zenml.config import ResourceSettings from zenml import step @step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=2, memory="8GB")}) def training_step(...) -> ...: # train a model ``` If the orchestrator (e.g., Skypilot) does not support `ResourceSettings`, use orchestrator-specific settings: ```python from zenml import step from zenml.integrations.skypilot.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings(cpus="2", memory="16", accelerators="V100:2") @step(settings={"orchestrator": skypilot_settings}) def training_step(...) -> ...: # train a model ``` Refer to each orchestrator's documentation for resource specification support. ## CUDA Configuration To utilize GPU capabilities, ensure your container is CUDA-enabled: 1. **Specify a CUDA-enabled parent image**: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. **Add ZenML as a pip requirement**: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["zenml==0.39.1", "torchvision"] ) ``` Ensure the chosen image is compatible with both local and remote environments. ## Resetting CUDA Cache To avoid GPU cache issues, reset the CUDA cache between steps: ```python import gc import torch def cleanup_memory() -> None: while gc.collect(): torch.cuda.empty_cache() @step def training_step(...): cleanup_memory() # train a model ``` ## Multi-GPU Training ZenML supports training across multiple GPUs on a single node. To implement this: - Create a script/function for parallel training. - Call this function from within the step, ensuring no multiple ZenML instances are spawned. For assistance, connect via [Slack](https://zenml.io/slack). 
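To make the single-node multi-GPU recommendation above concrete, here is a minimal sketch (not taken from the ZenML docs) that launches one worker process per GPU from inside a single step using `torch.multiprocessing`, so no additional ZenML instances are spawned; `train_on_device` is a hypothetical per-process training function you would implement yourself:

```python
import torch
import torch.multiprocessing as mp

from zenml import step


def train_on_device(rank: int, world_size: int) -> None:
    """Hypothetical per-process training loop (e.g. using DistributedDataParallel)."""
    torch.cuda.set_device(rank)
    # ... initialize the process group, build the model, and train here ...


@step
def multi_gpu_training_step() -> None:
    world_size = torch.cuda.device_count()
    # Spawn one worker per GPU from within this single step; the step itself
    # remains the only ZenML process.
    mp.spawn(train_on_device, args=(world_size,), nprocs=world_size)
```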
================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md === ### Summary: Distributed Training with Hugging Face's Accelerate in ZenML ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, enabling the utilization of multiple GPUs or nodes. #### Using 🤗 Accelerate in Steps To enable distributed execution in training steps, use the `run_with_accelerate` decorator: ```python from zenml import step, pipeline from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True) @step def training_step(some_param: int, ...): ... @pipeline def training_pipeline(some_param: int, ...): training_step(some_param, ...) ``` The decorator accepts arguments similar to the `accelerate launch` CLI command. For a complete list, refer to the [Accelerate CLI documentation](https://huggingface.co/docs/accelerate/en/package_reference/cli#accelerate-launch). #### Configuration Key arguments for `run_with_accelerate` include: - `num_processes`: Number of processes for training. - `cpu`: Force training on CPU. - `multi_gpu`: Enable distributed GPU training. - `mixed_precision`: Set mixed precision mode ('no', 'fp16', or 'bf16'). **Important Notes:** 1. Use the decorator directly on steps; it cannot be used as a function in pipeline definitions. 2. Use keyword arguments for calling steps. 3. Misuse raises a `RuntimeError` with guidance. For a full example, see the [llm-lora-finetuning project](https://github.com/zenml-io/zenml-projects/blob/main/llm-lora-finetuning/README.md). #### Container Configuration To run steps with Accelerate, ensure your environment is set up correctly: 1. **Specify a CUDA-enabled parent image:** ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. **Add Accelerate as a requirement:** ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` #### Multi-GPU Training ZenML's Accelerate integration allows training with multiple GPUs on a single or multiple nodes, ideal for large datasets or complex models. Ensure your training step is wrapped with `run_with_accelerate`, configure the necessary arguments, and verify compatibility with distributed training. For assistance, connect with the ZenML community on [Slack](https://zenml.io/slack). By utilizing Accelerate within ZenML, you can effectively scale training processes while maintaining the pipeline structure. ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md === ### How to Use a Private PyPI Repository To use a private PyPI repository that requires authentication, follow these steps: 1. **Store Credentials Securely**: Use environment variables for credentials. 2. **Configure Package Managers**: Set up `pip` or `poetry` to utilize these credentials during package installation. 3. **Custom Docker Images**: Consider using Docker images pre-configured with authentication. 
#### Example Setup with Environment Variables: ```python import os from my_simple_package import important_function from zenml.config import DockerSettings from zenml import step, pipeline docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={ 'PIP_EXTRA_INDEX_URL': f"https://{os.environ['PYPI_TOKEN']}@my-private-pypi-server.com/{os.environ['PYPI_USERNAME']}/" } ) @step def my_step(): return important_function() @pipeline(settings={"docker": docker_settings}) def my_pipeline(): my_step() if __name__ == "__main__": my_pipeline() ``` **Note**: Handle credentials with care, using secure methods for management and distribution within your team. ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md === ### Summary: Using Docker Images to Run Your Pipeline #### Overview When running a pipeline with a remote orchestrator, a Dockerfile is dynamically generated to build a Docker image using ZenML. The Dockerfile includes the following steps: 1. **Parent Image**: Starts from a parent image with ZenML installed, typically the official ZenML image. 2. **Pip Dependencies**: Automatically installs required dependencies based on integrations used in the stack. Custom dependencies can be added. 3. **Source Files**: Optionally copies source files into the Docker container for execution. 4. **Environment Variables**: Sets user-defined environment variables. For customization options, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings). #### Configuring Docker Settings You can customize Docker builds using the `DockerSettings` class: ```python from zenml.config import DockerSettings ``` **Pipeline-wide Configuration**: ```python docker_settings = DockerSettings() @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() ``` **Step-specific Configuration**: ```python @step(settings={"docker": docker_settings}) def my_step() -> None: pass ``` **YAML Configuration**: ```yaml settings: docker: ... steps: step_name: settings: docker: ... ``` Refer to the configuration hierarchy [here](../pipeline-development/use-configuration-files/configuration-hierarchy.md). #### Specifying Docker Build Options To pass build options to the image builder: ```python docker_settings = DockerSettings(build_config={"build_options": {...}}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **MacOS ARM Architecture Note**: Specify the target platform to enable local Docker caching: ```python docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}}) ``` #### Custom Parent Images You can specify a custom parent image or Dockerfile. Ensure it has Python, pip, and ZenML installed. **Using a Pre-built Parent Image**: ```python docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **Skipping Docker Builds**: ```python docker_settings = DockerSettings( parent_image="my_registry.io/image_name:tag", skip_build=True ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **Warning**: Using a pre-built image may lead to unintended behavior. Ensure your code files are included in the specified image. Read more [here](./use-a-prebuilt-image.md). 
================================================== === File: docs/book/how-to/customize-docker-builds/README.md === ### Using Docker Images to Run Your Pipeline ZenML executes pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in isolated environments. This section covers how to customize the Docker build process. **Key Points:** - **Docker Integration**: ZenML leverages Docker to ensure a consistent execution environment for pipelines. - **Customization**: Users can control the Dockerization process to fit their specific needs. For more details, refer to the sections on [cloud orchestration](../../user-guide/production-guide/cloud-orchestration.md) and [step operators](../../component-guide/step-operators/step-operators.md). ================================================== === File: docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md === ### ZenML Image Building and File Handling ZenML determines the root directory of source files in the following order: 1. If `zenml init` has been run in the current or a parent directory, that directory is used. 2. Otherwise, the parent directory of the executing Python file is used. **DockerSettings Attributes:** - `allow_download_from_code_repository`: If `True`, files from a registered code repository with no local changes are downloaded instead of included in the image. - `allow_download_from_artifact_store`: If the previous option is `False` or no suitable repository exists, and this is `True`, code is archived and uploaded to the artifact store. - `allow_including_files_in_images`: If both previous options are `False`, files are included in the Docker image if this is `True`, necessitating a new image build for code changes. **Warning:** Setting all attributes to `False` is not recommended, as it can lead to unexpected behavior. You must ensure all files are correctly placed in the Docker images. ### File Management - **Excluding Files:** Use a `.gitignore` file to exclude files when downloading from a code repository. - **Including Files:** Use a `.dockerignore` file to exclude files when building the Docker image. This can be done by: - Creating a `.dockerignore` in the source root. - Specifying a `.dockerignore` file explicitly: ```python docker_settings = DockerSettings(build_config={"dockerignore": "/path/to/.dockerignore"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` This setup helps manage which files are included or excluded in the Docker image effectively. ================================================== === File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md === ### Summary of ZenML Prebuilt Image Usage #### Overview ZenML allows users to skip building a Docker image for pipeline execution by using a prebuilt image. This can save time and costs, especially when dependencies are large or the local system is slow. However, using a prebuilt image means updates to code or dependencies won't be reflected unless included in the image. #### Using Prebuilt Images To use a prebuilt image, configure the `DockerSettings` class with the desired `parent_image` and set `skip_build` to `True`. ```python docker_settings = DockerSettings( parent_image="my_registry.io/image_name:tag", skip_build=True ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` Ensure the image is accessible from the registry for the orchestrator and other components. 
#### Requirements for the Parent Image The specified `parent_image` must contain: - All dependencies required for the pipeline. - Any code files if no code repository is registered and `allow_download_from_artifact_store` is `False`. If using an image built by ZenML previously, it can be reused as long as it was built for the same stack. #### Stack and Integration Requirements To ensure all necessary dependencies are included in the image: 1. **Stack Requirements**: ```python from zenml.client import Client stack_name = Client().set_active_stack(stack_name) active_stack = Client().active_stack stack_requirements = active_stack.requirements() ``` 2. **Integration Requirements**: ```python from zenml.integrations.registry import integration_registry from zenml.integrations.constants import HUGGINGFACE, PYTORCH import itertools required_integrations = [PYTORCH, HUGGINGFACE] integration_requirements = set( itertools.chain.from_iterable( integration_registry.select_integration_requirements( integration_name=integration, target_os=OperatingSystemType.LINUX, ) for integration in required_integrations ) ) ``` 3. **Project-Specific Requirements**: Include all project dependencies in a requirements file: ```Dockerfile RUN pip install -r FILE ``` 4. **System Packages**: Include necessary `apt` packages: ```Dockerfile RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES ``` 5. **Project Code Files**: - If a code repository is registered, ZenML will manage code files. - If not, ensure code files are included in the image or allow downloading from the artifact store. Ensure that Python, `pip`, and `zenml` are installed in the image, and that the working directory is set to `/app`. This guide provides a concise approach to using prebuilt images in ZenML pipelines, ensuring all necessary components are included for successful execution. ================================================== === File: docs/book/how-to/customize-docker-builds/use-your-own-docker-files.md === # Using Custom Docker Files in ZenML ZenML allows users to specify a custom Dockerfile, build context directory, and build options for dynamic parent image creation during pipeline execution. The build process varies based on whether a Dockerfile is provided: - **No Dockerfile**: If requirements or environment variables necessitate an image build, ZenML will create one; otherwise, it uses the specified `parent_image`. - **Dockerfile Specified**: ZenML builds an image from the provided Dockerfile. If further requirements necessitate an additional image, ZenML builds a second image; otherwise, the first image is used to run the pipeline. The installation order for packages in the Docker image, based on the `DockerSettings` configuration, is as follows: 1. Packages from the local Python environment. 2. Packages from the `requirements` attribute. 3. Packages from `required_integrations` and stack requirements. **Note**: The intermediate image may also be used directly to execute pipeline steps. ## Code Example ```python docker_settings = DockerSettings( dockerfile="/path/to/dockerfile", build_context_root="/path/to/build/context", parent_image_build_config={ "build_options": ..., "dockerignore": ... } ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... 
``` ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md === # Summary of ZenML Build Reuse Documentation ## Overview This guide explains how to reuse builds in ZenML to enhance pipeline efficiency. A build encapsulates a pipeline and its stack, including Docker images and optionally the pipeline code. ## What is a Build? A build is a representation of a pipeline and its execution environment. It includes Docker images with all necessary requirements. To list builds for a pipeline, use: ```bash zenml pipeline builds list --pipeline_id='startswith:ab53ca' ``` To create a build manually: ```bash zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance ``` ## Reusing Builds ZenML automatically reuses existing builds that match the pipeline and stack. You can specify a build ID in the pipeline configuration to force the use of a specific build. Note that reusing a build executes the code in the Docker image, not local changes. To include local changes, disconnect your code from the build by registering a code repository or using the artifact store. ### Artifact Store ZenML can upload your code to the artifact store by default if no code repository is detected. ### Code Repositories Registering a code repository speeds up Docker builds by allowing ZenML to build images without source files and download them before execution. This method is highly recommended as ZenML automatically identifies and reuses matching builds. To install a required integration (e.g., GitHub): ```sh zenml integration install github ``` ### Detecting Local Code Repositories ZenML checks if the files used in a pipeline run are tracked in registered code repositories. It computes the source root and verifies inclusion in local checkouts. ### Tracking Code Versions If a local code repository is detected, ZenML stores the current commit reference for the pipeline run. This reference is only tracked if the local checkout is clean. To ignore untracked files, set the environment variable: ```sh ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=True ``` ### Tips and Best Practices - Ensure the local checkout is clean and the latest commit is pushed to the remote repository for successful file downloads. - For options to enforce or disable file downloading, refer to the Docker settings documentation. This summary captures the essential technical details and best practices for reusing builds in ZenML, ensuring efficient pipeline execution while accommodating local code changes. ================================================== === File: docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md === # Summary of Specifying Pip Dependencies and Apt Packages in ZenML ## Overview The configuration for specifying pip and apt dependencies is applicable only in remote pipelines, not local ones. When a pipeline runs with a remote orchestrator, a Dockerfile is dynamically generated to build the Docker image. ## DockerSettings Import `DockerSettings` using: ```python from zenml.config import DockerSettings ``` ### Default Behavior ZenML installs all packages required by the active stack automatically. Additional packages can be specified through various methods: ### Methods to Specify Packages 1. **Replicate Local Environment**: ```python docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. 
**Custom Command**: ```python docker_settings = DockerSettings(replicate_local_python_environment=[ "poetry", "export", "--extras=train", "--format=requirements.txt" ]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 3. **List of Requirements**: ```python docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 4. **Requirements File**: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 5. **ZenML Integrations**: ```python from zenml.integrations.constants import PYTORCH, EVIDENTLY docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 6. **Apt Packages**: ```python docker_settings = DockerSettings(apt_packages=["git"]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 7. **Disable Automatic Requirement Installation**: ```python docker_settings = DockerSettings(install_stack_requirements=False) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 8. **Custom Docker Settings for Steps**: ```python docker_settings = DockerSettings(requirements=["tensorflow"]) @step(settings={"docker": docker_settings}) def my_training_step(...): ... ``` ### Installation Order ZenML installs packages in the following order: - Local Python environment packages - Stack requirements (unless disabled) - Required integrations - Specified requirements ### Additional Installer Arguments You can specify additional arguments for the Python package installer: ```python docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` ### Experimental: Using `uv` for Package Installation To use `uv` for faster package resolution: ```python docker_settings = DockerSettings(python_package_installer="uv") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` Note: `uv` is experimental and may lead to installation errors; revert to `pip` if issues arise. For detailed integration with PyTorch and `uv`, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-step.md === ### Summary of Docker Settings Customization in ZenML You can customize Docker settings at the step level in a ZenML pipeline, allowing different steps to use distinct Docker images. By default, all steps use the same Docker image defined at the pipeline level. To specify a different image for a step, use the `DockerSettings` in the step decorator. #### Example using Step Decorator ```python from zenml import step from zenml.config import DockerSettings @step( settings={ "docker": DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime" ) } ) def training(...): ... ``` #### Example using Configuration File ```yaml steps: training: settings: docker: parent_image: pytorch/pytorch:2.2.0-cuda11.8-cudnn8-runtime required_integrations: - gcp - github requirements: - zenml - numpy ``` This allows for flexibility in managing dependencies and integrations for specific steps in the pipeline. 
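To illustrate how pipeline-level and step-level Docker settings combine, here is a minimal sketch (image tags and requirements are illustrative): the `train` step uses its own CUDA-enabled image, while every other step falls back to the image built from the pipeline-level settings:

```python
from zenml import pipeline, step
from zenml.config import DockerSettings

pipeline_docker = DockerSettings(requirements=["scikit-learn"])
gpu_docker = DockerSettings(
    parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime",
    requirements=["torchvision"],
)


@step
def preprocess() -> None:
    # Runs in the image defined by the pipeline-level Docker settings.
    ...


@step(settings={"docker": gpu_docker})
def train() -> None:
    # Runs in the CUDA-enabled image defined by the step-level Docker settings.
    ...


@pipeline(settings={"docker": pipeline_docker})
def my_pipeline() -> None:
    preprocess()
    train()
```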
================================================== === File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md === ### Image Builder Definition in ZenML ZenML executes pipeline steps sequentially in the local Python environment when running locally. For remote orchestrators or step operators, it builds Docker images to run pipelines in isolated environments. By default, execution environments are created using the local Docker client, which requires Docker installation and permissions. ZenML provides **image builders**, a specialized stack component for building and pushing Docker images in a dedicated image builder environment. Even without a configured image builder, ZenML defaults to the local image builder to ensure consistency across builds, using the client environment as the image builder environment. Users do not need to interact directly with the image builder in their code. As long as the desired image builder is part of the active ZenML stack, it will be automatically utilized by any component requiring container image builds. ================================================== === File: docs/book/how-to/manage-zenml-server/README.md === # Manage Your ZenML Server This section provides best practices for upgrading your ZenML server, using it in production, and troubleshooting. It includes recommended upgrade steps and migration guides for transitioning between specific versions. Key Points: - **Upgrading**: Follow the recommended steps for a smooth upgrade process. - **Production Use**: Tips for effectively utilizing ZenML in a production environment. - **Troubleshooting**: Guidance on resolving common issues. - **Migration Guides**: Instructions for moving between certain ZenML versions. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md === ### Best Practices for Upgrading ZenML #### Upgrading Your Server 1. **Data Backups**: - **Database Backup**: Create a backup of your MySQL database before upgrading to allow rollback if needed. - **Automated Backups**: Set up daily automated backups using services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. 2. **Upgrade Strategies**: - **Staged Upgrade**: Use two ZenML server instances (old and new) to migrate services gradually. - **Team Coordination**: Coordinate upgrade timings among teams sharing a server to minimize disruption. - **Separate ZenML Servers**: Consider dedicated servers for different teams to allow flexible upgrade schedules. 3. **Minimizing Downtime**: - **Upgrade Timing**: Schedule upgrades during low-activity periods. - **Avoid Mid-Pipeline Upgrades**: Be cautious of upgrades that might interrupt long-running pipelines. #### Upgrading Your Code 1. **Testing and Compatibility**: - **Local Testing**: Test locally after upgrading (`pip install zenml --upgrade`) and run old pipelines for compatibility. - **End-to-End Testing**: Develop simple tests to ensure compatibility with your pipeline code. - **Artifact Compatibility**: Be cautious with pickle-based materializers; use version-agnostic methods for critical artifacts. Load older artifacts using: ```python from zenml.client import Client artifact = Client().get_artifact_version('YOUR_ARTIFACT_ID') loaded_artifact = artifact.load() ``` 2. 
**Dependency Management**: - **Python Version**: Ensure your Python version is compatible with the new ZenML version (check the installation guide). - **External Dependencies**: Be aware of external dependencies that may not be compatible with the new version. 3. **Handling API Changes**: - **Changelog Review**: Review the changelog for new syntax or breaking changes. - **Migration Scripts**: Use migration scripts for database schema changes when available. By adhering to these best practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server and code. Adapt these guidelines to fit your specific environment and infrastructure needs. ================================================== === File: docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md === # Best Practices for Using ZenML Server in Production ## Overview This guide provides best practices for deploying ZenML server in production environments, focusing on performance, scaling, and security. ## Autoscaling Replicas To handle larger and longer-running pipelines, enable autoscaling based on your deployment environment: ### Kubernetes with Helm Use the following configuration in your Helm chart: ```yaml autoscaling: enabled: true minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 80 ``` ### ECS (AWS) 1. Navigate to your ZenML service in the ECS console. 2. Click "Update Service." 3. Enable autoscaling and set task limits. ### Cloud Run (GCP) 1. Go to your ZenML service in Cloud Run. 2. Click "Edit & Deploy new Revision." 3. Set minimum and maximum instances. ### Docker Compose Scale your service with: ```bash docker compose up --scale zenml-server=N ``` ## High Connection Pool Values Increase the thread pool size for better performance: ```yaml zenml: threadPoolSize: 100 ``` Adjust `zenml.database.poolSize` and `zenml.database.maxOverflow` accordingly. ## Scaling the Backing Database Monitor and scale your database based on: - **CPU Utilization**: Above 50% may require scaling. - **Freeable Memory**: Below 100-200 MB may indicate the need for scaling. ## Setting Up Ingress/Load Balancer Securely expose your ZenML server: ### Kubernetes with Helm Enable ingress: ```yaml zenml: ingress: enabled: true className: "nginx" ``` ### ECS Use Application Load Balancers for traffic routing. ### Cloud Run Utilize Cloud Load Balancing for service traffic. ### Docker Compose Set up an NGINX reverse proxy. ## Monitoring Implement monitoring to ensure smooth operation: ### Kubernetes with Helm Use Prometheus and Grafana. Example query for CPU utilization: ``` sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) ``` ### ECS Utilize CloudWatch for monitoring metrics. ### Cloud Run Use Cloud Monitoring for metrics visibility. ## Backups Establish a backup strategy to protect critical data: - Automated backups with a retention period (e.g., 30 days). - Periodic data exports to external storage (e.g., S3, GCS). - Manual backups before upgrades. This summary encapsulates the essential practices for deploying ZenML in production, ensuring optimal performance, scalability, and data safety. ================================================== === File: docs/book/how-to/manage-zenml-server/upgrade-zenml-server.md === ### ZenML Server Upgrade Guide #### Overview Upgrading your ZenML server varies based on deployment method. Always refer to the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding. 
Upgrade promptly after a new version release to benefit from improvements and fixes. #### Upgrade Methods ##### Docker 1. **Check Data Persistence**: Ensure data is stored on persistent storage or an external MySQL instance. Consider backing up data. 2. **Delete the Existing Container**: ```bash docker ps # Find your container ID docker stop <CONTAINER_ID> docker rm <CONTAINER_ID> ``` 3. **Deploy the New Version**: ```bash docker run -it -d -p 8080:8080 --name <CONTAINER_NAME> zenmldocker/zenml-server:<VERSION> ``` ##### Kubernetes with Helm - **In-Place Upgrade (No Configuration Changes)**: ```bash helm -n <NAMESPACE> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> --reuse-values ``` - **Upgrade with Configuration Changes**: 1. Extract the current configuration: ```bash helm -n <NAMESPACE> get values zenml-server > custom-values.yaml ``` 2. Modify `custom-values.yaml` as needed. 3. Upgrade using the modified values: ```bash helm -n <NAMESPACE> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> -f custom-values.yaml ``` > **Note**: Avoid changing the container image tag in the Helm chart unless you are certain, as the chart is tested with the default image tag. #### Important Notes - **Downgrading**: Not supported and may cause unexpected behavior. - **Python Client Version**: Should match the server version. ================================================== === File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md === # Troubleshooting Tips for ZenML Deployment This document outlines common issues and solutions for deploying ZenML. ## Viewing Logs ### Kubernetes To view logs for the ZenML server in Kubernetes: 1. Check running pods: ```bash kubectl -n <NAMESPACE> get pods ``` 2. If pods aren't running, get logs for all pods: ```bash kubectl -n <NAMESPACE> logs -l app.kubernetes.io/name=zenml ``` 3. For specific container logs (use `zenml-db-init` for failing pods in the `Init` state): ```bash kubectl -n <NAMESPACE> logs -l app.kubernetes.io/name=zenml -c <CONTAINER_NAME> ``` * Use `--tail` to limit log lines or `--follow` for real-time logs. ### Docker To view logs for the ZenML server in Docker: - For `zenml login --local --docker`: ```shell zenml logs -f ``` - For `docker run`: ```shell docker logs zenml -f ``` - For `docker compose`: ```shell docker compose -p zenml logs -f ``` ## Fixing Database Connection Problems Common MySQL connection issues: - **Access Denied**: Check the username/password. - **Can't Connect to MySQL**: Verify the host settings. Test the connection: ```bash mysql -h <HOST> -u <USER> -p ``` *For Kubernetes, use `kubectl port-forward` to connect to MySQL from your local machine.* ## Fixing Database Initialization Problems If migrating from a newer to an older ZenML version results in `Revision not found` errors: 1. Log in to MySQL: ```bash mysql -h <HOST> -u <USER> -p ``` 2. Drop the database: ```sql drop database <DATABASE_NAME>; ``` 3. Create a new database: ```sql create database <DATABASE_NAME>; ``` 4. Restart the Kubernetes pods or Docker container to reinitialize the database. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md === ### Migration Guide: ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 **Warning:** Migrating to `0.30.0` involves non-reversible database changes. Downgrading to `<=0.23.0` is not possible after this migration. If on an older version, first follow the [0.20.0 Migration Guide](migration-zero-twenty.md) to avoid database migration issues. **Key Changes:** - ZenML 0.30.0 removes the `ml-pipelines-sdk` dependency. - Pipeline runs and artifacts are now stored natively in the ZenML database. **Migration Steps:** 1.
Install ZenML 0.30.0: ```bash pip install zenml==0.30.0 zenml version # Should output 0.30.0 ``` The database migration will occur automatically upon executing any `zenml` CLI command after installation. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md === ### Migration Guide: ZenML 0.13.2 to 0.20.0 **Last Updated: 2023-07-24** ZenML 0.20.0 introduces significant architectural changes that may not be backwards compatible. This guide outlines the migration process for existing ZenML stacks and pipelines. #### Key Changes: - **Metadata Store**: ZenML now manages its own metadata, eliminating the need for separate Metadata Store components. If using remote stores, switch to a ZenML server deployment. - **ZenML Dashboard**: A new dashboard is included with all deployments. - **Profiles Removed**: ZenML Profiles are replaced by Projects. Existing profiles must be manually migrated. - **Decoupled Stack Component Configuration**: Stack component configuration is now separate from implementation, requiring updates for custom components. - **Collaborative Features**: The new ZenML server allows sharing of stacks and components among users. #### Migration Steps: 1. **Backup Metadata**: Before upgrading, back up your existing metadata stores. 2. **Upgrade ZenML**: Run `pip install zenml==0.20.0`. 3. **Connect to ZenML Server**: Use `zenml connect` to set up your client. 4. **Migrate Pipeline Runs**: - For SQLite: ```bash zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db ``` - For other stores (e.g., MySQL): ```bash zenml pipeline runs migrate DATABASE_NAME --database_type=mysql --mysql_host=URL/TO/MYSQL --mysql_username=MYSQL_USERNAME --mysql_password=MYSQL_PASSWORD ``` #### New Commands: - **Deploy Server**: `zenml deploy --aws` - **Start Local Server**: `zenml up` - **Check Server Status**: `zenml status` #### Dashboard Access: To launch the ZenML Dashboard: ```bash zenml up ``` Access it at `http://127.0.0.1:8237`. #### Migration of Profiles: 1. Update ZenML to 0.20.0. 2. Connect to your ZenML server. 3. Use: ```bash zenml profile list zenml profile migrate PATH/TO/PROFILE ``` #### Configuration Changes: - **Renamed Classes**: - `Repository` → `Client` - `BaseStepConfig` → `BaseParameters` - **New Configuration Method**: Use `BaseSettings` for all runtime configurations. #### Example Migration for Steps: ```python @step( experiment_tracker="mlflow_stack_comp_name", settings={ "experiment_tracker.mlflow": { "experiment_name": "name", "nested": False } } ) ``` #### Future Changes: - Potential removal of the secrets manager from the stack. - Deprecation of `StepContext`. #### Reporting Bugs: For issues or feature requests, engage with the ZenML community on [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). This guide provides a concise overview of the migration process and key changes in ZenML 0.20.0, ensuring a smooth transition for users. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md === ### ZenML Migration Guide **Overview**: This guide outlines the necessary steps to migrate ZenML code between versions, particularly when breaking changes are introduced. #### Versioning and Migration Types - **Minor Version Changes** (e.g., `0.X` to `0.Y`): May include breaking changes; migration is required. 
- **Major Version Changes** (e.g., `0.39.1` to `0.40.0`): Typically involve significant paradigm shifts; follow specific migration guides. #### Migration Examples - **No Breaking Changes**: `0.40.2` to `0.40.3` - No migration needed. - **Minor Breaking Changes**: `0.40.3` to `0.41.0` - Migration required. - **Major Breaking Changes**: `0.39.1` to `0.40.0` - Major shifts in code usage. #### Major Migration Guides Follow these guides sequentially for major version upgrades: 1. [0.13.2 → 0.20.0](migration-zero-twenty.md) 2. [0.23.0 → 0.30.0](migration-zero-thirty.md) 3. [0.39.1 → 0.41.0](migration-zero-forty.md) 4. [0.58.2 → 0.60.0](migration-zero-sixty.md) #### Release Notes For minor breaking changes, refer to the [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for details on changes introduced in each release. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md === ### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2 Edition) #### Overview ZenML has upgraded to Pydantic v2, introducing critical updates. While user experience remains largely unchanged, stricter validation may lead to new errors. For issues, contact us on [GitHub](https://github.com/zenml-io/zenml) or [Slack](https://zenml.io/slack-invite). #### Key Dependency Changes - **SQLModel**: Upgraded from `0.0.8` to `0.0.18` for compatibility with Pydantic v2. - **SQLAlchemy**: Upgraded from v1 to v2. Users of SQLAlchemy should review the [migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html). #### Pydantic v2 Features - Enhanced performance using Rust. - New features in model design, configuration, validation, and serialization. Refer to the [Pydantic migration guide](https://docs.pydantic.dev/2.7/migration/) for details. #### Integration Changes - **Airflow**: Removed dependencies due to Airflow's reliance on SQLAlchemy v1. Use ZenML to create pipelines separately from Airflow. - **AWS**: Updated `sagemaker` to version `2.172.0` to support `protobuf` 4. - **Evidently**: Updated to versions between `0.4.16` and `0.4.22` for Pydantic v2 compatibility. - **Feast**: Removed extra `redis` dependency for compatibility. - **GCP & Kubeflow**: Upgraded `kfp` to v2, eliminating Pydantic dependencies. Expect functional changes in vertex step operator and orchestrator. See the [migration guide](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). - **Great Expectations**: Updated dependency to `great-expectations>=0.17.15,<1.0` for Pydantic v2 support. - **MLflow**: Compatible with both Pydantic versions, but may downgrade Pydantic to v1 during installation. Expect deprecation warnings. - **Label Studio**: Updated to version 1.0 supporting Pydantic v2. - **Skypilot**: `skypilot[azure]` integration deactivated due to incompatibility with `azurecli`. Users should remain on the previous ZenML version. - **TensorFlow**: Requires `tensorflow>=2.12.0` due to dependency changes. Issues may arise on Python 3.8; consider upgrading Python. - **Tekton**: Updated to use `kfp` v2 for compatibility with Pydantic v2. #### Important Note Upgrading to ZenML 0.60.0 may cause dependency issues, especially with integrations not supporting Pydantic v2. It is recommended to set up a fresh Python environment for the upgrade. 
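Because validation is stricter and several v1 APIs were renamed in Pydantic v2, user code that defines its own Pydantic models (for example custom configuration objects passed to steps) may need small updates. A minimal, hypothetical sketch of the most common renames (`TrainingConfig` is not a ZenML class, just an illustration):

```python
from pydantic import BaseModel, field_validator


class TrainingConfig(BaseModel):
    """Hypothetical user-defined config, shown only to illustrate Pydantic v2 idioms."""

    learning_rate: float = 0.001
    epochs: int = 10

    # Pydantic v1's @validator becomes @field_validator in v2.
    @field_validator("learning_rate")
    @classmethod
    def check_positive(cls, value: float) -> float:
        if value <= 0:
            raise ValueError("learning_rate must be positive")
        return value


config = TrainingConfig(epochs=5)
# v1's .dict() / .json() are replaced by model_dump() / model_dump_json() in v2.
print(config.model_dump())
```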
================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md === # Migration Guide: ZenML 0.39.1 to 0.41.0 ## Overview ZenML versions 0.40.0 and 0.41.0 introduced a new syntax for defining steps and pipelines. While the old syntax is still functional, it is deprecated and will be removed in future releases. ## Old Syntax vs. New Syntax ### Step Definition **Old Syntax:** ```python from zenml.steps import BaseParameters, Output, StepContext, step class MyStepParameters(BaseParameters): param_1: int param_2: Optional[float] = None @step def my_step(params: MyStepParameters, context: StepContext) -> Output(int_output=int, str_output=str): result = int(params.param_1 * (params.param_2 or 1)) return result, context.get_output_artifact_uri() ``` **New Syntax:** ```python from typing import Annotated, Optional, Tuple from zenml import get_step_context, step @step def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: result = int(param_1 * (param_2 or 1)) return result, get_step_context().get_output_artifact_uri() ``` ### Pipeline Definition **Old Syntax:** ```python from zenml.pipelines import pipeline @pipeline def my_pipeline(my_step): my_step() pipeline_instance = my_pipeline(my_step=my_step(params=MyStepParameters(param_1=17))) ``` **New Syntax:** ```python from zenml import pipeline @pipeline def my_pipeline(): my_step(param_1=17) my_pipeline() ``` ### Configuration and Execution **Old Syntax:** ```python pipeline_instance.configure(enable_cache=False) pipeline_instance.run(schedule=schedule) ``` **New Syntax:** ```python my_pipeline = my_pipeline.with_options(enable_cache=False, schedule=schedule) my_pipeline() ``` ### Fetching Pipeline Runs **Old Syntax:** ```python last_run = pipeline_instance.get_runs()[0] int_output = last_run.get_step["my_step"].outputs["int_output"].read() ``` **New Syntax:** ```python last_run = my_pipeline.last_run int_output = last_run.steps["my_step"].outputs["int_output"].load() ``` ### Step Execution Order **Old Syntax:** ```python @pipeline def my_pipeline(step_1, step_2, step_3): step_3.after(step_1) ``` **New Syntax:** ```python @pipeline def my_pipeline(): step_3(after=["step_1", "step_2"]) ``` ### Multiple Outputs **Old Syntax:** ```python @step def my_step() -> Output(int_output=int, str_output=str): ... ``` **New Syntax:** ```python @step def my_step() -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: ... ``` ### Accessing Run Information **Old Syntax:** ```python @step def my_step(context: StepContext) -> Any: step_name = context.step_name ``` **New Syntax:** ```python @step def my_step() -> Any: context = get_step_context() step_name = context.step_name ``` ## Conclusion This guide outlines the key changes in syntax and structure when migrating from ZenML version 0.39.1 to 0.41.0. For more details on specific functionalities, refer to the respective documentation sections. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md === ### ZenML Server Connection Guide To connect to the ZenML server using the ZenML CLI and web-based login, execute the following command: ```bash zenml login https://... ``` This command initiates a browser-based validation process. You can choose to trust the device, which issues a 30-day token, or not trust it, which issues a 24-hour token. 
**Note:** Device management for ZenML Pro tenants is not yet supported but will be available soon. To view all authorized devices, use: ```bash zenml authorized-device list ``` To inspect a specific device: ```bash zenml authorized-device describe <DEVICE_ID> ``` For added security, invalidate a token with: ```bash zenml authorized-device lock <DEVICE_ID> ``` ### Summary Steps: 1. Run `zenml login <URL>` to connect. 2. Decide whether to trust the device. 3. List authorized devices with `zenml authorized-device list`. 4. Lock a device with `zenml authorized-device lock <DEVICE_ID>`. ### Important Notice Using the ZenML CLI ensures secure interaction with your ZenML tenants. Regularly manage device trust levels and revoke access by locking devices as needed, as each token can grant access to sensitive data and infrastructure. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/README.md === ### Connecting to ZenML Once ZenML is deployed, there are multiple methods to connect to it. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) For detailed connection methods, refer to the relevant sections in the user guide. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-an-api-token.md === ### Connect with an API Token API tokens authenticate with the ZenML server for temporary automation tasks; they are valid for a maximum of 1 hour and scoped to your user account. #### Generating an API Token To generate a new API token: 1. Go to the server's Settings page in your ZenML dashboard. 2. Select "API Tokens" from the left sidebar. 3. Click "Create new token." A dialog will display your new API token. #### Programmatic Access Use the generated API tokens for programmatic access to the ZenML server's REST API. This method is ideal for quick access without using the ZenML CLI or Python client. For detailed instructions, refer to the [API reference section](../../../reference/api-reference.md#using-a-short-lived-api-token). ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md === ### Summary of ZenML Service Account and API Key Documentation #### Overview To authenticate with a ZenML server in non-interactive environments (e.g., CI/CD, serverless functions), create a service account and use its API key. #### Creating a Service Account Use the following command to create a service account and generate an API key: ```bash zenml service-account create <SERVICE_ACCOUNT_NAME> ``` The API key is displayed only once and cannot be retrieved later. #### Connecting to ZenML Server You can connect to the ZenML server using the API key via: 1. **CLI Method**: ```bash zenml login https://... --api-key ``` 2. **Environment Variables** (recommended for automated environments): ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY=<API_KEY> ``` Setting these variables allows immediate interaction without needing to run `zenml login`.
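For example, in a CI job the variables can be exported before any ZenML command runs; the server URL, API key, and script name below are placeholders:

```bash
# Non-interactive authentication: no `zenml login` step required
export ZENML_STORE_URL=https://<YOUR_ZENML_SERVER>
export ZENML_STORE_API_KEY=<API_KEY>

# Subsequent CLI and Python invocations talk to the server directly
zenml status
python run_pipeline.py
```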
#### Managing Service Accounts and API Keys - **List Service Accounts**: ```bash zenml service-account list ``` - **List API Keys**: ```bash zenml service-account api-key <SERVICE_ACCOUNT_NAME> list ``` - **Describe Service Account**: ```bash zenml service-account describe <SERVICE_ACCOUNT_NAME> ``` - **Describe API Key**: ```bash zenml service-account api-key <SERVICE_ACCOUNT_NAME> describe <API_KEY_NAME> ``` #### Rotating API Keys To enhance security, regularly rotate API keys: ```bash zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME> ``` To retain the old API key for a specified duration (e.g., 60 minutes): ```bash zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME> --retain 60 ``` #### Deactivating Service Accounts or API Keys To deactivate a service account or API key: ```bash zenml service-account update <SERVICE_ACCOUNT_NAME> --active false zenml service-account api-key <SERVICE_ACCOUNT_NAME> update <API_KEY_NAME> --active false ``` Deactivation takes immediate effect. #### Steps Summary 1. Create a service account and API key. 2. Connect using the API key. 3. List service accounts and API keys. 4. Rotate API keys regularly. 5. Deactivate unused accounts or keys. #### Programmatic Access Use the API key to obtain short-lived tokens for programmatic access to the ZenML REST API. Detailed documentation is available in the API reference section. #### Security Notice Regularly rotate API keys and deactivate or delete unused service accounts and keys to protect your data and infrastructure. ================================================== === File: docs/book/how-to/infrastructure-deployment/README.md === # Infrastructure and Deployment This section outlines the infrastructure setup and deployment processes for ZenML. ## Key Components - **Cloud Providers**: ZenML supports multiple cloud providers for deployment, including AWS, GCP, and Azure. - **Infrastructure as Code (IaC)**: Use tools like Terraform or CloudFormation for managing infrastructure. ## Deployment Steps 1. **Environment Setup**: Configure your cloud environment and ensure necessary permissions. 2. **Resource Provisioning**: Use IaC tools to provision required resources (e.g., VMs, storage). 3. **ZenML Installation**: Install ZenML using pip: ```bash pip install zenml ``` 4. **Pipeline Configuration**: Define and configure pipelines using ZenML’s API. 5. **Execution**: Run pipelines and monitor their execution through the ZenML dashboard. ## Best Practices - **Version Control**: Keep infrastructure code in version control systems (e.g., Git). - **Monitoring**: Implement monitoring for deployed resources to ensure reliability. This summary provides a concise overview of the infrastructure and deployment processes in ZenML, highlighting essential components and steps. ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/README.md === --- icon: network-wired description: > Leverage Infrastructure as Code to manage your ZenML stacks and components. --- # Integrate with Infrastructure as Code Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure through code rather than manual processes. This section covers how to integrate ZenML with popular IaC tools, specifically [Terraform](https://www.terraform.io/).
![ZenML stack on Terraform Registry](../../../.gitbook/assets/terraform_providers_screenshot.png) ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/terraform-stack-management.md === ### Summary: Registering Existing Infrastructure with ZenML - A Guide for Terraform Users #### Overview This guide assists advanced users in integrating ZenML with their existing Terraform infrastructure. It focuses on managing custom Terraform code using the ZenML provider. #### Two-Phase Approach 1. **Infrastructure Deployment**: Creating cloud resources. 2. **ZenML Registration**: Registering these resources as ZenML stack components. #### Phase 1: Infrastructure Deployment You may already have existing Terraform configurations, such as: ```hcl resource "google_storage_bucket" "ml_artifacts" { name = "company-ml-artifacts" location = "US" } resource "google_artifact_registry_repository" "ml_containers" { repository_id = "ml-containers" format = "DOCKER" } ``` #### Phase 2: ZenML Registration **Setup the ZenML Provider** Configure the ZenML provider to connect to your ZenML server: ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } } } provider "zenml" { # Configuration options from environment variables } ``` Generate an API key with: ```bash zenml service-account create ``` **Create Service Connectors** Service connectors manage authentication: ```hcl resource "zenml_service_connector" "gcp_connector" { name = "gcp-${var.environment}-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id service_account_json = file("service-account.json") } } ``` **Register Stack Components** Register various components: ```hcl locals { component_configs = { artifact_store = { type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } } container_registry = { type = "container_registry" flavor = "gcp" configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } } orchestrator = { type = "orchestrator" flavor = "vertex" configuration = { project = var.project_id region = var.region } } } } resource "zenml_stack_component" "components" { for_each = local.component_configs name = "existing-${each.key}" type = each.value.type flavor = each.value.flavor configuration = each.value.configuration connector_id = zenml_service_connector.gcp_connector.id } ``` **Assemble the Stack** Combine components into a stack: ```hcl resource "zenml_stack" "ml_stack" { name = "${var.environment}-ml-stack" components = { for k, v in zenml_stack_component.components : k => v.id } } ``` #### Practical Walkthrough: Registering Existing GCP Infrastructure **Prerequisites** - GCS bucket for artifacts - Artifact Registry repository - Service account for ML operations - Vertex AI enabled **Variables Configuration** Define variables in `variables.tf`: ```hcl variable "zenml_server_url" { type = string } variable "zenml_api_key" { type = string, sensitive = true } variable "project_id" { type = string } variable "region" { type = string, default = "us-central1" } variable "environment" { type = string } variable "gcp_service_account_key" { type = string, sensitive = true } ``` **Main Configuration** In `main.tf`, configure providers and resources: ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } google = { source = "hashicorp/google" } } } 
provider "zenml" { server_url = var.zenml_server_url api_key = var.zenml_api_key } provider "google" { project = var.project_id region = var.region } resource "google_storage_bucket" "artifacts" { name = "${var.project_id}-zenml-artifacts-${var.environment}" location = var.region } resource "google_artifact_registry_repository" "containers" { location = var.region repository_id = "zenml-containers-${var.environment}" format = "DOCKER" } resource "zenml_service_connector" "gcp" { name = "gcp-${var.environment}" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id region = var.region service_account_json = var.gcp_service_account_key } } resource "zenml_stack_component" "artifact_store" { name = "gcs-${var.environment}" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${google_storage_bucket.artifacts.name}/artifacts" } connector_id = zenml_service_connector.gcp.id } resource "zenml_stack_component" "container_registry" { name = "gcr-${var.environment}" type = "container_registry" flavor = "gcp" configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.containers.repository_id}" } connector_id = zenml_service_connector.gcp.id } resource "zenml_stack_component" "orchestrator" { name = "vertex-${var.environment}" type = "orchestrator" flavor = "vertex" configuration = { location = var.region synchronous = true } connector_id = zenml_service_connector.gcp.id } resource "zenml_stack" "gcp_stack" { name = "gcp-${var.environment}" components = { artifact_store = zenml_stack_component.artifact_store.id container_registry = zenml_stack_component.container_registry.id orchestrator = zenml_stack_component.orchestrator.id } } ``` **Outputs Configuration** Define outputs in `outputs.tf`: ```hcl output "stack_id" { value = zenml_stack.gcp_stack.id } output "stack_name" { value = zenml_stack.gcp_stack.name } output "artifact_store_path" { value = "${google_storage_bucket.artifacts.name}/artifacts" } output "container_registry_uri" { value = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.containers.repository_id}" } ``` **terraform.tfvars Configuration** Create a `terraform.tfvars` file: ```hcl zenml_server_url = "https://your-zenml-server.com" project_id = "your-gcp-project-id" region = "us-central1" environment = "dev" ``` Store sensitive variables in environment variables: ```bash export TF_VAR_zenml_api_key="your-zenml-api-key" export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json) ``` #### Usage Instructions 1. Initialize Terraform: ```bash terraform init ``` 2. Install ZenML integrations: ```bash zenml integration install gcp ``` 3. Review planned changes: ```bash terraform plan ``` 4. Apply configuration: ```bash terraform apply ``` 5. Set the active stack: ```bash zenml stack set $(terraform output -raw stack_name) ``` 6. Verify configuration: ```bash zenml stack describe ``` #### Best Practices - Use appropriate IAM roles and permissions. - Follow security practices for credential handling. - Consider using Terraform workspaces for environment management. - Regularly back up Terraform state files. - Version control Terraform configurations, excluding sensitive files. For more details, refer to the [ZenML provider](https://registry.terraform.io/providers/zenml-io/zenml/latest). 
================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/best-practices.md === # Best Practices for Using IaC with ZenML ## Overview This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform. It addresses challenges such as supporting multiple ML teams, maintaining security, and enabling rapid iteration. ## ZenML Approach ZenML utilizes stack components as abstractions over infrastructure resources, allowing for a component-based architecture that promotes reusability and consistency. ### Part 1: Stack Component Architecture **Problem:** Different teams require varied ML infrastructure configurations. **Solution:** Create reusable modules that correspond to ZenML stack components. **Base Infrastructure Example:** ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } google = { source = "hashicorp/google" } } } resource "random_id" "suffix" { byte_length = 6 } module "base_infrastructure" { source = "./modules/base_infra" environment = var.environment project_id = var.project_id region = var.region resource_prefix = "zenml-${var.environment}-${random_id.suffix.hex}" } resource "zenml_service_connector" "base_connector" { name = "${var.environment}-base-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id region = var.region service_account_json = module.base_infrastructure.service_account_key } } resource "zenml_stack_component" "artifact_store" { name = "${var.environment}-artifact-store" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${module.base_infrastructure.artifact_store_bucket}/artifacts" } connector_id = zenml_service_connector.base_connector.id } resource "zenml_stack" "base_stack" { name = "${var.environment}-base-stack" components = { artifact_store = zenml_stack_component.artifact_store.id } } ``` Teams can extend this base stack with specific components for their needs. ### Part 2: Environment Management and Authentication **Problem:** Different environments require distinct configurations and authentication methods. **Solution:** Use a flexible service connector setup that adapts to each environment. ```hcl locals { env_config = { dev = { machine_type = "n1-standard-4", gpu_enabled = false, auth_method = "service-account", auth_configuration = { service_account_json = file("dev-sa.json") } } prod = { machine_type = "n1-standard-8", gpu_enabled = true, auth_method = "external-account", auth_configuration = { external_account_json = file("prod-sa.json") } } } } resource "zenml_service_connector" "env_connector" { name = "${var.environment}-connector" type = "gcp" auth_method = local.env_config[var.environment].auth_method dynamic "configuration" { for_each = try(local.env_config[var.environment].auth_configuration, {}) content { key = configuration.key; value = configuration.value } } } resource "zenml_stack_component" "env_orchestrator" { name = "${var.environment}-orchestrator" type = "orchestrator" flavor = "vertex" configuration = { location = var.region machine_type = local.env_config[var.environment].machine_type gpu_enabled = local.env_config[var.environment].gpu_enabled } connector_id = zenml_service_connector.env_connector.id } ``` ### Part 3: Resource Sharing and Isolation **Problem:** Projects require strict isolation to prevent unauthorized access. **Solution:** Implement resource scoping with project isolation. 
```hcl locals { project_paths = { fraud_detection = "projects/fraud_detection/${var.environment}" recommendation = "projects/recommendation/${var.environment}" } } resource "zenml_stack_component" "project_artifact_stores" { for_each = local.project_paths name = "${each.key}-artifact-store" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${var.shared_bucket}/${each.value}" } connector_id = zenml_service_connector.env_connector.id } resource "zenml_stack" "project_stacks" { for_each = local.project_paths name = "${each.key}-stack" components = { artifact_store = zenml_stack_component.project_artifact_stores[each.key].id } } ``` ### Part 4: Advanced Stack Management Practices 1. **Stack Component Versioning** ```hcl locals { stack_version = "1.2.0" } resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}" } ``` 2. **Service Connector Management** ```hcl resource "zenml_service_connector" "env_connector" { name = "${var.environment}-${var.purpose}-connector" type = var.connector_type auth_method = var.environment == "prod" ? "workload-identity" : "service-account" } ``` 3. **Component Configuration Management** ```hcl locals { base_configs = { orchestrator = { location = var.region, project = var.project_id } } } resource "zenml_stack_component" "configured_component" { name = "${var.environment}-${var.component_type}" type = var.component_type configuration = merge(local.base_configs[var.component_type], try(local.env_configs[var.environment][var.component_type], {})) } ``` 4. **Stack Organization and Dependencies** ```hcl module "ml_stack" { source = "./modules/ml_stack" depends_on = [module.base_infrastructure] components = { artifact_store = module.storage.artifact_store_id } } ``` 5. **State Management** ```hcl terraform { backend "gcs" { prefix = "terraform/state" } } data "terraform_remote_state" "infrastructure" { backend = "gcs" config = { bucket = var.state_bucket } } ``` ## Conclusion Using ZenML with Terraform allows for a flexible, maintainable, and secure ML infrastructure. Following these best practices ensures a clean and scalable codebase while adhering to infrastructure-as-code principles. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/service-connectors-guide.md === # Service Connectors Guide Summary This guide details the management of Service Connectors in ZenML for connecting to external resources. Key sections include: ## Getting Started - **Terminology**: Familiarize yourself with key terms related to Service Connectors, such as Service Connector Types, Resource Types, and Resource Names. - **Service Connector Types**: Understand various implementations (e.g., AWS, GCP) and their capabilities. Use CLI commands like `zenml service-connector list-types` to explore available types. ## Key Concepts - **Resource Types**: Classifications of resources (e.g., `kubernetes-cluster`, `docker-registry`) that unify access methods across different vendors. - **Resource Names**: Unique identifiers for resource instances, allowing access through Service Connectors. ## Service Connector Management - **Registering Service Connectors**: Use commands like `zenml service-connector register --type ` to set up connectors. Options for multi-type and multi-instance configurations are available. - **Auto-configuration**: Automatically extract credentials from local environments using commands like `zenml service-connector register --auto-configure`. 
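In generic form, a registration flow might look like the following sketch (connector, type, and resource names are placeholders; concrete examples follow below):

```sh
# Register a connector, auto-configuring it from local cloud credentials
zenml service-connector register <CONNECTOR_NAME> --type <CONNECTOR_TYPE> --auto-configure

# Check the configuration and see which resources it can reach
zenml service-connector verify <CONNECTOR_NAME>
zenml service-connector list-resources --resource-type <RESOURCE_TYPE>
```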
## Connecting Stack Components - Connect Stack Components to resources using commands like: ```sh zenml artifact-store connect --connector ``` - Use interactive CLI mode for guided setup. ## Verification and Discovery - Verify Service Connectors with `zenml service-connector verify ` to ensure valid configurations and accessible resources. - Discover available resources using: ```sh zenml service-connector list-resources ``` ## Examples - **Registering a Multi-Type Connector**: ```sh zenml service-connector register aws-multi-type --type aws --auto-configure ``` - **Connecting to a Resource**: ```sh zenml artifact-store connect s3-zenfiles --connector aws-multi-type ``` ## End-to-End Examples For comprehensive examples, refer to specific documentation for AWS, GCP, and Azure Service Connectors. This summary encapsulates the essential information needed to effectively manage Service Connectors in ZenML, ensuring users can connect to and utilize external resources efficiently. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/gcp-service-connector.md === ### GCP Service Connector Documentation Summary The **ZenML GCP Service Connector** enables authentication and access to various GCP resources, including GCS buckets, GKE clusters, and GCR container registries. It supports multiple authentication methods such as GCP user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication. By default, it issues short-lived OAuth 2.0 tokens for enhanced security. #### Key Features: - **Authentication Methods**: - **Implicit Authentication**: Uses Application Default Credentials (ADC) without explicit configuration. - **GCP User Account**: Utilizes long-lived credentials, generating temporary OAuth 2.0 tokens by default. - **GCP Service Account**: Requires a service account key JSON; also generates temporary tokens. - **Service Account Impersonation**: Generates temporary STS credentials by impersonating another service account. - **External Account (Workload Identity)**: Authenticates using AWS IAM or Azure AD credentials. - **OAuth 2.0 Token**: Requires manual token management. - **Resource Types**: - **Generic GCP Resource**: For any GCP service using OAuth 2.0 tokens. - **GCS Bucket**: Requires specific permissions such as `storage.buckets.list`. - **GKE Cluster**: Requires permissions like `container.clusters.list`. - **GAR/GCR Container Registry**: Supports both Google Artifact Registry and legacy Google Container Registry. #### Installation: To install the GCP Service Connector: - Use PyPI: ```bash pip install "zenml[connectors-gcp]" ``` - Or install the entire GCP integration: ```bash zenml integration install gcp ``` #### Configuration Examples: - **Registering a Service Connector**: ```bash zenml service-connector register gcp-implicit --type gcp --auth-method implicit --auto-configure ``` - **Listing Resource Types**: ```bash zenml service-connector list-types --type gcp ``` - **Auto-Configuration**: ```bash zenml service-connector register gcp-auto --type gcp --auto-configure ``` #### Local Client Configuration: The GCP Service Connector can configure local clients (e.g., `gcloud`, `kubectl`, and Docker CLI) with credentials that have a short lifetime, requiring regular refreshes for security. 
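For instance, pointing the local `kubectl` at a GKE cluster through a registered GCP connector might look like this (connector and cluster names are placeholders; the `login` command is the same one shown in the other connector guides):

```sh
# Configure the local Kubernetes CLI with short-lived credentials from the connector
zenml service-connector login <GCP_CONNECTOR_NAME> --resource-type kubernetes-cluster --resource-id <GKE_CLUSTER_NAME>
```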
#### Stack Components: The GCP Service Connector can connect various ZenML Stack Components to GCP resources, such as: - GCS Artifact Store - Kubernetes Orchestrator - GCP Container Registry #### End-to-End Examples: 1. **Multi-Type GCP Service Connector**: - Connects a GKE Kubernetes cluster, GCS bucket, and GCR registry. - Example command to register: ```bash zenml service-connector register gcp-demo-multi --type gcp --auto-configure ``` 2. **Single-Instance GCP Service Connectors**: - Each Stack Component has its own service connector for specific resources. - Example command to register a GCS bucket connector: ```bash zenml service-connector register gcs-zenml-bucket-sl --type gcp --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl --auto-configure ``` #### Conclusion: The ZenML GCP Service Connector provides a streamlined way to connect and manage GCP resources securely, facilitating the integration of various Stack Components within ZenML. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/README.md === ### ZenML Service Connectors Overview **Purpose**: ZenML Service Connectors facilitate secure connections between your ZenML deployment and various cloud providers and infrastructure services (e.g., AWS, GCP, Azure, Kubernetes). #### Key Challenges - MLOps platforms often require multiple third-party libraries and services, necessitating secure and uninterrupted access to various resources. - Authentication and authorization mechanisms can be complex, especially when services need to interact with each other (e.g., Kubernetes accessing AWS S3). #### Solution: ZenML Service Connectors - Service Connectors abstract the complexity of authentication and authorization, allowing developers to focus on coding without managing security intricacies directly. - They handle credential validation and generate short-lived tokens, enhancing security by preventing long-lived credentials from being exposed. #### Use Case Example: Connecting to AWS S3 1. **List Available Service Connector Types**: ```bash zenml service-connector list-types ``` 2. **Register an AWS Service Connector**: Automatically configure using local AWS credentials: ```bash zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket ``` 3. **Connect an S3 Artifact Store**: ```bash zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-s3 ``` 4. **Example Pipeline**: ```python from zenml import step, pipeline @step def simple_step_one() -> str: return "Hello World!" @step def simple_step_two(msg: str) -> None: print(msg) @pipeline def simple_pipeline() -> None: message = simple_step_one() simple_step_two(msg=message) if __name__ == "__main__": simple_pipeline() ``` 5. **Run the Pipeline**: ```bash python run.py ``` #### Authentication Methods Supported by AWS Service Connector - Implicit, secret-key, STS token, IAM role, session token, federation token. - Automatically generates temporary STS tokens with minimal permissions for enhanced security. #### Security Considerations - Avoid embedding credentials directly in Stack Components; use Service Connectors to manage them securely. - Regularly rotate credentials and ensure proper permissions are set for the resources accessed. 
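As a periodic check that a connector's credentials remain valid and properly scoped, a verification step along these lines can be scripted (reusing the `aws-s3` connector registered in the example above):

```sh
# Re-validate the connector and confirm it can still reach the S3 bucket
zenml service-connector verify aws-s3 --resource-type s3-bucket
```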
### Additional Resources - [Service Connector Guide](./service-connectors-guide.md) - [Security Best Practices](./best-security-practices.md) - [Docker Service Connector](./docker-service-connector.md) - [Kubernetes Service Connector](./kubernetes-service-connector.md) - [AWS Service Connector](./aws-service-connector.md) - [GCP Service Connector](./gcp-service-connector.md) - [Azure Service Connector](./azure-service-connector.md) This summary encapsulates the essential details about ZenML Service Connectors, their purpose, usage, and security practices without losing critical information. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/best-security-practices.md === ### Summary of Best Practices for Authentication Methods in Service Connectors Service Connectors for cloud providers support various authentication methods, though no single standard exists. This documentation outlines best practices for selecting authentication methods based on identified patterns. #### General Recommendations - **Avoid Primary Account Passwords**: Use alternative credentials like session tokens, API keys, or API tokens instead of primary account passwords. If passwords must be used, limit their exposure to trusted environments. #### Authentication Methods 1. **Username and Password** - Least secure method; avoid sharing within teams or using for automated workloads. - Cloud platforms typically require exchanging passwords for long-lived credentials (e.g., API keys). 2. **Implicit Authentication** - Provides immediate access to cloud resources using locally stored credentials or environment variables. - Disabled by default; requires explicit enabling. - Not recommended for portability or reproducibility. 3. **Long-lived Credentials (API Keys, Account Keys)** - Preferred for production environments; allows for sharing without exposing primary credentials. - Different cloud providers have specific commands for generating these credentials: - AWS: `aws configure` - GCP: `gcloud auth application-default login` - Azure: `az login` - Use service credentials over user credentials to enforce the least-privilege principle. 4. **Generating Temporary and Down-scoped Credentials** - Temporary credentials limit exposure of long-lived credentials and are issued on a need basis. - Down-scoped credentials restrict permissions to only what is necessary for the task. 5. **Impersonating Accounts and Assuming Roles** - Provides flexibility and control but requires setup of multiple permission-bearing accounts. - Involves configuring a Service Connector with long-lived credentials and provisioning secondary entities (IAM roles or service accounts) for access. 6. **Short-lived Credentials** - Temporary credentials that expire quickly; impractical for long-term use but useful for granting temporary access without exposing long-lived credentials. 
### Example Commands - **Registering a GCP Implicit Authentication Connector**: ```sh zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core ``` - **Registering an AWS Service Connector with Federation Token**: ```sh zenml service-connector register aws-federation-multi --type aws --auth-method=federation-token --auto-configure ``` - **Using GCP Account Impersonation**: ```sh zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl ``` ### Conclusion Choosing the right authentication method is crucial for security and usability in cloud environments. Prioritize long-lived credentials, use temporary credentials when necessary, and consider impersonation for enhanced security and flexibility. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/hyperai-service-connector.md === ### HyperAI Service Connector Documentation Summary The ZenML HyperAI Service Connector enables authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to linked Stack Components. #### Command to List Connector Types ```shell $ zenml service-connector list-types --type hyperai ``` #### Connector Details | Name | Type | Resource Types | Auth Methods | Local | Remote | |--------------------------|-----------|---------------------|----------------|-------|--------| | HyperAI Service Connector | 🤖 hyperai | 🤖 hyperai-instance | rsa-key | ✅ | ✅ | | | | | dsa-key | | | | | | | ecdsa-key | | | | | | | ed25519-key | | | #### Prerequisites - Install the HyperAI integration: ```shell zenml integration install hyperai ``` #### Resource Types - Supports HyperAI instances. #### Authentication Methods ZenML establishes an SSH connection to HyperAI instances using: 1. RSA key 2. DSA key 3. ECDSA key 4. ED25519 key **Warning:** SSH private keys are long-lived credentials that grant unrestricted access to HyperAI instances and will be shared with all clients running pipelines. #### Configuration Requirements - Must provide at least one `hostname` and `username`. - Optionally, an `ssh_passphrase` can be included. #### Usage Options 1. Create separate service connectors for each HyperAI instance with different SSH keys. 2. Use a single SSH key for multiple instances, selecting the instance during the HyperAI orchestrator component creation. #### Auto-configuration - The Service Connector does not support auto-discovery of authentication credentials. Feedback for this feature can be provided via [Slack](https://zenml.io/slack) or by creating an issue on [GitHub](https://github.com/zenml-io/zenml/issues). #### Stack Components Usage The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/aws-service-connector.md === ### AWS Service Connector Overview The **ZenML AWS Service Connector** enables authentication and access to AWS resources such as S3 buckets, EKS clusters, and ECR registries. It supports various authentication methods, including long-lived AWS secret keys, IAM roles, STS tokens, and implicit authentication. 
The connector generates temporary STS tokens with minimal permissions and can auto-configure credentials from the AWS CLI. #### Key Features: - **Resource Types Supported**: - **Generic AWS Resource**: Access any AWS service using a pre-configured boto3 session. - **S3 Bucket**: Requires specific IAM permissions for S3 actions. - **EKS Kubernetes Cluster**: Requires permissions to list and describe clusters. - **ECR Container Registry**: Requires permissions to manage ECR repositories. #### Authentication Methods: 1. **Implicit Authentication**: Uses environment variables or IAM roles. Requires enabling via `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. 2. **AWS Secret Key**: Long-lived credentials; not recommended for production. 3. **AWS STS Token**: Temporary tokens; requires manual refresh. 4. **AWS IAM Role**: Assumes a role to generate temporary STS tokens. 5. **AWS Session Token**: Generates temporary session tokens for IAM users. 6. **AWS Federation Token**: Generates tokens for federated users. #### Auto-Configuration: The connector can auto-discover credentials set up by the AWS CLI. The default profile is used unless specified otherwise. #### Local Client Configuration: The connector can configure local AWS CLI, Kubernetes `kubectl`, and Docker CLI with credentials extracted from the AWS Service Connector. ### Example Commands #### List AWS Service Connector Types ```shell $ zenml service-connector list-types --type aws ``` #### Register AWS Service Connector with Implicit Authentication ```shell AWS_PROFILE=connectors zenml service-connector register aws-implicit --type aws --auth-method implicit --region=us-east-1 ``` #### Verify Access to S3 Buckets ```shell AWS_PROFILE=connectors zenml service-connector verify aws-implicit --resource-type s3-bucket ``` #### Register an S3 Artifact Store ```shell zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-demo-multi ``` #### Register and Connect EKS Orchestrator ```shell zenml orchestrator register eks-zenml-zenhacks --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads zenml orchestrator connect eks-zenml-zenhacks --connector aws-demo-multi ``` ### End-to-End Example 1. **Configure AWS CLI** with valid credentials. 2. **Register Multi-Type AWS Service Connector**: ```shell AWS_PROFILE=connectors zenml service-connector register aws-demo-multi --type aws --auto-configure ``` 3. **List Accessible Resources**: ```shell zenml service-connector list-resources --resource-type s3-bucket ``` 4. **Run a Simple Pipeline** to validate the setup. This concise documentation provides the essential steps and commands to configure and use the ZenML AWS Service Connector effectively while maintaining security and functionality. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/docker-service-connector.md === ### Docker Service Connector Overview The ZenML Docker Service Connector enables authentication with Docker/OCI container registries and manages Docker clients for these registries. It provides pre-authenticated Python clients for Stack Components. 
#### Command to List Docker Service Connector Types ```shell zenml service-connector list-types --type docker ``` **Output Example:** ``` ┏━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓ ┃ NAME │ TYPE │ RESOURCE TYPES │ AUTH METHODS │ LOCAL │ REMOTE ┃ ┠──────────────────────────┼───────────┼────────────────────┼──────────────┼───────┼────────┨ ┃ Docker Service Connector │ 🐳 docker │ 🐳 docker-registry │ password │ ✅ │ ✅ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛ ``` ### Prerequisites - No additional Python packages are needed; all are included in the ZenML package. - Docker must be installed in environments where images are built and pushed. ### Resource Types The connector supports authentication to Docker/OCI registries, identified by the `docker-registry` resource type. Supported formats include: - DockerHub: `docker.io` or `https://index.docker.io/v1/` - Generic OCI registry: `https://host:port/` ### Authentication Methods Authentication uses a username and password or access token, with a preference for API tokens when available. #### Command to Register DockerHub Connector ```sh zenml service-connector register dockerhub --type docker -in ``` **Example Command Output:** ``` Please enter a name for the service connector [dockerhub]: ... Successfully registered service connector `dockerhub` with access to: ┏━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠────────────────────┼────────────────┨ ┃ 🐳 docker-registry │ docker.io ┃ ┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ ``` **Warning:** Credentials configured will be directly distributed to clients without generating short-lived credentials. ### Auto-configuration The connector does not support auto-discovery of credentials from local Docker clients. Feedback is welcome for this feature. ### Local Client Provisioning To configure the local Docker client with credentials: ```sh zenml service-connector login dockerhub ``` **Example Command Output:** ``` WARNING! Your password will be stored unencrypted in /home/stefan/.docker/config.json. ... The 'dockerhub' connector was used to successfully configure the local Docker client. ``` ### Stack Components Usage The Docker Service Connector can be utilized by all Container Registry stack components to authenticate with remote registries, allowing image building and publishing without explicit Docker credentials in the environment. **Warning:** ZenML currently does not support automatic Docker credentials configuration in container runtimes like Kubernetes. This feature will be added in a future release. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/azure-service-connector.md === ### Azure Service Connector for ZenML The **ZenML Azure Service Connector** enables authentication and access to Azure resources such as Blob storage, AKS clusters, and ACR registries. It supports automatic credential configuration via the Azure CLI and facilitates specialized authentication for various Azure services. #### Prerequisites - Install the Azure Service Connector: - `pip install "zenml[connectors-azure]"` (for Azure Service Connector only) - `zenml integration install azure` (for full Azure integration) - Azure CLI setup is recommended for auto-configuration but not mandatory. **Important Note:** Auto-configuration uses temporary access tokens, which do not support Azure Blob storage. 
For full functionality, configure an Azure service principal. #### Resource Types 1. **Generic Azure Resource**: Connects to any Azure service using generic credentials. 2. **Azure Blob Storage**: Requires permissions for read/write access and listing. Resource name formats: - URI: `{az|abfs}://{container-name}` - Name: `{container-name}` - Authentication methods: Implicit and service principal only. 3. **AKS Kubernetes Cluster**: Requires permissions to list AKS clusters. Resource name formats: - Resource group scoped: `[{resource-group}/]{cluster-name}` - Name: `{cluster-name}` 4. **ACR Container Registry**: Requires permissions for image pull/push and listing. Resource name formats: - URI: `[https://]{registry-name}.azurecr.io` - Name: `{registry-name}` #### Authentication Methods - **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Requires explicit enabling due to security risks. - **Service Principal**: Uses Azure client ID and secret for authentication. Requires prior setup of an Azure service principal. - **Access Token**: Uses temporary tokens, not suitable for long-term use or Azure Blob storage. #### Example Commands - List Azure service connector types: ```sh zenml service-connector list-types --type azure ``` - Register an implicit authentication connector: ```sh zenml service-connector register azure-implicit --type azure --auth-method implicit --auto-configure ``` - Register a service principal connector: ```sh zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` #### Local Client Configuration - Configure local clients (e.g., Kubernetes CLI, Docker CLI) using credentials from the Azure Service Connector. - Example for Kubernetes: ```sh zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id= ``` #### Stack Components - Connect Azure resources to ZenML Stack Components, such as: - **Artifact Store**: Connect to Azure Blob storage. - **Orchestrator**: Connect to AKS. - **Container Registry**: Connect to ACR. #### End-to-End Example 1. Set up Azure service principal with required permissions. 2. Register a multi-type Azure Service Connector. 3. Connect an Azure Blob Storage Artifact Store. 4. Connect an AKS Orchestrator. 5. Connect an ACR Container Registry. 6. Register a local Image Builder. 7. Combine components into a stack and run a pipeline. This concise overview captures the essential details of configuring and using the Azure Service Connector with ZenML, ensuring no critical information is lost while maintaining clarity and brevity. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/kubernetes-service-connector.md === ### Kubernetes Service Connector Overview The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters, allowing access to any generic Kubernetes cluster via pre-authenticated Kubernetes Python clients. It also facilitates local Kubernetes CLI (`kubectl`) configuration. ### Prerequisites - Install the Kubernetes Service Connector: - For standalone installation: ```shell pip install "zenml[connectors-kubernetes]" ``` - For full integration: ```shell zenml integration install kubernetes ``` - Local `kubectl` configuration is not required for accessing Kubernetes clusters. 
### Resource Types - Supports only `kubernetes-cluster` resource type, identified by a user-friendly name during registration. ### Authentication Methods 1. Username and password (not recommended for production). 2. Authentication token (with or without client certificates). For local K3D clusters, an empty token can be used. **Note:** The Service Connector does not generate short-lived credentials; use API tokens with client certificates when possible. ### Auto-configuration Fetch credentials from the local `kubectl` during registration. Example command to register with auto-configuration: ```sh zenml service-connector register kube-auto --type kubernetes --auto-configure ``` ### Example Command Outputs - Successful registration: ```text Successfully registered service connector `kube-auto` with access to the following resources: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠───────────────────────┼────────────────┨ ┃ 🌀 kubernetes-cluster │ 35.185.95.223 ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ ``` - Describe service connector: ```sh zenml service-connector describe kube-auto ``` Output includes details like ID, name, auth method, and resource name. ### Local Client Provisioning Configure the local Kubernetes client with: ```sh zenml service-connector login kube-auto ``` This updates the local kubeconfig and sets the current context. ### Stack Components Usage The Kubernetes Service Connector is applicable in Orchestrator and Model Deployer stack components, managing Kubernetes workloads without explicit `kubectl` configuration in the target environment. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md === ### Custom Stack Component Flavor in ZenML #### Overview ZenML allows for the creation of custom stack component flavors, enhancing composability and reusability in MLOps platforms. This guide outlines the process of developing and utilizing custom flavors. #### Component Flavors - **Component Type**: A broad category defining functionality (e.g., `artifact_store`). - **Flavors**: Specific implementations of a component type (e.g., `local`, `s3`). #### Core Abstractions 1. **StackComponent**: Defines core functionality. ```python from zenml.stack import StackComponent class BaseArtifactStore(StackComponent): @abstractmethod def open(self, path, mode="r"): pass @abstractmethod def exists(self, path): pass ``` 2. **StackComponentConfig**: Configures a stack component instance, separating configuration from implementation. ```python from zenml.stack import StackComponentConfig class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: ClassVar[Set[str]] ``` 3. **Flavor**: Combines implementation and configuration, defining flavor properties. ```python from zenml.stack import Flavor from zenml.enums import StackComponentType class LocalArtifactStoreFlavor(Flavor): @property def name(self) -> str: return "local" @property def type(self) -> StackComponentType: return StackComponentType.ARTIFACT_STORE @property def config_class(self) -> Type[LocalArtifactStoreConfig]: return LocalArtifactStoreConfig @property def implementation_class(self) -> Type[LocalArtifactStore]: return LocalArtifactStore ``` #### Implementing a Custom Flavor To create a custom flavor (e.g., `S3ArtifactStore`): 1. 
**Configuration Class**: ```python from zenml.artifact_stores import BaseArtifactStoreConfig from zenml.utils.secret_utils import SecretField class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig): SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"} key: Optional[str] = SecretField(default=None) secret: Optional[str] = SecretField(default=None) token: Optional[str] = SecretField(default=None) client_kwargs: Optional[Dict[str, Any]] = None config_kwargs: Optional[Dict[str, Any]] = None s3_additional_kwargs: Optional[Dict[str, Any]] = None ``` 2. **Implementation Class**: ```python import s3fs from zenml.artifact_stores import BaseArtifactStore class MyS3ArtifactStore(BaseArtifactStore): _filesystem: Optional[s3fs.S3FileSystem] = None @property def filesystem(self) -> s3fs.S3FileSystem: if not self._filesystem: self._filesystem = s3fs.S3FileSystem( key=self.config.key, secret=self.config.secret, token=self.config.token, client_kwargs=self.config.client_kwargs, config_kwargs=self.config.config_kwargs, s3_additional_kwargs=self.config.s3_additional_kwargs, ) return self._filesystem def open(self, path, mode="r"): return self.filesystem.open(path=path, mode=mode) def exists(self, path): return self.filesystem.exists(path=path) ``` 3. **Flavor Class**: ```python from zenml.artifact_stores import BaseArtifactStoreFlavor class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor): @property def name(self): return 'my_s3_artifact_store' @property def implementation_class(self): return MyS3ArtifactStore @property def config_class(self): return MyS3ArtifactStoreConfig ``` #### Registering the Flavor Register the flavor using the ZenML CLI: ```shell zenml artifact-store flavor register ``` #### Usage After registration, use the custom flavor in your stacks: ```shell zenml artifact-store register --flavor=my_s3_artifact_store --path='some-path' zenml stack register --artifact-store ``` #### Best Practices - Execute `zenml init` consistently. - Test flavors thoroughly before production use. - Keep code clean and well-documented. - Use existing flavors as references for new implementations. #### Additional Resources For specific stack component types, refer to the documentation for orchestrators, artifact stores, container registries, etc. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/README.md === # Managing Stacks & Components ## What is a Stack? A **stack** in the ZenML framework represents the configuration of infrastructure and tooling for pipeline execution. It consists of various components, each serving a specific role, such as: - **Container Registry** - **Kubernetes Cluster** (as an orchestrator) - **Artifact Store** - **Experiment Tracker** (e.g., MLflow) ## Organizing Execution Environments ZenML allows running pipelines across multiple stacks to facilitate testing in different environments, such as: 1. Local experimentation 2. Staging in the cloud 3. Production deployment Benefits of separate stacks include: - Preventing accidental production deployments - Cost management by using less powerful resources in staging - Controlled access through user permissions ## Managing Credentials Most stack components require credentials for infrastructure interaction. The recommended method for handling these is through **Service Connectors**, which abstract sensitive information. 
### Recommended Roles - Limit Service Connector creation to individuals with direct cloud resource access to reduce credential leakage risk, enable instant revocation, and simplify auditing. ### Recommended Workflow 1. Designate a small group to create Service Connectors. 2. Create a connector for development/staging environments for data scientists. 3. Establish a separate connector for production to safeguard resources. ## Deploying and Managing Stacks Deploying MLOps stacks involves several challenges: - Each tool has specific requirements (e.g., Kubeflow needs a Kubernetes cluster). - Setting default infrastructure parameters can be complex. - Standard installations may require additional configurations for security. - Components must have appropriate permissions to communicate. - Resource cleanup post-experimentation is crucial to avoid unnecessary costs. ### Documentation Links - [Deploy a cloud stack with ZenML](./deploy-a-cloud-stack.md) - [Register a cloud stack](./register-a-cloud-stack.md) - [Deploy a cloud stack with Terraform](./deploy-a-cloud-stack-with-terraform.md) - [Export and install stack requirements](./export-stack-requirements.md) - [Reference secrets in stack configuration](./reference-secrets-in-stack-configuration.md) - [Implement a custom stack component](./implement-a-custom-stack-component.md) This summary captures the essential aspects of managing stacks and components within the ZenML framework, focusing on stack configuration, credential management, and deployment challenges. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-requirements.md === ### Export Stack Requirements To obtain the `pip` requirements for your stack, use the following CLI command: ```bash zenml stack export-requirements --output-file stack_requirements.txt pip install -r stack_requirements.txt ``` This command exports the requirements to a file named `stack_requirements.txt`, which can then be used to install the necessary packages. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/reference-secrets-in-stack-configuration.md === ### Summary: Referencing Secrets in Stack Configuration In ZenML, components in your stack may require sensitive information (e.g., passwords, tokens) for infrastructure connections. Instead of hardcoding these values, you can reference secrets securely using the syntax `{{.}}`. #### Example: Registering and Using Secrets 1. **Register a Secret**: ```shell zenml secret create mlflow_secret \ --username=admin \ --password=abc123 ``` 2. **Reference in Experiment Tracker**: ```shell zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... ``` #### Secret Validation ZenML validates the existence of referenced secrets and keys before running a pipeline to prevent runtime failures. The validation can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: - `NONE`: Disables validation. - `SECRET_EXISTS`: Validates only the existence of secrets. - `SECRET_AND_KEY_EXISTS`: (default) Validates both the existence of secrets and their key-value pairs. 
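For example, to check only that referenced secrets exist (without validating individual keys) before a run, set the variable in the environment that launches the pipeline (the script name is a placeholder):

```bash
export ZENML_SECRET_VALIDATION_LEVEL=SECRET_EXISTS
python run.py
```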
#### Fetching Secret Values in Steps Using centralized secrets management, secrets can be accessed via the ZenML `Client` API within steps: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: secret = Client().get_secret() authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` ### Additional Resources - **Interact with Secrets**: Learn how to create, list, and delete secrets using the ZenML CLI and Python SDK. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md === # Deploy a Cloud Stack with a Single Click ZenML allows users to deploy a cloud stack, which represents the configuration of infrastructure, with a single click. This simplifies the process of setting up necessary infrastructure components and defining them as stack components in ZenML, especially in remote settings. ## 1-Click Deployment Tool Usage ### Prerequisites - A deployed instance of ZenML (not a local server via `zenml login --local`). Instructions for deployment can be found [here](../../../getting-started/deploying-zenml/README.md). ### Deployment via Dashboard 1. Navigate to the stacks page and click "+ New Stack". 2. Select "New Infrastructure". 3. Choose your cloud provider (AWS, GCP, or Azure) and configure the stack. #### AWS Deployment - Select a region and stack name. - Click "Deploy in AWS" to be redirected to AWS CloudFormation. - Log in, review, and confirm the configuration to create the stack. #### GCP Deployment - Select a region and stack name. - Click "Deploy in GCP" to start a Cloud Shell session. - Trust the ZenML GitHub repository and authenticate with GCP. - Follow prompts to configure deployment and run scripts to provision resources. #### Azure Deployment - Select a location and stack name. - Click "Deploy in Azure" to start a Cloud Shell session. - Paste the provided `main.tf` file into Cloud Shell and run `terraform init --upgrade` and `terraform apply`. ### Deployment via CLI Use the command: ```shell zenml stack deploy -p {aws|gcp|azure} ``` - Follow prompts for AWS, GCP, or Azure as described above. ## Deployed Resources Overview ### AWS - **Resources**: S3 bucket (Artifact Store), ECR (Container Registry), CloudBuild project (Image Builder), IAM roles. - **Permissions**: Various permissions for S3, ECR, CloudBuild, and SageMaker. ### GCP - **Resources**: GCS bucket (Artifact Store), GCP Artifact Registry (Container Registry), Vertex AI (Orchestrator), Cloud Build (Image Builder). - **Permissions**: Roles for GCS, Artifact Registry, Vertex AI, and Cloud Build. ### Azure - **Resources**: Resource Group, Azure Storage Account (Artifact Store), Azure Container Registry, AzureML Workspace (Orchestrator). - **Permissions**: Roles for Storage Account, Container Registry, and AzureML Workspace. With this streamlined process, users can efficiently deploy a cloud stack and begin running pipelines in a remote environment. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md === ### ZenML Cloud Stack Registration Documentation Summary **Overview:** ZenML's stack represents the configuration of your infrastructure. Traditionally, creating a stack involves deploying infrastructure and defining it in ZenML, which can be complex. 
The **stack wizard** simplifies this by allowing you to register a ZenML cloud stack using existing infrastructure. **Deployment Options:** - If infrastructure isn't deployed, use the **1-click deployment tool** or **Terraform modules** for more control. ### Using the Stack Wizard **Access:** - Available via CLI and dashboard. **Dashboard Steps:** 1. Navigate to the stacks page and click "+ New Stack". 2. Select "Use existing Cloud". 3. Choose your cloud provider and authentication method. 4. Fill in required fields. **CLI Command:** ```shell zenml stack register -p {aws|gcp|azure} ``` - Use `-sc ` to specify an existing service connector. ### Service Connector Configuration - The wizard checks for local environment credentials. If found, you can use them or configure manually. - If declined, a list of existing service connectors will be presented. ### Authentication Methods by Provider **AWS:** - Options include AWS Secret Key, STS Token, IAM Role, Session Token, and Federation Token. Each requires specific credentials such as `aws_access_key_id`, `aws_secret_access_key`, and `region`. **GCP:** - Options include User Account, Service Account, External Account, OAuth 2.0 Token, and Service Account Impersonation. Required fields include `user_account_json` or `service_account_json` and `project_id`. **Azure:** - Options include Service Principal and Access Token. Required fields include `client_secret`, `tenant_id`, and `client_id`. ### Defining Cloud Components You will define three essential components for your stack: 1. **Artifact Store** 2. **Orchestrator** 3. **Container Registry** For each component: - Choose to reuse existing components or create new ones from available resources. ### Conclusion The stack wizard streamlines the process of registering a cloud stack, enabling you to run pipelines in a remote environment efficiently. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md === ### Summary: Deploy a Cloud Stack Using Terraform with ZenML ZenML provides a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to facilitate the provisioning of cloud resources and their integration with ZenML Stacks. This approach enhances the efficiency and scalability of machine learning infrastructure deployment. #### Prerequisites - A reachable ZenML server instance (not local). - Create a service account and API key for Terraform access: ```shell zenml service-account create ``` - Install Terraform (version 1.9 or higher) on your machine. - Authenticate with your cloud provider's CLI or SDK. #### Using Terraform Modules 1. Set up the ZenML provider using environment variables: ```shell export ZENML_SERVER_URL="https://your-zenml-server.com" export ZENML_API_KEY="" ``` 2. Create a `main.tf` configuration file with the following structure (replace `` with `aws`, `gcp`, or `azure`): ```hcl terraform { required_providers { aws = { source = "hashicorp/aws" } zenml = { source = "zenml-io/zenml" } } } provider "zenml" {} module "zenml_stack" { source = "zenml-io/zenml-stack/" zenml_stack_name = "" orchestrator = "" } output "zenml_stack_id" { value = module.zenml_stack.zenml_stack_id } output "zenml_stack_name" { value = module.zenml_stack.zenml_stack_name } ``` 3. Run the following commands: ```shell terraform init terraform apply ``` Confirm changes by typing `yes` when prompted. 4. 
After provisioning, install required integrations and set the ZenML stack: ```shell zenml integration install zenml stack set ``` #### Cloud Provider Specifics - **AWS**: Requires AWS CLI and `aws configure`. Example configuration includes S3, ECR, and various orchestrators. - **GCP**: Requires `gcloud` CLI. Example configuration includes GCS and Google Artifact Registry. - **Azure**: Requires Azure CLI. Example configuration includes Azure Storage and Azure Container Registry. #### Cleanup To remove all resources and delete the registered ZenML stack, run: ```shell terraform destroy ``` This concise guide provides the essential steps and configurations needed to deploy a cloud stack using Terraform with ZenML, ensuring that all critical information is retained while maintaining brevity. ================================================== === File: docs/book/how-to/configuring-zenml/configuring-zenml.md === ### Configuring ZenML's Default Behavior This guide outlines methods to configure ZenML's behavior in various scenarios. Users can adapt specific aspects of ZenML to meet their needs. For visual reference, an image related to ZenML is provided. **Key Points:** - ZenML allows customization of its default settings. - Configuration can be tailored to specific use cases. This documentation serves as a foundational resource for understanding ZenML's configuration capabilities. ================================================== === File: docs/book/how-to/project-setup-and-management/README.md === # Project Setup and Management This section details the setup and management of ZenML projects, covering essential aspects to ensure effective project organization and execution. ### Key Components 1. **Project Initialization**: - Use `zenml init` to create a new ZenML project. - This command sets up the necessary directory structure and configuration files. 2. **Configuration**: - ZenML uses a `.zenml` directory to store configurations. - Configuration files include `zenml.yaml`, which defines the project settings. 3. **Version Control**: - It is recommended to use Git for version control. - Ensure that the `.zenml` directory is included in your repository to track changes. 4. **Environment Management**: - Utilize virtual environments (e.g., `venv`, `conda`) to manage dependencies. - Activate the environment before running ZenML commands. 5. **Pipeline Management**: - Define pipelines using the `@pipeline` decorator. - Pipelines consist of steps defined using the `@step` decorator. 6. **Running Pipelines**: - Execute pipelines with `zenml run `. - Monitor execution status and logs for debugging. 7. **Artifact Storage**: - Configure artifact storage to persist pipeline outputs. - Supported storage backends include local file systems, cloud storage, etc. 8. **Collaboration**: - Share project configurations and pipelines with team members. - Use ZenML's collaboration features to manage access and permissions. By following these guidelines, users can effectively set up and manage ZenML projects, ensuring a streamlined workflow for machine learning operations. ================================================== === File: docs/book/how-to/project-setup-and-management/interact-with-secrets.md === # ZenML Secrets Management Documentation Summary ## Overview of ZenML Secrets ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy reference in pipelines and stacks. 
## Creating a Secret

### CLI Method

To create a secret named `<SECRET_NAME>` with key-value pairs:

```shell
zenml secret create <SECRET_NAME> --<KEY_1>=<VALUE_1> --<KEY_2>=<VALUE_2>
```

Alternatively, use JSON or YAML format:

```shell
zenml secret create <SECRET_NAME> --values='{"key1":"value1","key2":"value2"}'
```

For interactive creation:

```shell
zenml secret create <SECRET_NAME> -i
```

For large values or special characters, read from a file:

```bash
zenml secret create <SECRET_NAME> --<KEY>=@path/to/file.txt
zenml secret create <SECRET_NAME> --values=@path/to/file.txt
```

Use CLI commands to list, update, and delete secrets. For interactive registration of missing secrets in a stack:

```shell
zenml stack register-secrets [<STACK_NAME>]
```

### Python SDK Method

Using the ZenML client API:

```python
from zenml.client import Client

client = Client()
client.create_secret(name="my_secret", values={"username": "admin", "password": "abc123"})
```

Other methods include `get_secret`, `update_secret`, `list_secrets`, and `delete_secret`.

## Setting Secret Scope

Secrets can be scoped to a user, defaulting to the active user. To create a user-scoped secret:

```shell
zenml secret create <SECRET_NAME> --scope user --<KEY>=<VALUE>
```

Scopes act as namespaces, allowing secret references to be user-specific.

## Accessing Registered Secrets

### Referencing Secrets

Components in a stack can reference secrets securely using the `{{<SECRET_NAME>.<SECRET_KEY>}}` syntax:

```shell
zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}}
```

ZenML validates the existence of referenced secrets and keys before running a pipeline. Control validation with the `ZENML_SECRET_VALIDATION_LEVEL` environment variable:

- `NONE`: disables validation.
- `SECRET_EXISTS`: checks only for secret existence.
- `SECRET_AND_KEY_EXISTS`: checks both (default).

### Fetching Secret Values in Steps

Access secrets in steps using the ZenML `Client` API:

```python
from zenml import step
from zenml.client import Client

@step
def secret_loader() -> None:
    secret = Client().get_secret("<SECRET_NAME>")
    authenticate_to_some_api(
        username=secret.secret_values["username"],
        password=secret.secret_values["password"],
    )
```

This summary encapsulates the key points and technical details necessary for understanding and utilizing ZenML secrets management effectively.

==================================================

=== File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/README.md ===

# Setting up a Well-Architected ZenML Project

This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration.

## Importance of a Well-Architected Project

A well-architected ZenML project is essential for effective MLOps, providing a solid foundation for developing, deploying, and maintaining ML models.

## Key Components

### Repository Structure

- Organize folders for pipelines, steps, and configurations.
- Maintain clear separation of concerns and consistent naming conventions.
- Refer to the [Set up repository guide](./set-up-repository.md) for details.

### Version Control and Collaboration

- Integrate with Git for efficient collaboration and code management.
- Benefits include faster pipeline builds and easy change tracking.
- Learn to connect your Git repository in the [Set up a repository guide](./set-up-repository.md).

### Stacks, Pipelines, Models, and Artifacts

- **Stacks**: Infrastructure and tool configurations.
- **Models**: ML models and metadata.
- **Pipelines**: ML workflows.
- **Artifacts**: Data and model outputs.
- Organize these components as detailed in the [Organizing Stacks, Pipelines, Models, and Artifacts guide](../collaborate-with-team/stacks-pipelines-models.md). ### Access Management and Roles - Define roles (e.g., data scientists, MLOps engineers). - Set up [service connectors](../../infrastructure-deployment/auth-management/README.md) and manage authorizations. - Use [Teams in ZenML Pro](../../../getting-started/zenml-pro/teams.md) for role assignments. - Explore strategies in the [Access Management and Roles guide](../collaborate-with-team/access-management.md). ### Shared Components and Libraries - Promote code reuse with shared components like custom flavors and steps. - Use shared private wheels for internal distribution. - Learn about sharing code in the [Shared Libraries and Logic for Teams guide](../collaborate-with-team/shared-components-for-teams.md). ### Project Templates - Utilize pre-made or custom templates for consistency. - Refer to the [Project Templates guide](../collaborate-with-team/project-templates/README.md) for usage. ### Migration and Maintenance - Strategies for migrating legacy code and upgrading ZenML servers. - Discover best practices in the [Migration and Maintenance guide](../../advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md#upgrading-your-code). ## Getting Started Explore the guides in this section to begin building your ZenML project. Regularly review and refine your project structure and processes to adapt to your team's needs. Following these guidelines will help create a robust and collaborative MLOps environment. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md === ### Summary of Connecting a Git Repository in ZenML **Overview**: Connecting a code repository in ZenML allows tracking of code versions used in pipeline runs and can speed up Docker image builds by preventing unnecessary rebuilds. #### Registering a Code Repository 1. **Install Integration**: ```shell zenml integration install ``` 2. **Register Repository**: ```shell zenml code-repository register --type= [--CODE_REPOSITORY_OPTIONS] ``` #### Available Implementations - **Built-in Support**: ZenML supports `GitHub` and `GitLab` as code repositories, with options for custom implementations. ##### GitHub 1. **Install Integration**: ```shell zenml integration install github ``` 2. **Register Repository**: ```shell zenml code-repository register --type=github \ --owner= --repository= --token= ``` - For GitHub Enterprise, include: ```shell --api_url= --host= ``` 3. **Secure Token Storage**: ```shell zenml secret create github_secret --pa_token= zenml code-repository register ... --token={{github_secret.pa_token}} ``` ##### GitLab 1. **Install Integration**: ```shell zenml integration install gitlab ``` 2. **Register Repository**: ```shell zenml code-repository register --type=gitlab \ --group= --project= --token= ``` - For self-hosted GitLab, include: ```shell --instance_url= --host= ``` 3. **Secure Token Storage**: ```shell zenml secret create gitlab_secret --pa_token= zenml code-repository register ... --token={{gitlab_secret.pa_token}} ``` #### Developing a Custom Code Repository 1. **Subclass BaseCodeRepository**: Implement the following methods: - `login()` - `download_files(commit: str, directory: str, repo_sub_directory: Optional[str])` - `get_local_context(path: str)` 2. 
**Register Custom Repository**:

```shell
zenml code-repository register <NAME> --type=custom --source=my_module.MyRepositoryClass [--CODE_REPOSITORY_OPTIONS]
```

This documentation provides the necessary steps to connect and manage code repositories in ZenML, ensuring efficient pipeline execution and version control.

==================================================

=== File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/set-up-repository.md ===

### Recommended Repository Structure and Best Practices for ZenML

#### Project Structure

A recommended structure for ZenML projects is as follows:

```
.
├── .dockerignore
├── Dockerfile
├── steps
│   ├── loader_step
│   │   ├── loader_step.py
│   │   └── requirements.txt (optional)
│   └── training_step
├── pipelines
│   ├── training_pipeline
│   │   ├── training_pipeline.py
│   │   └── requirements.txt (optional)
│   └── deployment_pipeline
├── notebooks
│   └── *.ipynb
├── requirements.txt
├── .zen
└── run.py
```

- The `steps` and `pipelines` folders contain the respective components of your project. Simpler projects can keep steps directly in the `steps` folder.
- Registering your repository as a code repository helps ZenML track code versions and can speed up Docker image builds.

#### Steps

- Store each step in separate Python files to manage utils and dependencies efficiently.
- Use the `logging` module to log messages, which will be recorded in the ZenML dashboard:

```python
from zenml import step
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def training_data_loader():
    logger.info("My logs")
```

#### Pipelines

- Keep pipelines in separate Python files and separate execution from definition to avoid immediate execution upon import.
- Avoid naming pipelines or instances "pipeline" to prevent conflicts with the ZenML decorator.

#### .dockerignore

- Use `.dockerignore` to exclude unnecessary files (e.g., data, virtual environments) from Docker images to optimize size and build time.

#### Dockerfile (optional)

- ZenML uses an official Docker image by default. You can customize this by providing your own `Dockerfile`.

#### Notebooks

- Organize all notebooks in a dedicated folder.

#### .zen

- Initialize the project with `zenml init` to define the project scope. It is crucial to have a `.zen` directory in the project root or a parent directory to ensure proper import paths.

#### run.py

- Place pipeline runners in the project root to ensure all imports resolve correctly. If no `.zen` is defined, this file also establishes the implicit source root.

This structure and these practices will help maintain a clean and efficient ZenML project.

==================================================

=== File: docs/book/how-to/project-setup-and-management/collaborate-with-team/access-management.md ===

# Access Management and Roles in ZenML

This guide outlines user roles and access management in ZenML, essential for project security and efficiency.

## Typical Roles in an ML Project

- **Data Scientists**: Develop and run pipelines.
- **MLOps Platform Engineers**: Manage infrastructure and components.
- **Project Owners**: Oversee ZenML deployment and user access.

Roles may vary in your team, but responsibilities are generally consistent.
### Creating Roles You can create roles in ZenML Pro with specific permissions and assign them to Users or Teams. [Sign up for a free trial](https://cloud.zenml.io/). ## Service Connectors Service connectors integrate cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors, while other team members can use them to create stack components without access to credentials. ### Example Permissions - **Data Scientist Role**: Can use connectors but cannot create, update, or delete them. - **MLOps Platform Engineer Role**: Can create, update, delete connectors, and read secret values. ### RBAC Features RBAC is available only in ZenML Pro. Learn more about roles [here](../../../getting-started/zenml-pro/roles.md). ## Server Upgrade Responsibilities Project Owners decide when to upgrade the ZenML server, considering team needs. MLOps Platform Engineers typically perform the upgrade, ensuring data backup and no service disruption. For best practices, refer to the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md). ## Pipeline Migration and Maintenance Data Scientists own the pipeline code, while Platform Engineers ensure compatibility with new ZenML versions. Both should review release notes and migration guides during upgrades. ## Best Practices for Access Management - **Regular Audits**: Review user access and permissions periodically. - **Role-Based Access Control (RBAC)**: Streamline permission management. - **Least Privilege**: Grant minimal necessary permissions. - **Documentation**: Maintain clear role and access policy documentation. RBAC and permission assignment are exclusive to ZenML Pro users. Following these practices ensures a secure and collaborative ZenML environment. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md === # Organizing Stacks, Pipelines, Models, and Artifacts in ZenML This guide provides an overview of organizing key components in ZenML: Stacks, Pipelines, Models, and Artifacts, which are essential for structuring your ML workflows. ## Key Concepts - **Stacks**: Configuration of tools and infrastructure for running pipelines, including orchestrators, container registries, and artifact stores. Stacks enable consistent environments across local, staging, and production setups. - **Pipelines**: Sequences of tasks in the ML workflow, automating processes like data preparation, model training, and evaluation. Separate pipelines for training and inference enhance modularity and manageability. - **Models**: Collections of related pipelines, artifacts, and metadata, representing a project or workspace. Models facilitate data transfer between pipelines. - **Artifacts**: Outputs from pipeline steps that can be tracked and reused, such as datasets and trained models. Each pipeline run generates new artifact versions for traceability. ## Stack Management - A single stack can support multiple pipelines, reducing configuration overhead and promoting reproducibility. - Stacks should be reused across users and pipelines to minimize errors and maintain consistency. ## Organizing Pipelines, Models, and Artifacts ### Pipelines - Use separate pipelines for distinct tasks (e.g., training vs. inference) to allow independent execution and easier management. - Modular pipelines enable collaboration and better organization of runs. 
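For illustration, a minimal sketch of this split (step names and the toy "model" below are hypothetical stand-ins), showing how training and inference can live in separate pipelines that run and are scheduled independently:

```python
from zenml import pipeline, step

@step
def load_training_data() -> list:
    return [1.0, 2.0, 3.0]

@step
def train_model(data: list) -> float:
    # Stand-in for real training: the "model" here is just the mean of the data.
    return sum(data) / len(data)

@step
def load_production_model() -> float:
    # In a real project this would fetch the promoted model from the
    # artifact store or the Model Control Plane instead of a constant.
    return 2.0

@step
def batch_predict(model: float, data: list) -> list:
    return [x * model for x in data]

@pipeline
def training_pipeline():
    data = load_training_data()
    train_model(data)

@pipeline
def inference_pipeline():
    model = load_production_model()
    data = load_training_data()
    batch_predict(model, data)
```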
### Models - Use Models to connect related pipelines and facilitate data transfer. - The Model Control Plane helps manage model versions and stages. ### Artifacts - Name artifacts clearly for easy identification and reuse. - Artifacts can be associated with Models for improved organization and visibility. ## Example Workflow 1. Team members create pipelines for feature engineering, training, and inference. 2. They use a shared stack for local testing, allowing quick iterations. 3. Artifacts from the training pipeline are used in the inference pipeline. 4. The Model Control Plane tracks model versions, enabling comparisons and promotions to production. ## Guidelines for Organization ### Models - One Model per ML use-case. - Use Models to group related resources. - Manage versions and stages with the Model Control Plane. ### Stacks - Separate stacks for different environments. - Share production and staging stacks for consistency. - Keep local stacks simple for rapid development. ### Naming and Organization - Maintain consistent naming conventions. - Use tags for resource organization. - Document configurations and dependencies. - Ensure modular and reusable pipeline code. Following these guidelines will help maintain a clean and scalable MLOps workflow as projects evolve. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md === # Shared Libraries and Logic for Teams ## Overview This guide outlines how teams can share code and libraries using ZenML, focusing on what can be shared and how to distribute shared components. ## What Can Be Shared ZenML supports several types of custom components for sharing: ### Custom Flavors - Create in a shared repository. - Implement as per [ZenML documentation](../../infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md#implementing-a-custom-stack-component-flavor). - Register using ZenML CLI: ```bash zenml artifact-store flavor register ``` ### Custom Steps - Create and share via a separate repository, referenced like Python modules. ### Custom Materializers - Create in a shared repository. - Implement as per [ZenML documentation](https://docs.zenml.io/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types). - Import and use in projects. ## How to Distribute Shared Components ### Shared Private Wheels - Package Python code for internal distribution. - **Benefits**: Easy installation, version and dependency management, privacy, and smooth integration. - **Setup**: 1. Create a private PyPI server (e.g., [AWS CodeArtifact](https://aws.amazon.com/codeartifact/)). 2. Build code into wheel format. 3. Upload to the private PyPI server. 4. Configure pip to use the private server. 5. Install packages using pip. ### Using Shared Libraries with `DockerSettings` - Specify shared libraries in the `Dockerfile` using `DockerSettings`. **Installing Shared Libraries**: 1. Using a list of requirements: ```python import os from zenml.config import DockerSettings from zenml import pipeline docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. 
Using a requirements file: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **Example `requirements.txt`**: ``` --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ my-simple-package==0.1.0 ``` ## Best Practices - Use version control (e.g., Git) for shared repositories. - Implement access controls for private PyPI servers. - Maintain clear documentation for shared components. - Regularly update shared libraries and communicate changes. - Set up continuous integration for shared libraries to ensure quality. By following these guidelines, teams can enhance collaboration, maintain consistency, and accelerate development within the ZenML framework. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md === ### Creating Your Own ZenML Template To standardize and share ML workflows across projects or teams, you can create a ZenML template using the Copier library. Here’s a concise guide: 1. **Create a Repository**: Set up a new repository for your template code and configuration files. 2. **Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a base to define your ML steps and pipelines. 3. **Create `copier.yml`**: This file defines the template's parameters and default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. 4. **Test Your Template**: Generate a new project using the Copier CLI: ```bash copier copy https://github.com/your-username/your-template.git your-project ``` 5. **Use Your Template with ZenML**: Initialize a new ZenML project with your template: ```bash zenml init --template https://github.com/your-username/your-template.git ``` For a specific version, use: ```bash zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0 ``` ### Additional Notes - Keep your template updated with best practices. - For practical examples, install the `e2e_batch` template: ```bash mkdir e2e_batch cd e2e_batch zenml init --template e2e_batch --template-with-defaults ``` This guide helps you create and utilize ZenML templates effectively for ML projects. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md === # ZenML Project Templates Overview ## Purpose ZenML project templates provide a quick start for building ML pipelines, featuring a collection of steps, pipelines, and a CLI. ## Available Project Templates | Project Template [Short name] | Tags | Description | |-------------------------------|------|-------------| | [Starter template](https://github.com/zenml-io/template-starter) [*starter*] | *basic, scikit-learn* | Basic ML setup with parameterized steps, a model training pipeline, flexible configuration, and a simple CLI using scikit-learn. | | [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [*e2e_batch*] | *etl, hp-tuning, model-promotion, drift-detection, batch-prediction, scikit-learn* | Two pipelines for data loading, preprocessing, hyperparameter tuning, model training, evaluation, promotion to production, data drift detection, and batch inference. 
| | [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [*nlp*] | *nlp, hp-tuning, model-promotion, training, pytorch, gradio, huggingface* | Simple NLP pipeline for tokenization, training, hyperparameter tuning, evaluation, and deployment of BERT or GPT-2 models, tested locally with Gradio. | ## Contribution Invitation ZenML invites users to share personal projects as templates for better understanding real-world MLOps scenarios. Interested users can join the [ZenML Slack](https://zenml.io/slack/) for collaboration. ## Using a Project Template To use templates, install ZenML with templates extras: ```bash pip install zenml[templates] ``` **Note:** These templates differ from 'Run Templates' used for triggering pipelines. More info can be found [here](https://docs.zenml.io/how-to/trigger-pipelines). To generate a project from a template: ```bash zenml init --template # Example: zenml init --template e2e_batch ``` For default values, add `--template-with-defaults`: ```bash zenml init --template --template-with-defaults # Example: zenml init --template e2e_batch --template-with-defaults ``` ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md === ## ZenML REST API: Running a Pipeline Template **Note:** This feature is available only in [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. ### Prerequisites To trigger a pipeline via the REST API, you must have: 1. At least one run template for the pipeline. 2. The name of the pipeline. ### Steps to Trigger a Pipeline 1. **Get Pipeline ID:** - Call the endpoint to retrieve the pipeline ID. ```shell curl -X 'GET' \ '/api/v1/pipelines?name=' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` 2. **Get Template ID:** - Use the pipeline ID to get the run templates. ```shell curl -X 'GET' \ '/api/v1/run_templates?pipeline_id=' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` 3. **Run the Pipeline:** - Trigger the pipeline using the template ID. ```shell curl -X 'POST' \ '/api/v1/run_templates//runs' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}} }' ``` ### Example To re-run a pipeline named `training`: 1. Retrieve the pipeline ID: ```shell curl -X 'GET' \ '/api/v1/pipelines?name=training' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` - Example response includes ``: `c953985e-650a-4cbf-a03a-e49463f58473`. 2. Get the template ID: ```shell curl -X 'GET' \ '/api/v1/run_templates?pipeline_id=c953985e-650a-4cbf-a03a-e49463f58473' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` - Example response includes ``: `b826b714-a9b3-461c-9a6e-1bde3df3241d`. 3. Trigger the pipeline: ```shell curl -X 'POST' \ '/api/v1/run_templates/b826b714-a9b3-461c-9a6e-1bde3df3241d/runs' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}} }' ``` - A successful response indicates the pipeline has been re-triggered with the specified configuration. For more details on obtaining a bearer token, refer to the API documentation. 
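For reference, a rough Python equivalent of the three `curl` calls above, using the same endpoints; the server URL, token, and the paginated response shape (an `items` list) are assumptions, so adapt them to your deployment:

```python
import requests

ZENML_SERVER = "https://your-zenml-server.com"  # placeholder
TOKEN = "<YOUR_BEARER_TOKEN>"                   # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}", "accept": "application/json"}

# 1. Look up the pipeline ID by name (responses assumed paginated under "items").
pipelines = requests.get(
    f"{ZENML_SERVER}/api/v1/pipelines", params={"name": "training"}, headers=HEADERS
).json()
pipeline_id = pipelines["items"][0]["id"]

# 2. Find a run template for that pipeline.
templates = requests.get(
    f"{ZENML_SERVER}/api/v1/run_templates", params={"pipeline_id": pipeline_id}, headers=HEADERS
).json()
template_id = templates["items"][0]["id"]

# 3. Trigger a new run from the template, overriding a step parameter.
run = requests.post(
    f"{ZENML_SERVER}/api/v1/run_templates/{template_id}/runs",
    json={"steps": {"model_trainer": {"parameters": {"model_type": "rf"}}}},
    headers=HEADERS,
)
print(run.status_code, run.json())
```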
================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-cli.md === ### ZenML CLI: Create a Run Template **Feature Access**: This functionality is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Command to Create a Template You can create a run template using the ZenML CLI with the following command: ```bash zenml pipeline create-run-template --name= ``` - Replace `` with `run.my_pipeline` if your pipeline is named `my_pipeline` in `run.py`. **Note**: An active **remote stack** is required to execute this command. Alternatively, specify a stack using the `--stack` option. ================================================== === File: docs/book/how-to/trigger-pipelines/README.md === # Trigger a Pipeline (Run Templates) In ZenML, you can execute a pipeline using the pipeline function. Here's a concise example: ```python from zenml import step, pipeline @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: print(f"Trained model using {len(data['features'])} data points.") @pipeline def simple_ml_pipeline(): train_model(load_data()) if __name__ == "__main__": simple_ml_pipeline() ``` ## Run Templates Run Templates are pre-defined, parameterized configurations for ZenML pipelines, allowing easy execution from the ZenML dashboard or via the Client/REST API. They serve as customizable blueprints for pipeline runs. **Note:** This feature is exclusive to ZenML Pro users. Sign up [here](https://cloud.zenml.io) for access. ### Additional Resources - Use templates: [Python SDK](use-templates-python.md) - Use templates: [CLI](use-templates-cli.md) - Use templates: [Dashboard](use-templates-dashboard.md) - Use templates: [REST API](use-templates-rest-api.md) ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md === ### ZenML Dashboard: Create and Run a Template **Note**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Create a Template 1. Navigate to a pipeline run executed on a remote stack (requires a remote orchestrator, artifact store, and container registry). 2. Click `+ New Template`, enter a name, and click `Create`. #### Run a Template - To run a template: - Click `Run a Pipeline` on the main `Pipelines` page, or - Go to a specific template page and click `Run Template`. You will be directed to the `Run Details` page, where you can: - Upload a `.yaml` configurations file or - Modify the configuration using the editor. Upon running the template, a new run will execute on the same stack as the original. ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-python.md === ### ZenML Template Creation and Execution **Feature Access**: This functionality is available only in [ZenML Pro](https://zenml.io/pro). Sign up [here](https://cloud.zenml.io) for access. #### Create a Template To create a run template using the ZenML client, follow these methods: 1. **From an Existing Pipeline Run**: ```python from zenml.client import Client run = Client().get_pipeline_run() Client().create_run_template(name=, deployment_id=run.deployment_id) ``` - **Note**: The pipeline run must be executed on a remote stack (with a remote orchestrator, artifact store, and container registry). 2. 
**From Pipeline Definition**: ```python from zenml import pipeline @pipeline def my_pipeline(): ... template = my_pipeline.create_run_template(name=) ``` #### Run a Template To run a created template: ```python from zenml.client import Client template = Client().get_run_template() config = template.config_template # [OPTIONAL] Modify the config here Client().trigger_pipeline(template_id=template.id, run_configuration=config) ``` - Triggering the template executes a new run on the same stack as the original. #### Advanced Usage: Run a Template from Another Pipeline You can trigger one pipeline from another using the following structure: ```python import pandas as pd from zenml import pipeline, step from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml.artifacts.utils import load_artifact from zenml.client import Client from zenml.config.pipeline_run_configuration import PipelineRunConfiguration @step def trainer(data_artifact_id: str): df = load_artifact(data_artifact_id) @pipeline def training_pipeline(): trainer() @step def load_data() -> pd.DataFrame: ... @step def trigger_pipeline(df: UnmaterializedArtifact): run_config = PipelineRunConfiguration( steps={"trainer": {"parameters": {"data_artifact_id": df.id}}} ) Client().trigger_pipeline("training_pipeline", run_configuration=run_config) @pipeline def loads_data_and_triggers_training(): df = load_data() trigger_pipeline(df) # Triggers the training pipeline ``` For more details, refer to the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) documentation, as well as information on [Unmaterialized Artifacts](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md). ================================================== === File: docs/book/how-to/contribute-to-zenml/README.md === # Contribute to ZenML Thank you for considering contributing to ZenML! We welcome contributions such as new features, documentation improvements, integrations, or bug reports. ## How to Contribute Refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for best practices and conventions for contributing features, including custom integrations. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md === # Creating an External Integration and Contributing to ZenML ZenML aims to organize the MLOps landscape by providing numerous integrations with popular tools and allowing users to implement custom stack components. This guide outlines the steps to contribute your integration to ZenML. ### Step 1: Plan Your Integration Identify the categories relevant to your integration from the list available in the ZenML documentation. Note that an integration can belong to multiple categories (e.g., cloud integrations like AWS/GCP/Azure). ### Step 2: Create Stack Component Flavors Develop individual stack component flavors corresponding to the selected categories. Test them as custom flavors before packaging. 
For example, to register a custom orchestrator flavor: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` Ensure ZenML is initialized at the root of your repository for proper resolution. ### Step 3: Create an Integration Class Once your flavors are ready, package them into your integration: 1. **Clone the ZenML Repository**: Set up your local environment following the [contributing guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). 2. **Create the Integration Directory**: Structure your integration under `src/zenml/integrations//` with subdirectories for artifact stores and flavors. 3. **Define Integration Name**: In `zenml/integrations/constants.py`, add: ```python EXAMPLE_INTEGRATION = "" ``` 4. **Create Integration Class**: In `src/zenml/integrations//__init__.py`, define your integration class: ```python from zenml.integrations.constants import EXAMPLE_INTEGRATION from zenml.integrations.integration import Integration from zenml.stack import Flavor class ExampleIntegration(Integration): NAME = EXAMPLE_INTEGRATION REQUIREMENTS = [""] @classmethod def flavors(cls): from zenml.integrations. import return [] ExampleIntegration.check_installation() ``` 5. **Import the Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`. ### Step 4: Create a PR Submit a pull request to ZenML and await review from core maintainers. Thank you for your contribution! ================================================== === File: docs/book/how-to/control-logging/disable-rich-traceback.md === ### Disabling Rich Traceback Output in ZenML ZenML uses the `rich` library for enhanced traceback output, which aids in debugging pipelines. To disable this feature, set the following environment variable: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` This change will result in plain text traceback output. Note that setting this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. To disable rich tracebacks for remote runs, set the environment variable in the pipeline's environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_ENABLE_RICH_TRACEBACK": "false"}) # Add to the decorator @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Or configure pipeline options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` This ensures that both local and remote pipeline runs will display plain text tracebacks. ================================================== === File: docs/book/how-to/control-logging/view-logs-on-the-dasbhoard.md === # Viewing Logs on the Dashboard ZenML captures logs during step execution using a logging handler. Users can utilize the Python logging module or print statements, which ZenML will capture. ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") # Use the logging module. print("World.") # Use print statements. ``` Logs are stored in the artifact store of your stack and can be viewed on the dashboard only if the ZenML server has access to this store. Access conditions include: - **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. - **Deployed ZenML Server**: Logs from runs on a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. 
For configuration details, refer to the production guide on [remote artifact stores](../../user-guide/production-guide/remote-storage.md) and [service connectors](../../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md). If configured correctly, logs will appear on the dashboard. **Note**: To disable log storage for performance or storage reasons, follow [these instructions](./enable-or-disable-logs-storing.md). ================================================== === File: docs/book/how-to/control-logging/README.md === ### Configuring ZenML's Default Logging Behavior ZenML generates different types of logs across various environments: 1. **ZenML Server Logs**: Generated by the ZenML server, similar to any FastAPI server. 2. **Client or Runner Logs**: Produced during pipeline runs, capturing events before, after, and during pipeline execution. 3. **Execution Environment Logs**: Created at the orchestrator level during the execution of each pipeline step, typically utilizing Python's `logging` module. This section outlines how users can manage logging behavior in these environments. ================================================== === File: docs/book/how-to/control-logging/set-logging-verbosity.md === ### Setting Logging Verbosity in ZenML By default, ZenML sets logging verbosity to `INFO`. To change this, set the environment variable: ```bash export ZENML_LOGGING_VERBOSITY=INFO ``` Available levels are `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. Note that setting this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. For remote control, set `ZENML_LOGGING_VERBOSITY` in the pipeline run environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` ================================================== === File: docs/book/how-to/control-logging/enable-or-disable-logs-storing.md === # ZenML Logging Configuration ZenML captures logs during step execution using a logging handler. Users can utilize the Python logging module or print statements, which ZenML will store. ## Example Code ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") print("World.") ``` Logs are stored in the artifact store of your stack and can be displayed on the dashboard. **Note**: Logs will not be visible if not connected to a cloud artifact store with a service connector. For more details, refer to the [log viewing documentation](./view-logs-on-the-dasbhoard.md). ## Disabling Log Storage To prevent logs from being stored, you can: 1. Use the `enable_step_logs` parameter in the `@pipeline` or `@step` decorators: ```python from zenml import pipeline, step @step(enable_step_logs=False) def my_step() -> None: ... @pipeline(enable_step_logs=False) def my_pipeline(): ... ``` 2. 
Set the environmental variable `ZENML_DISABLE_STEP_LOGS_STORAGE` to `true` in the execution environment, which takes precedence over decorator parameters: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Or configure pipeline options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This configuration allows for flexible management of log storage in ZenML workflows. ================================================== === File: docs/book/how-to/control-logging/disable-colorful-logging.md === ### Disabling Colorful Logging in ZenML ZenML enables colorful logging by default for better readability. To disable this feature, set the following environment variable: ```bash ZENML_LOGGING_COLORS_DISABLED=true ``` Setting this variable in the client environment (e.g., local machine) will also disable colorful logging for remote pipeline runs. To disable it locally while keeping it enabled for remote runs, set the variable in the pipeline run's environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) # Add to the decorator @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Or configure pipeline options my_pipeline = my_pipeline.with_options( settings={"docker": docker_settings} ) ``` This approach allows for flexible logging configurations based on the execution environment. ================================================== === File: docs/book/how-to/control-logging/set-logging-format.md === ### Summary: Setting the Logging Format in ZenML To change the default logging format in ZenML, set the following environment variable: ```bash export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s' ``` The format must use `%`-style string formatting. For available attributes, refer to the [Python logging documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes). **Important Note:** Changing this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. To set the logging format for remote runs, configure it in the pipeline's environment as shown below: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_FORMAT": "%(asctime)s %(message)s"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure with options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This configuration ensures the logging format is applied both locally and remotely. ================================================== === File: docs/book/how-to/model-management-metrics/README.md === # Model Management and Metrics This section addresses model management and metric tracking in ZenML. Key points include: - **Model Management**: ZenML provides tools for versioning, storing, and deploying machine learning models. It supports different model formats and integrates with various storage backends. - **Metrics Tracking**: Users can track performance metrics throughout the model lifecycle. ZenML allows logging of metrics during training and evaluation, facilitating performance comparison and monitoring. 
- **Integration**: ZenML integrates with popular ML frameworks and tools, enabling seamless workflows for model training and evaluation. - **Version Control**: Models can be versioned to maintain a history of changes, ensuring reproducibility and traceability. - **Deployment**: ZenML supports deployment to various environments, allowing users to serve models in production. By utilizing these features, users can effectively manage their machine learning models and monitor their performance metrics throughout the development lifecycle. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/grouping-metadata.md === ### Grouping Metadata in the Dashboard To group key-value pairs in the ZenML dashboard, use a dictionary of dictionaries in the `metadata` parameter. This organizes metadata into cards for better visualization. **Example of Grouping Metadata:** ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="my_artifact_version", ) ``` In the ZenML dashboard, "model_metrics" and "data_details" will be displayed as separate cards, each containing their respective key-value pairs. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md === # Tracking and Comparing Metrics and Metadata with ZenML ## Overview ZenML provides the `log_metadata` function for logging and managing metrics and metadata across models, artifacts, steps, and runs through a unified interface. You can also configure automatic logging for related entities. ## Logging Metadata ### Basic Use Case To log metadata within a step: ```python from zenml import step, log_metadata @step def my_step() -> ...: log_metadata(metadata={"accuracy": 0.91}) ``` This logs the `accuracy` for the step and its associated pipeline run. ### Real-World Example Here’s a more detailed example of logging various metadata types in a machine learning pipeline: ```python from zenml import step, pipeline, log_metadata @step def process_engine_metrics() -> float: log_metadata({ "engine_temperature": 3650, # Kelvin "fuel_consumption_rate": 245, # kg/s "thrust_efficiency": 0.92, }) return 0.92 @step def analyze_flight_telemetry(efficiency: float) -> None: log_metadata({ "altitude": 220000, # meters "velocity": 7800, # m/s "fuel_remaining": 2150, # kg "mission_success_prob": 0.9985, }) @pipeline def telemetry_pipeline(): efficiency = process_engine_metrics() analyze_flight_telemetry(efficiency) ``` This data can be visualized in the ZenML Pro dashboard. ## Visualizing and Comparing Metadata (Pro) Once metadata is logged, use the Experiment Comparison tool in the ZenML Pro dashboard to analyze and compare metrics across runs. ### Comparison Views The tool provides: 1. **Table View**: Compare metadata with automatic change tracking. 2. **Parallel Coordinates Plot**: Visualize relationships between metrics. You can compare up to 20 pipeline runs and any numerical metadata (`float` or `int`). ### Additional Use Cases The `log_metadata` function supports various use cases by specifying the target entity (e.g., model, artifact, step, or run). 
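For instance, a minimal sketch of targeting different entities (the model-targeting parameter names assume a recent ZenML release and a model named `my_model`; check the SDK docs for your version):

```python
from zenml import log_metadata

# Attach metadata to a specific model version (hypothetical model name/version).
log_metadata(
    metadata={"evaluation": {"accuracy": 0.95}},
    model_name="my_model",
    model_version="my_model_version",
)

# Attach metadata to a finished pipeline run by its ID or name prefix.
log_metadata(
    metadata={"post_run_info": {"some_metric": 5.0}},
    run_id_name_or_prefix="run_id_name_or_prefix",
)
```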
For more details, refer to: - [Log metadata to a step](attach-metadata-to-a-step.md) - [Log metadata to a run](attach-metadata-to-a-run.md) - [Log metadata to an artifact](attach-metadata-to-an-artifact.md) - [Log metadata to a model](attach-metadata-to-a-model.md) **Note**: Older methods like `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata` are deprecated. Use `log_metadata` for future implementations. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-run.md === ### Attach Metadata to a Run in ZenML In ZenML, you can log metadata to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Run To log metadata from within a pipeline step, use `log_metadata`. The metadata key will follow the `step_name::metadata_key` pattern, allowing reuse of keys across different steps during execution. ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ ClassifierMixin, ArtifactConfig(name="sklearn_classifier", is_model_artifact=True) ]: """Train a model and log run-level metadata.""" classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... log_metadata( metadata={ "run_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } } ) return classifier ``` #### Manually Logging Metadata You can also log metadata to a specific pipeline run using its run ID, which is useful for post-execution metrics. ```python from zenml import log_metadata log_metadata( metadata={"post_run_info": {"some_metric": 5.0}}, run_id_name_or_prefix="run_id_name_or_prefix" ) ``` #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: ```python from zenml.client import Client client = Client() run = client.get_pipeline_run("run_id_name_or_prefix") print(run.run_metadata["metadata_key"]) ``` **Note:** When fetching metadata by key, the returned value reflects the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md === ### Fetching Metadata During Pipeline Composition To access pipeline configuration during composition, use the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. 
#### Example Code ```python from zenml import get_pipeline_context, pipeline @pipeline( extra={ "complex_parameter": [ ("sklearn.tree", "DecisionTreeClassifier"), ("sklearn.ensemble", "RandomForestClassifier"), ] } ) def my_pipeline(): context = get_pipeline_context() after = [] search_steps_prefix = "hp_tuning_search_" for i, model_search_configuration in enumerate(context.extra["complex_parameter"]): step_name = f"{search_steps_prefix}{i}" cross_validation( model_package=model_search_configuration[0], model_class=model_search_configuration[1], id=step_name ) after.append(step_name) select_best_model(search_steps_prefix=search_steps_prefix, after=after) ``` For more details on the attributes and methods of `PipelineContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md === ### Accessing Meta Information in Real-Time #### Fetch Metadata Within Steps To access information about the currently running pipeline or step, utilize the `zenml.get_step_context()` function to obtain the `StepContext`. **Example: Fetching Pipeline and Step Information** ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() pipeline_name = step_context.pipeline.name run_name = step_context.pipeline_run.name step_name = step_context.step_run.name ``` You can also determine where the outputs of the current step will be stored and which Materializer class will be used. **Example: Accessing Output Storage and Materializer** ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() uri = step_context.get_output_artifact_uri() # Output storage URI materializer = step_context.get_output_materializer() # Materializer for output ``` For more details on the attributes and methods of `StepContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-an-artifact.md === ### Summary: Attaching Metadata to Artifacts in ZenML In ZenML, metadata enhances artifacts by providing context such as size, structure, and performance metrics. This metadata is viewable in the ZenML dashboard, aiding in artifact inspection and comparison across pipeline runs. #### Logging Metadata for Artifacts Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. Metadata can be any JSON-serializable value, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. **Example of Logging Metadata:** ```python import pandas as pd from zenml import step, log_metadata from zenml.metadata.metadata_types import StorageSize @step def process_data_step(dataframe: pd.DataFrame) -> pd.DataFrame: processed_dataframe = ... log_metadata( metadata={ "row_count": len(processed_dataframe), "columns": list(processed_dataframe.columns), "storage_size": StorageSize(processed_dataframe.memory_usage().sum()) }, infer_artifact=True, ) return processed_dataframe ``` #### Selecting the Artifact for Metadata Logging - **Using `infer_artifact`**: Automatically infers output artifacts from the step context. 
If multiple outputs exist, specify an `artifact_name`. - **Name and Version**: Provide both to identify a specific artifact version. - **Artifact Version ID**: Directly fetches and attaches metadata to the specified version. #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: ```python from zenml.client import Client client = Client() artifact = client.get_artifact_version("my_artifact", "my_version") print(artifact.run_metadata["metadata_key"]) ``` *Note: Fetching metadata with a specific key returns the latest entry.* #### Grouping Metadata in the Dashboard To organize metadata into logical sections in the dashboard, pass a dictionary of dictionaries to the `metadata` parameter: ```python log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="version", ) ``` This will display `model_metrics` and `data_details` as separate cards in the ZenML dashboard. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-step.md === ### Summary: Attaching Metadata to a Step in ZenML In ZenML, metadata can be logged for a specific step using the `log_metadata` function, which accepts a dictionary of key-value pairs. This metadata can include any JSON-serializable values, such as custom classes like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Step When `log_metadata` is invoked within a step, it attaches the metadata to the currently executing step and its associated pipeline run, making it suitable for logging metrics available during execution. **Example:** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... log_metadata(metadata={"evaluation_metrics": {"accuracy": accuracy, "precision": precision, "recall": recall}}) return classifier ``` **Note:** If a pipeline execution is cached, the cached step run will copy the original step's metadata, excluding any manually generated entries post-execution. #### Manually Logging Metadata After Execution Metadata can also be logged after a step's execution using identifiers for the pipeline, step, and run. **Example:** ```python from zenml import log_metadata log_metadata(metadata={"additional_info": {"a_number": 3}}, step_name="step_name", run_id_name_or_prefix="run_id_name_or_prefix") # or log_metadata(metadata={"additional_info": {"a_number": 3}}, step_id="step_id") ``` #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: **Example:** ```python from zenml.client import Client client = Client() step = client.get_pipeline_run("pipeline_id").steps["step_name"] print(step.run_metadata["metadata_key"]) ``` **Note:** Fetching metadata by key returns the latest entry. 
================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-model.md === ### Summary: Attaching Metadata to a Model in ZenML ZenML enables logging of metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, and customer-specific details, aiding in the management and interpretation of model performance across versions. #### Logging Metadata To log metadata, use the `log_metadata` function, which allows attaching key-value pairs, including metrics and JSON-serializable values (e.g., `Uri`, `Path`, `StorageSize`). **Example: Logging Metadata for a Model** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... # Assume these are calculated log_metadata( metadata={ "evaluation_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } }, infer_model=True, ) return classifier ``` In this example, metadata is associated with the model rather than the classifier artifact, allowing for aggregation of various pipeline steps. #### Selecting Models with `log_metadata` ZenML offers flexible options for attaching metadata to model versions: 1. **Using `infer_model`**: Automatically infers the model from the step context. 2. **Model Name and Version**: Attach metadata to a specific model version by providing its name and version. 3. **Model Version ID**: Directly attach metadata using a specific model version ID. #### Fetching Logged Metadata Once metadata is attached, it can be retrieved using the ZenML Client: ```python from zenml.client import Client client = Client() model = client.get_model_version("my_model", "my_version") print(model.run_metadata["metadata_key"]) ``` **Note**: When fetching metadata by key, the returned value reflects the latest entry. This concise overview captures the essential technical details for attaching and retrieving metadata in ZenML. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/logging-metadata.md === ### Summary of Metadata Tracking in ZenML ZenML supports special metadata types to capture specific information, including `Uri`, `Path`, `DType`, and `StorageSize`. Below is an example of how to implement these types: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path log_metadata( metadata={ "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), "preprocessing_script": Path("/scripts/preprocess.py"), "column_types": { "age": DType("int"), "income": DType("float"), "score": DType("int") }, "processed_data_size": StorageSize(2500000) }, ) ``` **Key Points:** - **Uri**: Indicates the source URI of the dataset. - **Path**: Specifies the filesystem path to the preprocessing script. - **DType**: Describes the data types of specific columns (e.g., `int`, `float`). - **StorageSize**: Indicates the size of processed data in bytes. These metadata types standardize logging, ensuring consistency and interpretability. 
================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-artifacts-from-model.md === # Summary of Loading Artifacts from a Model This documentation outlines how to load artifacts from a model in a two-pipeline project, where the first pipeline handles training and the second performs batch inference using the trained model artifacts. ## Key Points - **Model Context**: Use `get_pipeline_context().model` to access the model context during pipeline execution. This context is evaluated at runtime, ensuring the correct model version is used. - **Artifact Loading**: Artifacts, such as the trained model, are loaded using `model.get_model_artifact("trained_model")`. This retrieval occurs during the step execution, allowing for delayed materialization. - **Alternative Method**: An alternative approach utilizes the `Client` class to directly fetch the model version: ```python from zenml.client import Client @pipeline def do_predictions(): model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) inference_data = load_data() predict( model=model.get_model_artifact("trained_model"), data=inference_data, ) ``` - **Execution Timing**: The actual artifact evaluation occurs only when the step is executed, ensuring that the most current model version is utilized. This concise overview provides the essential details for understanding how to load model artifacts in a ZenML pipeline setup. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/model-versions.md === # Model Versions Overview Model versions allow tracking of different iterations in the machine learning training process, providing dashboard and API functionality for the ML lifecycle. You can associate model versions with stages and promote them to production. Versions are created automatically during training, but can also be explicitly named via the `version` argument in the `Model` object. ## Explicitly Naming Model Versions To explicitly name a model version: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="1.0.5") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` If the model version exists, it is automatically associated with the pipeline. ## Using Name Templates for Model Versions For continuous projects, use templated naming for unique and semantic model versioning: ```python from zenml import Model, step, pipeline model = Model(name="{team}_my_model", version="experiment_with_phi_3_{date}_{time}") @step(model=model) def llm_trainer(...) -> ...: ... @pipeline(model=model, substitutions={"team": "Team_A"}) def training_pipeline(...): # training happens here ``` This will generate names like `experiment_with_phi_3_2024_08_30_12_42_53`. Substitutions can be set in the `@pipeline` or `@step` decorators. ### Standard Substitutions - `{date}`: Current date (e.g., `2024_11_27`) - `{time}`: Current UTC time (e.g., `11_07_09_326492`) ## Fetching Model Versions by Stage Assign stages (e.g., `production`, `staging`) to model versions for easier retrieval: ```shell zenml model version update MODEL_NAME --stage=STAGE ``` You can fetch a model version by its stage: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... 
@pipeline(model=model) def training_pipeline(...): # training happens here ``` ## Autonumbering of Versions ZenML automatically numbers model versions. If no version is specified, it generates one: ```python from zenml import Model, step model = Model(name="my_model", version="even_better_version") @step(model=model) def svc_trainer(...) -> ...: ... ``` This creates a new version, incrementing the sequence. For example: ```python from zenml import Model earlier_version = Model(name="my_model", version="really_good_version").number # == 5 updated_version = Model(name="my_model", version="even_better_version").number # == 6 ``` This ensures proper version tracking throughout the model's lifecycle. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/README.md === # Use the Model Control Plane A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and essential business data, effectively representing your ML products' business logic. It can be viewed as a "project" or "workspace." **Key Points:** - The technical model, which includes the model file(s) containing weights and parameters, is a primary artifact associated with a ZenML Model. Other relevant artifacts include training data and production predictions. - Models are first-class citizens in ZenML, accessible through the ZenML API, client, and the [ZenML Pro](https://zenml.io/pro) dashboard. - Models capture lineage information and support version staging, allowing you to manage predictions based on specific stages (e.g., `Production`) and apply business rules for model promotion. - The Model Control Plane provides a unified interface for managing models, integrating pipeline logic, artifacts, and business data with the technical model. For a complete example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/register-a-model.md === # Model Registration in ZenML Models can be registered in ZenML using various methods: CLI, Python SDK, or implicitly during a pipeline run. ZenML Pro users can also register models via a dashboard interface. ## Explicit CLI Registration To register a model using the CLI, use the following command: ```bash zenml model register iris_logistic_regression --license=... --description=... ``` Run `zenml model register --help` for available options. You can also add tags using the `--tag` option. ## Explicit Dashboard Registration ZenML Pro users can register models directly from the [cloud dashboard](https://zenml.io/pro). ## Explicit Python SDK Registration To register a model using the Python SDK: ```python from zenml import Model from zenml.client import Client Client().create_model( name="iris_logistic_regression", license="Copyright (c) ZenML GmbH 2023", description="Logistic regression model trained on the Iris dataset.", tags=["regression", "sklearn", "iris"], ) ``` ## Implicit Registration by ZenML Models can also be registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator. Here’s an example of a training pipeline: ```python from zenml import pipeline from zenml import Model @pipeline( enable_cache=False, model=Model( name="demo", license="Apache", description="Showcase Model Control Plane.", ), ) def train_and_promote_model(): ... 
``` Running this pipeline creates a new model version and links it to the artifacts. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/linking-model-binaries-data-to-models.md === # Linking Model Binaries/Data to Models in ZenML ZenML allows linking artifacts generated during pipeline runs to models, enabling lineage tracking and transparency for training, evaluation, and inference processes. ## Configuring the Model at a Pipeline Level You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorator: ```python from zenml import Model, pipeline model = Model(name="my_model", version="1.0.0") @pipeline(model=model) def my_pipeline(): ... ``` This setup automatically links all artifacts from the pipeline run to the specified model. ## Saving Intermediate Artifacts To save intermediate results, use the `save_artifact` utility function. If the step has a Model context configured, it will automatically link to the model: ```python from zenml import step, Model from zenml.artifacts.utils import save_artifact import pandas as pd from typing_extensions import Annotated from zenml.artifacts.artifact_config import ArtifactConfig @step(model=Model(name="MyModel", version="1.2.42")) def trainer(trn_dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig("trained_model")]: for epoch in epochs: checkpoint = model.train(epoch) save_artifact(data=checkpoint, name="training_checkpoint", version=f"1.2.42_{epoch}") return model ``` ## Linking Artifacts Explicitly To link an artifact to a model outside of a step context, use the `link_artifact_to_model` function: ```python from zenml import step, Model, link_artifact_to_model, save_artifact from zenml.client import Client @step def f_() -> None: new_artifact = save_artifact(data="Hello, World!", name="manual_artifact") link_artifact_to_model(artifact_version_id=new_artifact.id, model=Model(name="MyModel", version="0.0.42")) existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_artifact") link_artifact_to_model(artifact_version_id=existing_artifact.id, model=Model(name="MyModel", version="0.2.42")) ``` This allows for flexible linking of artifacts to models, enhancing the management of model artifacts in ZenML. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-artifacts-via-a-model.md === ### Summary: Structuring an MLOps Project An MLOps project typically consists of multiple pipelines, each serving distinct purposes: - **Feature Engineering Pipeline**: Prepares raw data for training. - **Training Pipeline**: Trains models using data from the feature engineering pipeline. - **Inference Pipeline**: Runs predictions using the trained model, often incorporating preprocessing from the training pipeline. - **Deployment Pipeline**: Deploys the trained model to a production environment. The structure of these pipelines can vary based on project requirements, and information (artifacts, models, metadata) often needs to be shared between them. #### Common Patterns for Artifact Exchange 1. **Artifact Exchange via `Client`**: - Use the ZenML Client to facilitate data transfer between pipelines. 
- Example code for feature engineering and training pipelines: ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): train_data, test_data = prepare_data() @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(model, sklearn_classifier) ``` Note: Artifacts are referenced, not materialized in memory during the pipeline function. 2. **Artifact Exchange via `Model`**: - Use ZenML Model as a reference point for artifacts. - Example code for a training pipeline (`train_and_promote`) and an inference pipeline (`do_predictions`): ```python from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> pd.Series: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` Alternatively, resolve the artifact at the pipeline level: ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> pd.Series: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model.get_model_artifact("trained_model") inference_data = load_data() predict(model=model, data=inference_data) if __name__ == "__main__": do_predictions() ``` Both artifact exchange approaches are valid; the choice depends on user preference. For further details on project repository structure, refer to the best practices section. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/associate-a-pipeline-with-a-model.md === # Associate a Pipeline with a Model To associate a pipeline with a model in ZenML, use the following code structure: ```python from zenml import pipeline from zenml import Model from zenml.enums import ModelStages @pipeline( model=Model( name="ClassificationModel", # Unique model name tags=["MVP", "Tabular"], # Tags for filtering version=ModelStages.LATEST # Specify model version or stage ) ) def my_pipeline(): ... ``` This code associates the pipeline with the specified model. If the model exists, a new version is created. To attach the pipeline to an existing model version, specify it accordingly. Model configuration can also be moved to a configuration file: ```yaml model: name: text_classifier description: A breast cancer classifier tags: ["classifier", "sgd"] ``` This allows for better organization and management of model settings. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/delete-a-model.md === ### Delete a Model Deleting a model or a specific version removes all links between the Model entity and its artifacts and pipeline runs, along with all associated metadata. 
#### Deleting All Versions of a Model **CLI:** ```shell zenml model delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_model() ``` #### Delete a Specific Version of a Model **CLI:** ```shell zenml model version delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_model_version() ``` ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/promote-a-model.md === # Model Promotion in ZenML ## Stages Model versions in ZenML can progress through various lifecycle stages, which serve as metadata to indicate their state. The available stages are: - **staging**: Prepared for production. - **production**: Actively running in production. - **latest**: Represents the most recent version (not a promotion target). - **archived**: No longer relevant, typically after moving out of other stages. ## Promotion Methods ### CLI Promote a model version using the ZenML CLI: ```bash zenml model version update iris_logistic_regression --stage=... ``` ### Cloud Dashboard Promotion via the ZenML Pro dashboard is forthcoming. ### Python SDK The most common method for promoting models: ```python from zenml import Model from zenml.enums import ModelStages MODEL_NAME = "iris_logistic_regression" model = Model(name=MODEL_NAME, version="1.2.3") model.set_stage(stage=ModelStages.PRODUCTION) latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST) latest_model.set_stage(stage=ModelStages.STAGING) ``` In a pipeline context, retrieve the model from the step context: ```python from zenml import get_step_context, step, pipeline from zenml.enums import ModelStages @step def promote_to_staging(): model = get_step_context().model model.set_stage(ModelStages.STAGING, force=True) @pipeline(...) def train_and_promote_model(): ... promote_to_staging(after=["train_and_evaluate"]) ``` ## Fetching Model Versions by Stage Load the appropriate model version by specifying the `version`: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` This configuration ensures the correct model version is used across steps and pipelines. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-a-model-in-code.md === # Summary of ZenML Model Loading Documentation ## Loading a Model in Code ### 1. Load the Active Model in a Pipeline You can load the active model to access its metadata and artifacts. ```python from zenml import step, pipeline, get_step_context, Model @pipeline(model=Model(name="my_model")) def my_pipeline(): ... @step def my_step(): mv = get_step_context().model # Get model from active step context print(mv.run_metadata["metadata_key"].value) # Get metadata output = mv.get_artifact("my_dataset", "my_version") # Fetch artifact output.run_metadata["accuracy"].value ``` ### 2. Load Any Model via the Client You can also load models using the `Client` class. 
```python from zenml import step from zenml.client import Client from zenml.enums import ModelStages @step def model_evaluator_step(): try: staging_zenml_model = Client().get_model_version( model_name_or_id="", model_version_name_or_number_or_id=ModelStages.STAGING, ) except KeyError: staging_zenml_model = None ``` This documentation outlines two methods for loading ZenML models: through the active model in a pipeline and via the Client. Each method provides access to model metadata and artifacts. ================================================== === File: docs/book/how-to/advanced-topics/README.md === # Advanced Topics in ZenML This section discusses advanced features and configurations in ZenML, focusing on enhancing the functionality and customization of the platform. ## Key Features 1. **Custom Components**: Users can create and integrate custom components into their pipelines, allowing for tailored data processing and model training. 2. **Pipeline Configuration**: Advanced configurations enable users to optimize pipeline execution, including parallel execution, retries, and timeouts. 3. **Artifact Management**: ZenML supports versioning and tracking of artifacts, ensuring reproducibility and traceability of experiments. 4. **Integrations**: The platform offers integrations with various tools and services (e.g., cloud providers, ML frameworks) to streamline workflows. 5. **Secrets Management**: Securely manage sensitive information (API keys, credentials) using built-in secrets management features. ## Example Code Snippet ```python from zenml import pipeline, step @step def data_preprocessing(): # Data preprocessing logic pass @step def model_training(data): # Model training logic pass @pipeline def my_pipeline(): data = data_preprocessing() model_training(data) # Run the pipeline my_pipeline() ``` ## Important Considerations - **Performance Tuning**: Adjust configurations for optimal performance based on workload and resource availability. - **Monitoring and Logging**: Implement monitoring and logging to track pipeline performance and troubleshoot issues. - **Documentation and Community**: Leverage official documentation and community resources for support and best practices. This summary encapsulates the advanced capabilities of ZenML, focusing on customization, configuration, and integration to enhance machine learning workflows. ================================================== === File: docs/book/how-to/data-artifact-management/README.md === # Data and Artifact Management in ZenML This section addresses the management of data and artifacts within ZenML, focusing on key processes and functionalities. ## Key Concepts - **Data Management**: Involves the organization, storage, and retrieval of datasets used in machine learning workflows. - **Artifact Management**: Refers to the handling of outputs generated during the ML pipeline, such as models, metrics, and visualizations. ## Core Components 1. **Data Sources**: ZenML supports various data sources for ingestion, including databases, cloud storage, and local files. 2. **Artifact Storage**: Artifacts are stored in a centralized repository, allowing easy access and versioning. Supported storage backends include: - Local file systems - Cloud storage (e.g., AWS S3, Google Cloud Storage) 3. **Versioning**: Both data and artifacts can be versioned to maintain a history of changes and facilitate reproducibility.
## Example Code ```python from zenml import pipeline from zenml.steps import step @step def load_data(): # Load data from a specified source pass @step def process_data(data): # Process the loaded data pass @pipeline def data_pipeline(): data = load_data() processed_data = process_data(data) ``` ## Best Practices - Regularly back up data and artifacts. - Use version control for datasets and models to ensure reproducibility. - Implement clear naming conventions for easy identification of artifacts. This overview provides a foundational understanding of data and artifact management in ZenML, emphasizing its importance in maintaining efficient ML workflows. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/types-of-visualizations.md === ### Types of Visualizations in ZenML ZenML automatically saves and displays visualizations for various data types in the ZenML dashboard. These visualizations can also be viewed in Jupyter notebooks using the `artifact.visualize()` method. #### Examples of Default Visualizations: - **Pandas DataFrame**: Statistical representation saved as a PNG image. - **Drift Detection Reports**: Generated by tools like Evidently, Great Expectations, and whylogs. - **Hugging Face Datasets**: Displayed as an HTML iframe. #### Visualization Methods: - **Dashboard**: Access visualizations directly in the ZenML dashboard. - **Jupyter Notebooks**: Use the method: ```python artifact.visualize() ``` These features enhance data analysis and monitoring within ZenML projects. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/README.md === ### ZenML Visualization Configuration **Overview**: ZenML allows for easy visualization of data and artifacts within its dashboard. **Key Features**: - **Artifact Visualizations**: Users can associate visualizations directly with data artifacts, enhancing data interpretation and insights. **Example Visualization**: ![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) This setup facilitates a more intuitive understanding of data relationships and trends within the ZenML framework. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/creating-custom-visualizations.md === # Creating Custom Visualizations in ZenML ZenML allows you to associate custom visualizations with artifacts using supported types: - **HTML:** Embedded HTML visualizations (e.g., data validation reports) - **Image:** Visualizations of image data (e.g., Pillow images) - **CSV:** Tables (e.g., pandas DataFrame `.describe()`) - **Markdown:** Markdown strings or pages - **JSON:** JSON strings or objects ## Methods to Add Custom Visualizations 1. **Special Return Types:** Cast HTML, Markdown, CSV, or JSON data to specific types in your step. 2. **Custom Materializers:** Define visualization logic for all artifacts of a certain data type. 3. **Custom Return Type Class:** Create a custom class with a corresponding materializer to return from steps. 
### Visualization via Special Return Types You can return visualizations by casting to specific types: - `zenml.types.HTMLString` - `zenml.types.MarkdownString` - `zenml.types.CSVString` - `zenml.types.JSONString` **Example: Returning CSV Visualization** ```python from zenml.types import CSVString @step def my_step() -> CSVString: return CSVString("a,b,c\n1,2,3") ``` **Example: Returning Matplotlib Visualization as HTML** ```python import matplotlib.pyplot as plt import base64 import io from zenml.types import HTMLString from zenml import step, pipeline @step def create_matplotlib_visualization() -> HTMLString: fig, ax = plt.subplots() ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) ax.set_title('Sample Plot') buf = io.BytesIO() fig.savefig(buf, format='png', bbox_inches='tight', dpi=300) plt.close(fig) image_base64 = base64.b64encode(buf.getvalue()).decode('utf-8') html = f'<img src="data:image/png;base64,{image_base64}" alt="Sample Plot">
' return HTMLString(html) @pipeline def visualization_pipeline(): create_matplotlib_visualization() ``` ## Visualization via Materializers To visualize all artifacts of a certain type, override the `save_visualizations()` method in a custom materializer. ### Example: Matplotlib Figure Visualization 1. **Custom Class** ```python from pydantic import BaseModel class MatplotlibVisualization(BaseModel): figure: Any ``` 2. **Materializer** ```python class MatplotlibMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MatplotlibVisualization,) def save_visualizations(self, data: MatplotlibVisualization) -> Dict[str, VisualizationType]: visualization_path = os.path.join(self.uri, "visualization.png") with fileio.open(visualization_path, 'wb') as f: data.figure.savefig(f, format='png', bbox_inches='tight') return {visualization_path: VisualizationType.IMAGE} ``` 3. **Step** ```python @step def create_matplotlib_visualization() -> MatplotlibVisualization: fig, ax = plt.subplots() ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) ax.set_title('Sample Plot') return MatplotlibVisualization(figure=fig) ``` ### Pipeline Execution When you run the pipeline, ZenML: 1. Creates and returns a `MatplotlibVisualization`. 2. Calls `save_visualizations()` in the `MatplotlibMaterializer`. 3. Saves the figure as a PNG in the artifact store. 4. Displays the PNG in the dashboard. For more examples, refer to the Hugging Face datasets materializer for dataset visualizations. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/disabling-visualizations.md === ### Disabling Visualizations To disable artifact visualization, set `enable_artifact_visualization` at the pipeline or step level: ```python @step(enable_artifact_visualization=False) def my_step(): ... @pipeline(enable_artifact_visualization=False) def my_pipeline(): ... ``` This configuration prevents visualizations from being generated for both steps and pipelines. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/visualizations-in-dashboard.md === ### Displaying Visualizations in the ZenML Dashboard To display visualizations on the ZenML dashboard, the following steps are necessary: #### 1. Configuring a Service Connector - Visualizations are stored with artifacts in the [artifact store](../../../component-guide/artifact-stores/artifact-stores.md). - Users must configure a [service connector](../../infrastructure-deployment/auth-management/README.md) to allow the ZenML server to access the artifact store. - For AWS S3 specifics, refer to the [AWS S3 artifact store documentation](../../../component-guide/artifact-stores/s3.md). > **Note:** When using the default/local artifact store with a deployed ZenML, the server cannot access local files, resulting in visualizations not being displayed. Use a service connector with a remote artifact store to view visualizations. #### 2. Configuring Artifact Stores - If visualizations from a pipeline run do not appear, check if the ZenML server has the necessary dependencies or permissions for the artifact store. - For further details, consult the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores). 
================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/README.md === ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md === ### Skip Materialization of Artifacts **Overview** In ZenML, a pipeline is data-centric, where each step reads and writes to an artifact store. **Materializers** manage how artifacts are serialized, stored, and retrieved. However, there may be cases where you want to **skip materialization** and use a reference to an artifact instead. **Warning** Skipping materialization can have unintended consequences for downstream tasks that depend on materialized artifacts. Use this feature cautiously. ### Skipping Materialization To skip materialization, use the `UnmaterializedArtifact` class, which allows you to access the unique storage path of an artifact. You can specify `UnmaterializedArtifact` as the type in your step function. **Example Code:** ```python from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import step @step def my_step(my_artifact: UnmaterializedArtifact): pass ``` ### Code Example The following pipeline demonstrates the use of unmaterialized artifacts. Steps `s1` and `s2` produce identical artifacts, but `s3` consumes materialized artifacts while `s4` consumes unmaterialized artifacts. **Pipeline Structure:** ``` s1 -> s3 s2 -> s4 ``` **Example Code:** ```python from typing_extensions import Annotated from typing import Dict, List, Tuple from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import pipeline, step @step def step_1() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_2() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_3(dict_: Dict, list_: List) -> None: assert isinstance(dict_, dict) assert isinstance(list_, list) @step def step_4(dict_: UnmaterializedArtifact, list_: UnmaterializedArtifact) -> None: print(dict_.uri) print(list_.uri) @pipeline def example_pipeline(): step_3(*step_1()) step_4(*step_2()) example_pipeline() ``` For additional use cases of `UnmaterializedArtifact`, refer to the documentation on triggering pipelines from another pipeline. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md === ### Summary of ZenML Artifact Registration Documentation This documentation explains how to register external data as ZenML artifacts for future use, focusing on folders and files, as well as Pytorch Lightning training checkpoints. #### Register Existing Data as ZenML Artifacts 1. **Register Existing Folder**: - You can register a folder containing data as a ZenML artifact.
- Example code to create a folder and register it: ```python import os from uuid import uuid4 from pathlib import Path from zenml.client import Client from zenml import register_artifact prefix = Client().active_stack.artifact_store.path folder_path = os.path.join(prefix, f"my_test_folder_{uuid4()}") os.mkdir(folder_path) with open(os.path.join(folder_path, "test_file.txt"), "w") as f: f.write("test") register_artifact(folder_path, name="my_folder_artifact") loaded_folder = Client().get_artifact_version("my_folder_artifact").load() assert isinstance(loaded_folder, Path) and os.path.isdir(loaded_folder) ``` 2. **Register Existing File**: - Similar to folders, you can register individual files. - Example code: ```python import os from uuid import uuid4 from pathlib import Path from zenml.client import Client from zenml import register_artifact prefix = Client().active_stack.artifact_store.path file_path = os.path.join(prefix, f"my_test_file_{uuid4()}.txt") with open(file_path, "w") as f: f.write("test") register_artifact(file_path, name="my_file_artifact") loaded_file = Client().get_artifact_version("my_file_artifact").load() assert isinstance(loaded_file, Path) and not os.path.isdir(loaded_file) ``` #### Register Checkpoints of a Pytorch Lightning Training Run - You can register all checkpoints from a Pytorch Lightning training run as ZenML artifacts. - Example code for registering checkpoints: ```python from zenml.client import Client from zenml import register_artifact from pytorch_lightning import Trainer from pytorch_lightning.callbacks import ModelCheckpoint from uuid import uuid4 prefix = Client().active_stack.artifact_store.path default_root_dir = os.path.join(prefix, uuid4().hex) trainer = Trainer( default_root_dir=default_root_dir, callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1)] ) trainer.fit(model) # Assuming 'model' is defined register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` #### Custom ModelCheckpoint for ZenML - Extend the `ModelCheckpoint` to register each checkpoint as a separate artifact version: ```python from zenml import get_step_context from zenml.exceptions import StepContextError from pytorch_lightning.callbacks import ModelCheckpoint class ZenMLModelCheckpoint(ModelCheckpoint): def __init__(self, artifact_name: str, *args, **kwargs): zenml_model = get_step_context().model self.artifact_name = artifact_name super().__init__(*args, **kwargs) def on_train_epoch_end(self, trainer, pl_module): super().on_train_epoch_end(trainer, pl_module) register_artifact(self.dirpath, self.artifact_name) ``` #### Example Pipeline with Pytorch Lightning A complete example of a pipeline that trains a Pytorch Lightning model and registers checkpoints: ```python from zenml import step, pipeline from pytorch_lightning import Trainer, LightningModule @step def get_data(): # Load data pass @step def get_model(): # Define model pass @step def train_model(model: LightningModule): # Train model and register checkpoints pass @pipeline def train_pipeline(): data = get_data() model = get_model() train_model(model) if __name__ == "__main__": train_pipeline() ``` This documentation provides a comprehensive guide on registering external data and managing checkpoints in ZenML, ensuring that artifacts can be utilized effectively in future workflows. 
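Since the checkpoints are registered under a fixed artifact name, a later run or script can load them back the same way as the registered folder above. A small sketch, assuming the `all_my_model_checkpoints` artifact from the earlier example:

```python
from pathlib import Path

from zenml.client import Client

# Resolves to a local Path, just like the registered folder artifact above.
checkpoints_dir = Client().get_artifact_version("all_my_model_checkpoints").load()
assert isinstance(checkpoints_dir, Path)

# Lightning's ModelCheckpoint writes .ckpt files into this directory tree.
for ckpt in sorted(checkpoints_dir.rglob("*.ckpt")):
    print(ckpt)
```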
================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md === # Scaling Strategies for Big Data in ZenML ## Overview This documentation outlines strategies for scaling ZenML pipelines to manage large datasets, categorized by size thresholds: small, medium, and large. ### Dataset Size Thresholds 1. **Small datasets (up to a few GB)**: Handled in-memory via pandas. 2. **Medium datasets (up to tens of GB)**: Require chunking or out-of-core processing. 3. **Large datasets (hundreds of GB or more)**: Necessitate distributed processing frameworks. ## Strategies for Small Datasets 1. **Efficient Data Formats**: Use formats like Parquet instead of CSV. ```python import pyarrow.parquet as pq class ParquetDataset(Dataset): def __init__(self, data_path: str): self.data_path = data_path def read_data(self) -> pd.DataFrame: return pq.read_table(self.data_path).to_pandas() def write_data(self, df: pd.DataFrame): table = pa.Table.from_pandas(df) pq.write_table(table, self.data_path) ``` 2. **Data Sampling**: Implement sampling methods. ```python class SampleableDataset(Dataset): def sample_data(self, fraction: float = 0.1) -> pd.DataFrame: df = self.read_data() return df.sample(frac=fraction) @step def analyze_sample(dataset: SampleableDataset) -> Dict[str, float]: sample = dataset.sample_data() return {"mean": sample["value"].mean(), "std": sample["value"].std()} ``` 3. **Optimize Pandas Operations**: Use efficient operations to minimize memory usage. ```python @step def optimize_processing(df: pd.DataFrame) -> pd.DataFrame: df['new_column'] = df['column1'] + df['column2'] df['mean_normalized'] = df['value'] - np.mean(df['value']) return df ``` ## Strategies for Medium Datasets ### Chunking for CSV Datasets Implement chunking in your Dataset classes. ```python class ChunkedCSVDataset(Dataset): def __init__(self, data_path: str, chunk_size: int = 10000): self.data_path = data_path self.chunk_size = chunk_size def read_data(self): for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size): yield chunk @step def process_chunked_csv(dataset: ChunkedCSVDataset) -> pd.DataFrame: processed_chunks = [process_chunk(chunk) for chunk in dataset.read_data()] return pd.concat(processed_chunks) def process_chunk(chunk: pd.DataFrame) -> pd.DataFrame: return chunk ``` ### Leveraging Data Warehouses Use data warehouses like Google BigQuery for distributed processing. ```python @step def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset: client = bigquery.Client() query = f""" SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1 """ result_table_id = f"{dataset.project}.{dataset.dataset}.processed_data" job_config = bigquery.QueryJobConfig(destination=result_table_id) query_job = client.query(query, job_config=job_config) query_job.result() return BigQueryDataset(table_id=result_table_id) ``` ## Strategies for Very Large Datasets ### Using Distributed Computing Frameworks #### Apache Spark Integrate Spark into ZenML pipelines. ```python from pyspark.sql import SparkSession @step def process_with_spark(input_data: str) -> None: spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate() df = spark.read.format("csv").option("header", "true").load(input_data) result = df.groupBy("column1").agg({"column2": "mean"}) result.write.csv("output_path", header=True, mode="overwrite") spark.stop() ``` #### Ray Use Ray for distributed processing. 
```python import ray @step def process_with_ray(input_data: str) -> None: ray.init() @ray.remote def process_partition(partition): return processed_partition data = load_data(input_data) partitions = split_data(data) results = ray.get([process_partition.remote(part) for part in partitions]) combined_results = combine_results(results) save_results(combined_results, "output_path") ray.shutdown() ``` #### Dask Integrate Dask for parallel computing. ```python import dask.dataframe as dd @step def create_dask_dataframe(): df = dd.from_pandas(pd.DataFrame({'A': range(1000), 'B': range(1000, 2000)}), npartitions=4) return df @step def process_dask_dataframe(df: dd.DataFrame) -> dd.DataFrame: return df.map_partitions(lambda x: x ** 2) @step def compute_result(df: dd.DataFrame) -> pd.DataFrame: return df.compute() ``` #### Numba Use Numba for JIT compilation to speed up numerical operations. ```python from numba import jit @jit(nopython=True) def numba_function(x): return x * x + 2 * x - 1 @step def load_data() -> np.ndarray: return np.arange(1000000) @step def apply_numba_function(data: np.ndarray) -> np.ndarray: return numba_function(data) ``` ## Important Considerations 1. **Environment Setup**: Ensure necessary frameworks are installed. 2. **Resource Management**: Coordinate resource allocation with ZenML orchestration. 3. **Error Handling**: Implement cleanup for Spark and Ray sessions. 4. **Data I/O**: Use intermediate storage for large datasets. 5. **Scaling**: Ensure infrastructure supports the scale of computation. ## Choosing the Right Scaling Strategy Consider dataset size, processing complexity, infrastructure, update frequency, and team expertise when selecting a strategy. ZenML's architecture allows for evolving data processing strategies as projects grow, ensuring efficient management of machine learning workflows. For more on custom Dataset classes, refer to [custom dataset classes](datasets.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md === ### Summary: Structuring an MLOps Project An MLOps project typically consists of multiple pipelines, including: - **Feature Engineering Pipeline**: Prepares raw data for training. - **Training Pipeline**: Trains models using data from the feature engineering pipeline. - **Inference Pipeline**: Runs batch predictions on the trained model, often using pre-processed data. - **Deployment Pipeline**: Deploys the trained model to a production endpoint. The structure of these pipelines can vary based on project requirements, and sharing artifacts (models, datasets, metadata) between them is essential. #### Artifact Exchange Patterns **Pattern 1: Artifact Exchange via `Client`** In this pattern, the ZenML Client facilitates the exchange of datasets between pipelines. 
For example, a feature engineering pipeline produces datasets that are fetched by a training pipeline: ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): train_data, test_data = prepare_data() @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(model, sklearn_classifier) ``` *Note: Artifacts are references stored in the artifact store, not materialized in memory during the pipeline function.* **Pattern 2: Artifact Exchange via `Model`** This pattern uses a ZenML Model as a reference point for artifact exchange. In a training pipeline (`train_and_promote`), models are created and promoted based on accuracy. An inference pipeline (`do_predictions`) retrieves the latest promoted model without needing artifact IDs or names. Example of fetching the model in a prediction step: ```python from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` Alternatively, resolve the artifact at the pipeline level: ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model.get_model_artifact("trained_model") inference_data = load_data() predict(model=model, data=inference_data) if __name__ == "__main__": do_predictions() ``` Both artifact exchange patterns are valid; the choice depends on user preference. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/datasets.md === # Custom Dataset Classes and Complex Data Flows in ZenML ## Overview ZenML allows for the creation of custom Dataset classes to manage data loading, processing, and saving across various data sources, which is essential for complex machine learning projects. ## Custom Dataset Classes Custom Dataset classes encapsulate data handling logic and are useful when: 1. Working with multiple data sources (e.g., CSV, databases). 2. Managing complex data structures. 3. Implementing custom processing logic. 
### Example Implementation A base `Dataset` class is created, with specific implementations for CSV and BigQuery: ```python from abc import ABC, abstractmethod import pandas as pd from google.cloud import bigquery from typing import Optional class Dataset(ABC): @abstractmethod def read_data(self) -> pd.DataFrame: pass class CSVDataset(Dataset): def __init__(self, data_path: str, df: Optional[pd.DataFrame] = None): self.data_path = data_path self.df = df def read_data(self) -> pd.DataFrame: if self.df is None: self.df = pd.read_csv(self.data_path) return self.df class BigQueryDataset(Dataset): def __init__(self, table_id: str, project: Optional[str] = None): self.table_id = table_id self.project = project self.client = bigquery.Client(project=self.project) def read_data(self) -> pd.DataFrame: query = f"SELECT * FROM `{self.table_id}`" return self.client.query(query).to_dataframe() def write_data(self) -> None: job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE") job = self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config) job.result() ``` ## Custom Materializers Materializers handle serialization and deserialization of artifacts. Custom Materializers are necessary for custom Dataset classes. ### CSVDataset Materializer ```python from zenml.materializers import BaseMaterializer from zenml.io import fileio import json import tempfile import pandas as pd class CSVDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (CSVDataset,) def load(self, data_type: Type[CSVDataset]) -> CSVDataset: with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: with fileio.open(os.path.join(self.uri, "data.csv"), "rb") as source_file: temp_file.write(source_file.read()) return CSVDataset(temp_file.name) def save(self, dataset: CSVDataset) -> None: df = dataset.read_data() with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: df.to_csv(temp_file.name, index=False) with open(temp_file.name, "rb") as source_file: with fileio.open(os.path.join(self.uri, "data.csv"), "wb") as target_file: target_file.write(source_file.read()) ``` ### BigQueryDataset Materializer ```python class BigQueryDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (BigQueryDataset,) def load(self, data_type: Type[BigQueryDataset]) -> BigQueryDataset: with fileio.open(os.path.join(self.uri, "metadata.json"), "r") as f: metadata = json.load(f) return BigQueryDataset(table_id=metadata["table_id"], project=metadata["project"]) def save(self, bq_dataset: BigQueryDataset) -> None: metadata = {"table_id": bq_dataset.table_id, "project": bq_dataset.project} with fileio.open(os.path.join(self.uri, "metadata.json"), "w") as f: json.dump(metadata, f) bq_dataset.write_data() if bq_dataset.df is not None else None ``` ## Pipeline Structure Design flexible pipelines to handle multiple data sources: ```python from zenml import step, pipeline @step(output_materializer=CSVDatasetMaterializer) def extract_data_local(data_path: str) -> CSVDataset: return CSVDataset(data_path) @step(output_materializer=BigQueryDatasetMaterializer) def extract_data_remote(table_id: str) -> BigQueryDataset: return BigQueryDataset(table_id) @step def transform(dataset: Dataset) -> pd.DataFrame: df = dataset.read_data() return df.copy() # Apply transformations here @pipeline def etl_pipeline(mode: str): raw_data = extract_data_local() if mode == "develop" else extract_data_remote(table_id="project.dataset.raw_table") return transform(raw_data) ``` ## Best Practices 1. 
**Use a common base class**: This allows consistent handling of different data sources. 2. **Create specialized steps**: Implement separate steps for loading different datasets. 3. **Implement flexible pipelines**: Use configuration parameters to adapt to different data sources. 4. **Modular step design**: Create steps that perform specific tasks for better code reuse and maintenance. By following these practices, ZenML pipelines can efficiently manage complex data flows and adapt to changing requirements. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md === ### Summary of Documentation on Fetching Artifacts in Steps Artifacts do not always need to be accessed through direct upstream steps. You can fetch artifacts from other upstream steps or different pipelines using the ZenML client. #### Code Example ```python from zenml.client import Client from zenml import step @step def my_step(): client = Client() output = client.get_artifact_version("my_dataset", "my_version") accuracy = output.run_metadata["accuracy"].value ``` This method allows you to access artifacts stored in the artifact store, which is beneficial for utilizing artifacts from various sources. ### Related Resources - [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md): Information on the `ExternalArtifact` type and artifact transfer between steps. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifacts-naming.md === ### ZenML Artifact Naming Overview In ZenML, managing artifact names is crucial for tracking outputs, especially when reusing steps with different inputs. Artifacts can be named either statically or dynamically, leveraging type annotations for naming conventions. Artifacts with the same name are saved with incremented version numbers. #### Naming Strategies 1. **Static Naming**: Defined directly as string literals. ```python @step def static_single() -> Annotated[str, "static_output_name"]: return "null" ``` 2. **Dynamic Naming**: Generated at runtime using string templates. - **Standard Placeholders**: - `{date}`: Current date (e.g., `2024_11_18`) - `{time}`: Current time (e.g., `11_07_09_326492`) ```python @step def dynamic_single_string() -> Annotated[str, "name_{date}_{time}"]: return "null" ``` - **Custom Placeholders**: Defined via the `substitutions` parameter. ```python @step(substitutions={"custom_placeholder": "some_substitute"}) def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]: return "null" ``` - **Using `with_options`**: Redefine placeholders dynamically. ```python @step def extract_data(source: str) -> Annotated[str, "{stage}_dataset"]: return "my data" @pipeline def extraction_pipeline(): extract_data.with_options(substitutions={"stage": "train"})(source="s3://train") extract_data.with_options(substitutions={"stage": "test"})(source="s3://test") ``` **Substitutions Scope**: - Set in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`. 3. **Multiple Output Handling**: Combine naming options for steps returning multiple artifacts. 
```python @step def mixed_tuple() -> Tuple[ Annotated[str, "static_output_name"], Annotated[str, "name_{date}_{time}"], ]: return "static_namer", "str_namer" ``` #### Caching Behavior When caching is enabled, the names of output artifacts remain consistent across runs. The following example demonstrates this: ```python from typing_extensions import Annotated from typing import Tuple from zenml import step, pipeline from zenml.models import PipelineRunResponse @step(substitutions={"custom_placeholder": "resolution"}) def demo() -> Tuple[ Annotated[int, "name_{date}_{time}"], Annotated[int, "name_{custom_placeholder}"], ]: return 42, 43 @pipeline def my_pipeline(): demo() if __name__ == "__main__": run_without_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=False)() run_with_cache: PipelineRunResponse = my_pipeline.with_options(enable_cache=True)() assert set(run_without_cache.steps["demo"].outputs.keys()) == set(run_with_cache.steps["demo"].outputs.keys()) print(list(run_without_cache.steps["demo"].outputs.keys())) ``` **Output Example**: ``` ['name_2024_11_21_14_27_33_750134', 'name_resolution'] ``` This documentation provides a comprehensive overview of naming strategies for ZenML artifacts, emphasizing flexibility and clarity in managing outputs across pipeline runs. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md === ### Summary: Using Materializers to Pass Custom Data Types in ZenML Pipelines #### Overview ZenML pipelines are data-centric, where step outputs and inputs define their connections and execution order. **Materializers** manage how artifacts are serialized, stored, and retrieved from the artifact store. #### Built-In Materializers ZenML provides built-in materializers for common data types, which operate in the background without user interaction. Key materializers include: | Materializer | Handled Data Types | Storage Format | |--------------|---------------------|----------------| | BuiltInMaterializer | `bool`, `float`, `int`, `str`, `None` | `.json` | | BytesMaterializer | `bytes` | `.txt` | | BuiltInContainerMaterializer | `dict`, `list`, `set`, `tuple` | Directory | | NumpyMaterializer | `np.ndarray` | `.npy` | | PandasMaterializer | `pd.DataFrame`, `pd.Series` | `.csv` or `.gzip` | | PydanticMaterializer | `pydantic.BaseModel` | `.json` | | ServiceMaterializer | `zenml.services.service.BaseService` | `.json` | | StructuredStringMaterializer | `zenml.types.CSVString`, `zenml.types.HTMLString`, `zenml.types.MarkdownString` | `.csv`, `.html`, `.md` | **Warning:** The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions. #### Integration Materializers ZenML offers integration-specific materializers activated by installing respective integrations. Examples include: | Integration | Materializer | Handled Data Types | Storage Format | |-------------|--------------|---------------------|----------------| | bentoml | BentoMaterializer | `bentoml.Bento` | `.bento` | | deepchecks | DeepchecksResultMaterializer | `deepchecks.CheckResult`, `deepchecks.SuiteResult` | `.json` | | huggingface | HFDatasetMaterializer | `datasets.Dataset`, `datasets.DatasetDict` | Directory | | tensorflow | KerasMaterializer | `tf.keras.Model` | Directory | **Warning:** For Docker-based orchestrators, specify required integrations in `DockerSettings` to access integration materializers. 
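To make such integration materializers available inside a Docker-based orchestrator, the relevant integration can be declared in the pipeline's Docker settings. A minimal sketch (the `sklearn` integration is only an illustrative choice):

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Ask ZenML to install the integration (and therefore its materializers)
# inside the Docker image built for this pipeline.
docker_settings = DockerSettings(required_integrations=["sklearn"])


@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
```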
#### Custom Materializers To use a custom materializer: 1. **Define the Materializer**: Subclass `BaseMaterializer`, set `ASSOCIATED_TYPES`, and implement `load()` and `save()` methods. ```python class MyMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MyObj,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA def load(self, data_type: Type[MyObj]) -> MyObj: # Load logic here ... def save(self, my_obj: MyObj) -> None: # Save logic here ... ``` 2. **Configure Steps**: Use the materializer in steps, either at the decorator level or via the `configure()` method. ```python @step(output_materializers=MyMaterializer) def my_first_step() -> MyObj: return MyObj("my_object") ``` 3. **Global Materializer**: Register a custom materializer globally using the materializer registry. ```python materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer) ``` #### Developing a Custom Materializer - **Base Implementation**: Implement `BaseMaterializer` methods for loading and saving artifacts. - **Metadata Extraction**: Override `extract_metadata()` to track custom metadata. - **Visualization**: Optionally implement `save_visualizations()` to create visual representations of artifacts. #### Example Code Here is a basic example of using a custom materializer with a custom class: ```python class MyObj: def __init__(self, name: str): self.name = name class MyMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MyObj,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA def load(self, data_type: Type[MyObj]) -> MyObj: with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f: return MyObj(name=f.read()) def save(self, my_obj: MyObj) -> None: with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f: f.write(my_obj.name) @step def my_first_step() -> MyObj: return MyObj("my_object") my_first_step.configure(output_materializers=MyMaterializer) @step def my_second_step(my_obj: MyObj) -> None: logging.info(f"The following object was passed: `{my_obj.name}`") @pipeline def first_pipeline(): output_1 = my_first_step() my_second_step(output_1) first_pipeline() ``` This setup allows ZenML to effectively manage custom data types throughout the pipeline while ensuring robust serialization and deserialization. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/delete-an-artifact.md === ### Delete an Artifact Artifacts cannot be deleted directly to avoid breaking the ZenML database due to dangling references. However, you can delete artifacts that are no longer referenced by any pipeline runs using the following command: ```shell zenml artifact prune ``` This command removes artifacts from both the underlying artifact store and the database. You can modify this behavior with the flags: - `--only-artifact`: Deletes only the artifact. - `--only-metadata`: Deletes only the metadata entry. If you encounter errors while pruning (often due to locally stored artifacts that no longer exist), you can use the `--ignore-errors` flag to continue the process, though warnings will still be displayed in the terminal. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md === ### Summary of Documentation on Using `Annotated` for Multiple Outputs The `Annotated` type allows for returning multiple outputs from a step in a pipeline, enhancing artifact retrieval and dashboard readability. 
#### Key Points: - **Purpose**: Use `Annotated` to name outputs for easy access and improved dashboard clarity. - **Functionality**: The `clean_data` step processes a pandas DataFrame and returns four named outputs: `x_train`, `x_test`, `y_train`, and `y_test`. #### Code Example: ```python from typing import Annotated, Tuple import pandas as pd from zenml import step from sklearn.model_selection import train_test_split @step def clean_data(data: pd.DataFrame) -> Tuple[ Annotated[pd.DataFrame, "x_train"], Annotated[pd.DataFrame, "x_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: x = data.drop("target", axis=1) y = data["target"] return train_test_split(x, y, test_size=0.2, random_state=42) ``` #### Explanation: - The function `clean_data` splits the input DataFrame into features (`x`) and target (`y`). - It uses `train_test_split` to create training and testing datasets. - Outputs are returned as a tuple, with each element annotated for identification in future steps and dashboard displays. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/README.md === ### Summary of ZenML Step Outputs and Pipeline **Overview**: In ZenML, step outputs are stored in an artifact store, facilitating caching, lineage, and auditability. Using type annotations enhances transparency, aids in data passing between steps, and allows for serialization/deserialization (termed 'materialize'). **Key Points**: - **Type Annotations**: Recommended for outputs to improve code clarity and data handling. - **Steps**: Defined using the `@step` decorator. - **Pipeline**: Steps are connected in a `@pipeline` function. **Code Example**: ```python @step def load_data(parameter: int) -> Dict[str, Any]: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: Dict[str, Any]) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(parameter: int): dataset = load_data(parameter) train_model(dataset) ``` **Functionality**: - `load_data`: Takes an integer and returns a dictionary of training data and labels. - `train_model`: Accepts the dictionary, computes totals, and simulates model training. - `simple_ml_pipeline`: Chains the two steps, passing output from `load_data` to `train_model`. This structure illustrates data flow within a ZenML pipeline. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/tagging.md === ### Organizing Data with Tags in ZenML ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow and discoverability. This guide covers how to assign tags to artifacts and models. #### Assigning Tags to Artifacts To tag artifact versions in a step or pipeline, use the `tags` property of `ArtifactConfig`: **Python SDK Example:** ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> ( Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])] ): ... 
``` **CLI Example:** ```shell # Tag the artifact zenml artifacts update iris_dataset -t sklearn # Tag the artifact version zenml artifacts versions update iris_dataset raw_2023 -t sklearn ``` This assigns the tags `sklearn` and `pre-training` to all artifacts created by the step. ZenML Pro users can tag artifacts directly in the cloud dashboard. #### Assigning Tags to Models Models can also be tagged for semantic organization. Tags can be specified as key-value pairs when creating a model version: **Python SDK Example:** ```python from zenml.models import Model # Define tags tags = ["experiment", "v1", "classification-task"] # Create a model version with tags model = Model(name="iris_classifier", version="1.0.0", tags=tags) @pipeline(model=model) def my_pipeline(...): ... ``` You can also create or register models with tags using the SDK: ```python from zenml.client import Client # Create a new model with tags Client().create_model(name="iris_logistic_regression", tags=["classification", "iris-dataset"]) # Create a new model version with tags Client().create_model_version(model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"]) ``` To add tags to existing models using the CLI: ```shell # Tag an existing model zenml model update iris_logistic_regression --tag "classification" # Tag a specific model version zenml model version update iris_logistic_regression 2 --tag "experiment3" ``` #### Important Notes - Tags enhance the organization and filtering of ML artifacts and models. - During a pipeline run, models can be implicitly created without tags from the `Model` class. Tags can be managed through the SDK or ZenML Pro UI. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifact-versioning.md === ### How ZenML Stores Data ZenML integrates data versioning and lineage into its core functionality, automatically tracking artifacts generated during pipeline executions. Users can view the lineage of artifacts and interact with them through a dashboard, enhancing insights, reproducibility, and reliability in machine learning workflows. #### Artifact Creation and Caching Upon each pipeline run, ZenML checks for changes in inputs, outputs, parameters, or configurations. Each step creates a new directory in the artifact store. If a step is new or modified, a unique directory structure is created, and data is stored using appropriate materializers. If unchanged, ZenML may cache the step to save time and computational resources, allowing focus on experimentation without rerunning unchanged pipeline parts. This lineage tracking enables tracing artifacts back to their origins, providing transparency in data processing and transformation. It is crucial for reproducibility and identifying issues in machine learning pipelines. For artifact management details, refer to the documentation on [artifact versioning and configuration](../../../user-guide/starter-guide/manage-artifacts.md). #### Saving and Loading Artifacts with Materializers Materializers are essential for artifact management, handling serialization and deserialization to ensure consistent storage and retrieval. Each materializer stores data in unique directories within the artifact store. ZenML provides built-in materializers for common data types and uses `cloudpickle` for objects without a default materializer. Custom materializers can be created by extending the `BaseMaterializer` class for specific data types or storage systems. 
Note that the built-in `CloudpickleMaterializer` is not production-ready due to compatibility issues across Python versions and potential security risks. For robust artifact saving, consider building a custom materializer. During pipeline execution, ZenML uses materializers to save and load artifacts via the `fileio` system, facilitating artifact caching and lineage tracking. An example of a default materializer, the `numpy` materializer, can be found [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md === # Loading Artifacts into Memory ZenML pipelines typically consume artifacts produced by one another directly. However, for external data, such as artifacts from non-ZenML code, use the `ExternalArtifact`. For exchanging data between ZenML pipelines, late materialization is essential, allowing the passing of not-yet-existing artifacts as step inputs. ## Use Cases for Artifact Exchange 1. Grouping data products using ZenML Models. 2. Using the ZenML Client to consolidate components. **Recommendation:** Use models for grouping and accessing artifacts across pipelines. For loading artifacts from a ZenML Model, refer to the relevant documentation. ## Client Methods for Artifact Exchange You can exchange data between pipelines using late materialization without the Model Control Plane. Here’s an updated version of the `do_predictions` pipeline code: ```python from typing import Annotated from zenml import step, pipeline from zenml.client import Client import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model1: ClassifierMixin, model2: ClassifierMixin, model1_metric: float, model2_metric: float, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: predictions = pd.Series(model1.predict(data)) if model1_metric < model2_metric else pd.Series(model2.predict(data)) return predictions @step def load_data() -> pd.DataFrame: # Load inference data ... @pipeline def do_predictions(): model_42 = Client().get_artifact_version("trained_model", version="42") metric_42 = model_42.run_metadata["MSE"].value model_latest = Client().get_artifact_version("trained_model") metric_latest = model_latest.run_metadata["MSE"].value inference_data = load_data() predict(model1=model_42, model2=model_latest, model1_metric=metric_42, model2_metric=metric_latest, data=inference_data) if __name__ == "__main__": do_predictions() ``` In this code, the `predict` step compares models based on their MSE metrics, ensuring predictions are made using the best model. The `load_data` step is included to load inference data. Calls to `Client().get_artifact_version(...)` and accessing `run_metadata` are evaluated at execution time, ensuring the latest artifacts are used. ================================================== === File: docs/book/how-to/popular-integrations/README.md === ### ZenML Integrations Guide ZenML allows seamless integration with popular tools in the data science and machine learning ecosystem. This guide provides instructions on how to connect ZenML with various tools effectively. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ### Key Points: - ZenML is designed for compatibility with various data science and ML tools. - The guide focuses on popular integrations to enhance your workflow. 
For specific integration instructions, refer to the detailed sections in the full documentation. ================================================== === File: docs/book/how-to/popular-integrations/skypilot.md === ### Summary of ZenML SkyPilot VM Orchestrator Documentation **Overview:** The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, offering cost efficiency and GPU availability. **Prerequisites:** - Install ZenML SkyPilot integration for your cloud provider: ```bash zenml integration install skypilot_ ``` - Docker must be installed and running. - A remote artifact store and container registry in your ZenML stack. - A remote ZenML deployment. - Permissions to provision VMs on your cloud provider. - A service connector configured for authentication (not required for Lambda Labs). **Configuration Steps:** *For AWS, GCP, Azure:* 1. Install SkyPilot integration and connectors. 2. Register a service connector with necessary permissions. 3. Register the orchestrator and connect it to the service connector. 4. Register and activate a stack with the orchestrator. ```bash zenml service-connector register -skypilot-vm -t --auto-configure zenml orchestrator register --flavor vm_ zenml orchestrator connect --connector -skypilot-vm zenml stack register -o ... --set ``` *For Lambda Labs:* 1. Install SkyPilot Lambda integration. 2. Register a secret with your API key. 3. Register the orchestrator with the API key secret. 4. Register and activate a stack with the orchestrator. ```bash zenml secret create lambda_api_key --scope user --api_key= zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} zenml stack register -o ... --set ``` **Running a Pipeline:** Once configured, run any ZenML pipeline using the SkyPilot VM Orchestrator, with each step executed in a Docker container on a provisioned VM. **Additional Configuration:** You can configure the orchestrator with cloud-specific `Settings` objects to specify VM size, spot usage, region, etc. ```python from zenml.integrations.skypilot_.flavors.skypilot_orchestrator__vm_flavor import SkypilotOrchestratorSettings skypilot_settings = SkypilotOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region=, ) @pipeline(settings={"orchestrator": skypilot_settings}) ``` You can also configure resources per step: ```python @step(settings={"orchestrator": high_resource_settings}) def resource_intensive_step(): ... ``` For more details, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). ================================================== === File: docs/book/how-to/popular-integrations/kubeflow.md === ### Summary: ZenML Kubeflow Orchestrator The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without writing Kubeflow code. #### Prerequisites - Install ZenML `kubeflow` integration: `zenml integration install kubeflow` - Docker installed and running - `kubectl` installed (optional) - Kubernetes cluster with Kubeflow Pipelines - Remote artifact store and container registry in ZenML stack - Remote ZenML server deployed - Kubernetes context name (optional) #### Configuring the Orchestrator 1. 
**Using a Service Connector** (recommended for cloud-managed clusters): ```bash zenml orchestrator register --flavor kubeflow zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack update -o ``` 2. **Using `kubectl` Context**: ```bash zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack update -o ``` #### Running a Pipeline Execute any ZenML pipeline with: ```bash python your_pipeline.py ``` This creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI. #### Additional Configuration Configure the orchestrator with `KubeflowOrchestratorSettings`: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={ "affinity": {...}, "tolerations": [...] } ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` #### Multi-Tenancy Deployments For multi-tenant setups, register the orchestrator with: ```bash zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Provide credentials in the settings: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="admin", client_password="abc123", user_namespace="namespace_name" ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` For more details, refer to the [full documentation](../../component-guide/orchestrators/kubeflow.md). ================================================== === File: docs/book/how-to/popular-integrations/azure-guide.md === # Azure Stack Setup for ZenML Pipelines This guide provides a concise process to set up a minimal production stack on Azure for running ZenML pipelines. ## Prerequisites - Active Azure account - ZenML installed - ZenML Azure integration: `zenml integration install azure` ## Steps to Set Up Azure Stack ### 1. Set Up Credentials Create a service principal: 1. Go to Azure Portal > App Registrations > `+ New registration`. 2. Name it and register. 3. Note the Application ID and Tenant ID. 4. Under `Certificates & secrets`, create a client secret and note the secret value. ### 2. Create Resource Group and AzureML Instance 1. Go to Azure Portal > Resource Groups > `+ Create`. 2. After creation, click `+ Create` in the new resource group overview. 3. Select `Azure Machine Learning` from the marketplace to create an AzureML workspace. Consider creating a container registry as well. ### 3. Create Role Assignments 1. In your resource group, go to `Access control (IAM)` > `+ Add` a new role assignment. 2. Assign the following roles to your app: - AzureML Compute Operator - AzureML Data Scientist - AzureML Registry User ### 4. Create a Service Connector Register the ZenML Azure Service Connector: ```bash zenml service-connector register azure_connector --type azure \ --auth-method service-principal \ --client_secret= \ --tenant_id= \ --client_id= ``` ### 5. Create Stack Components #### Artifact Store (Azure Blob Storage) 1. Create a container in your AzureML workspace's storage account. 2. 
Register the artifact store: ```bash zenml artifact-store register azure_artifact_store -f azure \ --path= \ --connector azure_connector ``` #### Orchestrator (AzureML) Register the orchestrator: ```bash zenml orchestrator register azure_orchestrator -f azureml \ --subscription_id= \ --resource_group= \ --workspace= \ --connector azure_connector ``` #### Container Registry (Azure Container Registry) Register the container registry: ```bash zenml container-registry register azure_container_registry -f azure \ --uri= \ --connector azure_connector ``` ### 6. Create a Stack Register the Azure ZenML stack: ```shell zenml stack register azure_stack \ -o azure_orchestrator \ -a azure_artifact_store \ -c azure_container_registry \ --set ``` ### 7. Run a Pipeline Define and run a simple ZenML pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from Azure!" @pipeline def azure_pipeline(): hello_world() if __name__ == "__main__": azure_pipeline() ``` Save as `run.py` and execute: ```shell python run.py ``` ## Next Steps - Explore ZenML's production guide for best practices. - Investigate ZenML integrations with other tools. - Join the ZenML community for support and networking. ================================================== === File: docs/book/how-to/popular-integrations/gcp-guide.md === # Minimal GCP Stack Setup Guide This guide provides steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. ## Steps to Set Up ### 1. Choose a GCP Project Select or create a GCP project in the Google Cloud console. Ensure a billing account is attached. ```bash gcloud projects create --billing-project= ``` ### 2. Enable GCloud APIs Enable the following APIs in your GCP project: - Cloud Functions API - Cloud Run Admin API - Cloud Build API - Artifact Registry API - Cloud Logging API ### 3. Create a Dedicated Service Account Create a service account with the following roles: - AI Platform Service Agent - Storage Object Admin ### 4. Create a JSON Key for Your Service Account Generate a JSON key for the service account. ```bash export JSON_KEY_FILE_PATH= ``` ### 5. Create a Service Connector in ZenML Authenticate ZenML with GCP using the service account. ```bash zenml integration install gcp \ && zenml service-connector register gcp_connector \ --type gcp \ --auth-method service-account \ --service_account_json=@${JSON_KEY_FILE_PATH} \ --project_id= ``` ### 6. Create Stack Components #### Artifact Store Create a GCS bucket and register it as an artifact store. ```bash export ARTIFACT_STORE_NAME=gcp_artifact_store zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp --path=gs:// zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i ``` #### Orchestrator Register Vertex AI as the orchestrator. ```bash export ORCHESTRATOR_NAME=gcp_vertex_orchestrator zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex --project= --location=europe-west2 zenml orchestrator connect ${ORCHESTRATOR_NAME} -i ``` #### Container Registry Register a GCP Container Registry. ```bash export CONTAINER_REGISTRY_NAME=gcp_container_registry zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri= zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i ``` ### 7. Create Stack Register the stack with the created components. 
```bash export STACK_NAME=gcp_stack zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set ``` ## Cleanup To remove all resources, delete the project. ```bash gcloud project delete ``` ## Best Practices - **IAM and Least Privilege**: Grant minimum permissions necessary for ZenML. - **Resource Labeling**: Use labels for GCP resources for better tracking. ```bash gcloud storage buckets update gs://your-bucket-name --update-labels=project=zenml,environment=production ``` - **Cost Management**: Monitor spending using GCP Cost Management tools and set up budget alerts. ```bash gcloud billing budgets create --billing-account=BILLING_ACCOUNT_ID --display-name="ZenML Monthly Budget" --budget-amount=1000 --threshold-rule=percent=90 ``` - **Backup Strategy**: Regularly back up data and enable versioning on GCS. ```bash gsutil versioning set on gs://your-bucket-name ``` By following these steps and best practices, you can efficiently set up and manage a GCP stack for ZenML projects. ================================================== === File: docs/book/how-to/popular-integrations/kubernetes.md === ### Summary: Deploying ZenML Pipelines on Kubernetes The ZenML Kubernetes Orchestrator enables running ML pipelines on a Kubernetes cluster without needing to write Kubernetes code, serving as a simpler alternative to orchestrators like Airflow or Kubeflow. #### Prerequisites To use the Kubernetes Orchestrator, ensure you have: - ZenML `kubernetes` integration installed: `zenml integration install kubernetes` - Docker and `kubectl` installed - A remote artifact store and container registry in your ZenML stack - A deployed Kubernetes cluster - Optionally, a configured `kubectl` context for the cluster #### Deploying the Orchestrator You need a Kubernetes cluster to run the orchestrator. Various deployment methods exist, which can be explored in the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md). #### Configuring the Orchestrator Configuration can be done in two ways: 1. **Using a Service Connector** (recommended for cloud-managed clusters): ```bash zenml orchestrator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Using `kubectl` Context**: ```bash zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... --set ``` #### Running a Pipeline To execute a ZenML pipeline with the Kubernetes Orchestrator: ```bash python your_pipeline.py ``` This command will create a Kubernetes pod for each pipeline step, and you can manage the pods using `kubectl` commands. For more details, refer to the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md). ================================================== === File: docs/book/how-to/popular-integrations/mlflow.md === # MLflow Experiment Tracker with ZenML ## Overview The ZenML MLflow Experiment Tracker integration allows logging and visualizing pipeline step information using MLflow without additional code. ## Prerequisites - Install ZenML MLflow integration: ```bash zenml integration install mlflow -y ``` - MLflow deployment: local or remote with proxied artifact storage. ## Configuring the Experiment Tracker ### 1. Local Deployment - Suitable for local ZenML runs. No extra configuration needed. 
```bash zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` ### 2. Remote Deployment - Requires authentication (recommended: ZenML secrets). ```bash zenml secret create mlflow_secret --username= --password= zenml experiment-tracker register mlflow --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} ... ``` ## Using the Experiment Tracker To log information in a pipeline step: 1. Enable the experiment tracker with the `@step` decorator. 2. Use MLflow logging or auto-logging. Example: ```python import mlflow @step(experiment_tracker="") def train_step(...): mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) ``` ## Viewing Results Retrieve the MLflow experiment URL for a ZenML run: ```python last_run = client.get_pipeline("").last_run tracking_url = last_run.get_step("").run_metadata["experiment_tracker_url"].value ``` ## Additional Configuration Further configure the experiment tracker using `MLFlowExperimentTrackerSettings`: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step( experiment_tracker="", settings={"experiment_tracker": mlflow_settings} ) ``` For more details, refer to the full MLflow Experiment Tracker documentation. ================================================== === File: docs/book/how-to/popular-integrations/aws-guide.md === # AWS Stack Setup for ZenML Pipelines ## Overview This guide provides a streamlined process to set up a minimal production stack on AWS for running ZenML pipelines. It includes steps for creating an IAM role with appropriate permissions for ZenML to authenticate with AWS resources. ## Prerequisites - Active AWS account with permissions for S3, SageMaker, ECR, and ECS. - ZenML installed. - AWS CLI installed and configured. ## Steps to Set Up AWS Stack ### 1. Set Up Credentials and Local Environment 1. **Choose AWS Region**: Select the region for your ZenML stack (e.g., `us-east-1`). 2. **Create IAM Role**: - Get your AWS account ID: ```shell aws sts get-caller-identity --query Account --output text ``` - Create `assume-role-policy.json`: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam:::root", "Service": "sagemaker.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } ``` - Create IAM role: ```shell aws iam create-role --role-name zenml-role --assume-role-policy-document file://assume-role-policy.json ``` - Attach necessary policies: ```shell aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess ``` 3. **Install ZenML Integrations**: ```shell zenml integration install aws s3 -y ``` ### 2. Create a Service Connector in ZenML Register an AWS Service Connector: ```shell zenml service-connector register aws_connector \ --type aws \ --auth-method iam-role \ --role_arn= \ --region= \ --aws_access_key_id= \ --aws_secret_access_key= ``` ### 3. 
Create Stack Components #### Artifact Store (S3) - Create an S3 bucket: ```shell aws s3api create-bucket --bucket your-bucket-name ``` - Register S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://your-bucket-name --connector aws_connector ``` #### Orchestrator (SageMaker Pipelines) - Create a SageMaker domain (if not already created). - Register SageMaker Pipelines orchestrator: ```shell zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region= --execution_role= ``` #### Container Registry (ECR) - Create ECR repository: ```shell aws ecr create-repository --repository-name zenml --region ``` - Register ECR container registry: ```shell zenml container-registry register ecr-registry --flavor=aws --uri=.dkr.ecr..amazonaws.com --connector aws_connector ``` ### 4. Create Stack ```shell export STACK_NAME=aws_stack zenml stack register ${STACK_NAME} -o sagemaker-orchestrator -a cloud_artifact_store -c ecr-registry --set ``` ### 5. Run a Pipeline Define and run a ZenML pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from SageMaker!" @pipeline def aws_sagemaker_pipeline(): hello_world() if __name__ == "__main__": aws_sagemaker_pipeline() ``` Execute: ```shell python run.py ``` ## Cleanup To avoid charges, delete AWS resources: ```shell # Delete S3 bucket aws s3 rm s3://your-bucket-name --recursive aws s3api delete-bucket --bucket your-bucket-name # Delete SageMaker domain aws sagemaker delete-domain --domain-id # Delete ECR repository aws ecr delete-repository --repository-name zenml --force # Detach policies from IAM role aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess # Delete IAM role aws iam delete-role --role-name zenml-role ``` ## Conclusion This guide outlines the steps to set up an AWS stack for ZenML, enabling scalable and efficient machine learning pipelines. Key components include IAM roles, S3 for artifact storage, SageMaker for orchestration, and ECR for container management. Following best practices, such as using IAM roles with least privilege and implementing cost management strategies, will enhance security and efficiency in your AWS stack. ================================================== === File: docs/book/getting-started/core-concepts.md === # ZenML Core Concepts Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines, facilitating collaboration among data scientists, ML engineers, and MLOps developers. The core concepts of ZenML are categorized into three threads: 1. **Development**: Focuses on designing ML workflows. 2. **Execution**: Involves utilizing MLOps tools and infrastructure during workflow execution. 3. **Management**: Centers on establishing and maintaining production-grade solutions. ## 1. Development ### Steps - Steps are defined using the `@step` decorator. - Example: ```python @step def step_1() -> str: return "world" ``` - Steps can have typed inputs and outputs: ```python @step(enable_cache=False) def step_2(input_one: str, input_two: str) -> str: return f"{input_one} {input_two}" ``` ### Pipelines - Pipelines consist of steps and are defined using decorators or classes. 
- Example: ```python @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) if __name__ == "__main__": my_pipeline() ``` ### Artifacts - Artifacts are data inputs/outputs tracked and stored by ZenML, serialized/deserialized by **Materializers**. ### Models - Models represent training outputs and associated metadata, managed centrally in ZenML. ### Materializers - Materializers handle artifact serialization/deserialization, using the `BaseMaterializer` class. ### Parameters & Settings - Steps can take parameters, which ZenML tracks for reproducibility. Runtime configurations can be set using **Settings**. ### Model Versions - ZenML allows tracking multiple versions of a model, linking them to a centralized view. ## 2. Execution ### Stacks & Components - A **Stack** is a collection of components (e.g., orchestrators, artifact stores) for executing pipelines. ### Orchestrator - The orchestrator coordinates pipeline execution, managing dependencies between steps. ### Artifact Store - The artifact store tracks and versions artifacts, enabling data caching. ### Flavor - ZenML provides base abstractions for stack components, allowing for custom **Flavors** tailored to specific use cases. ### Stack Switching - Users can switch between local and cloud stacks easily, enhancing flexibility in deployment. ## 3. Management ### ZenML Server - A ZenML Server is required for remote stack components, managing pipelines and metadata. ### Server Deployment - Users can deploy ZenML servers via a SaaS offering or self-hosting. ### Metadata Tracking - The server tracks metadata for pipeline runs, aiding in experiment management. ### Secrets Management - ZenML Server securely stores sensitive data like credentials, supporting various backends (e.g., AWS Secrets Manager). ### Collaboration - The server facilitates team collaboration, allowing users to share resources like pipelines and stacks. ### Dashboard - The ZenML Dashboard visualizes pipelines and components, enhancing user interaction. ### VS Code Extension - A VS Code extension allows direct interaction with ZenML stacks, runs, and servers from the editor. This concise overview captures the essential technical details and concepts of ZenML, enabling effective Q&A regarding its functionalities and architecture. ================================================== === File: docs/book/getting-started/installation.md === # ZenML Installation and Getting Started ## Installation **ZenML** is a Python package installable via `pip`: ```shell pip install zenml ``` **Supported Python Versions:** ZenML supports **Python 3.9, 3.10, 3.11, and 3.12**. ## Dashboard Installation To use the web dashboard locally, install the ZenML Server with optional dependencies: ```shell pip install "zenml[server]" ``` **Recommendation:** Use a virtual environment (e.g., [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) or [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv)). ## MacOS Installation (Apple Silicon) Set the following environment variable for proper server connections: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is necessary for local server use on Apple Silicon Macs. It is not required if using ZenML as a client. 
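Putting the installation steps above together, a typical local setup might look like the following sketch (the environment name `zenml-env` is just an example):

```bash
# Create and activate an isolated virtual environment
python -m venv zenml-env
source zenml-env/bin/activate

# Install ZenML together with the local dashboard server
pip install "zenml[server]"

# Only needed on Apple Silicon Macs when running a local server
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
```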
## Nightly Builds Install nightly builds (unstable) with: ```shell pip install zenml-nightly ``` ## Verifying Installation Check installation success via Bash: ```bash zenml version ``` Or in Python: ```python import zenml print(zenml.__version__) ``` For more details, visit the [PyPi package page](https://pypi.org/project/zenml). ## Running with Docker ZenML is available as a Docker image. Start a bash environment with: ```shell docker run -it zenmldocker/zenml /bin/bash ``` To run the ZenML server: ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ``` ## Deploying the Server ZenML can run locally with the dashboard: ```shell pip install "zenml[server]" zenml login --local # opens the dashboard locally ``` For advanced features, deploy a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or registering for a free [ZenML Pro](https://cloud.zenml.io/signup?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) account. ================================================== === File: docs/book/getting-started/system-architectures.md === # ZenML System Architecture Overview This guide outlines the deployment options for ZenML, including ZenML OSS (self-hosted), ZenML Pro (SaaS or self-hosted), and their respective components. ## ZenML OSS (Self-hosted) ZenML OSS consists of: - **ZenML OSS Server**: A FastAPI app managing metadata for pipelines, artifacts, and stacks. - **OSS Metadata Store**: Stores ML metadata for tracking and versioning. - **OSS Dashboard**: A ReactJS app displaying pipelines and runs. - **Secrets Store**: Secure storage for credentials, accessible by ZenML Pro API. ZenML OSS is free under the Apache 2.0 license. For deployment instructions, refer to the [deployment guide](./deploying-zenml/README.md). ## ZenML Pro (SaaS or Self-hosted) ZenML Pro enhances OSS with: - **ZenML Pro Control Plane**: Central management for all tenants. - **Pro Dashboard**: An advanced version of the OSS dashboard. - **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management. - **Pro Add-ons**: Python modules for additional functionality. - **Identity Provider**: Supports flexible authentication options, integrating with Auth0 for cloud deployments or custom OIDC for self-hosted. ZenML Pro can be easily integrated with existing ZenML OSS deployments. ### ZenML Pro SaaS Architecture In SaaS deployments: - All ZenML services are hosted by ZenML. - Customer secrets are managed by the ZenML Pro Control Plane. - ML metadata is stored on ZenML infrastructure, while actual ML data artifacts are stored on customer cloud. A hybrid option allows customers to store secrets on their side while connecting to ZenML services. ### ZenML Pro Self-Hosted Architecture For self-hosted deployments: - All services, data, and secrets are managed on the customer's cloud, ensuring maximum security. For detailed architecture diagrams and further information, refer to the respective sections in the documentation. Interested users can sign up for a free 14-day trial of ZenML Pro [here](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). ================================================== === File: docs/book/getting-started/deploying-zenml/README.md === # Deploying ZenML ## Overview Deploying ZenML to a production environment provides benefits such as: 1. **Scalability**: Handles large workloads for faster processing. 2. 
**Reliability**: High availability and fault tolerance minimize downtime. 3. **Collaboration**: Enables teamwork and sharing of insights. ## Components A ZenML deployment includes: - **FastAPI server** with SQLite or MySQL database - **Python Client** for server interaction - **ReactJS dashboard** (optional) - **ZenML Pro API + Database + Dashboard** (optional) For more details on the architecture, refer to the [system architecture documentation](../system-architectures.md). ### ZenML Python Client The ZenML client is a Python package for server interaction, installable via `pip`. It provides a command-line interface (`zenml`) for managing stacks and pipelines. For advanced control, use the Python SDK to access metadata. Full documentation is available [here](https://sdkdocs.zenml.io/latest/). ### Deployment Scenarios Initially, ZenML runs locally with an SQLite database for basic features. Use `zenml login --local` to start a local server. For production, deploy the ZenML server centrally to allow team collaboration and access to cloud components. ## How to Deploy ZenML Deploying the ZenML Server is essential for production-level machine learning projects, enabling remote stacks, centralized tracking, and collaboration. There are two deployment options: 1. **Managed Deployment**: Use ZenML Pro to create managed servers (tenants) with server maintenance handled by ZenML. 2. **Self-hosted Deployment**: Deploy ZenML on your infrastructure using methods like Docker, Helm, or HuggingFace Spaces. The Pro version is also available for self-hosted solutions. ### Deployment Options Refer to the following guides for deployment strategies: - [Deploying ZenML using ZenML Pro](../zenml-pro/README.md) - [Deploy with Docker](./deploy-with-docker.md) - [Deploy with Helm](./deploy-with-helm.md) - [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md) This concise overview captures the essential information for deploying ZenML while maintaining clarity on the components and deployment strategies. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md === ### Deploying ZenML on Hugging Face Spaces **Overview**: Hugging Face Spaces allows for quick deployment of ZenML, enabling users to host and share ML projects without infrastructure overhead. For production use, ensure persistent storage is enabled to avoid data loss. **Deployment Steps**: 1. **Create a Space**: - Click [here](https://huggingface.co/new-space?template=zenml/zenml) to start. - Specify the Owner, Space name, and set Visibility to 'Public' for local connections. 2. **Select Machine Type**: - Choose a higher-tier CPU instance to avoid auto-shutdowns and consider setting up a MySQL database for persistence. 3. **Customize Appearance**: - Modify the `README.md` for title, emojis, and colors. Refer to the [Hugging Face configuration guide](https://huggingface.co/docs/hub/spaces-config-reference) for details. 4. **Access Your Space**: - After creation, wait for the status to change from 'Building' to 'Running'. Refresh if the ZenML login UI is not visible. - Use the "Embed this Space" option to copy the "Direct URL" (format: `https://-.hf.space`) for server initialization. **Connecting to ZenML Server**: - Use the following command in your CLI (replace with your URL): ```shell zenml login '' ``` - Access the ZenML dashboard directly via the URL. **Configuration Options**: - By default, ZenML uses an SQLite database. 
For persistence, modify the `Dockerfile` in your Space's root directory. Refer to [advanced configuration documentation](deploy-with-docker.md#advanced-server-configuration-options) for more options. - For secrets management, utilize Hugging Face's 'Repository secrets' and update your ZenML server password via the Dashboard settings. **Troubleshooting**: - Check server logs by clicking "Open Logs" in your Space for issues. For further assistance, reach out on the [Slack channel](https://zenml.io/slack/). **Upgrading ZenML**: - The default space uses the latest ZenML version. To update, select 'Factory reboot' in the 'Settings' tab (note: this will wipe data unless using a MySQL persistent database). To revert to an earlier version, modify the `FROM` statement in the `Dockerfile`. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-helm.md === ### Summary: Deploying ZenML in a Kubernetes Cluster with Helm #### Overview ZenML can be deployed in a Kubernetes cluster using Helm. The Helm chart is available on the [ArtifactHub repository](https://artifacthub.io/packages/helm/zenml/zenml). This documentation outlines prerequisites, configuration, and deployment scenarios. #### Prerequisites - **Kubernetes Cluster**: A running cluster is required. - **Database**: A MySQL-compatible database (recommended version 8.0+) is optional but preferred for production. ZenML defaults to an embedded SQLite database, which is not persistent or scalable. - **Tools**: - [Kubernetes client (kubectl)](https://kubernetes.io/docs/tasks/tools/#kubectl) - [Helm](https://helm.sh/docs/intro/install/) - **Secrets Manager**: Optional external secrets management service (e.g., AWS Secrets Manager, GCP Secrets Manager, Azure Key Vault). #### ZenML Helm Configuration 1. Review the [`values.yaml` file](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) for customizable settings. 2. Collect necessary information for database and secrets management configuration. ##### Database Configuration - Hostname, port, username, password, and database name are required for external MySQL databases. - SSL certificates may be required if using SSL. ##### Secrets Management Configuration - For AWS: AWS region, access key ID, and secret access key. - For GCP: Project ID and service account with access to Secrets Manager. - For Azure: Key Vault name, tenant ID, client ID, and client secret. - For HashiCorp Vault: Vault server URL and access token. #### Optional Cluster Services - **Ingress Service**: Recommended for exposing HTTP services. Use `nginx-ingress` for HTTPS. - **Cert-Manager**: For managing TLS certificates. #### ZenML Helm Installation 1. **Pull the Helm Chart**: ```bash helm pull oci://public.ecr.aws/zenml/zenml --version --untar ``` 2. **Customize the Helm Chart**: Create `custom-values.yaml` from `values.yaml` and modify necessary configurations (e.g., database URL, SSL certificates, Ingress settings). 3. **Install the Helm Chart**: ```bash helm -n install zenml-server . --create-namespace --values custom-values.yaml ``` #### Post-Installation - Activate the ZenML server by visiting the provided URL and following the instructions. - Connect your local ZenML client: ```bash zenml login https://zenml.example.com:8080 --no-verify-ssl ``` - To disconnect: ```bash zenml logout ``` #### Deployment Scenarios 1. **Minimal Deployment**: Uses SQLite and ClusterIP service. 
```yaml zenml: ingress: enabled: false ``` Access via port-forwarding: ```bash kubectl -n zenml-server port-forward svc/zenml-server 8080:8080 zenml login http://localhost:8080 ``` 2. **Basic Deployment with Local Database**: Uses Ingress with TLS. Install `cert-manager` and `nginx-ingress`: ```bash helm repo add jetstack https://charts.jetstack.io helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace ``` 3. **Shared Ingress Controller**: Solutions for using a dedicated hostname or URL path. #### Secrets Store Configuration - Default is SQL database; configure for external services as needed. - Backup secrets store can be configured for high availability. #### Database Backup and Recovery - Automated backups are created before upgrades. The backup strategy can be configured (e.g., `in-memory`, `database`, `dump-file`). #### Custom CA Certificates and Proxy Configuration - Custom CA certificates can be injected directly or referenced from Kubernetes secrets. - Proxy settings can be configured for external connections. This summary captures the essential steps and configurations for deploying ZenML in a Kubernetes environment using Helm, ensuring no critical information is lost. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-docker.md === ### Summary: Deploying ZenML in a Docker Container **Overview** ZenML can be deployed using the Docker container image `zenmldocker/zenml-server`. This guide outlines configuration options and deployment scenarios. **Local Deployment** For a quick local deployment, use the ZenML CLI: ```bash zenml login --local --docker ``` This command sets up a local ZenML server with a shared SQLite database. **Configuration Options** When deploying a custom ZenML server, configure environment variables for database connections and other settings. Key variables include: - **ZENML_STORE_URL**: Connect to SQLite or MySQL databases. - SQLite: `sqlite:////path/to/zenml.db` - MySQL: `mysql://username:password@host:port/database` - **ZENML_STORE_SSL_CA, ZENML_STORE_SSL_CERT, ZENML_STORE_SSL_KEY**: SSL configuration for MySQL connections. - **ZENML_LOGGING_VERBOSITY**: Set log level (default is `INFO`). - **ZENML_SERVER_RATE_LIMIT_ENABLED**: Enable rate limiting for API requests. - **ZENML_SECRETS_STORE_TYPE**: Set to `sql` for SQL database secrets management or other supported types (AWS, GCP, Azure, HashiCorp). 
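As a quick sketch of how these variables are passed to the container (the basic `docker run` invocation itself is covered in the next section; the connection values shown here are placeholders):

```bash
docker run -it -d -p 8080:8080 --name zenml \
  --env ZENML_STORE_URL=mysql://username:password@host:3306/database \
  --env ZENML_LOGGING_VERBOSITY=DEBUG \
  --env ZENML_SECRETS_STORE_TYPE=sql \
  zenmldocker/zenml-server
```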
**Running ZenML Server** To run the ZenML server with Docker: ```bash docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server ``` For persistent storage, mount a directory: ```bash mkdir zenml-server docker run -it -d -p 8080:8080 --name zenml \ --mount type=bind,source=$PWD/zenml-server,target=/zenml/.zenconfig/local_stores/default_zen_store \ zenmldocker/zenml-server ``` **MySQL Database Setup** To use MySQL: ```bash docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 ``` Connect ZenML to MySQL: ```bash docker run -it -d -p 8080:8080 --name zenml \ --env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \ zenmldocker/zenml-server ``` **Docker Compose** For multi-container setups, use Docker Compose: ```yaml version: "3.9" services: mysql: image: mysql:8.0 environment: - MYSQL_ROOT_PASSWORD=password zenml: image: zenmldocker/zenml-server environment: - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml ``` Start with: ```bash docker compose -p zenml up -d ``` **Backup and Recovery** ZenML supports automated database backup strategies. Configure with `ZENML_STORE_BACKUP_STRATEGY` (options: `disabled`, `in-memory`, `database`, `dump-file`). **Troubleshooting** Check logs using: ```bash docker logs zenml -f ``` or for Docker Compose: ```bash docker compose -p zenml logs -f ``` This summary captures the essential steps and configurations for deploying ZenML in a Docker container while maintaining critical details. ================================================== === File: docs/book/getting-started/deploying-zenml/secret-management.md === ### Secret Store Configuration and Management #### Centralized Secrets Store ZenML offers a centralized secrets management system to securely register and manage secrets. Metadata (name, ID, owner, scope) is stored in the ZenML server database, while actual secret values are managed through the ZenML Secrets Store. In local deployments, secrets are stored in an SQLite database; for remote servers, they are stored in the configured secrets management back-end. Supported back-ends include: - Default SQL database - AWS Secrets Manager - GCP Secret Manager - Azure Key Vault - HashiCorp Vault - Custom implementations #### Configuration and Deployment Secrets store back-end configuration occurs at deployment. Choose a back-end and authentication mechanism, and configure the ZenML server with necessary credentials. The ZenML secrets store uses the same authentication mechanisms as the ZenML Service Connector. Follow the principle of least privilege for credentials. The secrets store can be updated anytime by modifying the server configuration and redeploying. For migration, refer to the documented strategy to minimize downtime. #### Backup Secrets Store ZenML can connect to a secondary Secrets Store for high availability, backup, and disaster recovery. Ensure the backup store is in a different location or type than the primary store to avoid issues. The server prioritizes the primary store for read/write operations and falls back to the backup if necessary. Use the CLI commands: - `zenml secret backup`: Backs up secrets from the primary to the backup store. - `zenml secret restore`: Restores secrets from the backup to the primary store. #### Secrets Migration Strategy To change the external provider or location of secrets, follow a migration strategy to ensure existing secrets are transferred with minimal downtime. The process involves: 1. 
Configure the ZenML server to use the new store as secondary. 2. Redeploy the server. 3. Use `zenml secret backup` to transfer secrets to the new store. 4. Set the new store as primary and remove the old one. 5. Redeploy the server. This strategy is unnecessary if only credentials or authentication methods change without altering the secrets' location. For additional details on deployment, refer to the deployment guide. ================================================== === File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md === ### Custom Secret Stores The secrets store is essential for managing secret values in ZenML, responsible for storing, updating, and deleting secret values, while secret metadata is stored in an SQL database. The interface for all secrets store back-ends is defined in `zenml.zen_stores.secrets_stores.secrets_store_interface` and includes the following key methods: ```python class SecretsStoreInterface(ABC): @abstractmethod def _initialize(self) -> None: """Initialize the secrets store.""" @abstractmethod def store_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Store secret values for a new secret.""" @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: """Retrieve secret values for an existing secret.""" @abstractmethod def update_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Update secret values for an existing secret.""" @abstractmethod def delete_secret_values(self, secret_id: UUID) -> None: """Delete secret values for an existing secret.""" ``` ### Building a Custom Secrets Store To create a custom secrets store: 1. Inherit from `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implement the abstract methods from the interface. Use `SecretsStoreType.CUSTOM` as the `TYPE`. 2. If configuration is needed, create a class inheriting from `SecretsStoreConfiguration` to define parameters, using it as the `CONFIG_TYPE`. 3. Ensure your code is included in the ZenML server container image. Configure the server to use your custom secrets store via environment variables or helm chart values, as detailed in the deployment guide. For the complete interface definition, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface). ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md === ### Summary: Deploying ZenML with Custom Docker Images Deploying ZenML typically uses the default `zenmldocker/zenml-server` Docker image, but custom images may be necessary for: - Custom artifact stores requiring artifact visualizations or step logs. - Forked ZenML repositories with modified server/database logic. **Important Note:** Custom Docker images can only be deployed using [Docker](deploy-with-docker.md) or [Helm](deploy-with-helm.md). ### Building and Pushing a Custom ZenML Server Docker Image 1. **Set Up a Container Registry:** Create an account on a registry like [Docker Hub](https://hub.docker.com/). 2. **Clone ZenML Repository:** Clone the repository and check out the desired branch: ```bash git clone https://github.com/zenml-io/zenml.git cd zenml git checkout release/0.41.0 ``` 3. **Copy the Base Dockerfile:** ```bash cp docker/base.Dockerfile docker/custom.Dockerfile ``` 4.
**Modify the Dockerfile:** - Add dependencies: ```bash RUN pip install <EXTRA_DEPENDENCIES> ``` - (For forks) Install local files: ```bash RUN pip install -e .[server,secrets-aws,secrets-gcp,secrets-azure,secrets-hashicorp,s3fs,gcsfs,adlfs,connectors-aws,connectors-gcp,connectors-azure] ``` 5. **Build and Push the Image:** ```bash docker build -f docker/custom.Dockerfile . -t <REGISTRY>/<IMAGE_NAME>:<TAG> --platform linux/amd64 docker push <REGISTRY>/<IMAGE_NAME>:<TAG> ``` **Verification:** To verify your custom image locally, refer to the [Deploy a custom ZenML image via Docker](deploy-with-custom-image.md#deploy-a-custom-zenml-image-via-docker) section. ### Deploying ZenML with Your Custom Image #### Via Docker 1. Follow the [ZenML Docker Deployment Guide](deploy-with-docker.md). 2. Replace `zenmldocker/zenml-server` with your custom image: ```bash docker run -it -d -p 8080:8080 --name zenml <REGISTRY>/<IMAGE_NAME>:<TAG> ``` 3. For `docker-compose`, update `docker-compose.yml`: ```yaml services: zenml: image: <REGISTRY>/<IMAGE_NAME>:<TAG> ``` #### Via Helm 1. Refer to the [ZenML Helm Deployment Guide](deploy-with-helm.md). 2. Modify the `image` section in `values.yaml`: ```yaml zenml: image: repository: <REGISTRY>/<IMAGE_NAME> tag: <TAG> ``` This summary provides essential steps and commands for deploying ZenML with custom Docker images while maintaining clarity and conciseness. ================================================== === File: docs/book/getting-started/zenml-pro/teams.md === ### ZenML Pro Teams Overview **Purpose**: Teams in ZenML Pro help manage groups of users efficiently within organizations and tenants. #### Key Benefits of Teams 1. **Group Management**: Manage permissions for multiple users simultaneously. 2. **Organizational Structure**: Reflect company or project team structures. 3. **Simplified Access Control**: Assign roles to teams instead of individual users. #### Creating and Managing Teams - **Creation Steps**: 1. Navigate to Organization settings. 2. Click on the "Teams" tab. 3. Use the "Add team" button. **Required Information**: - Team name - Description (optional) - Initial team members #### Adding Users to Teams 1. Go to the "Teams" tab in Organization settings. 2. Select the desired team. 3. Click "Add Members." 4. Choose users to add. #### Assigning Teams to Tenants 1. Go to tenant settings. 2. Click on the "Members" tab, then the "Teams" tab. 3. Select "Add Team." 4. Choose the team and assign a role. #### Team Roles and Permissions - Roles assigned to a team (Admin, Editor, Viewer, or custom) grant all members the associated permissions. For example, assigning the "Editor" role allows all team members to have Editor permissions in that tenant. #### Best Practices 1. Create teams that reflect your organization’s structure. 2. Use custom roles for detailed access control. 3. Conduct regular audits of team memberships and roles. 4. Document each team's purpose and associated projects or tenants. By utilizing Teams in ZenML Pro, user management becomes streamlined, access control simplified, and MLOps workflows better organized across your organization and tenants. ================================================== === File: docs/book/getting-started/zenml-pro/README.md === # ZenML Pro Overview ZenML Pro enhances the Open Source version with several key features: - **Managed Deployment**: Deploy multiple ZenML servers (tenants). - **User Management**: Create organizations and teams for scalable user management. - **Role-Based Access Control**: Implement customizable roles for secure resource management.
- **Model and Artifact Control Plane**: Utilize the Model Control Plane and Artifact Control Plane for better tracking of ML assets. - **Triggers and Run Templates**: Create and run templates via the dashboard or API for quick pipeline iterations. - **Early-Access Features**: Access pro-specific features like triggers, filters, and usage reports. For more details, visit the [ZenML website](https://zenml.io/pro). ## Deployment Scenarios: SaaS vs Self-hosted ZenML Pro can be deployed as a SaaS solution, minimizing the need for server management, or fully self-hosted. For more information, refer to the [self-hosted deployment guide](./self-hosted.md) or [book a demo](https://www.zenml.io/book-your-demo). ### Key Resources: - [Tenants](./tenants.md) - [Organizations](./organization.md) - [Teams](./teams.md) - [Roles](./roles.md) - [Self-Hosted Deployments](./self-hosted.md) ================================================== === File: docs/book/getting-started/zenml-pro/self-hosted.md === # ZenML Pro Self-Hosted Deployment Guide Summary ## Overview ZenML Pro can be self-hosted in a Kubernetes cluster, requiring access to ZenML Pro container images, a Kubernetes cluster, a database server, and additional infrastructure for HTTPs exposure (load balancer, Ingress controller, HTTPs certificates, DNS rules). Note that features like SSO and Run Templates are not available in the on-prem version. ## Preparation and Prerequisites ### Software Artifacts - **ZenML Pro Control Plane Artifacts**: - Container images for API and dashboard (AWS and GCP URLs). - Public Helm chart: `oci://public.ecr.aws/zenml/zenml-pro`. - **ZenML Pro Tenant Server Artifacts**: - Container images for tenant server (AWS and GCP URLs). - Public open-source Helm chart: `oci://public.ecr.aws/zenml/zenml`. - **ZenML Pro Client Artifacts**: - Public client image: `zenmldocker/zenml` (for containerized pipelines). ### Accessing Container Images - **AWS**: Set up an IAM user/role with `AmazonEC2ContainerRegistryReadOnly` policy to pull images. Authenticate Docker with ECR. - **GCP**: Create a service account with access to Artifact Registry and authenticate Docker. ### Air-Gapped Installation For environments without internet access, download artifacts on an internet-connected machine, save them, and transfer to the air-gapped environment. Load images and configure Helm charts locally. ### Infrastructure Requirements 1. **Kubernetes Cluster**: Required for deployment. 2. **Database Server**: MySQL or Postgres for Control Plane; MySQL only for Tenant servers. 3. **Ingress Controller**: For HTTP(S) traffic routing. 4. **Domain Name**: FQDN for ZenML Pro Control Plane and tenants. 5. **SSL Certificate**: Required for secure connections. ## Stage 1: Install ZenML Pro Control Plane ### Set up Credentials Create a Kubernetes secret for image pull access if necessary. ### Configure the Helm Chart Customize the Helm chart using a `values.yaml` file with required configurations (database credentials, server URL, ingress settings). ### Install the Helm Chart Run the Helm install command with the customized values. ```bash helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version --values my-values.yaml ``` ### Install CA Certificates If using custom CA certificates, install them on client machines and configure Docker images accordingly. ### Onboard Additional Users Use a script to create user accounts and manage them via the ZenML Pro dashboard. 
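The "Set up Credentials" step above refers to a Kubernetes secret for pulling the ZenML Pro images; a minimal sketch follows, where the secret name is illustrative and the registry URL, username, and password stand in for the credentials obtained from AWS ECR or GCP Artifact Registry:

```bash
# Create an image pull secret in the namespace used for the control plane install
kubectl -n zenml-pro create secret docker-registry zenml-pro-registry \
  --docker-server=<REGISTRY_URL> \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD>
```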
## Stage 2: Enroll and Deploy ZenML Pro Tenants ### Enroll a Tenant Run a script to enroll a tenant and generate a Helm `values.yaml` file for deployment. ### Configure the ZenML Pro Tenant Helm Chart Fill in necessary values in the generated YAML file, ensuring unique database names and proper image repository settings. ### Deploy the ZenML Pro Tenant Server Run the Helm install command with the tenant-specific values. ```bash helm --namespace zenml-pro- upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version --values zenml--values.yaml ``` ## Day 2 Operations: Upgrades and Updates 1. Check available versions and release notes. 2. Fetch new software artifacts. 3. Upgrade the ZenML Pro Control Plane first, then tenant servers using Helm commands with `--reuse-values` or modified values files. This guide provides a comprehensive overview of deploying ZenML Pro in a self-hosted environment, ensuring all critical steps and configurations are covered. ================================================== === File: docs/book/getting-started/zenml-pro/core-concepts.md === # ZenML Pro Core Concepts ZenML Pro features a distinct entity hierarchy compared to the open-source version. Key components include: - **Organization**: A collection of users, teams, and tenants. - **Tenant**: An isolated ZenML server deployment containing project resources. - **Teams**: Groups of users within an organization for resource management. - **Users**: Individual accounts on a ZenML Pro instance. - **Roles**: Control user actions within a tenant or organization. - **Templates**: Re-runnable pipeline configurations. For detailed information, refer to the linked pages: | Concept | Description | Link | |---------------------|--------------------------------------------------|--------------------| | Organizations | Managing organizations in ZenML Pro | [organization.md](./organization.md) | | Tenants | Working with tenants in ZenML Pro | [tenants.md](./tenants.md) | | Teams | Team management in ZenML Pro | [teams.md](./teams.md) | | Roles & Permissions | Role-based access control in ZenML Pro | [roles.md](./roles.md) | ================================================== === File: docs/book/getting-started/zenml-pro/roles.md === # ZenML Pro: Roles and Permissions ZenML Pro employs a role-based access control (RBAC) system for managing permissions within organizations and tenants. This guide outlines the available roles, assignment methods, and custom role creation. ## Organization-Level Roles ZenML Pro includes three predefined organization roles: 1. **Org Admin**: Full control, can manage members, tenants, billing, and assign roles. 2. **Org Editor**: Manages tenants and teams, but cannot access subscription info or delete the organization. 3. **Org Viewer**: Read-only access to tenants. ### Assigning Organization Roles To assign roles: 1. Go to Organization settings. 2. Click "Members" to update roles or use "Add members" to invite new members. **Notes**: - Admins can add themselves to any tenant role. - Editors and viewers cannot self-add to tenants they are not part of. - Custom organization roles can only be created via the [ZenML Pro API](https://cloudapi.zenml.io/). ## Tenant-Level Roles Tenant roles dictate permissions within a specific ZenML tenant. Predefined roles include: 1. **Admin**: Full control over tenant resources. 2. **Editor**: Can create and share resources but cannot modify or delete. 3. **Viewer**: Read-only access. ### Custom Roles To create a custom tenant role: 1. 
Access tenant settings. 2. Click "Roles" and select "Add Custom Role". 3. Name the role, select a base role, and edit permissions. **Custom Role Permissions** can be defined for: - Artifacts, Models, Pipelines, etc. with actions: Create, Read, Update, Delete, Share. ### Managing Role Permissions To modify role permissions: 1. Go to the Roles page in tenant settings. 2. Select the role and click "Edit Permissions". 3. Adjust permissions as needed. ## Sharing Resources Users can share individual resources through the dashboard. ## Best Practices 1. **Least Privilege**: Assign minimal necessary permissions. 2. **Regular Audits**: Periodically review roles and permissions. 3. **Use Custom Roles**: Tailor roles for specific team needs. 4. **Document Roles**: Keep records of custom roles and their purposes. By utilizing ZenML Pro's RBAC, teams can maintain security while fostering collaboration in MLOps projects. ================================================== === File: docs/book/getting-started/zenml-pro/organization.md === # Organizations in ZenML Pro ZenML Pro organizes your work experience around the concept of an **Organization**, which is the highest structural level in the ZenML Cloud environment. An organization typically includes a group of users and one or more [tenants](./tenants.md). ## Inviting Team Members To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The invited user will receive an email. Once part of the organization, users can log in to all accessible tenants. ## Managing Organization Settings Organization settings, including billing and member roles, are managed at the organization level. Access these settings by clicking your profile picture in the top right corner and selecting "Settings". ## API Operations Various operations related to organizations can be performed via the API. For more details, visit [ZenML Cloud API](https://cloudapi.zenml.io/). ================================================== === File: docs/book/getting-started/zenml-pro/pro-api.md === ### ZenML Pro API Overview The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, enabling interaction with ZenML resources for both SaaS and self-hosted instances. Key functionalities include: - **Tenant Management** - **Organization Management** - **User Management** - **Role-Based Access Control (RBAC)** - **Authentication and Authorization** #### Authentication To authenticate API requests, you can log in through your ZenML Pro account or use API tokens for programmatic access. Tokens are valid for 1 hour and scoped to your user account. **Generating API Tokens:** 1. Go to organization settings in your ZenML Pro dashboard. 2. Select "API Tokens." 3. Click "Create new token." 4. Use the token as a bearer token in HTTP requests. **Example Requests:** - **Using curl:** ```bash curl -H "Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/users/me ``` - **Using wget:** ```bash wget -qO- --header="Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/users/me ``` - **Using Python:** ```python import requests response = requests.get( "https://cloudapi.zenml.io/users/me", headers={"Authorization": f"Bearer YOUR_API_TOKEN"} ) print(response.json()) ``` **Important Notes:** - Tokens expire after 1 hour and cannot be retrieved post-generation. - Tokens inherit user permissions. #### Tenant Programmatic Access Access the ZenML Pro tenant API similarly to the OSS server API via: - Temporary API tokens. 
- Service accounts with API keys. #### Key API Endpoints - **Tenant Management:** - List: `GET /tenants` - Create: `POST /tenants` - Details: `GET /tenants/{tenant_id}` - Update: `PATCH /tenants/{tenant_id}` - **Organization Management:** - List: `GET /organizations` - Create: `POST /organizations` - Details: `GET /organizations/{organization_id}` - Update: `PATCH /organizations/{organization_id}` - **User Management:** - List: `GET /users` - Current User: `GET /users/me` - Update: `PATCH /users/{user_id}` - **Role-Based Access Control:** - Create Role: `POST /roles` - Assign Role: `POST /roles/{role_id}/assignments` - Check Permissions: `GET /permissions` #### Error Handling The API uses standard HTTP status codes to indicate request outcomes. Error responses include messages and additional details. #### Rate Limiting The API may enforce rate limits. Exceeding these limits results in a 429 (Too Many Requests) status code. Implement backoff and retry logic accordingly. For comprehensive details on endpoints and features, refer to the complete API documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ================================================== === File: docs/book/getting-started/zenml-pro/tenants.md === ### ZenML Pro Tenants Overview **Tenants** are isolated deployments of the ZenML server, each with its own users, roles, and resources. All operations in ZenML Pro, including pipelines and connectors, are scoped to a tenant. The Pro version enhances the open-source ZenML server with additional features. #### Creating a Tenant To create a tenant: 1. Navigate to your organization page. 2. Click "+ New Tenant." 3. Enter a name and click "Create Tenant." You can also create a tenant via the Cloud API using the `POST /organizations` endpoint at `https://cloudapi.zenml.io/`. #### Organizing Tenants Effective organization of tenants is essential for MLOps management. Consider the following structures: 1. **By Development Stage**: - **Staging Tenants**: For development, testing, and experimentation. - **Production Tenants**: For live services, with stricter access controls and monitoring. 2. **By Business Logic**: - **Project-based**: Separate tenants for different ML projects (e.g., Recommendation System, NLP). - **Team-based**: Align tenants with organizational teams (e.g., Data Science, ML Engineering). - **Data Sensitivity**: Classify tenants based on data sensitivity (e.g., Public, Internal, Confidential). #### Best Practices for Tenant Organization - **Clear Naming Conventions**: Use descriptive names for easy identification. - **Access Control**: Implement role-based access control. - **Documentation**: Maintain clear records of tenant purposes. - **Regular Reviews**: Periodically assess tenant structure. - **Scalability**: Design for future growth. #### Using Your Tenant Tenants allow you to run pipelines and experiments, leveraging Pro features such as: - Model Control Plane - Artifact Control Plane - Pipeline execution from the Dashboard #### Accessing Tenant Documentation Each tenant has a connection URL for the `zenml` client and to access the OpenAPI specification. Visit `/docs` for available methods, such as running pipelines via the REST API. For further details on API access, refer to the [API Reference](../../reference/api-reference.md). 
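As an illustration, pointing the local `zenml` client at a tenant and locating its API docs could look like the following sketch, where the tenant URL is a placeholder for the connection URL shown in the tenant's settings:

```bash
# Connect the zenml client to a specific tenant using its connection URL
zenml login https://<TENANT_CONNECTION_URL>
# The tenant's OpenAPI specification is then browsable at:
#   https://<TENANT_CONNECTION_URL>/docs
```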
================================================== === File: docs/book/reference/api-reference.md === # ZenML API Reference Summary ## Overview The ZenML server is a FastAPI application, and its OpenAPI-compliant documentation can be accessed at `/docs` or `/redoc`. For local instances, use `http://127.0.0.1:8237/docs` after logging in with `zenml login --local`. ## Programmatic API Access ### Bearer Token Authentication To access the ZenML API programmatically, you can use a bearer token. #### Short-lived API Token 1. Generate a short-lived API token (valid for 1 hour) from the ZenML dashboard under API Tokens. 2. Use the token in HTTP requests as follows: **cURL:** ```bash curl -H "Authorization: Bearer YOUR_API_TOKEN" https://your-zenml-server/api/v1/current-user ``` **Wget:** ```bash wget -qO- --header="Authorization: Bearer YOUR_API_TOKEN" https://your-zenml-server/api/v1/current-user ``` **Python:** ```python import requests response = requests.get( "https://your-zenml-server/api/v1/current-user", headers={"Authorization": f"Bearer YOUR_API_TOKEN"} ) print(response.json()) ``` **Important Notes:** - Tokens expire after 1 hour and cannot be retrieved post-generation. - Tokens are user-scoped and inherit permissions. - For long-term access, consider using a service account and API key. ### Service Account and API Key 1. Create a service account: ```shell zenml service-account create myserviceaccount ``` This will provide a ``. 2. Obtain an API token using the API key with a POST request to `/api/v1/login`: **cURL:** ```bash curl -X POST -d "password=" https://your-zenml-server/api/v1/login ``` **Wget:** ```bash wget -qO- --post-data="password=" --header="Content-Type: application/x-www-form-urlencoded" https://your-zenml-server/api/v1/login ``` **Python:** ```python import requests response = requests.post( "https://your-zenml-server/api/v1/login", data={"password": ""}, headers={"Content-Type": "application/x-www-form-urlencoded"} ) print(response.json()) ``` 3. Use the obtained API token for authentication in API requests as shown in the short-lived token section. **Important Notes:** - Tokens are scoped to the service account and inherit permissions. - Tokens expire after a configured duration (typically 1 hour). - Rotate API keys if compromised, using the ZenML dashboard or command: ```shell zenml service-account api-key rotate ``` This summary captures essential information about accessing the ZenML API programmatically, including methods for token generation and usage. ================================================== === File: docs/book/reference/global-settings.md === ### ZenML Global Settings Overview The **ZenML Global Config Directory** stores global settings for ZenML installations, located at: - **Linux:** `~/.config/zenml` - **Mac:** `~/Library/Application Support/zenml` - **Windows:** `C:\Users\%USERNAME%\AppData\Local\zenml` This path can be overridden by setting the `ZENML_CONFIG_PATH` environment variable. To retrieve the current config directory, use: ```shell zenml status python -c 'from zenml.utils.io_utils import get_global_config_directory; print(get_global_config_directory())' ``` **Warning:** Do not manually alter or delete files in this directory. 
Use CLI commands for management: - `zenml analytics` - Manage analytics settings - `zenml clean` - Reset to default configuration - `zenml downgrade` - Downgrade ZenML version in the global config Upon first run, ZenML initializes the global config directory and default stack: ```plaintext Initializing the ZenML global configuration version to 0.13.2 Creating default user 'default' ... Creating default stack for user 'default'... ``` #### Global Config Directory Structure After initialization, the directory layout includes: ``` /home/stefan/.config/zenml ├── config.yaml # Global Configuration Settings └── local_stores # Local data storage for stack components ├── # Local Store paths └── default_zen_store └── zenml.db # SQLite database for ZenML data ``` **Key Files:** 1. **`config.yaml`:** Contains global settings like client ID, active database config, and analytics options. ```yaml active_stack_id: ... analytics_opt_in: true store: database: ... url: ... username: ... user_id: version: 0.13.2 ``` 2. **`local_stores`:** Subdirectories for local stack components, e.g., artifact stores. 3. **`zenml.db`:** Default SQLite database for storing stack and component information. #### Usage Analytics ZenML collects anonymized usage statistics to improve the tool. Users can opt-out via: ```bash zenml analytics opt-out ``` Analytics are processed through a central ZenML server before being sent to Segment for aggregation. #### Version Mismatch (Downgrading) To resolve version mismatch errors when downgrading ZenML, use: ```shell zenml downgrade ``` **Warning:** Downgrading may cause unexpected behavior or data loss. To reset the configuration, run: ```shell zenml clean ``` This command purges the local database and reinitializes the global configuration. ================================================== === File: docs/book/reference/how-do-i.md === # ZenML Documentation Summary **Last Updated**: December 13, 2023 ## Common Questions - **Contributing to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small features or bug fixes, open a pull request. For larger changes, discuss in [Slack](https://zenml.io/slack/) or create an issue. - **Adding Custom Components**: Start with the [general documentation](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For specific types, see dedicated sections (e.g., [custom orchestrators](../component-guide/orchestrators/custom.md)). - **Mitigating Dependency Clashes**: Consult the [handling dependencies documentation](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md). - **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for each stack component details deployment on popular cloud providers. - **Deploying ZenML on Internal Clusters**: Review the [self-hosted ZenML deployments documentation](../getting-started/deploying-zenml/README.md). - **Hyperparameter Tuning**: Refer to the [hyperparameter tuning guide](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md). - **Resetting ZenML Client**: Use `zenml clean` to reset your client and wipe the local metadata database. This action is destructive; consult [Slack](https://zenml.io/slack/) if unsure. - **Dynamic Pipelines and Steps**: Read the [guide on composing steps and pipelines](../user-guide/starter-guide/create-an-ml-pipeline.md) and check examples in the hyperparameter tuning guide. 
- **Using Project Templates**: Use [project templates](../how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) for quick setup. The Starter template (`starter`) is recommended for most use cases. - **Upgrading ZenML**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, see the [upgrade documentation](../how-to/manage-zenml-server/upgrade-zenml-server.md). - **Using Specific Stack Components**: Refer to the [component guide](../component-guide/README.md) for tips on using each integration and component with ZenML. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/reference/environment-variables.md === # Environment Variables for ZenML ZenML allows configuration through several pre-defined environment variables: ## Logging Configuration - **Verbosity**: Set the logging level. ```bash export ZENML_LOGGING_VERBOSITY=INFO # Options: INFO, WARN, ERROR, CRITICAL, DEBUG ``` - **Format**: Define the logging format. ```bash export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s' ``` ## Step Logs - **Disable Step Logs Storage**: Prevents storing step logs, which can improve performance. ```bash export ZENML_DISABLE_STEP_LOGS_STORAGE=true ``` ## Repository and Analytics - **Repository Path**: Specify where ZenML looks for its repository. ```bash export ZENML_REPOSITORY_PATH=/path/to/somewhere ``` - **Analytics Opt-out**: Disable usage analytics. ```bash export ZENML_ANALYTICS_OPT_IN=false ``` ## Debug and Active Stack - **Debug Mode**: Enable developer mode. ```bash export ZENML_DEBUG=true ``` - **Active Stack**: Set the active stack by UUID. ```bash export ZENML_ACTIVE_STACK_ID= ``` ## Pipeline Execution and Traceback - **Prevent Pipeline Execution**: Stop pipeline execution when set to true. ```bash export ZENML_PREVENT_PIPELINE_EXECUTION=true ``` - **Rich Traceback**: Disable rich traceback. ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` ## Logging Colors and Stack Validation - **Disable Colorful Logging**: Turn off colorful logging. ```bash export ZENML_LOGGING_COLORS_DISABLED=true ``` - **Skip Stack Validation**: Bypass stack validation. ```bash export ZENML_SKIP_STACK_VALIDATION=true ``` ## Code Repository and Global Config - **Ignore Untracked Files**: Allow untracked files in code repositories. ```bash export ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=true ``` - **Global Config Path**: Set the path for the global config file. ```bash export ZENML_CONFIG_PATH=/path/to/somewhere ``` ## Server and Client Configuration - **Server Connection**: Connect to a ZenML server using the following variables. ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY= ``` For further details, refer to the respective sections in the ZenML documentation. ================================================== === File: docs/book/reference/python-client.md === # ZenML Python Client Documentation Summary ## Overview The ZenML Python `Client` enables programmatic interaction with ZenML resources such as pipelines, runs, and stacks, which are stored in a database within your ZenML instance. For other programming environments, resources can be accessed via REST API endpoints. 
## Usage Example To fetch the last 10 pipeline runs for the current stack: ```python from zenml.client import Client client = Client() my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, sort_by="desc:start_time", size=10, ) for pipeline_run in my_runs_on_current_stack: print(pipeline_run.name) ``` ## Main ZenML Resources ### Pipelines, Runs, Artifacts - **Pipelines**: Tracked pipelines. - **Pipeline Runs**: Information about executed runs. - **Run Templates**: Templates for running pipelines. - **Step Runs**: Steps of pipeline runs. - **Artifacts**: Data written to artifact stores. - **Schedules**: Metadata for scheduled runs. - **Builds**: Docker images for pipelines. - **Code Repositories**: Connected git repositories. ### Stacks, Infrastructure, Authentication - **Stack**: Registered stacks. - **Stack Components**: Components like orchestrators and artifact stores. - **Flavors**: Available stack component flavors (e.g., local, Kubeflow). - **User**: Registered users. - **Secrets**: Authentication secrets in the ZenML Secret Store. - **Service Connectors**: Connectors for infrastructure integration. ## Client Methods ### Reading and Writing Resources **List Methods**: Retrieve lists of resources. ```python client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, sort_by="desc:start_time", size=10, ) ``` Results are returned as a `Page` of resources, defaulting to 50 results. Modify page size with `size` or fetch subsequent pages with `page`. Filter results using additional arguments. **Get Methods**: Fetch specific resources by ID, name, or name prefix. ```python client.get_pipeline_run("413cfb42-a52c-4bf1-a2fd-78af2f7f0101") # By ID client.get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") # By Name client.get_pipeline_run("first_pipeline-2023_06_20-16") # By Name Prefix ``` **Create, Update, and Delete Methods**: Available for certain resources; check the Client SDK documentation for specifics. ### Active User and Active Stack Access current user and stack information via: ```python client.active_user client.active_stack_model ``` ## Resource Models ZenML Client methods return **Response Models**, which are Pydantic Models ensuring data validation. For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. **Request, Update, and Filter Models** are used for server API endpoints, not Client methods. For detailed model fields, refer to the ZenML Models SDK Documentation. This summary provides a concise overview of the ZenML Python Client, its resources, methods, and models essential for effective interaction with ZenML. ================================================== === File: docs/book/reference/community-and-content.md === ### ZenML Community & Content Overview The ZenML community offers various ways to connect with the development team and enhance your understanding of the framework. - **Slack Channel**: Join the [ZenML Slack channel](https://zenml.io/slack) for community support, discussions, and project sharing. It's a great resource for finding answers to your questions. - **Social Media**: Follow us on [LinkedIn](https://www.linkedin.com/company/zenml) and [Twitter](https://twitter.com/zenml_io) for updates on releases, events, and MLOps. Engage with our posts to help spread the word. - **YouTube Channel**: Our [YouTube channel](https://www.youtube.com/c/ZenML) offers video tutorials and workshops for visual learners. 
- **Public Roadmap**: Contribute to our [public roadmap](https://zenml.io/roadmap) by sharing ideas for new features or voting on existing suggestions, ensuring community feedback shapes ZenML's development. - **Blog**: Visit our [Blog](https://zenml.io/blog/) for articles on tool implementation, new features, and insights from our team. - **Podcast**: Listen to our [Podcast](https://podcast.zenml.io/) for interviews and discussions on machine learning and MLOps with industry leaders. - **Newsletter**: Subscribe to our [Newsletter](https://zenml.io/newsletter-signup) for updates on open-source tooling and ZenML news. ================================================== === File: docs/book/reference/llms-txt.md === ### Summary of llms.txt Documentation for ZenML #### About llms.txt The `llms.txt` file format, proposed by [llmstxt.org](https://llmstxt.org/), provides a standardized way to deliver information for LLMs to answer questions about products or websites. It includes background information, guidance, and links to detailed markdown files, formatted for both human and LLM readability. The ZenML `llms.txt` file summarizes the documentation to assist in answering basic questions about ZenML, available at [zenml.io/llms.txt](https://zenml.io/llms.txt). #### Available llms.txt Files ZenML offers multiple `llms.txt` files for different documentation areas, accessible via the [HuggingFace dataset](https://huggingface.co/datasets/zenml/llms.txt): | File | Tokens | Purpose | |--------------------------|--------|----------------------------------------------------------------| | [llms.txt](https://zenml.io/llms.txt) | 120k | Basic ZenML concepts and getting started information | | [component-guide.txt](https://zenml.io/component-guide.txt) | 180k | Details on ZenML integrations and stack components | | [how-to-guides.txt](https://zenml.io/how-to-guides.txt) | 75k | Summarized how-to guides for common ZenML workflows | | [llms-full.txt](https://zenml.io/llms-full.txt) | 600k | Complete ZenML documentation | 1. **llms.txt**: Covers User Guides and Getting Started sections, ideal for basic inquiries. 2. **component-guide.txt**: Details on stack components and integrations. 3. **how-to-guides.txt**: Summarized pages from the how-to section, useful for process questions. 4. **llms-full.txt**: Complete documentation for the most accurate answers. #### How to Use the llms.txt Files - Select the relevant file based on your inquiry. - Each file's text is prefixed with its filename, allowing LLMs to reference sources in answers. - You can combine files for more comprehensive results, provided your context window allows it. - Instruct the LLM to avoid answers not directly supported by the text to prevent hallucinations. - Use models with large context windows, like Gemini, due to high token counts. ================================================== === File: docs/book/reference/faq.md === ### ZenML FAQ Summary #### Purpose of ZenML ZenML was created to simplify the deployment of machine learning models in production, addressing challenges faced by the team while developing large-scale ML pipelines. #### ZenML vs. Orchestrators ZenML is not just another orchestrator like Airflow or Kubeflow; it is a framework that allows users to run ML pipelines on any orchestrator. It supports standard orchestrators out-of-the-box and allows for custom orchestrator development. 
#### Tool Integration For integration with tools, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md). Active integration examples can be found in the [integration test code](https://github.com/zenml-io/zenml/tree/main/tests/integration/examples). ZenML is extensible, and users are encouraged to integrate it with other tools. #### Windows Support ZenML officially supports Windows via WSL. Some features may not work outside of WSL. #### Apple Silicon Support ZenML supports Macs with Apple Silicon. Set the following environment variable for local server use: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is necessary for local server functionality but not required for CLI use. #### Custom Tool Integration For extending ZenML with custom tools, refer to the guide on [implementing a custom stack component](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Community Contribution To contribute, select issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). #### Community Engagement Join the [Slack group](https://zenml.io/slack/) for questions about bugs or use cases. #### Licensing ZenML is licensed under the Apache License Version 2.0. Contributions will also be under this license. Full license details are in [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). ================================================== === File: docs/book/user-guide/starter-guide/README.md === # ZenML Starter Guide Summary The ZenML Starter Guide is designed for MLOps engineers and data scientists to build robust ML platforms using the ZenML framework. It provides foundational knowledge and tools to manage machine learning operations effectively. ## Key Topics Covered: - **Creating Your First ML Pipeline**: Learn to set up and execute a basic ML pipeline. - **Understanding Caching**: Explore how caching works between pipeline steps to optimize performance. - **Managing Data and Versioning**: Gain insights into data management and version control for ML projects. - **Tracking ML Models**: Understand the methods for tracking and managing machine learning models. ## Prerequisites: - A Python environment set up. - `virtualenv` installed. By the end of the guide, you will complete a starter project, marking your entry into MLOps with ZenML. For additional support, refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal functions and classes. Prepare your development environment and start your MLOps journey with ZenML! ================================================== === File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md === ### Summary of ZenML Documentation on Creating ML Pipelines ZenML facilitates the creation of modular and scalable machine learning (ML) pipelines by decoupling stages like data ingestion, preprocessing, and model evaluation into **Steps** that can be integrated into an end-to-end **Pipeline**. This structure enhances reproducibility and efficiency in ML workflows. 
#### Installation To get started, install ZenML and initialize your project: ```shell pip install "zenml[server]" zenml login --local zenml init ``` #### Simple ML Pipeline Example A basic example of a ZenML pipeline is provided below, demonstrating data loading and model training: ```python from zenml import pipeline, step @step def load_data() -> dict: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) if __name__ == "__main__": run = simple_ml_pipeline() ``` Run the script with: ```bash python run.py ``` #### Dashboard Exploration After execution, use `zenml login --local` to view the results in the ZenML Dashboard at [http://127.0.0.1:8237/](http://127.0.0.1:8237/). Log in with the username **"default"** to explore execution history, artifacts, and DAG visualizations. #### Understanding Steps and Artifacts Each function executed in the pipeline is a `step`, and the objects returned are `artifacts`. ZenML automatically tracks these artifacts, parameters, and configurations, promoting a reproducible codebase. #### Expanding to a Full ML Workflow For a complete ML workflow, use the Iris dataset with a Support Vector Classifier (SVC). Install necessary packages: ```bash pip install matplotlib zenml integration install sklearn -y ``` Define a data loading step with multiple outputs: ```python @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) ``` Create a parameterized training step: ```python @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) ``` Combine steps into a pipeline: ```python @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline(gamma=0.0015) ``` #### YAML Configuration You can configure pipeline runs using a YAML file: ```yaml parameters: gamma: 0.01 ``` Reference the config file in your code: ```python training_pipeline = training_pipeline.with_options(config_path='/local/path/to/config.yaml') ``` #### Full Code Example The complete code for the Iris dataset workflow is: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.svm import SVC from zenml import pipeline, step @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: 
pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ``` This summary captures the essential details and technical information necessary for understanding and implementing ZenML pipelines. ================================================== === File: docs/book/user-guide/starter-guide/track-ml-models.md === # Summary of ZenML Model Control Plane Documentation ## Overview The ZenML Model Control Plane (MCP) manages ML models, which consist of multiple versions and encapsulate pipelines, artifacts, metadata, and business data. A ZenML Model serves as a unified entity representing an ML product's logic. ### Key Concepts - **Model**: Represents a collection of pipelines, artifacts, and metadata. It includes technical models (files with weights and parameters), training data, and predictions. - **Model Management**: Models are accessed via the ZenML API, CLI, or ZenML Pro dashboard. ### Model Configuration in Pipelines To use a ZenML model in a pipeline, pass a `Model` object at the pipeline or step level. This links all artifacts generated during the pipeline run to the specified model, enabling lineage tracking. #### Example Code ```python from zenml import pipeline, Model model = Model(name="iris_classifier", version=None, license="Apache 2.0", description="A classification model for the iris dataset.") @step(model=model) def svc_trainer(...): ... @pipeline(model=model) def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ``` ### Viewing Models and Versions - **CLI**: Use commands like `zenml model list` and `zenml model version list ` to view models and their versions. - **Dashboard**: The ZenML Pro dashboard provides visualizations for models and their associated runs and artifacts. ### Fetching Models in Pipelines Models can be accessed via `get_step_context()` or `get_pipeline_context()`. #### Example Code ```python @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001): model = get_step_context().model ... @pipeline(model=Model(name="iris_classifier", version="production")) def training_pipeline(gamma: float = 0.002): model = get_pipeline_context().model ... ``` ### Logging Metadata Models can log metadata using the `log_model_metadata` method, allowing capture of key-value pairs. #### Example Code ```python from zenml import get_step_context, step, log_model_metadata @step def svc_trainer(...): ... log_model_metadata(model_name="iris_classifier", metadata={"accuracy": float(accuracy)}) ``` ### Model Stages Models can exist in various stages: `staging`, `production`, `latest`, and `archived`. Stages indicate the lifecycle state of a model. 
#### Example Code ```python model = Model(name="iris_classifier", version="latest") model.set_stage(stage="production", force=True) ``` ### CLI Commands for Stages - List staging models: `zenml model version list --stage staging` - Update to production: `zenml model version update -s production` ### Conclusion ZenML's Model Control Plane is a powerful feature for managing ML models and their lifecycle, enhancing traceability and reproducibility in ML workflows. For deeper insights, refer to the dedicated Model Management guide. ================================================== === File: docs/book/user-guide/starter-guide/starter-project.md === # Starter Project Overview This documentation outlines a simple starter project to apply foundational MLOps concepts, including pipelines, artifacts, and models. ## Getting Started 1. **Set Up Environment**: Create a fresh virtual environment and install dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 2. **Initialize Project**: Use ZenML templates to set up the project: ```bash mkdir zenml_starter cd zenml_starter zenml init --template starter --template-with-defaults pip install -r requirements.txt ``` **Alternative Method**: If the above doesn't work, clone the MLOps starter example: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/mlops_starter pip install -r requirements.txt zenml init ``` ## Learning Outcomes You will execute three key pipelines: - **Feature Engineering Pipeline**: Loads and prepares data for training. - **Training Pipeline**: Trains a model using the preprocessed dataset. - **Batch Inference Pipeline**: Runs predictions on new data with the trained model. ## Conclusion and Next Steps This project serves as an introduction to MLOps with ZenML. Experiment with ZenML to solidify your understanding, then proceed to the [production guide](../production-guide/) for further learning. ================================================== === File: docs/book/user-guide/starter-guide/manage-artifacts.md === ### ZenML Artifact Management Overview ZenML automates the versioning and management of artifacts (data, models, evaluations) within machine learning workflows, ensuring reproducibility and traceability. This guide covers how to name, organize, and utilize artifacts effectively. #### Managing Artifacts in ZenML Pipelines - **Artifact Naming**: Use the `Annotated` object to assign custom names to outputs for better discoverability. Unnamed artifacts default to `{pipeline_name}::{step_name}::output`. ```python from typing_extensions import Annotated import pandas as pd from sklearn.datasets import load_iris from zenml import pipeline, step @step def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]: iris = load_iris(as_frame=True) return iris.get("frame") @pipeline def feature_engineering_pipeline(): training_data_loader() if __name__ == "__main__": feature_engineering_pipeline() ``` - **Artifact Versioning**: ZenML automatically versions artifacts with an incrementing number. Custom versions can be specified using `ArtifactConfig`. ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]: ... ``` - **Metadata and Tags**: You can add metadata and tags to artifacts using `ArtifactConfig` or by using the `get_step_context()` method. 
```python @step def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"metadata_key": "metadata_value"}, tags=["tag_name"])]: return "string" ``` #### Comparing Metadata Across Runs (Pro Feature) The ZenML Pro dashboard includes an Experiment Comparison tool to visualize and analyze metadata across pipeline runs. It offers: - **Table View**: Structured comparison with sorting and filtering. - **Parallel Coordinates View**: Identifies relationships between metadata parameters. #### Specifying Artifact Types Assigning a type to an artifact allows for better filtering and visualization in the dashboard. ```python from zenml import ArtifactConfig, step from zenml.enums import ArtifactType @step def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactType.MODEL)]: return MyCustomModel(...) ``` #### Consuming External Artifacts Use `ExternalArtifact` to initialize artifacts with arbitrary data types, such as dataframes or CSV files. ```python import numpy as np from zenml import ExternalArtifact, pipeline, step @step def print_data(data: np.ndarray): print(data) @pipeline def printing_pipeline(): data = ExternalArtifact(value=np.array([0])) print_data(data=data) if __name__ == "__main__": printing_pipeline() ``` #### Managing Artifacts from Other Pipelines You can fetch artifacts produced by other pipelines using the `Client`. ```python from zenml.client import Client @step def trainer(dataset: pd.DataFrame): ... @pipeline def training_pipeline(): client = Client() dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset") trainer(dataset=dataset_artifact) if __name__ == "__main__": training_pipeline() ``` #### Linking Existing Data as ZenML Artifacts You can link existing data (like model checkpoints) as artifacts in ZenML. ```python from zenml.client import Client from zenml import register_artifact from pytorch_lightning import Trainer from uuid import uuid4 prefix = Client().active_stack.artifact_store.path default_root_dir = os.path.join(prefix, uuid4().hex) trainer = Trainer(default_root_dir=default_root_dir) trainer.fit(model) register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` #### Logging Metadata for Artifacts You can log metadata associated with artifacts to provide context about the data. ```python from zenml import step, log_artifact_metadata @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model ``` ### Example Code This section combines the above concepts into a simple script. 
```python from typing import Optional, Tuple from typing_extensions import Annotated import numpy as np from sklearn.base import ClassifierMixin from sklearn.datasets import load_digits from sklearn.svm import SVC from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata, save_artifact, load_artifact from zenml.client import Client @step def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset", tags=["digits"])]: digits = load_digits() return (digits.images.reshape((len(digits.images), -1)), digits.target) @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model", tags=["SVC", "trained"])]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline def model_finetuning_pipeline(dataset_version: Optional[str] = None, model_version: Optional[str] = None): client = Client() dataset = client.get_artifact_version(name_id_or_prefix="my_dataset", version=dataset_version) if dataset_version else versioned_data_loader_step() model = client.get_artifact_version(name_id_or_prefix="my_model", version=model_version) model_finetuner_step(model=model, dataset=dataset) def main(): untrained_model = SVC(gamma=0.001) save_artifact(untrained_model, name="my_model", version="1", tags=["SVC", "untrained"]) model_finetuning_pipeline() model_finetuning_pipeline(dataset_version="1") latest_trained_model = load_artifact("my_model") old_dataset = load_artifact("my_dataset", version="1") latest_trained_model.predict(old_dataset[0]) if __name__ == "__main__": main() ``` This script demonstrates the creation and management of datasets and models, including versioning and metadata logging. ================================================== === File: docs/book/user-guide/starter-guide/cache-previous-executions.md === ### Summary: Iterating Quickly with ZenML through Caching ZenML enhances the development of machine learning pipelines through **step caching**, which speeds up iterative processes by reusing outputs from previous runs when inputs, parameters, or code remain unchanged. Caching is enabled by default, allowing ZenML to track and version all components of steps and pipelines. #### Key Features: - **Caching Behavior**: Outputs from previous runs are reused, reducing execution time and costs, especially when running pipelines remotely. - **Client-side Caching**: By default, cached steps are computed on the client machine. To force the orchestrator to compute cached steps, set the environment variable `ZENML_PREVENT_CLIENT_SIDE_CACHING=True`. - **Manual Caching Control**: Caching does not detect changes in external inputs or file systems automatically. You can disable caching for specific steps that rely on such changes: ```python @step(enable_cache=False) def load_data_from_external_system(...) -> ...: # This step will always run ``` #### Configuring Caching: 1. **Pipeline Level**: Set caching policy in the `@pipeline` decorator: ```python @pipeline(enable_cache=False) def first_pipeline(...): """Pipeline with cache disabled""" ``` 2. **Dynamic Configuration**: Override caching settings at runtime: ```python first_pipeline = first_pipeline.with_options(enable_cache=False) ``` 3. 
**Step Level**: Control caching for individual steps: ```python @step(enable_cache=False) def import_data_from_api(...): """Import most up-to-date data from public api""" ``` #### Code Example: A simple script demonstrating caching behavior: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() logger.info("\n\nFirst step cached, second not due to parameter change") training_pipeline(gamma=0.0001) logger.info("\n\nFirst step cached, second not due to settings") svc_trainer = svc_trainer.with_options(enable_cache=False) training_pipeline() logger.info("\n\nCaching disabled for the entire pipeline") training_pipeline.with_options(enable_cache=False)() ``` This example illustrates how ZenML handles caching across different steps and pipelines, optimizing the development workflow in machine learning projects. ================================================== === File: docs/book/user-guide/production-guide/configure-pipeline.md === ### Summary of Pipeline Configuration Documentation #### Overview This documentation explains how to configure a ZenML pipeline to add compute resources and manage dependencies using a YAML configuration file. #### Configuring the Pipeline To configure the pipeline, the `run.py` script is executed, which sets the `config_path` to a YAML file (`training_rf.yaml`). The pipeline is then configured with `with_options`: ```python pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml") training_pipeline_configured = training_pipeline.with_options(**pipeline_args) training_pipeline_configured() ``` #### YAML Configuration Breakdown The YAML configuration consists of several key sections: 1. **Docker Settings** ```yaml settings: docker: required_integrations: - sklearn requirements: - pyarrow ``` This section specifies Docker settings, including required libraries and integrations. 2. **Model Association** ```yaml model: name: breast_cancer_classifier version: rf license: Apache 2.0 description: A breast cancer classifier tags: ["breast_cancer", "classifier"] ``` This section associates a ZenML model with the pipeline. 3. **Parameters** ```yaml parameters: model_type: "rf" # Choose between rf/sgd ``` This defines parameters expected by the pipeline, such as `model_type`. 
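To make the connection to code concrete, here is a minimal sketch of how such a `parameters` entry lines up with the pipeline signature; the step body and print statement are illustrative assumptions, not the project's actual trainer.

```python
from zenml import pipeline, step


@step
def model_trainer(model_type: str) -> None:
    # Hypothetical trainer step: "rf" vs. "sgd" would select the estimator family.
    print(f"Training a {model_type} model")


@pipeline
def training_pipeline(model_type: str = "sgd"):
    # The `parameters:` block of training_rf.yaml overrides this default with "rf"
    # when the pipeline is configured via `with_options(config_path=...)`.
    model_trainer(model_type=model_type)
```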
#### Scaling Compute Resources To scale compute resources, add the following to `training_rf.yaml`: ```yaml settings: orchestrator: memory: 32 # in GB steps: model_trainer: settings: orchestrator: cpus: 8 ``` This allocates 32 GB of memory to the entire pipeline and 8 CPUs to the `model_trainer` step. ##### Azure Users For Azure Kubernetes orchestrators, the configuration differs slightly: ```yaml settings: resources: memory: "32GB" steps: model_trainer: settings: resources: memory: "8GB" ``` #### Running the Pipeline Run the pipeline with the command: ```bash python run.py --training-pipeline ``` This will provision a machine with the specified configuration. Note that not all orchestrators support `ResourceSettings` directly. #### Additional Resources For more details on settings and GPU attachment, refer to the ZenML documentation on runtime configuration and GPU training. ================================================== === File: docs/book/user-guide/production-guide/remote-storage.md === ### Transitioning to Remote Artifact Storage #### Connecting Remote Storage Remote storage allows for cloud-based artifact management, enhancing collaboration and scalability for production workloads. Artifacts are materialized in a central location, accessible to authorized users. #### Provisioning and Registering a Remote Artifact Store ZenML supports multiple artifact store types. Here are instructions for major cloud providers: **AWS:** 1. Install the AWS CLI. 2. Install the S3 integration: ```shell zenml integration install s3 -y ``` 3. Register the S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name ``` **GCP:** 1. Install the Google Cloud CLI. 2. Install the GCP integration: ```shell zenml integration install gcp -y ``` 3. Register the GCS Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name ``` **Azure:** 1. Install the Azure CLI. 2. Install the Azure integration: ```shell zenml integration install azure -y ``` 3. Register the Azure Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name ``` **Other Providers:** You can use cloud-agnostic solutions like Minio or create custom stack components. #### Configuring Permissions with Service Connectors Service connectors manage credentials for accessing cloud infrastructure. They provide temporary tokens for stack components.
**AWS Service Connector:** ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` **GCP Service Connector:** ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= --generate_temporary_tokens=False ``` **Azure Service Connector:** ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` Attach the service connector to the artifact store: ```shell zenml artifact-store connect cloud_artifact_store --connector cloud_connector ``` #### Running a Pipeline on a Cloud Stack Register a new stack with the remote artifact store: ```shell zenml stack register local_with_remote_storage -o default -a cloud_artifact_store ``` Set the stack as active: ```shell zenml stack set local_with_remote_storage ``` Run the training pipeline: ```shell python run.py --training-pipeline ``` Artifacts will be stored in remote storage, allowing team members to access them without local setup. List artifact versions: ```shell zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')" ``` By connecting to remote storage, you enhance collaboration and scalability in your MLOps workflow. ================================================== === File: docs/book/user-guide/production-guide/README.md === # Production Guide Summary The ZenML production guide is designed for ML practitioners looking to implement MLOps in a workplace, building on the concepts from the Starter Guide. It focuses on transitioning from local pipeline execution to production deployment in the cloud. ## Key Topics Covered: - **Deploying ZenML**: Instructions for setting up ZenML in a production environment. - **Understanding Stacks**: Overview of the stack architecture used in ZenML. - **Connecting Remote Storage**: Guidance on integrating cloud storage solutions. - **Orchestrating on the Cloud**: Techniques for managing workflows in cloud environments. - **Configuring Pipeline for Scalability**: Strategies to scale compute resources efficiently. - **Code Repository Configuration**: Steps to connect a code repository for version control. ## Prerequisites: - A Python environment with `virtualenv` installed. - A major cloud provider (AWS, GCP, Azure) with respective CLIs installed and authorized. By following this guide, you will complete an end-to-end MLOps project, serving as a model for future implementations. **Note**: For internal ZenML functions and classes, refer to the [SDK Docs](https://sdkdocs.zenml.io/) for detailed information. ================================================== === File: docs/book/user-guide/production-guide/cloud-orchestration.md === ## Summary of Cloud Orchestration Documentation ### Overview This documentation outlines the process of transitioning MLOps pipelines from local execution to cloud environments, leveraging cloud resources for scalability and robustness. Key components involved are: - **Orchestrator**: Manages workflow and execution of pipelines. - **Container Registry**: Stores Docker container images. - **Remote Storage**: Complements the cloud stack for artifact storage. ### Cloud Stack Components To deploy a cloud stack, users can utilize the **Skypilot** orchestrator, which provisions a VM on a public cloud to execute pipelines. ZenML employs **Docker** to package code and dependencies into images that are pushed to a container registry. 
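The way that Docker image is built can also be influenced directly from code. Below is a minimal sketch using ZenML's `DockerSettings`; the chosen integration and extra requirement simply mirror the Docker settings YAML shown in the pipeline configuration section above:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Bake the sklearn integration and an extra pip package into the image
# that ZenML builds and pushes for remote runs.
docker_settings = DockerSettings(
    required_integrations=["sklearn"],
    requirements=["pyarrow"],
)


@pipeline(settings={"docker": docker_settings})
def training_pipeline():
    ...
```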
### Sequence of Events for Running a Pipeline 1. User initiates a pipeline on the client machine, executing `run.py`. 2. Client retrieves stack configuration from the server. 3. Client builds and pushes a Docker image to the container registry. 4. Client creates a run in the orchestrator (e.g., Skypilot). 5. Orchestrator pulls the Docker image to execute the pipeline. 6. Artifacts are stored in the artifact store (cloud storage). 7. Pipeline execution status is reported back to the ZenML server. ### Provisioning and Registering Components #### AWS Setup 1. Install AWS and Skypilot integrations: ```shell zenml integration install aws skypilot_aws -y ``` 2. Register the service connector: ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` 3. Register the Skypilot orchestrator: ```shell zenml orchestrator register cloud_orchestrator -f vm_aws zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register the AWS container registry: ```shell zenml container-registry register cloud_container_registry -f aws --uri=.dkr.ecr..amazonaws.com zenml container-registry connect cloud_container_registry --connector cloud_connector ``` #### GCP Setup 1. Install GCP and Skypilot integrations: ```shell zenml integration install gcp skypilot_gcp -y ``` 2. Register the service connector: ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= ``` 3. Register the Skypilot orchestrator: ```shell zenml orchestrator register cloud_orchestrator -f vm_gcp zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register the GCP container registry: ```shell zenml container-registry register cloud_container_registry -f gcp --uri=gcr.io/ zenml container-registry connect cloud_container_registry --connector cloud_connector ``` #### Azure Setup Due to compatibility issues, Azure users should use the Kubernetes orchestrator: 1. Install Azure and Kubernetes integrations: ```shell zenml integration install azure kubernetes -y ``` 2. Register the service connector: ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` 3. Register the Kubernetes orchestrator: ```shell zenml orchestrator register cloud_orchestrator --flavor kubernetes zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register the Azure container registry: ```shell zenml container-registry register cloud_container_registry -f azure --uri=.azurecr.io zenml container-registry connect cloud_container_registry --connector cloud_connector ``` ### Running a Pipeline on Cloud Stack 1. Register a new stack: ```shell zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry ``` 2. Set the stack as active: ```shell zenml stack set minimal_cloud_stack ``` 3. Run the training pipeline: ```shell python run.py --training-pipeline ``` ### Additional Resources For further exploration of stack components, refer to the **Component Guide** for various artifact stores, container registries, and orchestrators integrated with ZenML. ================================================== === File: docs/book/user-guide/production-guide/understand-stacks.md === ### Summary: Switching Infrastructure Backend in ZenML #### Understanding Stacks - A **stack** is the configuration of tools and infrastructure for running ZenML pipelines.
By default, pipelines run on the `default` stack. - ZenML separates code from configuration, allowing easy switching of environments without code changes. #### Stack Commands - **Describe Active Stack**: ```bash zenml stack describe ``` - **List Stacks**: ```bash zenml stack list ``` #### Components of a Stack - A stack consists of at least an **orchestrator** (executes pipeline code) and an **artifact store** (persists step outputs). - **Orchestrator**: ```bash zenml orchestrator list ``` - **Artifact Store**: ```bash zenml artifact-store list ``` #### Registering a Stack 1. **Create an Artifact Store**: ```bash zenml artifact-store register my_artifact_store --flavor=local ``` 2. **Create a New Stack**: ```bash zenml stack register a_new_local_stack -o default -a my_artifact_store ``` #### Inspecting Stack - To view details of a registered stack: ```bash zenml stack describe a_new_local_stack ``` #### Switching Stacks - Use the ZenML VS Code extension to easily view and switch stacks. #### Running a Pipeline on the New Stack 1. Set the new stack as active: ```bash zenml stack set a_new_local_stack ``` 2. Run the pipeline: ```bash python run.py --training-pipeline ``` #### Additional Notes - For requirements of a stack, use: ```bash zenml stack export-requirements ``` - For more information on ZenML functions, refer to the [SDK Docs](https://sdkdocs.zenml.io/). ================================================== === File: docs/book/user-guide/production-guide/deploying-zenml.md === ### Deploying ZenML Deploying ZenML is essential for moving from local development to production. Initially, ZenML operates with a local SQLite database for metadata storage (pipelines, models, artifacts). For production, deploy the server centrally to facilitate collaboration and interaction among infrastructure components. #### Deployment Options 1. **ZenML Pro Trial**: - Sign up for a managed SaaS solution with one-click deployment. - If the ZenML Python client is installed, connect to the trial using: ```bash zenml login --pro ``` - Additional features and a dashboard are included. Self-hosting is an option post-trial. 2. **Self-hosting on Cloud Provider**: - ZenML is open-source and can be self-hosted in a Kubernetes cluster. - Create a Kubernetes cluster using your cloud provider's documentation: - [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) - [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) - [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) #### Connecting to Deployed ZenML Connect your local ZenML client to the ZenML Server using the CLI: ```bash zenml login ``` This command initiates a browser-based validation process. Once connected, all metadata will be centrally tracked. To revert to local mode, use: ```bash zenml logout ``` #### Further Resources - **[Deploying ZenML](../../getting-started/deploying-zenml/README.md)**: Overview of deployment options and architecture. - **[Full how-to guides](../../getting-started/deploying-zenml/README.md)**: Detailed guides for deploying on Docker, Hugging Face Spaces, Kubernetes, etc. 
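As a quick sanity check after `zenml login`, here is a minimal sketch that prints which backend the local client is talking to and which stack is currently active; it assumes the client exposes the store URL via `zen_store.url`:

```python
from zenml.client import Client

client = Client()
# Before `zenml login` this points at the local SQLite store;
# afterwards it should show the deployed server's URL.
print(client.zen_store.url)
# The stack that pipeline runs will use right now.
print(client.active_stack_model.name)
```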
================================================== === File: docs/book/user-guide/production-guide/end-to-end.md === ### End-to-End MLOps Project with ZenML This documentation outlines the steps to create an end-to-end MLOps project using ZenML, incorporating advanced MLOps concepts: **Key Concepts Covered:** - Deploying ZenML - Abstracting infrastructure with stacks - Connecting to remote storage - Cloud orchestration - Configuring scalable pipelines - Integrating with a Git repository ### Getting Started 1. **Set Up Virtual Environment:** Install necessary dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 2. **Create Project Directory:** Use ZenML templates to initialize the project: ```bash mkdir zenml_batch_e2e cd zenml_batch_e2e zenml init --template e2e_batch --template-with-defaults pip install -r requirements.txt ``` **Alternative Method:** Clone the ZenML example if the above fails: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/e2e pip install -r requirements.txt zenml init ``` ### Learning Outcomes The e2e project template demonstrates key ZenML functionalities for supervised ML with batch predictions. It builds on the starter project and encourages experimentation with pipelines on a remote cloud stack and a tracked Git repository. ### Conclusion and Next Steps You now have a foundational MLOps project using ZenML connected to cloud infrastructure. Explore writing your own pipelines and stacks, and refer to the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md) for advanced topics. Good luck with your MLOps journey! ================================================== === File: docs/book/user-guide/production-guide/ci-cd.md === ### Managing the Lifecycle of a ZenML Pipeline with CI/CD #### Overview This guide outlines how to manage ZenML pipelines using Continuous Integration and Delivery (CI/CD) through GitHub Actions. This approach allows data scientists to develop locally while automating testing and deployment to production. #### Setting Up CI/CD 1. **GitHub Repository**: Use the [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/) as a template for CI/CD workflows that automate model training and deployment. 2. **API Key Configuration**: Create an API key for machine-to-machine connections in ZenML: ```bash zenml service-account create github_action_api_key ``` Store the generated API key securely. 3. **GitHub Secrets**: Store the `ZENML_API_KEY` in GitHub secrets for secure access during actions. #### Optional Staging and Production Stacks You may want different configurations for staging and production environments. This can include: - Different data sources - Separate configuration files for models, Docker settings, and resource settings. 
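One common way to wire up such per-environment configuration files is to pick the config path at runtime. The sketch below is illustrative only; the file names, environment variable, and import path are assumptions:

```python
import os

from pipelines.training import training_pipeline  # hypothetical import path

# For example, the CI workflow could set ZENML_PIPELINE_ENV=production on the
# main branch and leave the staging default everywhere else.
environment = os.environ.get("ZENML_PIPELINE_ENV", "staging")
config_path = f"configs/training_{environment}.yaml"

if __name__ == "__main__":
    training_pipeline.with_options(config_path=config_path)()
```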
#### Triggering Pipelines on Pull Requests To ensure code quality, set up a GitHub Action to run your pipeline on pull requests: ```yaml on: pull_request: branches: [ staging, main ] ``` #### Workflow Configuration Define environment variables in your workflow: ```yaml jobs: run-staging-workflow: runs-on: run-zenml-pipeline env: ZENML_STORE_URL: ${{ secrets.ZENML_HOST }} ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }} ZENML_STACK: stack_name ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }} ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }} ``` #### Steps to Run Pipeline Include the following steps in your GitHub Action: ```yaml steps: - name: Check out repository code uses: actions/checkout@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install requirements run: pip3 install -r requirements.txt - name: Confirm ZenML client connection run: zenml status - name: Set stack run: zenml stack set ${{ env.ZENML_STACK }} - name: Run pipeline run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} ``` #### Optional: Comment Metrics on Pull Requests You can configure your workflow to leave a report on the pull request based on the pipeline results. Refer to the template in the ZenML Gitflow repository for implementation details. ================================================== === File: docs/book/user-guide/production-guide/connect-code-repository.md === ### ZenML Git Repository Integration **Overview**: Connect a Git repository to ZenML to optimize Docker builds and enhance collaboration in MLOps projects. #### Benefits of Connecting a Git Repository - Reduces redundant Docker builds by reusing existing images based on the current Git commit hash. - Facilitates better code management and collaboration among team members. #### Pipeline Execution Flow 1. Trigger a pipeline run locally. 2. ZenML parses the `@pipeline` function for necessary steps. 3. Local client requests stack info from ZenML server. 4. Checks if an existing Docker image can be reused. 5. Initiates a run in the orchestrator, setting up the cloud execution environment. 6. Orchestrator downloads code from the Git repository and runs the pipeline steps using the existing Docker image. 7. Artifacts are stored in a cloud-based artifact store. 8. Pipeline run status and metadata are reported back to the ZenML server. #### Creating a GitHub Repository 1. Sign in to [GitHub](https://github.com/). 2. Click "+" and select "New repository." 3. Name the repository, set visibility, and add a README or .gitignore if needed. 4. Click "Create repository." **Push Local Code to GitHub**: ```sh git init git add . git commit -m "Initial commit" git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git git push -u origin master ``` *Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` with your details.* #### Linking GitHub to ZenML 1. Obtain a GitHub Personal Access Token (PAT): - Go to GitHub settings > Developer settings > Personal access tokens > Generate new token. - Name the token, select specific repository access, and grant `contents` read-only access. - Generate and save the token. 2. 
Install the GitHub integration and register the repository: ```sh zenml integration install github zenml code-repository register --type=github \ --owner= --repository= \ --token= ``` *Fill in the repository owner, repository name, and personal access token with your own values.* #### Running the Pipeline - First run (Docker image built): ```bash python run.py --training-pipeline ``` - Subsequent runs (Docker build skipped): ```bash python run.py --training-pipeline ``` For more details, refer to the ZenML Git Integration documentation. ================================================== === File: docs/book/user-guide/llmops-guide/README.md === # ZenML LLMOps Guide Summary The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps pipelines, targeting ML practitioners and MLOps engineers. Key topics include: - **RAG with ZenML**: Introduction to Retrieval-Augmented Generation (RAG). - **Code Examples**: - RAG implementation in 85 lines of code. - Evaluation in 65 lines of code. - Finetuning LLMs in 100 lines of code. - **Core Concepts**: - **Data Ingestion & Preprocessing**: Techniques for preparing data for LLMs. - **Embeddings Generation**: Creating vector representations of data. - **Vector Database**: Storing embeddings for efficient retrieval. - **Inference Pipeline**: Basic setup for RAG inference. - **Evaluation Metrics**: Methods for assessing retrieval and generation performance. - **Reranking**: Improving retrieval results through reranking techniques. - **Finetuning**: Strategies for enhancing LLMs and embeddings, including using Sentence Transformers and synthetic data. - **Deployment**: Guidelines for deploying finetuned models. The guide emphasizes a practical application, a question answering system for ZenML, demonstrating the progression from a simple RAG pipeline to advanced techniques like finetuning and reranking. **Prerequisites**: Users should have a Python environment with ZenML installed and familiarity with the Starter and Production Guides. By following this guide, users will learn to build scalable and maintainable LLM-powered applications within their MLOps workflows. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md === ### Generating Embeddings for Retrieval This section outlines the process of generating embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations that capture the semantic meaning of data in a high-dimensional space, allowing for improved retrieval of relevant information based on similarity to user queries. #### Key Points: - **Embeddings Purpose**: They facilitate the quick identification of relevant data chunks, outperforming simple keyword searches, especially for complex queries. - **Library Used**: The `sentence-transformers` library is employed to generate embeddings using pre-trained models, specifically `sentence-transformers/all-MiniLM-L12-v2`, which produces 384-dimensional embeddings. - **Dimensionality Reduction**: Techniques like UMAP and t-SNE can visualize embeddings in two dimensions, helping to identify patterns and relationships in the data.
#### Code for Generating Embeddings ```python from typing import Annotated, List import numpy as np from sentence_transformers import SentenceTransformer from structures import Document from zenml import ArtifactConfig, log_artifact_metadata, step @step def generate_embeddings(split_documents: List[Document]) -> Annotated[List[Document], ArtifactConfig(name="documents_with_embeddings")]: model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2") log_artifact_metadata(artifact_name="embeddings", metadata={"embedding_type": "sentence-transformers/all-MiniLM-L12-v2", "embedding_dimensionality": 384}) embeddings = model.encode([doc.page_content for doc in split_documents]) for doc, embedding in zip(split_documents, embeddings): doc.embedding = embedding return split_documents ``` #### Visualization of Embeddings The embeddings can be visualized using t-SNE or UMAP to understand their clustering based on semantic meaning. The following code snippets demonstrate how to visualize embeddings using both methods: ```python import matplotlib.pyplot as plt import numpy as np from sklearn.manifold import TSNE import umap from zenml.client import Client artifact = Client().get_artifact_version('EMBEDDINGS_ARTIFACT_UUID') documents = artifact.load() embeddings = np.array([doc.embedding for doc in documents]) parent_sections = [doc.parent_section for doc in documents] def visualize_embeddings(embeddings, parent_sections, method='tsne'): if method == 'tsne': embeddings_2d = TSNE(n_components=2).fit_transform(embeddings) else: embeddings_2d = umap.UMAP(n_components=2).fit_transform(embeddings) plt.figure(figsize=(8, 8)) unique_sections = list(set(parent_sections)) colors = plt.cm.get_cmap('tab10', len(unique_sections)) for idx, section in enumerate(unique_sections): mask = [section == ps for ps in parent_sections] plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], c=[colors(idx)], label=section) plt.title(f"{method.upper()} Visualization") plt.legend() plt.show() ``` ### Summary Embeddings are essential for enhancing retrieval in RAG pipelines. The process involves generating embeddings using a pre-trained model, visualizing them to identify semantic clusters, and storing them for efficient retrieval. This modular approach allows for flexibility in future database integrations. For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md === ### RAG Pipelines with ZenML Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models, enhancing the capabilities of large language models (LLMs). This guide outlines the setup of RAG pipelines using ZenML, covering essential components and processes. #### Key Topics: - **Purpose of RAG**: Addresses limitations of LLMs, which may generate incorrect or inappropriate responses due to ambiguous prompts and constraints on text length. - **Data Ingestion and Preprocessing**: Steps to prepare data for the RAG pipeline. - **Embeddings**: Use of embeddings to represent data, forming the basis for the retrieval mechanism. - **Vector Database**: Storing embeddings for efficient retrieval. - **Artifact Tracking**: Utilizing ZenML to track RAG-related artifacts. #### Conclusion: The guide culminates in a demonstration of how all components work together for basic RAG inference.
For more information on LLM capabilities, refer to [Google's Gemini 1.5 Pro](https://developers.googleblog.com/2024/02/gemini-15-available-for-private-preview-in-google-ai-studio.html). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/understanding-rag.md === ### Summary of Retrieval-Augmented Generation (RAG) **Overview**: Retrieval-Augmented Generation (RAG) enhances the capabilities of Large Language Models (LLMs) by integrating a retrieval mechanism that fetches relevant documents from a large corpus to inform response generation. This technique addresses LLM limitations, such as generating incorrect responses and handling extensive text inputs. **RAG Pipeline Process**: 1. **Retriever**: Identifies relevant documents from a corpus. 2. **Generator**: Produces responses based on retrieved documents. This combination is particularly effective for tasks requiring contextual understanding, such as question answering, summarization, and dialogue generation. RAG mitigates context and token limitations by focusing on a smaller set of relevant documents, making it more cost-effective than pure generation-based approaches. **When to Use RAG**: RAG is ideal for generating long-form responses that require contextual grounding, especially when a large corpus is available. It is a practical starting point for exploring LLMs due to its lower data and computational resource requirements. **Integration with ZenML**: ZenML facilitates the creation of RAG pipelines, combining retrieval and generation capabilities. Key features include: - **Data Ingestion**: Tools for managing data and indexing. - **Artifact Tracking**: Monitors hyperparameters, model weights, and performance metrics through the Model Control Plane and ZenML Pro dashboard. - **Scalability**: Easily adapts to larger document corpora and more complex setups, including fine-tuning and reranking. - **Reproducibility**: Allows rerunning pipelines with preserved previous versions for performance comparison. - **Maintainability**: Modular pipeline structure simplifies updates and experimentation. - **Collaboration**: Enables sharing and teamwork on pipelines, enhancing collective insights. ### Advantages of ZenML for RAG: - **Reproducibility**: Update and compare pipeline versions easily. - **Scalability**: Deploy on cloud providers for larger document handling. - **Artifact Tracking**: Monitor and debug pipeline performance with associated metadata. - **Maintainability**: Modular design for easy updates and configuration changes. - **Collaboration**: Share insights and findings with team members. The documentation will further explore components of a basic RAG pipeline and advanced topics like reranking documents and fine-tuning models. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database.md === ### Summary: Storing Embeddings in a Vector Database This documentation outlines the process of storing embeddings in a vector database, specifically using PostgreSQL, to facilitate efficient retrieval based on similarity to queries. #### Key Points: - **Purpose**: Storing embeddings allows for quick retrieval of relevant document chunks without regenerating embeddings each time. - **Database Choice**: PostgreSQL is recommended for its scalability and efficiency in handling high-dimensional vectors. Other vector databases can also be used. 
- **Setup Instructions**: For PostgreSQL setup, refer to the repository instructions for using Supabase. #### Code Overview: The following Python code demonstrates how to index documents and their embeddings in PostgreSQL using the `psycopg2` package: ```python from zenml import step @step def index_generator(documents: List[Document]) -> None: try: conn = get_db_conn() with conn.cursor() as cur: cur.execute("CREATE EXTENSION IF NOT EXISTS vector") conn.commit() cur.execute(""" CREATE TABLE IF NOT EXISTS embeddings ( id SERIAL PRIMARY KEY, content TEXT, token_count INTEGER, embedding VECTOR({EMBEDDING_DIMENSIONALITY}), filename TEXT, parent_section TEXT, url TEXT );""") conn.commit() for doc in documents: cur.execute("SELECT COUNT(*) FROM embeddings WHERE content = %s", (doc.page_content,)) if cur.fetchone()[0] == 0: cur.execute( "INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url) VALUES (%s, %s, %s, %s, %s, %s)", (doc.page_content, doc.token_count, doc.embedding.tolist(), doc.filename, doc.parent_section, doc.url) ) conn.commit() cur.execute("SELECT COUNT(*) FROM embeddings;") num_records = cur.fetchone()[0] num_lists = max(num_records / 1000, 10) if num_records <= 1000000 else math.sqrt(num_records) cur.execute(f"CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists});") conn.commit() except Exception as e: logger.error(f"Error in index_generator: {e}") raise finally: if conn: conn.close() ``` #### Functionality: - Connects to the PostgreSQL database. - Creates the `vector` extension and `embeddings` table if they do not exist. - Inserts document embeddings only if they are not already present. - Calculates index parameters and creates an index using the `ivfflat` method for cosine distance similarity. #### Considerations: - The decision to update embeddings depends on data volatility; new embeddings are added only if they do not exist. - Performance may improve by running on a GPU-enabled machine if the dataset is large. This setup enables efficient retrieval of documents based on their embeddings, enhancing the capabilities of a question-answering system. For further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md === ### Summary of RAG Inference Documentation **Overview**: This documentation outlines the process of using Retrieval-Augmented Generation (RAG) components to generate responses to user prompts based on documents stored in an index. #### Simple RAG Inference 1. **Running the Inference**: To query the index store, use the following command: ```bash python run.py --rag-query "your_query_here" --model=gpt4 ``` 2. **Inference Function**: The inference is executed through a function call rather than a ZenML pipeline. The primary function for processing input is defined as: ```python def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: delimiter = "```" related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) system_message = """You are a friendly chatbot. You can answer questions about ZenML, its features, and use cases. Respond concisely and technically. 
Use only ZenML documentation for answers.""" messages = [ {"role": "system", "content": system_message}, {"role": "user", "content": f"{delimiter}{input}{delimiter}"}, {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, ] return get_completion_from_messages(messages, model=model) ``` 3. **Document Retrieval**: The function `get_topn_similar_docs` retrieves the most similar documents based on the query embedding: ```python def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: embedding_array = np.array(query_embedding) register_vector(conn) cur = conn.cursor() cur.execute(f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,)) return cur.fetchall() ``` This uses the `pgvector` PostgreSQL plugin for efficient similarity searches. 4. **Generating Responses**: The `get_completion_from_messages` function interfaces with various LLMs using `litellm`: ```python def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000): completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens) return completion_response.choices[0].message.content ``` This allows flexibility in using different LLMs without rewriting code. #### Conclusion The documentation provides a foundational understanding of building a basic RAG inference pipeline using embeddings for document retrieval and LLMs for response generation. Future sections will address improving retrieval performance through fine-tuning embeddings, particularly for large and diverse document sets. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [`llm_utils.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md === ### Summary: Ingesting and Preprocessing Data for RAG Pipelines with ZenML This documentation outlines the process of ingesting and preprocessing data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. The initial step involves gathering a large corpus of documents and relevant metadata for training retriever and generator models. ZenML facilitates data ingestion through integration with various tools and frameworks. #### URL Scraping A ZenML step can be created to scrape URLs from documentation. The following code demonstrates a URL scraper that collects relevant URLs: ```python from typing import List from typing_extensions import Annotated from zenml import log_artifact_metadata, step from steps.url_scraping_utils import get_all_pages @step def url_scraper( docs_url: str = "https://docs.zenml.io", repo_url: str = "https://github.com/zenml-io/zenml", website_url: str = "https://zenml.io", ) -> Annotated[List[str], "urls"]: """Generates a list of relevant URLs to scrape.""" docs_urls = get_all_pages(docs_url) log_artifact_metadata({"count": len(docs_urls)}) return docs_urls ``` The `get_all_pages` function retrieves a unique set of URLs from the documentation, ensuring only the most recent information is ingested. The URL count is logged for visibility. 
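To sketch how this scraping step composes with the loading and preprocessing steps shown in the following sections, here is an illustrative pipeline wiring; the pipeline name and the module layout of the imports are assumptions:

```python
from steps import preprocess_documents, url_scraper, web_url_loader  # hypothetical module layout
from zenml import pipeline


@pipeline
def rag_ingestion_pipeline():
    # url_scraper is defined above; web_url_loader and preprocess_documents
    # are introduced in the next two sections.
    urls = url_scraper()
    documents = web_url_loader(urls=urls)
    preprocess_documents(documents=documents)


if __name__ == "__main__":
    rag_ingestion_pipeline()
```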
#### Document Loading The `unstructured` library is used to load and parse the scraped URLs: ```python from typing import List from unstructured.partition.html import partition_html from zenml import step @step def web_url_loader(urls: List[str]) -> List[str]: """Loads documents from a list of URLs.""" return ["\n\n".join(map(str, partition_html(url))) for url in urls] ``` This function simplifies the extraction of text content from HTML, making it suitable for LLM processing. #### Data Preprocessing After loading documents, they need to be preprocessed into manageable chunks: ```python import logging from typing import Annotated, List from utils.llm_utils import split_documents from zenml import ArtifactConfig, log_artifact_metadata, step logging.basicConfig(level=logging.INFO) @step(enable_cache=False) def preprocess_documents(documents: List[str]) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]: """Preprocesses documents by splitting them into chunks.""" log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50}) return split_documents(documents, chunk_size=500, chunk_overlap=50) ``` Chunk size and overlap are critical parameters. A chunk size of 500 with a 50-character overlap is suggested for documentation, balancing retrieval efficiency and LLM processing. #### Additional Considerations Further preprocessing may include text cleaning, handling code snippets, or extracting metadata, depending on the data structure and use case. For complete code examples and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and the specific [steps code](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md === ### Summary of RAG Pipeline Implementation This documentation outlines a simple implementation of a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code. The pipeline performs the following tasks: 1. **Data Loading**: Utilizes a fictional dataset about "ZenML World" as the corpus. 2. **Text Processing**: Splits the text into chunks and tokenizes it (i.e., splits into words). 3. **Query Handling**: Accepts a query and retrieves the most relevant text chunks from the corpus. 4. **Response Generation**: Uses OpenAI's GPT-3.5 model to generate answers based on the relevant text chunks. ### Key Functions - **`preprocess_text(text)`**: Normalizes text by converting it to lowercase, removing punctuation, and trimming whitespace. - **`tokenize(text)`**: Tokenizes preprocessed text into words. - **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: - Computes Jaccard similarity between the query and corpus chunks. - Returns the top `n` most similar chunks. - **`answer_question(query, corpus, top_n=2)`**: - Retrieves relevant chunks and generates an answer using the OpenAI API. - Returns a default message if no relevant chunks are found. ### Example Corpus The corpus consists of descriptions of various creatures and landscapes in "ZenML World", including: - Luminescent forests with Zenbots - Cosmic Butterflies in neon skies - Telepathic Treants - Fractal Fungi in melodic caverns - Holographic Hummingbirds - Gravitational Geckos - Plasma Phoenixes - Crystalline Crabs ### Example Queries and Outputs 1. **Query**: "What are Plasma Phoenixes?" - **Answer**: Describes Plasma Phoenixes as energy creatures soaring above chromatic canyons. 2. 
**Query**: "What kinds of creatures live on the prismatic shores of ZenML World?" - **Answer**: Mentions crystalline crabs with transparent exoskeletons. 3. **Query**: "What is the capital of Panglossia?" - **Answer**: States that the capital is not mentioned in the context. ### Implementation Notes - The similarity check is basic and uses the Jaccard similarity coefficient, which is defined as the size of the intersection divided by the size of the union of two sets. - The implementation is not optimized for performance but serves as a foundational example for understanding the RAG pipeline. ### Code Snippet ```python import os import re import string from openai import OpenAI def preprocess_text(text): return re.sub(r"\s+", " ", text.lower().translate(str.maketrans("", "", string.punctuation))).strip() def tokenize(text): return preprocess_text(text).split() def retrieve_relevant_chunks(query, corpus, top_n=2): query_tokens = set(tokenize(query)) similarities = [(chunk, len(query_tokens.intersection(set(tokenize(chunk)))) / len(query_tokens.union(set(tokenize(chunk))))) for chunk in corpus] return [chunk for chunk, _ in sorted(similarities, key=lambda x: x[1], reverse=True)[:top_n]] def answer_question(query, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(query, corpus, top_n) if not relevant_chunks: return "I don't have enough information to answer the question." context = "\n".join(relevant_chunks) client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) return client.chat.completions.create(messages=[{"role": "system", "content": f"Based on the provided context, answer the following question: {query}\n\nContext:\n{context}"}, {"role": "user", "content": query}], model="gpt-3.5-turbo").choices[0].message.content.strip() # Example corpus corpus = [preprocess_text(sentence) for sentence in [ "The luminescent forests of ZenML World are inhabited by glowing Zenbots...", "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully...", # Additional sentences... ]] # Example queries print(answer_question("What are Plasma Phoenixes?", corpus)) print(answer_question("What kinds of creatures live on the prismatic shores of ZenML World?", corpus)) print(answer_question("What is the capital of Panglossia?", corpus)) ``` This summary captures the essential components and functionality of the RAG pipeline implementation while maintaining clarity and conciseness. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md === ### Summary of RAG System Evaluation Documentation **Overview**: This documentation provides guidance on evaluating the performance of a Retrieval-Augmented Generation (RAG) system, emphasizing the separation of embedding generation and evaluation processes. #### Evaluation Pipeline - The evaluation is structured as a separate pipeline that runs after embedding generation. - This separation allows for focused evaluation and can serve as a gating mechanism for production readiness. - For faster iteration, consider using a local LLM judge during development, switching to a cloud LLM (e.g., Anthropic's Claude, OpenAI's GPT-3.5/4) for final evaluations. #### Importance of Human Review - Automated evaluations can streamline the process but do not replace the need for human oversight. - The LLM judge is costly and slow, necessitating human review to ensure embeddings and the RAG system perform as expected. 
#### Evaluation Frequency - The depth and frequency of evaluations should align with project constraints and use case needs. - Balance quick, inexpensive tests (e.g., retrieval system) with more costly, time-consuming evaluations (e.g., LLM judge). - Structure evaluations to run some tests frequently and others less often. #### Next Steps - The documentation suggests improving retrieval performance by adding a reranker without retraining embeddings. #### Practical Implementation To run the evaluation pipeline: 1. Clone the project repository: ```bash git clone https://github.com/zenml-io/zenml-projects.git ``` 2. Navigate to the `llm-complete-guide` directory and follow the `README.md` instructions. 3. Execute the evaluation pipeline: ```bash python run.py --evaluation ``` 4. Results will be output to the console, with progress and logs viewable in the dashboard. This concise overview captures the essential elements of evaluating a RAG system while retaining critical technical details. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/README.md === ### Evaluation and Metrics for RAG Pipeline This section discusses evaluating the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating a RAG pipeline is essential for understanding its effectiveness and identifying areas for improvement, particularly since traditional metrics like accuracy, precision, and recall are not suitable for subjective text generation. #### Key Evaluation Areas: 1. **Retrieval Evaluation**: Assessing the relevance of retrieved documents or document chunks to the query. 2. **Generation Evaluation**: Evaluating the coherence and helpfulness of the generated text for the specific use case. #### Evaluation Considerations: - The evaluation criteria depend on the specific use case and acceptable error tolerance. For example, in a user-facing chatbot: - Relevance of retrieved documents. - Coherence and helpfulness of generated answers. - Absence of hate speech or toxic language. #### End-to-End Evaluation: The generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the system's final output. #### Best Practices: In production settings, consider establishing a baseline by evaluating a raw LLM model without retrieval components, then compare it with the RAG pipeline's performance to gauge the added value of retrieval and generation. #### Code Example: Refer to the [high-level code example](evaluation-in-65-loc.md) for a demonstration of the two main evaluation areas. Subsequent sections will provide detailed guidance on when to conduct evaluations and what to analyze in the results. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md === ### Retrieval Evaluation Summary The retrieval component in a Retrieval-Augmented Generation (RAG) pipeline identifies relevant documents based on a query, converting it into a vector for semantic search. Evaluating its performance involves assessing the accuracy of retrieved documents against expected results. #### Manual Evaluation - **Handcrafted Queries**: Create specific queries known to yield particular documents. This manual process helps identify edge cases and areas for improvement. - **Example Queries**: - "How do I get going with the Label Studio integration?" - "How can I write my own custom materializer?" 
- **Implementation**: - Encode the query as a vector and query a PostgreSQL database for similar vectors. ```python def query_similar_docs(question: str, url_ending: str) -> tuple: embedded_question = get_embeddings(question) top_similar_docs_urls = get_topn_similar_docs(embedded_question, get_db_conn(), n=5, only_urls=True) return (question, url_ending, [url[0] for url in top_similar_docs_urls]) def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: failures = sum(1 for pair in question_doc_pairs if pair["url_ending"] not in query_similar_docs(pair["question"], pair["url_ending"])[2]) return round((failures / len(question_doc_pairs)) * 100, 2) ``` - **Logging**: Provides immediate feedback on failures during local testing. ```python @step def retrieval_evaluation_small() -> Annotated[float, "small_failure_rate_retrieval"]: return test_retrieved_docs_retrieve_best_url(question_doc_pairs) ``` #### Automated Evaluation - **Synthetic Queries**: Use an LLM to generate questions based on document chunks for broader evaluation. ```python def generate_question(chunk: str, local: bool = False) -> str: model = LOCAL_MODEL if local else "gpt-3.5-turbo" response = completion(model=model, messages=[{"content": f"Generate a question about this text: `{chunk}`", "role": "user"}]) return response.choices[0].message.content @step def generate_questions_from_chunks(docs_with_embeddings: List[Document], local: bool = False) -> List[Document]: for doc in docs_with_embeddings: doc.generated_questions = [generate_question(doc.page_content, local)] return docs_with_embeddings ``` - **Evaluation Process**: Check if the original document URL appears in the top results for generated questions. ```python @step def retrieval_evaluation_full(sample_size: int = 50) -> Annotated[float, "full_failure_rate_retrieval"]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) failures = sum(1 for item in dataset if item["generated_questions"][0] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2]) return round((failures / len(dataset)) * 100, 2) ``` #### Performance Insights - Initial tests showed a 20% failure rate with manual queries and a 16% failure rate with synthetic queries, indicating room for improvement. - Suggested improvements include: - **Diverse Question Generation**: Experiment with different prompts for varied question types. - **Semantic Similarity Metrics**: Use metrics like cosine similarity for nuanced performance evaluation. - **Comparative Evaluation**: Test different retrieval methods for performance comparison. - **Error Analysis**: Investigate failure patterns to guide improvements. #### Conclusion The evaluation process, from manual checks to automated testing with synthetic queries, establishes a baseline for the retrieval component's performance. Continuous refinement through diverse testing and error analysis is essential for enhancing the RAG pipeline's effectiveness. Further evaluation of the generation component will complement this assessment, ensuring a robust question-answering system. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [eval_retrieval.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py). 
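As an aside on the semantic similarity metrics suggested above, here is a minimal sketch of cosine similarity between a query embedding and a chunk embedding; the vectors are toy values rather than project data:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Dot product of the vectors divided by the product of their norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


query_embedding = np.array([0.1, 0.3, 0.5])
chunk_embedding = np.array([0.2, 0.1, 0.6])
print(f"similarity: {cosine_similarity(query_embedding, chunk_embedding):.3f}")
```

A score close to 1.0 indicates near-identical direction in embedding space, while scores near 0.0 indicate unrelated content.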
================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md === ### Summary of RAG Evaluation Implementation This documentation outlines how to evaluate a Retrieval-Augmented Generation (RAG) pipeline in 65 lines of code, building on a previous example. The full code can be found in the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The evaluation relies on functions from an earlier RAG pipeline. #### Evaluation Data The evaluation data consists of questions and expected answers: ```python eval_data = [ {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."}, {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones..."}, {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."}, ] ``` #### Evaluation Functions Two key functions are defined for evaluation: 1. **Retrieval Evaluation**: - Checks if retrieved chunks contain any words from the expected answer. ```python def evaluate_retrieval(question, expected_answer, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) ``` 2. **Generation Evaluation**: - Utilizes OpenAI's API to assess the relevance and accuracy of the generated answer. ```python def evaluate_generation(question, expected_answer, generated_answer): client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) chat_completion = client.chat.completions.create( messages=[{"role": "system", "content": "You are an evaluation judge. ..."}, {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?"}], model="gpt-3.5-turbo" ) return chat_completion.choices[0].message.content.strip().lower() == "yes" ``` #### Evaluation Process The evaluation process iterates through the `eval_data`, calculating scores for both retrieval and generation: ```python retrieval_scores = [] generation_scores = [] for item in eval_data: retrieval_scores.append(evaluate_retrieval(item["question"], item["expected_answer"], corpus)) generated_answer = answer_question(item["question"], corpus) generation_scores.append(evaluate_generation(item["question"], item["expected_answer"], generated_answer)) retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores) generation_accuracy = sum(generation_scores) / len(generation_scores) print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") print(f"Generation Accuracy: {generation_accuracy:.2f}") ``` #### Results The example demonstrates achieving 100% accuracy for both retrieval and generation. Future sections will explore more sophisticated implementations of RAG evaluation. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/generation.md === ### Generation Evaluation in RAG Pipeline The generation component of a Retrieval-Augmented Generation (RAG) pipeline generates answers based on retrieved context. Evaluating this component is subjective and lacks precise metrics, but several approaches can be employed. 
#### Handcrafted Evaluation Tests Start with simple tests to verify if generated outputs contain or exclude specific terms based on known expected results. For example, if ZenML supports "Airflow" and "Kubeflow," the generated answers should include these terms and exclude unsupported ones like "Flyte" and "Prefect." **Example Test Cases:** - **Bad Answers Table:** | Question | Bad Words | |----------|-----------| | What orchestrators does ZenML support? | AWS Step Functions, Flyte, Prefect, Dagster | | What is the default orchestrator in ZenML? | Flyte, AWS Step Functions | - **Good Responses Table:** | Question | Good Words | |----------|------------| | What are the supported orchestrators in ZenML? | Kubeflow, Airflow | | What is the default orchestrator in ZenML? | local | **Testing Code Example:** ```python class TestResult(BaseModel): success: bool question: str keyword: str = "" response: str def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestResult: question = item["question"] bad_words = item["bad_words"] response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) for word in bad_words: if word in response: return TestResult(success=False, question=question, keyword=word, response=response) return TestResult(success=True, question=question, response=response) ``` #### End-to-End Evaluation Combine tests for bad answers, bad immediate responses, and good responses to evaluate the generation component comprehensively. **End-to-End Evaluation Code Example:** ```python @step def e2e_evaluation() -> ( Annotated[float, "failure_rate_bad_answers"], Annotated[float, "failure_rate_bad_immediate_responses"], Annotated[float, "failure_rate_good_responses"], ): failure_rate_bad_answers = run_tests(bad_answers, test_content_for_bad_words) failure_rate_bad_immediate_responses = run_tests(bad_immediate_responses, test_response_starts_with_bad_words) failure_rate_good_responses = run_tests(good_responses, test_content_contains_good_words) return (failure_rate_bad_answers, failure_rate_bad_immediate_responses, failure_rate_good_responses) ``` #### Automated Evaluation with Another LLM Another method is to use a different LLM to evaluate the output of the primary LLM. This requires setting up a Pydantic model to ensure the evaluation scores are integers between 1 and 5. **Pydantic Model Example:** ```python class LLMJudgedTestResult(BaseModel): toxicity: conint(ge=1, le=5) faithfulness: conint(ge=1, le=5) helpfulness: conint(ge=1, le=5) relevance: conint(ge=1, le=5) ``` **LLM Judged Test Function:** ```python def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) prompt = f""" Analyze the following text and context to provide scores for toxicity, faithfulness, helpfulness, and relevance. 
**Text:** {response} **Context:** {context} **Output format:** {{"toxicity": int, "faithfulness": int, "helpfulness": int, "relevance": int}} """ response = completion(model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}]) json_output = response["choices"][0]["message"]["content"].strip() return LLMJudgedTestResult(**json.loads(json_output)) ``` **Running LLM Judged Tests:** ```python def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[Annotated[float, "average_toxicity_score"], ...]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) # Process and calculate scores... ``` ### Conclusion This evaluation framework allows for tracking improvements in the RAG pipeline's retrieval and generation components. Consider integrating additional frameworks like `ragas`, `trulens`, or `DeepEval` for more sophisticated evaluations as your project evolves. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/). ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md === ### Summary of Finetuning LLMs Documentation This guide provides a structured approach to finetuning large language models (LLMs) for specific tasks. Key steps include selecting a use case, gathering data, choosing a base model, and evaluating success. #### Quick Assessment Questions Before starting, consider: 1. **Define Success**: Use measurable metrics (e.g., "95% accuracy in extracting order IDs"). 2. **Data Readiness**: Ensure you have sufficient labeled data (e.g., "1000 labeled support tickets"). 3. **Task Consistency**: Choose tasks with clear, consistent outputs (e.g., "Convert email to 5 specific fields"). 4. **Human Verification**: Ensure correctness can be verified by humans (e.g., "Check if extracted date matches document"). #### Picking a Use Case Select a small, manageable use case that cannot be easily solved by non-LLM methods. Examples include: - Triage customer support queries with a defined checklist. #### Picking Data Choose data that closely aligns with your use case to minimize the need for extensive annotation. Aim for hundreds to thousands of examples. **Good Use Cases**: - **Structured Data Extraction**: Extracting order details from emails (500-1000 annotated emails). - **Domain-Specific Classification**: Categorizing support tickets (1000+ labeled examples). - **Standardized Response Generation**: Generating responses from documentation (500+ pairs). **Challenging Use Cases**: - Open-ended chat, creative writing, general knowledge QA, and complex decision-making are less ideal due to vague metrics and validation difficulties. #### Success Indicators Evaluate your use case with these indicators: - **Task Scope**: Specific tasks like "Extract purchase date from receipts" are better than vague ones. - **Output Format**: Structured outputs are preferable. - **Data Availability**: Have 500+ examples ready. - **Evaluation Method**: Use precise metrics rather than subjective evaluations. - **Business Impact**: Aim for measurable benefits (e.g., "Save 10 hours of manual data entry"). #### Picking a Base Model Choose a model based on your task: - **Llama 3.1-8B**: Best for structured data extraction and classification (16GB GPU RAM). - **Llama 3.1-70B**: Suitable for complex reasoning (80GB GPU RAM). 
- **Mistral 7B**: Good for general text generation (16GB GPU RAM). - **Phi-2**: Ideal for lightweight tasks and rapid prototyping (8GB GPU RAM). **Model Selection Matrix**: ```mermaid graph TD A[Choose Your Task] --> B{Structured Output?} B -->|Yes| C[Llama-8B Base] B -->|No| D{Complex Reasoning?} D -->|Yes| E[Llama-70B Base] D -->|No| F{Resource Constrained?} F -->|Yes| G[Phi-2] F -->|No| H[Mistral-7B] ``` #### Evaluating Success Define clear metrics for success to measure progress effectively. For structured data extraction, consider: - Accuracy of extracted fields. - Precision and recall for specific field types. - Processing time per document. - Error rates on edge cases. #### Next Steps With a clear understanding of your use case, data, and evaluation methods, proceed to the technical implementation, starting with practical examples using the Accelerate library. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md === # Finetuning an LLM with Accelerate and PEFT This documentation outlines the process of finetuning a language model (LLM) using the Viggo dataset, which consists of over 5,000 pairs of meaning representations and natural language descriptions for video game dialogues. The goal is to train models that can generate natural language responses from structured inputs. ## Finetuning Pipeline The finetuning pipeline includes the following steps: 1. **prepare_data**: Load and preprocess the Viggo dataset. 2. **finetune**: Finetune the model on the dataset. 3. **evaluate_base**: Evaluate the base model before finetuning. 4. **evaluate_finetuned**: Evaluate the finetuned model. 5. **promote**: Promote the best model to "staging" in the Model Control Plane. For initial experiments, it is recommended to use smaller models (e.g., Llama 3.1 ~8B parameters) to facilitate quick iterations. ## Implementation Details ### Data Preparation The `prepare_data` step loads and tokenizes data from the Hugging Face hub. Ensure the input data format is correct, particularly for instruction-tuned models. Logging inputs and outputs is advised. ### Finetuning with Accelerate The finetuning process utilizes the `accelerate` library for multi-GPU support. Below is a concise version of the finetuning code: ```python model = load_base_model(base_model_id, use_accelerate=use_accelerate) trainer = transformers.Trainer( model=model, train_dataset=tokenized_train_dataset, eval_dataset=tokenized_val_dataset, args=transformers.TrainingArguments( output_dir=output_dir, warmup_steps=warmup_steps, per_device_train_batch_size=per_device_train_batch_size, max_steps=max_steps, learning_rate=lr, logging_dir="./logs", evaluation_strategy="steps", do_eval=True, label_names=["input_ids"], ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), callbacks=[ZenMLCallback(accelerator=accelerator)], ) ``` ### Evaluation Metrics The evaluation uses the `evaluate` library to compute ROUGE scores, which include: - **ROUGE-N**: n-gram overlap. - **ROUGE-L**: Longest Common Subsequence. - **ROUGE-W**: Weighted Longest Common Subsequence. - **ROUGE-S**: Skip-bigram co-occurrence. These metrics help assess the quality of generated text against reference texts. 
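To make the metric computation concrete, here is a minimal sketch of scoring generated text against references with the `evaluate` library. The prediction and reference strings are placeholders for illustration, and the exact keys returned depend on the ROUGE implementation loaded (the standard `rouge` metric reports `rouge1`, `rouge2`, `rougeL`, and `rougeLsum`).

```python
import evaluate

# Minimal sketch: compute ROUGE scores for model outputs against references.
# The strings below are placeholders; in the pipeline, predictions come from the
# base or finetuned model and references from the evaluation split.
rouge = evaluate.load("rouge")

predictions = ["The player asks about the game's release year."]
references = ["The player is asking when the game was released."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # e.g. {"rouge1": ..., "rouge2": ..., "rougeL": ..., "rougeLsum": ...}
```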
### ZenML Accelerate Decorator ZenML provides a `@run_with_accelerate` decorator for cleaner distributed training configuration: ```python from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True) @step def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id: str, output_dir: str): model = load_base_model(base_model_id, use_accelerate=True) trainer = transformers.Trainer(...) # Trainer setup as shown above trainer.train() return trainer.model ``` ### Docker Configuration Ensure your Docker environment is configured for CUDA and Accelerate: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) @pipeline(settings={"docker": docker_settings}) def finetuning_pipeline(...): # Pipeline steps here ``` ## Data Iteration and Evaluation Careful attention to data formatting is crucial. If the finetuned model performs poorly, inspect the input data and tokenization. Consider supplementing or synthetically generating data if necessary. Establish evaluation metrics early to measure model performance and optimize parameters. Future considerations include: - Enhanced evaluations. - Model serving and inference. - Integration within existing production architecture. The goal is to minimize model size while maintaining acceptable performance for specific use cases. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md === ### Summary: Fine-tuning an LLM in 100 Lines of Code This documentation provides a concise guide to implementing a fine-tuning pipeline for a language model (LLM) in approximately 100 lines of code. The example focuses on fine-tuning the TinyLlama model (1.1B parameters) to generate responses about a fictional setting, "ZenML World." #### Key Components: 1. **Installation**: Required packages can be installed using: ```bash pip install datasets transformers torch "accelerate>=0.26.0" ``` 2. **Dataset Preparation**: A small instruction-tuning dataset is created with input-output pairs: ```python def prepare_dataset() -> Dataset: data = [ {"instruction": "Describe a Zenbot.", "response": "A Zenbot is a luminescent robotic entity..."}, {"instruction": "What are Cosmic Butterflies?", "response": "Cosmic Butterflies are ethereal creatures..."}, {"instruction": "Tell me about the Telepathic Treants.", "response": "Telepathic Treants are ancient, sentient trees..."} ] return Dataset.from_list(data) ``` 3. **Tokenization**: Data is formatted and tokenized for training: ```python def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: formatted_text = f"### Instruction: {example['instruction']}\n### Response: {example['response']}" return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) ``` 4.
**Model Fine-tuning**: The model is fine-tuned with specific training parameters: ```python def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0") -> Tuple[AutoModelForCausalLM, AutoTokenizer]: tokenizer = AutoTokenizer.from_pretrained(base_model) model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto") dataset = prepare_dataset() tokenized_dataset = dataset.map(lambda x: tokenize_data(x, tokenizer), remove_columns=dataset.column_names) training_args = TrainingArguments( output_dir="./zenml-world-model", num_train_epochs=3, per_device_train_batch_size=1, gradient_accumulation_steps=4, learning_rate=2e-4, bf16=True, logging_steps=10, save_total_limit=2 ) trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_dataset, data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)) trainer.train() return model, tokenizer ``` 5. **Response Generation**: The fine-tuned model generates responses based on prompts: ```python def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer, max_length: int = 128) -> str: inputs = tokenizer(f"### Instruction: {prompt}\n### Response:", return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=max_length, temperature=0.7, num_return_sequences=1) return tokenizer.decode(outputs[0], skip_special_tokens=True) ``` 6. **Testing the Model**: The model is tested with various prompts to generate responses. #### Limitations: - The dataset is small (only 3 examples). - Larger models may yield better results but require more resources. - Minimal training epochs and simple learning rates are used for demonstration. - Proper evaluation metrics and validation data are necessary for production systems. #### Next Steps: The guide suggests exploring more advanced fine-tuning techniques, larger datasets, evaluation metrics, and deployment strategies in future sections. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === ### Summary of LLM Finetuning Documentation **Overview**: This documentation focuses on finetuning Large Language Models (LLMs) to enhance performance and cost-effectiveness for specific tasks. It builds upon previous learnings related to RAG (Retrieval-Augmented Generation) systems. **Key Points**: - **Purpose of Finetuning**: While APIs like OpenAI and Anthropic are useful, finetuning an LLM on your own data can improve: - Response generation in specific formats. - Understanding of domain-specific terminology. - Reduction of prompt length for consistent outputs. - Adherence to specific patterns or protocols. - Optimization for latency by minimizing context window size. **Guide Structure**: 1. **Finetuning in 100 lines of code**: A concise code example for finetuning. 2. **Why and when to finetune LLMs**: Scenarios justifying finetuning. 3. **Starter choices with finetuning**: Initial considerations for finetuning. 4. **Finetuning with 🤗 Accelerate**: Utilizing the Accelerate library for efficient finetuning. 5. **Evaluation for finetuning**: Methods to assess the performance of finetuned models. 6. **Deploying finetuned models**: Steps for deploying models after finetuning. 7. **Next steps**: Guidance on further actions post-finetuning. **Implementation**: The finetuning process is straightforward, but understanding when to finetune and how to evaluate performance is crucial. 
For practical application, refer to the `llm-lora-finetuning` repository on GitHub, which contains the complete code. This code can be executed locally (with a GPU) or on cloud platforms. **Conclusion**: Finetuning LLMs can significantly enhance their utility in specific applications, and this guide provides a comprehensive framework for understanding and implementing the process. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md === # Summary of LLM Finetuning Evaluations Documentation ## Overview Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help catch issues early, track progress, and ensure the model behaves as expected. An incremental approach to building evaluations is recommended to avoid paralysis in the development process. ## Motivation and Benefits Key motivations for implementing evals include: 1. **Prevent Regressions**: Ensure new changes don't harm existing functionality. 2. **Track Improvements**: Quantify model enhancements over iterations. 3. **Ensure Safety and Robustness**: Identify and mitigate risks, biases, or unexpected behaviors. A robust evaluation strategy leads to more reliable and performant LLMs, providing a clear understanding of model capabilities and limitations. ## Types of Evaluations While generic evaluation frameworks are common, custom evals tailored to specific use cases are crucial. They can be categorized into: 1. **Success Modes**: Focus on desired outputs, such as correct formatting and appropriate responses. 2. **Failure Modes**: Target undesired outputs, including hallucinations, incorrect formats, and biased responses. ### Example Code for Custom Evals ```python from my_library import query_llm good_responses = { "what are the best salads available at the food court?": ["caesar", "italian"], "how late is the shopping center open until?": ["10pm", "22:00", "ten"] } for question, answers in good_responses.items(): llm_response = query_llm(question) assert any(answer in llm_response for answer in answers) bad_responses = { "who is the manager of the shopping center?": ["tom hanks", "spiderman"] } for question, answers in bad_responses.items(): llm_response = query_llm(question) assert not any(answer in llm_response for answer in answers) ``` ## Generalized Evals and Frameworks Generalized evals provide structured evaluation approaches, including: - Organization of evals - Standardized metrics - Insights into model performance Examples of frameworks include: - [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate) - [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html) - [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html) These frameworks can be integrated into ZenML pipelines, as demonstrated in the `llm-lora-finetuning` project. ## Data and Tracking Regular analysis of inference data is vital for identifying patterns and areas for improvement. Implement comprehensive logging early in development to track model behavior. Recommended frameworks for data collection and analysis include: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) Creating simple dashboards to visualize core performance metrics can help monitor progress and assess the impact of changes. 
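As a concrete illustration of such a dashboard, the sketch below plots a couple of core metrics across finetuning iterations with `matplotlib`. The metric names and values are hypothetical placeholders rather than outputs of the guide's pipelines; in practice they would come from your logged evaluation results.

```python
import matplotlib.pyplot as plt

# Hypothetical evaluation history: one entry per finetuning iteration.
history = {
    "iteration": [1, 2, 3, 4],
    "failure_rate_bad_answers": [0.30, 0.22, 0.15, 0.12],
    "failure_rate_good_responses": [0.25, 0.20, 0.18, 0.10],
}

fig, ax = plt.subplots(figsize=(6, 4))
for metric in ("failure_rate_bad_answers", "failure_rate_good_responses"):
    ax.plot(history["iteration"], history[metric], marker="o", label=metric)

ax.set_xlabel("Finetuning iteration")
ax.set_ylabel("Failure rate")
ax.set_title("Core evaluation metrics over iterations")
ax.legend()
fig.savefig("eval_dashboard.png")  # or log the figure to your tracking tool
```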
Focus on key metrics aligned with iteration goals, prioritizing simplicity over perfection. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune-llms.md === ### Summary: When to Finetune LLMs This guide provides an overview of when to finetune large language models (LLMs) on custom data. Key points include: - **Finetuning Limitations**: It is not a universal solution and may not achieve desired accuracy. It introduces technical debt. - **Use Cases Beyond Chatbots**: LLMs can be applied in various contexts, often with lower failure rates than chatbots. - **Final Step in Experimentation**: Finetuning should follow other approaches like smaller models or Retrieval-Augmented Generation (RAG). #### When to Finetune LLMs Consider finetuning in the following scenarios: 1. **Domain-Specific Knowledge**: For deep understanding in specialized fields (e.g., medical, legal). 2. **Consistent Style/Format**: When specific output formats are required (e.g., code generation). 3. **Improved Task Accuracy**: For tasks needing higher precision. 4. **Handling Proprietary Information**: When data cannot be sent to external APIs. 5. **Custom Instructions**: To integrate frequently used prompts into the model. 6. **Improved Efficiency**: To enhance performance with shorter prompts. #### Decision Flowchart
```mermaid
flowchart TD
    A[Should I finetune an LLM?] --> B{Is prompt engineering sufficient?}
    B -->|Yes| C[Use prompt engineering]
    B -->|No| D{Is it a knowledge retrieval problem?}
    D -->|Yes| E{Is real-time data needed?}
    E -->|Yes| F[Use RAG]
    E -->|No| G{Is data volume large?}
    G -->|Yes| H[Consider hybrid: RAG + Finetuning]
    G -->|No| F
    D -->|No| I{Is it a narrow, specific task?}
    I -->|Yes| J{Can a smaller model handle it?}
    J -->|Yes| K[Use smaller model]
    J -->|No| L[Consider finetuning]
    I -->|No| M{Do you need consistent style?}
    M -->|Yes| L
    M -->|No| N{Is deep domain expertise required?}
    N -->|Yes| O{Is the domain well-represented?}
    O -->|Yes| P[Use base model]
    O -->|No| L
    N -->|No| Q{Is data proprietary?}
    Q -->|Yes| R{Can you use API solutions?}
    R -->|Yes| S[Use API solutions]
    R -->|No| L
    Q -->|No| S
```
#### Alternatives to Finetuning - **Prompt Engineering**: Often effective without finetuning. - **RAG**: More effective for specific knowledge bases. - **Smaller Models**: Better for narrow tasks. - **API Solutions**: Simpler and cost-effective if sensitive data is not involved. Finetuning can be powerful but should be considered only after exploring simpler solutions. The next section will cover practical considerations for finetuning LLMs. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/next-steps.md === # Next Steps After iterating on your finetuned model, assess key areas: - Factors improving model performance - Factors degrading model performance - Minimum viable model size - Alignment with company processes (iteration time vs. hardware limitations) - Effectiveness in addressing the business use case These insights will guide your next steps, which may include: - Scaling for more users or real-time scenarios - Meeting critical accuracy requirements, potentially necessitating a larger model - Integrating LLM finetuning into your business systems, including monitoring, logging, and evaluation While it may be tempting to switch to larger models, focus on enhancing your data quality first, especially if starting with a limited dataset. Consider using a flywheel approach or generating synthetic data before upgrading to a more powerful model. ## Resources Recommended resources for LLM finetuning: - [Mastering LLMs Course](https://parlance-labs.com/education/): Video course by Hamel Husain and Dan Becker - [Phil Schmid's Blog](https://www.philschmid.de/): Worked examples of LLM finetuning - [Sam Witteveen's YouTube Channel](https://www.youtube.com/@samwitteveenai): Videos on finetuning and prompt engineering ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/deploying-finetuned-models.md === # Deployment Options for Finetuned LLMs Deploying your finetuned LLM is essential for real-world applications. This process requires careful planning to ensure performance, reliability, and cost-effectiveness. ## Deployment Considerations Key factors influencing deployment include: - **Resource Requirements**: LLMs need substantial RAM, processing power, and specialized hardware. Choose hardware based on your use case to balance performance and cost. - **Real-Time Needs**: Consider failover scenarios, conduct benchmarks, and model expected user load. Decide between streaming and non-streaming approaches, each affecting latency and resource use. - **Optimization Techniques**: Techniques like quantization can reduce resource usage but require careful evaluation to avoid performance loss. ## Deployment Options and Trade-offs 1. **Roll Your Own**: Set up and manage your own infrastructure, offering control but requiring expertise. Typically involves creating a Docker-based service (e.g., FastAPI); a minimal sketch of such a service follows this list. 2. **Serverless Options**: Provide scalability and cost-efficiency, charging only for used resources. Beware of "cold start" latency for infrequently accessed models. 3. **Always-On Options**: Keep the model running to minimize latency, but this can be more costly due to idle resource payments. 4. **Fully Managed Solutions**: Cloud providers offer managed services that simplify deployment but may limit flexibility and increase costs.
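To illustrate the "Roll Your Own" option above, here is a minimal, hypothetical sketch of a Docker-friendly FastAPI service wrapping a finetuned Hugging Face checkpoint. The model ID, route, and request schema are placeholders and not part of ZenML or the guide's code.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Placeholder checkpoint: point this at your own finetuned model or a local path.
generator = pipeline("text-generation", model="your-org/your-finetuned-llm")

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(request: GenerateRequest) -> dict:
    outputs = generator(request.prompt, max_new_tokens=request.max_new_tokens)
    return {"completion": outputs[0]["generated_text"]}

# Containerize and run with e.g.: uvicorn main:app --host 0.0.0.0 --port 8000
```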
Consider your team's expertise, budget, load patterns, and specific requirements when choosing a deployment option. ## Deployment with vLLM and ZenML [vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML provides a [vLLM integration](../../../component-guide/model-deployers/vllm.md) for easy deployment. ### Example Code Snippet ```python from zenml import pipeline from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "my_finetuned_llm"]: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` The `model` argument can be a local path or a Hugging Face Hub ID, deploying the model locally for batch inference with an OpenAI-compatible API. ## Cloud-Specific Deployment Options - **AWS**: Use Amazon SageMaker for managed LLM deployment with real-time endpoints. For serverless, combine AWS Lambda with API Gateway. For more control, use Amazon ECS or EKS with Fargate. - **GCP**: Google Cloud AI Platform offers managed ML services similar to SageMaker. Use Cloud Run for serverless hosting or Google Kubernetes Engine (GKE) for containerized models. ## Architectures for Real-Time Engagement Deploy models behind a load balancer with auto-scaling for responsiveness. Implement caching (e.g., Redis) to improve response times and use asynchronous architectures with message queues (e.g., Amazon SQS) for complex queries. For global deployments, consider edge computing services like [AWS Lambda@Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html?tag=soumet-20). ## Reducing Latency and Increasing Throughput Optimize for low latency and high throughput by: - Using model optimization techniques (e.g., quantization, distillation). - Leveraging GPU instances for faster inference. - Implementing request batching and parallel processing. - Monitoring and profiling to identify bottlenecks. Continuous measurement and optimization are crucial for maintaining performance. ## Monitoring and Maintenance Post-deployment, focus on: 1. **Evaluation Failures**: Regularly assess model performance. 2. **Latency Metrics**: Ensure response times meet requirements. 3. **Load Patterns**: Monitor user interactions for scaling and optimization. 4. **Data Analysis**: Analyze inputs/outputs for trends and biases. Ensure compliance with privacy regulations when logging responses. By considering these deployment options and maintaining monitoring practices, you can ensure optimal performance of your finetuned LLM. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md === ### Evaluating Reranking Performance with ZenML This documentation outlines how to evaluate the performance of a reranking model using ZenML. The evaluation process involves comparing retrieval performance before and after applying reranking, utilizing established metrics. #### Key Steps in Evaluation 1. **Retrieval Evaluation Function**: The core function `perform_retrieval_evaluation` assesses retrieval performance based on a sample of generated questions. It checks if the expected URL is present in the retrieved results and calculates the failure rate. 
```python def perform_retrieval_evaluation(sample_size: int, use_reranking: bool) -> float: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) failures = sum( 1 for item in sampled_dataset if not any( item["filename"].split("/")[-1] in url for url in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1], use_reranking)[2] ) ) return round((failures / len(sampled_dataset)) * 100, 2) ``` 2. **Evaluation Steps**: Two separate steps execute the retrieval evaluation: - Without reranking - With reranking ```python @step def retrieval_evaluation_full(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=False) @step def retrieval_evaluation_full_with_reranking(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=True) ``` 3. **Logging and Analysis**: The evaluation logs provide insights into failures, helping identify issues with the generated questions or the model's performance. 4. **Visualization**: The results can be visualized in the ZenML dashboard, displaying metrics such as failure rates and other evaluation scores. ```python @step(enable_cache=False) def visualize_evaluation_results(...): # Code to normalize scores and plot evaluation metrics plt.barh(y_pos, scores, align="center") plt.savefig(buf, format="png") return Image.open(buf) ``` 5. **Running the Evaluation Pipeline**: To execute the evaluation pipeline, clone the project repository and run the evaluation command after the main pipeline has generated embeddings. ```bash git clone https://github.com/zenml-io/zenml-projects.git cd llm-complete-guide python run.py --evaluation ``` ### Conclusion This documentation provides a structured approach to evaluate a reranking model's performance using ZenML, emphasizing the importance of logging, analysis, and visualization to enhance retrieval performance. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md === ## Reranking Overview ### What is Reranking? Reranking refines the initial ranking of documents retrieved by a system, particularly in Retrieval-Augmented Generation (RAG). The initial retrieval often uses sparse methods like BM25 or TF-IDF, which may not fully capture semantic meaning. Rerankers reorder documents by considering features such as semantic similarity and relevance scores, ensuring that the most relevant documents are prioritized for generating accurate outputs. ### Types of Rerankers 1. **Cross-Encoders**: - Concatenate query and document as input. - Output a relevance score. - Example: BERT-based models. - Pros: Effective interaction capture. - Cons: Computationally expensive. 2. **Bi-Encoders**: - Use separate encoders for query and document. - Generate independent embeddings and compute similarity. - Pros: More efficient than cross-encoders. - Cons: Weaker interaction capture. 3. **Lightweight Models**: - Include distilled models or small transformer variants. - Balance effectiveness and efficiency. - Suitable for real-time applications. ### Benefits of Reranking in RAG 1. **Improved Relevance**: Identifies the most relevant documents for better context. 2. **Semantic Understanding**: Captures semantic meaning beyond keyword matching. 3. **Domain Adaptation**: Fine-tuned on domain-specific data for enhanced performance. 4. 
**Personalization**: Tailors document retrieval based on user preferences and interactions. ### Next Steps The next section will cover implementing reranking in ZenML and integrating it into the RAG inference pipeline. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/README.md === ### Summary: Adding Reranking to RAG Inference in ZenML Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section outlines how to integrate a reranker into your RAG inference pipeline in ZenML. #### Key Points: - **Purpose of Rerankers**: They optimize the relevance and quality of retrieved documents, potentially leading to better LLM responses. - **Workflow Context**: Reranking is an optional enhancement to an established workflow that includes data ingestion, preprocessing, embeddings generation, and retrieval. - **Evaluation Metrics**: Basic metrics have been set up to evaluate the retrieval system's performance prior to adding reranking. #### Visual Aid: - A diagram illustrates the reranking workflow, emphasizing its role as an enhancement rather than a necessity. By implementing a reranker, you can achieve improved document relevance in your retrieval system. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md === ### Implementing Reranking in ZenML This documentation outlines how to integrate a reranking step into an existing RAG (Retrieval-Augmented Generation) pipeline using the `rerankers` package. The reranker reorders retrieved documents based on their relevance to a given query. #### Adding Reranking 1. **Dependency**: Use the `rerankers` package, which provides an interface for various model types without needing to manage model specifics. 2. **Reranker Class**: The `Reranker` abstract class allows for custom implementations or the use of existing models. **Example Code**: ```python from rerankers import Reranker ranker = Reranker('cross-encoder') texts = [ "I like to play soccer", "I like to play football", "War and Peace is a great book", "I love dogs", "Ginger cats aren't very smart", "I like to play basketball", ] results = ranker.rank(query="What's your favorite sport?", docs=texts) ``` **Output**: The reranker produces a list of documents ordered by relevance, with the most relevant documents appearing first. #### Reranking Function A helper function can be created to rerank documents based on a query: ```python def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = "flashrank") -> List[Tuple[str, str]]: ranker = Reranker(reranker_model) docs_texts = [f"{doc[0]} PARENT SECTION: {doc[2]}" for doc in documents] results = ranker.rank(query=query, docs=docs_texts) return [(results.results[i].text, documents[results.results[i].doc_id][1]) for i in range(len(results.results))] ``` This function takes a query and a list of documents (tuples of content and URL), reranks them, and returns a list of tuples with the reranked document text and original URLs. 
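As a hypothetical usage example of the helper above, the tuples follow the `(content, url, parent_section)` shape the function indexes into; the documents and URLs below are made up for illustration.

```python
# Made-up retrieval results in the (content, url, parent_section) format expected above.
documents = [
    (
        "ZenML supports Airflow and Kubeflow as orchestrators.",
        "https://docs.zenml.io/orchestrators",
        "Orchestrators",
    ),
    (
        "The artifact store keeps all artifacts produced by pipeline runs.",
        "https://docs.zenml.io/artifact-stores",
        "Artifact Stores",
    ),
]

reranked = rerank_documents(
    query="Which orchestrators does ZenML support?",
    documents=documents,
)
for text, url in reranked:
    print(url, "->", text[:60])
```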
#### Querying Similar Documents The reranked documents can be integrated into a function that queries similar documents: ```python def query_similar_docs(question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5) -> Tuple[str, str, List[str]]: embedded_question = get_embeddings(question) db_conn = get_db_conn() num_docs = 20 if use_reranking else returned_sample_size top_similar_docs = get_topn_similar_docs(embedded_question, db_conn, n=num_docs, include_metadata=True) if use_reranking: reranked_docs_and_urls = rerank_documents(question, top_similar_docs)[:returned_sample_size] urls = [doc[1] for doc in reranked_docs_and_urls] else: urls = [doc[1] for doc in top_similar_docs] return (question, url_ending, urls) ``` This function retrieves document embeddings, connects to a database, and optionally reranks the top documents before returning the URLs. #### Evaluation To evaluate the performance of the reranker, refer to the complete code in the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) repository, specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/reranking.md === **Summary: Adding Reranking to RAG Inference in ZenML** Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section outlines how to integrate a reranker into the RAG inference pipeline in ZenML. Key Points: - Rerankers are optional but can significantly enhance the relevance and quality of retrieved documents, leading to improved LLM responses. - The workflow includes data ingestion, preprocessing, embeddings generation, retrieval, and evaluation metrics. - Reranking is an additional step to optimize the performance of the existing setup. Visual Aid: A diagram illustrates the reranking workflow within the ZenML framework. This integration aims to maximize retrieval performance and enhance the overall efficiency of the system. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-generation.md === ### Summary of Synthetic Data Generation with Distilabel **Objective**: Generate synthetic data to fine-tune embeddings using an existing dataset of technical documentation from Hugging Face. **Dataset**: The dataset consists of `page_content` (text chunks) and their source URLs. The goal is to pair `page_content` with generated questions. **Pipeline Overview**: 1. Load the Hugging Face dataset. 2. Use `distilabel` to generate synthetic queries. 3. Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. **Synthetic Data Generation**: - **Tool**: `distilabel` generates synthetic data by creating queries for documentation chunks. - **LLM**: The pipeline uses `gpt-4o` (OpenAI) but supports other LLMs. - **Process**: - Load the dataset and map `page_content` to `anchor`. - Generate queries using `GenerateSentencePair`, which creates both positive and negative queries to help the embeddings model learn appropriate responses. 
**Code Snippet**: ```python import os from typing import Annotated, Tuple import distilabel from datasets import Dataset from distilabel.llms import OpenAILLM from distilabel.steps import LoadDataFromHub from distilabel.steps.tasks import GenerateSentencePair from zenml import step @step def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> Tuple[Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"]]: llm = OpenAILLM(model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY")) with distilabel.pipeline.Pipeline(name="generate_embedding_queries") as pipeline: load_dataset = LoadDataFromHub(output_mappings={"page_content": "anchor"}) generate_sentence_pair = GenerateSentencePair(triplet=True, action="query", llm=llm, input_batch_size=10, context=synthetic_generation_context) load_dataset >> generate_sentence_pair train_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "train"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) test_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "test"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) return train_distiset["default"]["train"], test_distiset["default"]["train"] ``` **Data Annotation with Argilla**: - After generating synthetic data, it is pushed to Argilla for inspection. - Metadata added includes: - `parent_section`: Documentation section of the chunk. - `token_count`: Number of tokens in the chunk. - Similarity metrics between queries and embeddings for analysis. **Embedding Generation**: - The embeddings are generated using a model (e.g., `Snowflake/snowflake-arctic-embed-large`). - The function `format_data` processes the dataset to compute embeddings and similarities. **Code Snippet for Formatting Data**: ```python def format_data(batch): model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") def get_embeddings(batch_column): return [vector.tolist() for vector in model.encode(batch_column)] batch["anchor-vector"] = get_embeddings(batch["anchor"]) batch["similarity-positive-negative"] = get_similarities(batch["positive-vector"], batch["negative-vector"]) return batch ``` **Next Steps**: After data exploration and annotation in Argilla, the embeddings can be fine-tuned, even if the annotation step is skipped, assuming the generated data quality is sufficient. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings-with-sentence-transformers.md === ### Summary: Finetuning Embeddings with Sentence Transformers This documentation outlines the process for finetuning embeddings using the Sentence Transformers library. The pipeline involves loading a dataset, finetuning the model, evaluating the results, and visualizing them. #### Key Steps in the Pipeline: 1. **Data Loading**: - Load data from Hugging Face or Argilla using the `--argilla` flag: ```bash python run.py --embeddings --argilla ``` 2. **Finetuning Process**: - **Model Loading**: Load the base model using Sentence Transformers with SDPA for efficient training. - **Loss Function**: Utilize `MatryoshkaLoss`, a wrapper around `MultipleNegativesRankingLoss`, allowing simultaneous training across different embedding dimensions. - **Dataset Preparation**: Load training data from a specified dataset path. 
- **Evaluator**: Create an evaluator to assess model performance during training. - **Training Arguments**: Set hyperparameters (epochs, batch size, learning rate, etc.) using `SentenceTransformerTrainingArguments`. - **Trainer**: Initialize `SentenceTransformerTrainer` with the model, training arguments, dataset, and loss function. Start training with `trainer.train()`. - **Model Saving**: Push the finetuned model to Hugging Face Hub: ```python trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` - **Metadata Logging**: Log training metadata for observability. - **Model Rehydration**: Save and reload the trained model to handle materialization errors. #### Simplified Code Snippet: ```python # Load the base model model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) # Define the loss function train_loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model)) # Prepare the training dataset train_dataset = load_dataset("json", data_files=train_dataset_path) # Set up the training arguments args = SentenceTransformerTrainingArguments(...) # Create the trainer trainer = SentenceTransformerTrainer(model, args, train_dataset, train_loss) # Start training trainer.train() # Save the finetuned model trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` The finetuning process enhances model performance across various embedding sizes, and the model is tracked within ZenML for observability. After training, the pipeline evaluates and visualizes the base and finetuned embeddings. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md === **Summary: Finetuning Embeddings on Custom Synthetic Data** This documentation outlines the process of finetuning embeddings on synthetic data to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. The existing pipeline utilizes off-the-shelf embeddings, which serve as a baseline for standard tasks. However, finetuning these embeddings on domain-specific data can lead to significant performance improvements. ### Key Steps: 1. **Generate Synthetic Data**: Utilize `distilabel` for synthetic data generation. 2. **Finetune Embeddings**: Use Sentence Transformers for embedding finetuning. 3. **Evaluate Embeddings**: Assess the finetuned embeddings and leverage ZenML's model control plane for systematic evaluation. ### Libraries Used: - **ZenML**: Framework for building production-ready RAG pipelines. - **Argilla**: Facilitates collaboration among AI engineers and domain experts through an interactive UI for data organization and exploration. - **Distilabel**: Offers a scalable method for generating synthetic data and providing AI feedback. Both Argilla and Distilabel can be used independently but are more effective when combined. The guide includes instructions for following along with examples in the `llm-complete-guide` repository, where full code is available. The finetuning process can be executed locally or on cloud compute. ### Visual Reference: ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetuned-embeddings.md === ### Summary of Documentation on Evaluating Finetuned Embeddings This documentation outlines the process for evaluating finetuned embeddings and comparing them to original base embeddings using the MatryoshkaLoss function. 
The evaluation steps are straightforward and involve the following key components: #### Evaluation Code ```python from zenml import log_model_metadata, step def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: evaluator = get_evaluator(dataset=dataset, model=model) return evaluator(model) @step def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") results = evaluate_model(dataset=dataset, model=model) base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} log_model_metadata(metadata={"base_model_eval": base_model_eval}) return results ``` #### Key Points - **Logging Results**: Evaluation results are logged as model metadata in ZenML, allowing inspection through the Model Control Plane. - **Result Format**: Results are returned as a dictionary of string keys and float values, which are versioned and tracked. - **Visualization**: Results can be visualized using `PIL.Image` and `matplotlib`, comparing base and finetuned model evaluations, showing improvements in recall across dimensions. - **Production Considerations**: For better performance, focus on improving training data quality, potentially removing low-signal logs. #### Model Control Plane The Model Control Plane provides a unified interface to inspect results, artifacts, models, and metadata. It includes sections for: - Artifacts generated - Models generated - Logged metadata - Pipeline runs associated with the model This interface is available on ZenML Pro, facilitating comparison of evaluation values and inspection of training parameters. #### Next Steps After evaluating the embeddings, the next steps involve integrating them into the original RAG pipeline, regenerating embeddings, and rerunning retrieval evaluations. Future sections will cover LLM finetuning and deployment, with resources available for starting LLM finetuning projects using ZenML. For further details, refer to the provided links for additional documentation and project repositories. ================================================== === File: docs/book/user-guide/cloud-guide/cloud-guide.md === ### Cloud Guide Summary This section provides guidance on connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is a configuration of tools and infrastructure for running pipelines. ZenML acts as a translation layer, enabling code execution across different stacks. **Key Points:** - **Stack Registration**: This guide focuses on registering a stack, assuming the necessary resources for pipeline execution are already provisioned. - **Provisioning Infrastructure**: Infrastructure can be provisioned manually or through: - In-browser stack deployment wizard - Stack registration wizard - ZenML Terraform modules ![ZenML is the translation layer that allows your code to run on any of your stacks](../../.gitbook/assets/vpc_zenml.png) ==================================================