wjayesh committed
Commit 307a24f · verified · 1 Parent(s): 5f11033

Upload basics.txt with huggingface_hub

Files changed (1): basics.txt (+264 −14)
basics.txt CHANGED
@@ -1,5 +1,4 @@
-This file is a merged representation of the entire codebase, combining all repository files into a single document.
-Generated by Repomix on: 2025-02-07T15:55:39.070Z
 
 ================================================================
 File Summary
@@ -35,11 +34,11 @@ Usage Guidelines:
 
 Notes:
 ------
-- Some files may have been excluded based on .gitignore rules and Repomix's
-  configuration.
-- Binary files are not included in this packed representation. Please refer to
-  the Repository Structure section for a complete list of file paths, including
-  binary files.
 
 Additional Info:
 ----------------
@@ -117,6 +116,11 @@ File: docs/book/user-guide/cloud-guide/cloud-guide.md
 description: Taking your ZenML workflow to the next level.
 ---
 
 # ☁️ Cloud guide
 
 This section of the guide consists of easy to follow guides on how to connect the major public clouds to your ZenML deployment. We achieve this by configuring a [stack](../production-guide/understand-stacks.md).
@@ -138,6 +142,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md
 description: Learn how to implement evaluation for RAG in just 65 lines of code.
 ---
 
 # Evaluation in 65 lines of code
 
 Our RAG guide included [a short example](../rag-with-zenml/rag-85-loc.md) for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most\_basic\_eval.py). The code that follows requires the functions from the earlier RAG pipeline code to work.
@@ -230,6 +239,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md
 description: Learn how to evaluate the performance of your RAG system in practice.
 ---
 
 # Evaluation in practice
 
 Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice.
@@ -279,6 +293,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/generation.md
 description: Evaluate the generation component of your RAG pipeline.
 ---
 
 # Generation evaluation
 
 Now that we have a sense of how to evaluate the retrieval component of our RAG
@@ -677,6 +696,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/README.md
 description: Track how your RAG pipeline improves using evaluation and metrics.
 ---
 
 # Evaluation and metrics
 
 In this section, we'll explore how to evaluate the performance of your RAG pipeline using metrics and visualizations. Evaluating your RAG pipeline is crucial to understanding how well it performs and identifying areas for improvement. With language models in particular, it's hard to evaluate their performance using traditional metrics like accuracy, precision, and recall. This is because language models generate text, which is inherently subjective and difficult to evaluate quantitatively.
@@ -715,6 +739,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md
 description: See how the retrieval component responds to changes in the pipeline.
 ---
 
 # Retrieval evaluation
 
 The retrieval component of our RAG pipeline is responsible for finding relevant
@@ -1064,6 +1093,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetun
 description: Evaluate finetuned embeddings and compare to original base embeddings.
 ---
 
 Now that we've finetuned our embeddings, we can evaluate them and compare to the
 base embeddings. We have all the data saved and versioned already, and we will
 reuse the same MatryoshkaLoss function for evaluation.
@@ -1204,6 +1238,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddi
 description: Finetune embeddings with Sentence Transformers.
 ---
 
 We now have a dataset that we can use to finetune our embeddings. You can
 [inspect the positive and negative examples](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) on the Hugging Face [datasets page](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) since
 our previous pipeline pushed the data there.
@@ -1308,6 +1347,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddi
 description: Finetune embeddings on custom synthetic data to improve retrieval performance.
 ---
 
 We previously learned [how to use RAG with ZenML](../rag-with-zenml/README.md) to
 build a production-ready RAG pipeline. In this section, we will explore how to
 optimize and maintain your embedding models through synthetic data generation and
@@ -1355,6 +1399,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-gen
 description: Generate synthetic data with distilabel to finetune embeddings.
 ---
 
 We already have [a dataset of technical documentation](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0) that was generated
 previously while we were working on the RAG pipeline. We'll use this dataset
 to generate synthetic data with `distilabel`. You can inspect the data directly
@@ -1900,6 +1949,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md
 description: Learn how to implement an LLM fine-tuning pipeline in just 100 lines of code.
 ---
 
 # Quick Start: Fine-tuning an LLM
 
 There's a lot to understand about LLM fine-tuning - from choosing the right base model to preparing your dataset and selecting training parameters. But let's start with a concrete implementation to see how it works in practice. The following 100 lines of code demonstrate:
@@ -2118,6 +2172,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md
 description: Finetune LLMs for specific tasks or to improve performance and cost.
 ---
 
 So far in our LLMOps journey we've learned [how to use RAG with
 ZenML](../rag-with-zenml/README.md), how to [evaluate our RAG
 systems](../evaluation/README.md), how to [use reranking to improve retrieval](../reranking/README.md), and how to
@@ -2167,6 +2226,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelera
 description: "Finetuning an LLM with Accelerate and PEFT"
 ---
 
 # Finetuning an LLM with Accelerate and PEFT
 
 We're finally ready to get our hands on the code and see how it works. In this
@@ -2420,6 +2484,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-fine
 description: Get started with finetuning LLMs by picking a use case and data.
 ---
 
 # Starter choices for finetuning LLMs
 
 Finetuning large language models can be a powerful way to tailor their
@@ -2590,6 +2659,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune
 description: Deciding when is the right time to finetune LLMs.
 ---
 
 # Why and when to finetune LLMs
 
 This guide is intended to be a practical overview that gets you started with
@@ -2678,6 +2752,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipel
 description: Use your RAG components to generate responses to prompts.
 ---
 
 # Simple RAG Inference
 
 Now that we have our index store, we can use it to make queries based on the
@@ -2842,6 +2921,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md
 description: Understand how to ingest and preprocess data for RAG pipelines with ZenML.
 ---
 
 The first step in setting up a RAG pipeline is to ingest the data that will be
 used to train and evaluate the retriever and generator models. This data can
 include a large corpus of documents, as well as any relevant metadata or
@@ -3018,6 +3102,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md
 description: Generate embeddings to improve retrieval performance.
 ---
 
 # Generating Embeddings for Retrieval
 
 In this section, we'll explore how to generate embeddings for your data to
@@ -3233,6 +3322,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md
 description: Learn how to implement a RAG pipeline in just 85 lines of code.
 ---
 
 There's a lot of theory and context to think about when it comes to RAG, but
 let's start with a quick implementation in code to motivate what follows. The
 following 85 lines do the following:
@@ -3374,6 +3468,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md
 description: RAG is a sensible way to get started with LLMs.
 ---
 
 # RAG Pipelines with ZenML
 
 Retrieval-Augmented Generation (RAG) is a powerful technique that combines the
@@ -3414,6 +3513,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-v
 description: Store embeddings in a vector database for efficient retrieval.
 ---
 
 # Storing embeddings in a vector database
 
 The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query.
@@ -3550,6 +3654,11 @@ description: >-
 benefits.
 ---
 
 # Understanding Retrieval-Augmented Generation (RAG)
 
 LLMs are powerful but not without their limitations. They are prone to generating incorrect responses, especially when it's unclear what the input prompt is asking for. They are also limited in the amount of text they can understand and generate. While some LLMs can handle more than 1 million tokens of input, most open-source models can handle far less. Your use case also might not require all the complexity and cost associated with running a large LLM.
@@ -3603,6 +3712,11 @@ File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performan
 description: Evaluate the performance of your reranking model.
 ---
 
 # Evaluating reranking performance
 
 We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML.
@@ -3830,6 +3944,11 @@ File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md
 description: Learn how to implement reranking in ZenML.
 ---
 
 # Implementing Reranking in ZenML
 
 We already have a working RAG pipeline, so inserting a reranker into the
@@ -3988,6 +4107,11 @@ File: docs/book/user-guide/llmops-guide/reranking/README.md
 description: Add reranking to your RAG inference for better retrieval performance.
 ---
 
 Rerankers are a crucial component of retrieval systems that use LLMs. They help
 improve the quality of the retrieved documents by reordering them based on
 additional features or scores. In this section, we'll explore how to add a
@@ -4017,6 +4141,11 @@ File: docs/book/user-guide/llmops-guide/reranking/reranking.md
 description: Add reranking to your RAG inference for better retrieval performance.
 ---
 
 Rerankers are a crucial component of retrieval systems that use LLMs. They help
 improve the quality of the retrieved documents by reordering them based on
 additional features or scores. In this section, we'll explore how to add a
@@ -4046,6 +4175,11 @@ File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md
 description: Understand how reranking works.
 ---
 
 ## What is reranking?
 
 Reranking is the process of refining the initial ranking of documents retrieved
@@ -4176,6 +4310,11 @@ description: >-
 Delivery
 ---
 
 # Set up CI/CD
 
 Until now, we have been executing ZenML pipelines locally. While this is a good mode of operating pipelines, in
@@ -4327,6 +4466,11 @@ File: docs/book/user-guide/production-guide/cloud-orchestration.md
 description: Orchestrate using cloud resources.
 ---
 
 # Orchestrate on the cloud
 
 Until now, we've only run pipelines locally. The next step is to get free from our local machines and transition our pipelines to execute on the cloud. This will enable you to run your MLOps pipelines in a cloud environment, leveraging the scalability and robustness that cloud platforms offer.
@@ -4515,6 +4659,11 @@ File: docs/book/user-guide/production-guide/configure-pipeline.md
 description: Add more resources to your pipeline configuration.
 ---
 
 # Configure your pipeline to add compute
 
 Now that we have our pipeline up and running in the cloud, you might be wondering how ZenML figured out what sort of dependencies to install in the Docker image that we just ran on the VM. The answer lies in the [runner script we executed (i.e. run.py)](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/run.py#L215), in particular, these lines:
@@ -4685,6 +4834,11 @@ description: >-
 MLOps projects.
 ---
 
 # Configure a code repository
 
 Throughout the lifecycle of a MLOps pipeline, it can get quite tiresome to always wait for a Docker build every time after running a pipeline (even if the local Docker cache is used). However, there is a way to just have one pipeline build and keep reusing it until a change to the pipeline environment is made: by connecting a code repository.
@@ -4792,6 +4946,11 @@ File: docs/book/user-guide/production-guide/deploying-zenml.md
 description: Deploying ZenML is the first step to production.
 ---
 
 # Deploying ZenML
 
 When you first get started with ZenML, it is based on the following architecture on your machine:
@@ -4866,6 +5025,11 @@ File: docs/book/user-guide/production-guide/end-to-end.md
 description: Put your new knowledge in action with an end-to-end project
 ---
 
 # An end-to-end project
 
 That was awesome! We learned so many advanced MLOps production concepts:
@@ -4964,6 +5128,11 @@ File: docs/book/user-guide/production-guide/remote-storage.md
 description: Transitioning to remote artifact storage.
 ---
 
 # Connecting remote storage
 
 In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage!
@@ -5186,6 +5355,11 @@ File: docs/book/user-guide/production-guide/understand-stacks.md
 description: Learning how to switch the infrastructure backend of your code.
 ---
 
 # Understanding stacks
 
 Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running [your first pipelines](../starter-guide/create-an-ml-pipeline.md), you might have already noticed the term `stack` in the logs and on the dashboard.
@@ -5414,6 +5588,11 @@ File: docs/book/user-guide/starter-guide/cache-previous-executions.md
 description: Iterating quickly with ZenML through caching.
 ---
 
 # Cache previous executions
 
 Developing machine learning pipelines is iterative in nature. ZenML speeds up development in this work with step caching.
@@ -5598,6 +5777,11 @@ File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md
 description: Start with the basics of steps and pipelines.
 ---
 
 # Create an ML pipeline
 
 In the quest for production-ready ML models, workflows can quickly become complex. Decoupling and standardizing stages such as data ingestion, preprocessing, and model evaluation allows for more manageable, reusable, and scalable processes. ZenML pipelines facilitate this by enabling each stage—represented as **Steps**—to be modularly developed and then integrated smoothly into an end-to-end **Pipeline**.
@@ -5938,6 +6122,11 @@ File: docs/book/user-guide/starter-guide/manage-artifacts.md
 description: Understand and adjust how ZenML versions your data.
 ---
 
 # Manage artifacts
 
 Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact—be it data, models, or evaluations—is automatically tracked and versioned upon pipeline execution.
@@ -6574,6 +6763,11 @@ File: docs/book/user-guide/starter-guide/starter-project.md
 description: Put your new knowledge into action with a simple starter project
 ---
 
 # A starter project
 
 By now, you have understood some of the basic pillars of a MLOps system:
@@ -6644,6 +6838,11 @@ File: docs/book/user-guide/starter-guide/track-ml-models.md
 description: Creating a full picture of a ML model using the Model Control Plane
 ---
 
 # Track ML models
 
 ![Walkthrough of ZenML Model Control Plane (Dashboard available only on ZenML Pro)](../../.gitbook/assets/mcp_walkthrough.gif)
@@ -6908,8 +7107,13 @@ The [ZenML Pro](https://zenml.io/pro) dashboard has additional capabilities, tha
 ZenML Model and versions are some of the most powerful features in ZenML. To understand them in a deeper way, read the [dedicated Model Management](../../how-to/model-management-metrics/model-control-plane/README.md) guide.
 
 <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
-This file is a merged representation of the entire codebase, combining all repository files into a single document.
-Generated by Repomix on: 2025-02-07T15:55:40.051Z
 
 ================================================================
 File Summary
@@ -6945,11 +7149,11 @@ Usage Guidelines:
 
 Notes:
 ------
-- Some files may have been excluded based on .gitignore rules and Repomix's
-  configuration.
-- Binary files are not included in this packed representation. Please refer to
-  the Repository Structure section for a complete list of file paths, including
-  binary files.
 
 Additional Info:
 ----------------
@@ -6989,6 +7193,11 @@ File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md
 description: Learning how to develop a custom secret store.
 ---
 
 # Custom secret stores
 
 The secrets store acts as the one-stop shop for all the secrets to which your pipeline or stack components might need access. It is responsible for storing, updating and deleting _only the secrets values_ for ZenML secrets, while the ZenML secret metadata is stored in the SQL database. The secrets store interface implemented by all available secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` core module and looks more or less like this:
@@ -7095,6 +7304,11 @@ File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.
 description: Deploying ZenML to Huggingface Spaces.
 ---
 
 # Deploy using HuggingFace Spaces
 
 A quick way to deploy ZenML and get started is to use [HuggingFace Spaces](https://huggingface.co/spaces). HuggingFace Spaces is a platform for hosting and sharing ML projects and workflows, and it also works to deploy ZenML. You can be up and running in minutes (for free) with a hosted ZenML server, so it's a good option if you want to try out ZenML without any infrastructure overhead.
@@ -7174,6 +7388,11 @@ File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md
 description: Deploying ZenML with custom Docker images.
 ---
 
 # Deploy with custom images
 
 In most cases, deploying ZenML with the default `zenmlhub/zenml-server` Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image:
@@ -7372,6 +7591,11 @@ File: docs/book/getting-started/deploying-zenml/secret-management.md
 description: Configuring the secrets store.
 ---
 
 # Secret store configuration and management
 
 ## Centralized secrets store
@@ -7507,6 +7731,11 @@ description: >
 Learn how to use the ZenML Pro API.
 ---
 
 # Using the ZenML Pro API
 
 ZenML Pro offers a powerful API that allows you to interact with your ZenML resources. Whether you're using the [SaaS version](https://cloud.zenml.io) or a self-hosted ZenML Pro instance, you can leverage this API to manage tenants, organizations, users, roles, and more.
@@ -7677,6 +7906,11 @@ description: >
 Learn about the different roles and permissions you can assign to your team members in ZenML Pro.
 ---
 
 # ZenML Pro: Roles and Permissions
 
 ZenML Pro offers a robust role-based access control (RBAC) system to manage permissions across your organization and tenants. This guide will help you understand the different roles available, how to assign them, and how to create custom roles tailored to your team's needs.
@@ -7819,6 +8053,11 @@ description: >
 Learn about Teams in ZenML Pro and how they can be used to manage groups of users across your organization and tenants.
 ---
 
 # Organize users in Teams
 
 ZenML Pro introduces the concept of Teams to help you manage groups of users efficiently. A team is a collection of users that acts as a single entity within your organization and tenants. This guide will help you understand how teams work, how to create and manage them, and how to use them effectively in your MLOps workflows.
@@ -7898,6 +8137,11 @@ description: >
 Learn how to use tenants in ZenML Pro.
 ---
 
 # Tenants
 
 Tenants are individual, isolated deployments of the ZenML server. Each tenant has its own set of users, roles, and resources. Essentially, everything you do in ZenML Pro revolves around a tenant: all of your pipelines, stacks, runs, connectors and so on are scoped to a tenant.
@@ -8451,3 +8695,9 @@ Are you interested in ZenML Pro? [Sign up](https://cloud.zenml.io/?utm\_source=d
 and get access to Scenario 1. with a free 14-day trial now!
 
 <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
+This file is a merged representation of a subset of the codebase, containing specifically included files, combined into a single document by Repomix.
 
 ================================================================
 File Summary

 Notes:
 ------
+- Some files may have been excluded based on .gitignore rules and Repomix's configuration
+- Binary files are not included in this packed representation. Please refer to the Repository Structure section for a complete list of file paths, including binary files
+- Only files matching these patterns are included: docs/book/user-guide/**/*.md
+- Files matching patterns in .gitignore are excluded
+- Files matching default ignore patterns are excluded
 
 Additional Info:
 ----------------
 
 description: Taking your ZenML workflow to the next level.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 # ☁️ Cloud guide
 
 This section of the guide consists of easy to follow guides on how to connect the major public clouds to your ZenML deployment. We achieve this by configuring a [stack](../production-guide/understand-stacks.md).

 description: Learn how to implement evaluation for RAG in just 65 lines of code.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 # Evaluation in 65 lines of code
 
 Our RAG guide included [a short example](../rag-with-zenml/rag-85-loc.md) for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most\_basic\_eval.py). The code that follows requires the functions from the earlier RAG pipeline code to work.

 description: Learn how to evaluate the performance of your RAG system in practice.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 # Evaluation in practice
 
 Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice.

 description: Evaluate the generation component of your RAG pipeline.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 # Generation evaluation
 
 Now that we have a sense of how to evaluate the retrieval component of our RAG
 
 description: Track how your RAG pipeline improves using evaluation and metrics.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 # Evaluation and metrics
 
 In this section, we'll explore how to evaluate the performance of your RAG pipeline using metrics and visualizations. Evaluating your RAG pipeline is crucial to understanding how well it performs and identifying areas for improvement. With language models in particular, it's hard to evaluate their performance using traditional metrics like accuracy, precision, and recall. This is because language models generate text, which is inherently subjective and difficult to evaluate quantitatively.

 description: See how the retrieval component responds to changes in the pipeline.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 # Retrieval evaluation
 
 The retrieval component of our RAG pipeline is responsible for finding relevant

 description: Evaluate finetuned embeddings and compare to original base embeddings.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 Now that we've finetuned our embeddings, we can evaluate them and compare to the
 base embeddings. We have all the data saved and versioned already, and we will
 reuse the same MatryoshkaLoss function for evaluation.

 description: Finetune embeddings with Sentence Transformers.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 We now have a dataset that we can use to finetune our embeddings. You can
 [inspect the positive and negative examples](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) on the Hugging Face [datasets page](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) since
 our previous pipeline pushed the data there.

 description: Finetune embeddings on custom synthetic data to improve retrieval performance.
 ---
 
+{% hint style="warning" %}
+This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %}
+
+
 We previously learned [how to use RAG with ZenML](../rag-with-zenml/README.md) to
 build a production-ready RAG pipeline. In this section, we will explore how to
 optimize and maintain your embedding models through synthetic data generation and
 
1399
  description: Generate synthetic data with distilabel to finetune embeddings.
1400
  ---
1401
 
1402
+ {% hint style="warning" %}
1403
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
1404
+ {% endhint %}
1405
+
1406
+
1407
  We already have [a dataset of technical documentation](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0) that was generated
1408
  previously while we were working on the RAG pipeline. We'll use this dataset
1409
  to generate synthetic data with `distilabel`. You can inspect the data directly
 
1949
  description: Learn how to implement an LLM fine-tuning pipeline in just 100 lines of code.
1950
  ---
1951
 
1952
+ {% hint style="warning" %}
1953
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
1954
+ {% endhint %}
1955
+
1956
+
1957
  # Quick Start: Fine-tuning an LLM
1958
 
1959
  There's a lot to understand about LLM fine-tuning - from choosing the right base model to preparing your dataset and selecting training parameters. But let's start with a concrete implementation to see how it works in practice. The following 100 lines of code demonstrate:
 
  description: Finetune LLMs for specific tasks or to improve performance and cost.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  So far in our LLMOps journey we've learned [how to use RAG with
  ZenML](../rag-with-zenml/README.md), how to [evaluate our RAG
  systems](../evaluation/README.md), how to [use reranking to improve retrieval](../reranking/README.md), and how to

  description: "Finetuning an LLM with Accelerate and PEFT"
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Finetuning an LLM with Accelerate and PEFT

  We're finally ready to get our hands on the code and see how it works. In this

  description: Get started with finetuning LLMs by picking a use case and data.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Starter choices for finetuning LLMs

  Finetuning large language models can be a powerful way to tailor their
 
  description: Deciding when is the right time to finetune LLMs.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Why and when to finetune LLMs

  This guide is intended to be a practical overview that gets you started with

  description: Use your RAG components to generate responses to prompts.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Simple RAG Inference

  Now that we have our index store, we can use it to make queries based on the

  description: Understand how to ingest and preprocess data for RAG pipelines with ZenML.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  The first step in setting up a RAG pipeline is to ingest the data that will be
  used to train and evaluate the retriever and generator models. This data can
  include a large corpus of documents, as well as any relevant metadata or
 
  description: Generate embeddings to improve retrieval performance.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Generating Embeddings for Retrieval

  In this section, we'll explore how to generate embeddings for your data to

  description: Learn how to implement a RAG pipeline in just 85 lines of code.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  There's a lot of theory and context to think about when it comes to RAG, but
  let's start with a quick implementation in code to motivate what follows. The
  following 85 lines do the following:

  description: RAG is a sensible way to get started with LLMs.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # RAG Pipelines with ZenML

  Retrieval-Augmented Generation (RAG) is a powerful technique that combines the
 
  description: Store embeddings in a vector database for efficient retrieval.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Storing embeddings in a vector database

  The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query.
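  The similarity lookup described above can be sketched in a few lines of plain Python. This is an illustrative sketch only, not ZenML's or any vector database's actual implementation; the toy `index` dictionary stands in for whatever store holds your precomputed embeddings.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k chunk IDs whose stored embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda cid: cosine_similarity(query_vec, index[cid]), reverse=True)
    return ranked[:k]

# Toy "vector database": chunk ID -> precomputed embedding.
index = {
    "chunk-a": [1.0, 0.0, 0.0],
    "chunk-b": [0.9, 0.1, 0.0],
    "chunk-c": [0.0, 1.0, 0.0],
}
print(retrieve([1.0, 0.05, 0.0], index, k=2))  # the two chunks nearest the query
```

  A real deployment delegates this ranking to the database (e.g. a pgvector similarity query) so the embeddings never have to be loaded into application memory.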
 
  benefits.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Understanding Retrieval-Augmented Generation (RAG)

  LLMs are powerful but not without their limitations. They are prone to generating incorrect responses, especially when it's unclear what the input prompt is asking for. They are also limited in the amount of text they can understand and generate. While some LLMs can handle more than 1 million tokens of input, most open-source models can handle far less. Your use case also might not require all the complexity and cost associated with running a large LLM.

  description: Evaluate the performance of your reranking model.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Evaluating reranking performance

  We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML.
 
  description: Learn how to implement reranking in ZenML.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Implementing Reranking in ZenML

  We already have a working RAG pipeline, so inserting a reranker into the

  description: Add reranking to your RAG inference for better retrieval performance.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  Rerankers are a crucial component of retrieval systems that use LLMs. They help
  improve the quality of the retrieved documents by reordering them based on
  additional features or scores. In this section, we'll explore how to add a

  description: Add reranking to your RAG inference for better retrieval performance.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  Rerankers are a crucial component of retrieval systems that use LLMs. They help
  improve the quality of the retrieved documents by reordering them based on
  additional features or scores. In this section, we'll explore how to add a
 
  description: Understand how reranking works.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  ## What is reranking?

  Reranking is the process of refining the initial ranking of documents retrieved

  Delivery
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Set up CI/CD

  Until now, we have been executing ZenML pipelines locally. While this is a good mode of operating pipelines, in

  description: Orchestrate using cloud resources.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Orchestrate on the cloud

  Until now, we've only run pipelines locally. The next step is to get free from our local machines and transition our pipelines to execute on the cloud. This will enable you to run your MLOps pipelines in a cloud environment, leveraging the scalability and robustness that cloud platforms offer.
 
  description: Add more resources to your pipeline configuration.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Configure your pipeline to add compute

  Now that we have our pipeline up and running in the cloud, you might be wondering how ZenML figured out what sort of dependencies to install in the Docker image that we just ran on the VM. The answer lies in the [runner script we executed (i.e. run.py)](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/run.py#L215), in particular, these lines:

  MLOps projects.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Configure a code repository

  Throughout the lifecycle of an MLOps pipeline, it can get quite tiresome to wait for a Docker build every time you run a pipeline (even if the local Docker cache is used). However, there is a way to build the pipeline image just once and keep reusing it until the pipeline environment changes: connecting a code repository.

  description: Deploying ZenML is the first step to production.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Deploying ZenML

  When you first get started with ZenML, it is based on the following architecture on your machine:
 
  description: Put your new knowledge in action with an end-to-end project
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # An end-to-end project

  That was awesome! We learned so many advanced MLOps production concepts:

  description: Transitioning to remote artifact storage.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Connecting remote storage

  In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage!

  description: Learning how to switch the infrastructure backend of your code.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Understanding stacks

  Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running [your first pipelines](../starter-guide/create-an-ml-pipeline.md), you might have already noticed the term `stack` in the logs and on the dashboard.
 
  description: Iterating quickly with ZenML through caching.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Cache previous executions

  Developing machine learning pipelines is iterative in nature. ZenML speeds up development in this iterative process with step caching.
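  The idea behind step caching can be illustrated with a small sketch. This is a simplified illustration, not ZenML's actual cache implementation: outputs are keyed by a fingerprint of the step name and its inputs, so a rerun with unchanged inputs returns the stored result instead of executing the step again.

```python
import hashlib
import json

_cache: dict[str, object] = {}

def cached_step(func):
    """Skip re-execution when the step name and inputs are unchanged."""
    def wrapper(*args):
        # Fingerprint the step identity plus its inputs.
        key = hashlib.sha256(
            json.dumps([func.__name__, args], default=str).encode()
        ).hexdigest()
        if key in _cache:
            return _cache[key]      # cache hit: reuse the stored output
        result = func(*args)        # cache miss: actually run the step
        _cache[key] = result
        return result
    return wrapper

calls = []

@cached_step
def preprocess(data):
    calls.append(data)              # record real executions for demonstration
    return [x * 2 for x in data]

preprocess((1, 2, 3))
preprocess((1, 2, 3))   # served from the cache; the body does not run again
```

  ZenML's real fingerprint also accounts for the step's source code and configuration, which is why editing a step's code invalidates its cached results.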
 
  description: Start with the basics of steps and pipelines.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Create an ML pipeline

  In the quest for production-ready ML models, workflows can quickly become complex. Decoupling and standardizing stages such as data ingestion, preprocessing, and model evaluation allows for more manageable, reusable, and scalable processes. ZenML pipelines facilitate this by enabling each stage—represented as **Steps**—to be modularly developed and then integrated smoothly into an end-to-end **Pipeline**.
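  The decoupling idea can be shown without any framework: each stage is an ordinary function (a "step"), and the "pipeline" is just a function that wires their inputs and outputs together. This is a framework-free sketch of the concept; ZenML's own `@step` and `@pipeline` decorators add tracking, caching, and orchestration on top of the same shape.

```python
def ingest_data() -> list[float]:
    """Step 1: load raw data (hard-coded here for illustration)."""
    return [3.0, 1.0, 2.0]

def preprocess(data: list[float]) -> list[float]:
    """Step 2: normalize values into [0, 1]."""
    lo, hi = min(data), max(data)
    return [(x - lo) / (hi - lo) for x in data]

def evaluate(data: list[float]) -> float:
    """Step 3: compute a toy 'metric' (the mean of the normalized values)."""
    return sum(data) / len(data)

def training_pipeline() -> float:
    """Pipeline: chains the steps; each step stays independently testable."""
    raw = ingest_data()
    clean = preprocess(raw)
    return evaluate(clean)

print(training_pipeline())  # → 0.5
```

  Because each step only depends on its declared inputs, any stage can be swapped or unit-tested in isolation, which is the property the pipeline abstraction formalizes.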
 
  description: Understand and adjust how ZenML versions your data.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Manage artifacts

  Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact—be it data, models, or evaluations—is automatically tracked and versioned upon pipeline execution.
 
  description: Put your new knowledge into action with a simple starter project
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # A starter project

  By now, you have understood some of the basic pillars of an MLOps system:

  description: Creating a full picture of an ML model using the Model Control Plane
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Track ML models

  ![Walkthrough of ZenML Model Control Plane (Dashboard available only on ZenML Pro)](../../.gitbook/assets/mcp_walkthrough.gif)

  ZenML Model and versions are some of the most powerful features in ZenML. To understand them in a deeper way, read the [dedicated Model Management](../../how-to/model-management-metrics/model-control-plane/README.md) guide.

  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
+
+
+
+ ================================================================
+ End of Codebase
+ ================================================================
+ This file is a merged representation of a subset of the codebase, containing specifically included files, combined into a single document by Repomix.

  ================================================================
  File Summary
 

  Notes:
  ------
+ - Some files may have been excluded based on .gitignore rules and Repomix's configuration
+ - Binary files are not included in this packed representation. Please refer to the Repository Structure section for a complete list of file paths, including binary files
+ - Only files matching these patterns are included: docs/book/getting-started/**/*.md
+ - Files matching patterns in .gitignore are excluded
+ - Files matching default ignore patterns are excluded

  Additional Info:
  ----------------
 
  description: Learning how to develop a custom secret store.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Custom secret stores

  The secrets store acts as the one-stop shop for all the secrets to which your pipeline or stack components might need access. It is responsible for storing, updating and deleting _only the secrets values_ for ZenML secrets, while the ZenML secret metadata is stored in the SQL database. The secrets store interface implemented by all available secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` core module and looks more or less like this:

  description: Deploying ZenML to Huggingface Spaces.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Deploy using HuggingFace Spaces

  A quick way to deploy ZenML and get started is to use [HuggingFace Spaces](https://huggingface.co/spaces). HuggingFace Spaces is a platform for hosting and sharing ML projects and workflows, and it also works to deploy ZenML. You can be up and running in minutes (for free) with a hosted ZenML server, so it's a good option if you want to try out ZenML without any infrastructure overhead.

  description: Deploying ZenML with custom Docker images.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Deploy with custom images

  In most cases, deploying ZenML with the default `zenmlhub/zenml-server` Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image:
 
  description: Configuring the secrets store.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Secret store configuration and management

  ## Centralized secrets store

  Learn how to use the ZenML Pro API.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Using the ZenML Pro API

  ZenML Pro offers a powerful API that allows you to interact with your ZenML resources. Whether you're using the [SaaS version](https://cloud.zenml.io) or a self-hosted ZenML Pro instance, you can leverage this API to manage tenants, organizations, users, roles, and more.

  Learn about the different roles and permissions you can assign to your team members in ZenML Pro.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # ZenML Pro: Roles and Permissions

  ZenML Pro offers a robust role-based access control (RBAC) system to manage permissions across your organization and tenants. This guide will help you understand the different roles available, how to assign them, and how to create custom roles tailored to your team's needs.
 
  Learn about Teams in ZenML Pro and how they can be used to manage groups of users across your organization and tenants.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Organize users in Teams

  ZenML Pro introduces the concept of Teams to help you manage groups of users efficiently. A team is a collection of users that acts as a single entity within your organization and tenants. This guide will help you understand how teams work, how to create and manage them, and how to use them effectively in your MLOps workflows.

  Learn how to use tenants in ZenML Pro.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Tenants

  Tenants are individual, isolated deployments of the ZenML server. Each tenant has its own set of users, roles, and resources. Essentially, everything you do in ZenML Pro revolves around a tenant: all of your pipelines, stacks, runs, connectors and so on are scoped to a tenant.

  and get access to Scenario 1 with a free 14-day trial now!

  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
+
+
+
+ ================================================================
+ End of Codebase
+ ================================================================