| content (list, lengths 1 to 171) | tag (dict) |
|---|---|
[
{
"data": "(instances-console)= Use the command to attach to instance consoles. The console is available at boot time already, so you can use it to see boot messages and, if necessary, debug startup issues of a container or VM. To get an interactive console, enter the following command: incus console <instance_name> To show log output, pass the `--show-log` flag: incus console <instance_name> --show-log You can also immediately attach to the console when you start your instance: incus start <instance_name> --console incus start <instance_name> --console=vga On virtual machines, log on to the console to get graphical output. Using the console you can, for example, install an operating system using a graphical interface or run a desktop environment. An additional advantage is that the console is available even if the `incus-agent` process is not running. This means that you can access the VM through the console before the `incus-agent` starts up, and also if the `incus-agent` is not available at all. To start the VGA console with graphical output for your VM, you must install a SPICE client (for example, `virt-viewer` or `spice-gtk-client`). Then enter the following command: incus console <vm_name> --type vga"
}
] |
{
"category": "Runtime",
"file_name": "instances_console.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
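A minimal shell walkthrough of the console commands described in the Incus record above; the instance name `c1` and VM name `vm1` are illustrative placeholders, not part of the source.

```sh
# Attach to the interactive text console of a container or VM.
incus console c1

# Print the instance's console log instead of attaching.
incus console c1 --show-log

# Start an instance and attach to its console right away.
incus start c1 --console

# Start a VM and attach to its graphical (VGA) console instead;
# a SPICE client such as virt-viewer must be installed locally.
incus start vm1 --console=vga

# Attach to the VGA console of an already running VM.
incus console vm1 --type vga
```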
[
{
"data": "Copyright (C) 2018-2019 Matt Layher Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage label configuration of endpoint ``` cilium-dbg endpoint labels [flags] ``` ``` -a, --add strings Add/enable labels -d, --delete strings Delete/disable labels -h, --help help for labels ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage endpoints"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_endpoint_labels.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
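A hypothetical invocation of the command documented in the preceding record, assuming an endpoint ID of `1234` and made-up label values; only the `--add`, `--delete`, and `--debug` flags come from the record itself.

```sh
# Enable one label and disable another on a specific endpoint
# (the endpoint ID and the label strings are placeholders).
cilium-dbg endpoint labels 1234 --add env=prod --delete env=staging

# The same operation with debug messages enabled.
cilium-dbg endpoint labels 1234 --debug --add env=prod
```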
[
{
"data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Check whether the cilium-health API is up ``` cilium-health ping [flags] ``` ``` -h, --help help for ping ``` ``` -D, --debug Enable debug messages -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Cilium Health Client"
}
] |
{
"category": "Runtime",
"file_name": "cilium-health_ping.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
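A short usage sketch for the health check documented in the record above; the only flags used are the `--debug` and `--host` options it lists, and the host URI shown is a placeholder.

```sh
# Check whether the local cilium-health API is up.
cilium-health ping

# Run the same check with debug output against an explicit server API
# endpoint (placeholder URI).
cilium-health ping --debug --host unix:///var/run/cilium/health.sock
```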
[
{
"data": "<p>Packages:</p> <ul> <li> <a href=\"#ceph.rook.io%2fv1\">ceph.rook.io/v1</a> </li> </ul> <h2 id=\"ceph.rook.io/v1\">ceph.rook.io/v1</h2> <div> <p>Package v1 is the v1 version of the API.</p> </div> Resource Types: <ul><li> <a href=\"#ceph.rook.io/v1.CephBlockPool\">CephBlockPool</a> </li><li> <a href=\"#ceph.rook.io/v1.CephBucketNotification\">CephBucketNotification</a> </li><li> <a href=\"#ceph.rook.io/v1.CephBucketTopic\">CephBucketTopic</a> </li><li> <a href=\"#ceph.rook.io/v1.CephCOSIDriver\">CephCOSIDriver</a> </li><li> <a href=\"#ceph.rook.io/v1.CephClient\">CephClient</a> </li><li> <a href=\"#ceph.rook.io/v1.CephCluster\">CephCluster</a> </li><li> <a href=\"#ceph.rook.io/v1.CephFilesystem\">CephFilesystem</a> </li><li> <a href=\"#ceph.rook.io/v1.CephFilesystemMirror\">CephFilesystemMirror</a> </li><li> <a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroup\">CephFilesystemSubVolumeGroup</a> </li><li> <a href=\"#ceph.rook.io/v1.CephNFS\">CephNFS</a> </li><li> <a href=\"#ceph.rook.io/v1.CephObjectRealm\">CephObjectRealm</a> </li><li> <a href=\"#ceph.rook.io/v1.CephObjectStore\">CephObjectStore</a> </li><li> <a href=\"#ceph.rook.io/v1.CephObjectStoreUser\">CephObjectStoreUser</a> </li><li> <a href=\"#ceph.rook.io/v1.CephObjectZone\">CephObjectZone</a> </li><li> <a href=\"#ceph.rook.io/v1.CephObjectZoneGroup\">CephObjectZoneGroup</a> </li><li> <a href=\"#ceph.rook.io/v1.CephRBDMirror\">CephRBDMirror</a> </li></ul> <h3 id=\"ceph.rook.io/v1.CephBlockPool\">CephBlockPool </h3> <div> <p>CephBlockPool represents a Ceph Storage Pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephBlockPool</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.NamedBlockPoolSpec\"> NamedBlockPoolSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The desired name of the pool if different from the CephBlockPool CR name.</p> </td> </tr> <tr> <td> <code>PoolSpec</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p> (Members of <code>PoolSpec</code> are embedded into this type.) 
</p> <p>The core pool configuration</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephBlockPoolStatus\"> CephBlockPoolStatus </a> </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephBucketNotification\">CephBucketNotification </h3> <div> <p>CephBucketNotification represents a Bucket Notifications</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephBucketNotification</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.BucketNotificationSpec\"> BucketNotificationSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>topic</code><br/> <em> string </em> </td> <td> <p>The name of the topic associated with this notification</p> </td> </tr> <tr> <td> <code>events</code><br/> <em> <a href=\"#ceph.rook.io/v1.BucketNotificationEvent\"> []BucketNotificationEvent </a> </em> </td> <td> <em>(Optional)</em> <p>List of events that should trigger the notification</p> </td> </tr> <tr> <td> <code>filter</code><br/> <em> <a href=\"#ceph.rook.io/v1.NotificationFilterSpec\"> NotificationFilterSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Spec of notification filter</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.Status\"> Status </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephBucketTopic\">CephBucketTopic </h3> <div> <p>CephBucketTopic represents a Ceph Object Topic for Bucket Notifications</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephBucketTopic</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.BucketTopicSpec\"> BucketTopicSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>objectStoreName</code><br/> <em> string </em> </td> <td> <p>The name of the object store on which to define the topic</p> </td> </tr> <tr> <td> <code>objectStoreNamespace</code><br/> <em> string </em> </td> <td> <p>The namespace of the object store on which to define the topic</p> </td> </tr> <tr> <td> <code>opaqueData</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Data which is sent in each event</p> </td> </tr> <tr> <td> <code>persistent</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Indication whether notifications to this endpoint are persistent or not</p> </td> </tr> <tr> <td> <code>endpoint</code><br/> <em> <a href=\"#ceph.rook.io/v1.TopicEndpointSpec\"> TopicEndpointSpec </a> </em> </td> <td> <p>Contains the endpoint spec of the topic</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.BucketTopicStatus\"> BucketTopicStatus </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3"
},
{
"data": "</h3> <div> <p>CephCOSIDriver represents the CRD for the Ceph COSI Driver Deployment</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephCOSIDriver</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephCOSIDriverSpec\"> CephCOSIDriverSpec </a> </em> </td> <td> <p>Spec represents the specification of a Ceph COSI Driver</p> <br/> <br/> <table> <tr> <td> <code>image</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Image is the container image to run the Ceph COSI driver</p> </td> </tr> <tr> <td> <code>objectProvisionerImage</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>ObjectProvisionerImage is the container image to run the COSI driver sidecar</p> </td> </tr> <tr> <td> <code>deploymentStrategy</code><br/> <em> <a href=\"#ceph.rook.io/v1.COSIDeploymentStrategy\"> COSIDeploymentStrategy </a> </em> </td> <td> <em>(Optional)</em> <p>DeploymentStrategy is the strategy to use to deploy the COSI driver.</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>Placement is the placement strategy to use for the COSI driver</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>Resources is the resource requirements for the COSI driver</p> </td> </tr> </table> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephClient\">CephClient </h3> <div> <p>CephClient represents a Ceph Client</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephClient</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.ClientSpec\"> ClientSpec </a> </em> </td> <td> <p>Spec represents the specification of a Ceph Client</p> <br/> <br/> <table> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>caps</code><br/> <em> map[string]string </em> </td> <td> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephClientStatus\"> CephClientStatus </a> </em> </td> <td> <em>(Optional)</em> <p>Status represents the status of a Ceph Client</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephCluster\">CephCluster </h3> <div> <p>CephCluster is a Ceph storage cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephCluster</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.ClusterSpec\"> ClusterSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>cephVersion</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephVersionSpec\"> CephVersionSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The version information that instructs Rook to orchestrate a particular version of Ceph.</p> </td> </tr> <tr> <td> <code>storage</code><br/> <em> <a href=\"#ceph.rook.io/v1.StorageScopeSpec\"> StorageScopeSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for available storage in the cluster and how it should be used</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.AnnotationsSpec\"> AnnotationsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.LabelsSpec\"> LabelsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.PlacementSpec\"> PlacementSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations).</p> </td> </tr> <tr> <td> <code>network</code><br/> <em> <a href=\"#ceph.rook.io/v1.NetworkSpec\"> NetworkSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Network related configuration</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"#ceph.rook.io/v1.ResourceSpec\"> ResourceSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Resources set resource requests and limits</p> </td> </tr> <tr> <td> <code>priorityClassNames</code><br/> <em>"
},
{
"data": "href=\"#ceph.rook.io/v1.PriorityClassNamesSpec\"> PriorityClassNamesSpec </a> </em> </td> <td> <em>(Optional)</em> <p>PriorityClassNames sets priority classes on components</p> </td> </tr> <tr> <td> <code>dataDirHostPath</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The path on the host where config and data can be persisted</p> </td> </tr> <tr> <td> <code>skipUpgradeChecks</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>SkipUpgradeChecks defines if an upgrade should be forced even if one of the check fails</p> </td> </tr> <tr> <td> <code>continueUpgradeAfterChecksEvenIfNotHealthy</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>ContinueUpgradeAfterChecksEvenIfNotHealthy defines if an upgrade should continue even if PGs are not clean</p> </td> </tr> <tr> <td> <code>waitTimeoutForHealthyOSDInMinutes</code><br/> <em> time.Duration </em> </td> <td> <em>(Optional)</em> <p>WaitTimeoutForHealthyOSDInMinutes defines the time the operator would wait before an OSD can be stopped for upgrade or restart. If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one if <code>continueUpgradeAfterChecksEvenIfNotHealthy</code> is <code>false</code>. If <code>continueUpgradeAfterChecksEvenIfNotHealthy</code> is <code>true</code>, then operator would continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won’t be applied if <code>skipUpgradeChecks</code> is <code>true</code>. The default wait timeout is 10 minutes.</p> </td> </tr> <tr> <td> <code>upgradeOSDRequiresHealthyPGs</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs are clean. If set to <code>true</code> OSD upgrade process won’t start until PGs are healthy. This configuration will be ignored if <code>skipUpgradeChecks</code> is <code>true</code>. 
Default is false.</p> </td> </tr> <tr> <td> <code>disruptionManagement</code><br/> <em> <a href=\"#ceph.rook.io/v1.DisruptionManagementSpec\"> DisruptionManagementSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for configuring disruption management.</p> </td> </tr> <tr> <td> <code>mon</code><br/> <em> <a href=\"#ceph.rook.io/v1.MonSpec\"> MonSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for mon related options</p> </td> </tr> <tr> <td> <code>crashCollector</code><br/> <em> <a href=\"#ceph.rook.io/v1.CrashCollectorSpec\"> CrashCollectorSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for the crash controller</p> </td> </tr> <tr> <td> <code>dashboard</code><br/> <em> <a href=\"#ceph.rook.io/v1.DashboardSpec\"> DashboardSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Dashboard settings</p> </td> </tr> <tr> <td> <code>monitoring</code><br/> <em> <a href=\"#ceph.rook.io/v1.MonitoringSpec\"> MonitoringSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Prometheus based Monitoring settings</p> </td> </tr> <tr> <td> <code>external</code><br/> <em> <a href=\"#ceph.rook.io/v1.ExternalSpec\"> ExternalSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Whether the Ceph Cluster is running external to this Kubernetes cluster mon, mgr, osd, mds, and discover daemons will not be created for external clusters.</p> </td> </tr> <tr> <td> <code>mgr</code><br/> <em> <a href=\"#ceph.rook.io/v1.MgrSpec\"> MgrSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for mgr related options</p> </td> </tr> <tr> <td> <code>removeOSDsIfOutAndSafeToRemove</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Remove the OSD that is out and safe to remove only if this option is true</p> </td> </tr> <tr> <td> <code>cleanupPolicy</code><br/> <em> <a href=\"#ceph.rook.io/v1.CleanupPolicySpec\"> CleanupPolicySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Indicates user intent when deleting a cluster; blocks orchestration and should not be set if cluster deletion is not imminent.</p> </td> </tr> <tr> <td> <code>healthCheck</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephClusterHealthCheckSpec\"> CephClusterHealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Internal daemon healthchecks and liveness probe</p> </td> </tr> <tr> <td> <code>security</code><br/> <em> <a href=\"#ceph.rook.io/v1.SecuritySpec\"> SecuritySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Security represents security settings</p> </td> </tr> <tr> <td> <code>logCollector</code><br/> <em> <a href=\"#ceph.rook.io/v1.LogCollectorSpec\"> LogCollectorSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Logging represents loggings settings</p> </td> </tr> <tr> <td> <code>csi</code><br/> <em> <a href=\"#ceph.rook.io/v1.CSIDriverSpec\"> CSIDriverSpec </a> </em> </td> <td> <em>(Optional)</em> <p>CSI Driver Options applied per cluster.</p> </td> </tr> <tr> <td> <code>cephConfig</code><br/> <em> map[string]map[string]string </em> </td> <td> <em>(Optional)</em> <p>Ceph Config options</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.ClusterStatus\"> ClusterStatus </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephFilesystem\">CephFilesystem </h3> <div> <p>CephFilesystem represents a Ceph Filesystem</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code>"
},
{
"data": "</code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephFilesystem</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemSpec\"> FilesystemSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>metadataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p>The metadata pool settings</p> </td> </tr> <tr> <td> <code>dataPools</code><br/> <em> <a href=\"#ceph.rook.io/v1.NamedPoolSpec\"> []NamedPoolSpec </a> </em> </td> <td> <p>The data pool settings, with optional predefined pool name.</p> </td> </tr> <tr> <td> <code>preservePoolsOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve pools on filesystem deletion</p> </td> </tr> <tr> <td> <code>preserveFilesystemOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve the fs in the cluster on CephFilesystem CR deletion. Setting this to true automatically implies PreservePoolsOnDelete is true.</p> </td> </tr> <tr> <td> <code>metadataServer</code><br/> <em> <a href=\"#ceph.rook.io/v1.MetadataServerSpec\"> MetadataServerSpec </a> </em> </td> <td> <p>The mds pod info</p> </td> </tr> <tr> <td> <code>mirroring</code><br/> <em> <a href=\"#ceph.rook.io/v1.FSMirroringSpec\"> FSMirroringSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The mirroring settings</p> </td> </tr> <tr> <td> <code>statusCheck</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirrorHealthCheckSpec\"> MirrorHealthCheckSpec </a> </em> </td> <td> <p>The mirroring statusCheck</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephFilesystemStatus\"> CephFilesystemStatus </a> </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephFilesystemMirror\">CephFilesystemMirror </h3> <div> <p>CephFilesystemMirror is the Ceph Filesystem Mirror object definition</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephFilesystemMirror</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemMirroringSpec\"> FilesystemMirroringSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>The affinity to place the rgw pods (default is to place on any available node)</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.Labels\"> Labels </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>The resource requirements for the cephfs-mirror pods</p> </td> </tr> <tr> <td> <code>priorityClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PriorityClassName sets priority class on the cephfs-mirror pods</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.Status\"> Status </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephFilesystemSubVolumeGroup\">CephFilesystemSubVolumeGroup </h3> <div> <p>CephFilesystemSubVolumeGroup represents a Ceph Filesystem SubVolumeGroup</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephFilesystemSubVolumeGroup</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpec\"> CephFilesystemSubVolumeGroupSpec </a> </em> </td> <td> <p>Spec represents the specification of a Ceph Filesystem SubVolumeGroup</p> <br/> <br/> <table> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The name of the subvolume group. If not set, the default is the name of the subvolumeGroup CR.</p> </td> </tr> <tr> <td> <code>filesystemName</code><br/> <em> string </em> </td> <td> <p>FilesystemName is the name of Ceph Filesystem SubVolumeGroup volume name. Typically it’s the name of the CephFilesystem CR. If not coming from the CephFilesystem CR, it can be retrieved from the list of Ceph Filesystem volumes with <code>ceph fs volume ls</code>. To learn more about Ceph Filesystem abstractions see"
},
{
"data": "href=\"https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-volumes-and-subvolumes\">https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-volumes-and-subvolumes</a></p> </td> </tr> <tr> <td> <code>pinning</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpecPinning\"> CephFilesystemSubVolumeGroupSpecPinning </a> </em> </td> <td> <em>(Optional)</em> <p>Pinning configuration of CephFilesystemSubVolumeGroup, reference <a href=\"https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups\">https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups</a> only one out of (export, distributed, random) can be set at a time</p> </td> </tr> <tr> <td> <code>quota</code><br/> <em> k8s.io/apimachinery/pkg/api/resource.Quantity </em> </td> <td> <em>(Optional)</em> <p>Quota size of the Ceph Filesystem subvolume group.</p> </td> </tr> <tr> <td> <code>dataPoolName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired.</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroupStatus\"> CephFilesystemSubVolumeGroupStatus </a> </em> </td> <td> <em>(Optional)</em> <p>Status represents the status of a CephFilesystem SubvolumeGroup</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephNFS\">CephNFS </h3> <div> <p>CephNFS represents a Ceph NFS</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephNFS</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.NFSGaneshaSpec\"> NFSGaneshaSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>rados</code><br/> <em> <a href=\"#ceph.rook.io/v1.GaneshaRADOSSpec\"> GaneshaRADOSSpec </a> </em> </td> <td> <em>(Optional)</em> <p>RADOS is the Ganesha RADOS specification</p> </td> </tr> <tr> <td> <code>server</code><br/> <em> <a href=\"#ceph.rook.io/v1.GaneshaServerSpec\"> GaneshaServerSpec </a> </em> </td> <td> <p>Server is the Ganesha Server specification</p> </td> </tr> <tr> <td> <code>security</code><br/> <em> <a href=\"#ceph.rook.io/v1.NFSSecuritySpec\"> NFSSecuritySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Security allows specifying security configurations for the NFS cluster</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.Status\"> Status </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephObjectRealm\">CephObjectRealm </h3> <div> <p>CephObjectRealm represents a Ceph Object Store Gateway Realm</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephObjectRealm</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectRealmSpec\"> ObjectRealmSpec </a> </em> </td> <td> <em>(Optional)</em> <br/> <br/> <table> <tr> <td> <code>pull</code><br/> <em> <a href=\"#ceph.rook.io/v1.PullSpec\"> PullSpec </a> </em> </td> <td> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.Status\"> Status </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephObjectStore\">CephObjectStore </h3> <div> <p>CephObjectStore represents a Ceph Object Store Gateway</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephObjectStore</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreSpec\"> ObjectStoreSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>metadataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The metadata pool settings</p> </td> </tr> <tr> <td> <code>dataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The data pool settings</p> </td> </tr> <tr> <td> <code>sharedPools</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectSharedPoolsSpec\"> ObjectSharedPoolsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The pool information when configuring RADOS namespaces in existing pools.</p> </td> </tr> <tr> <td> <code>preservePoolsOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve pools on object store deletion</p> </td> </tr> <tr> <td> <code>gateway</code><br/> <em> <a href=\"#ceph.rook.io/v1.GatewaySpec\"> GatewaySpec </a> </em> </td> <td> <em>(Optional)</em> <p>The rgw pod info</p> </td> </tr> <tr> <td> <code>zone</code><br/> <em> <a href=\"#ceph.rook.io/v1.ZoneSpec\"> ZoneSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The multisite info</p> </td> </tr> <tr> <td> <code>healthCheck</code><br/> <em>"
},
{
"data": "href=\"#ceph.rook.io/v1.ObjectHealthCheckSpec\"> ObjectHealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The RGW health probes</p> </td> </tr> <tr> <td> <code>security</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreSecuritySpec\"> ObjectStoreSecuritySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Security represents security settings</p> </td> </tr> <tr> <td> <code>allowUsersInNamespaces</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> <p>The list of allowed namespaces in addition to the object store namespace where ceph object store users may be created. Specify “*” to allow all namespaces, otherwise list individual namespaces that are to be allowed. This is useful for applications that need object store credentials to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty.</p> </td> </tr> <tr> <td> <code>hosting</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreHostingSpec\"> ObjectStoreHostingSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Hosting settings for the object store</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreStatus\"> ObjectStoreStatus </a> </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephObjectStoreUser\">CephObjectStoreUser </h3> <div> <p>CephObjectStoreUser represents a Ceph Object Store Gateway User</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephObjectStoreUser</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreUserSpec\"> ObjectStoreUserSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>store</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The store the user will be created in</p> </td> </tr> <tr> <td> <code>displayName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The display name for the ceph users</p> </td> </tr> <tr> <td> <code>capabilities</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectUserCapSpec\"> ObjectUserCapSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>quotas</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectUserQuotaSpec\"> ObjectUserQuotaSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>clusterNamespace</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The namespace where the parent CephCluster and CephObjectStore are found</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreUserStatus\"> ObjectStoreUserStatus </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephObjectZone\">CephObjectZone </h3> <div> <p>CephObjectZone represents a Ceph Object Store Gateway Zone</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephObjectZone</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectZoneSpec\"> ObjectZoneSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>zoneGroup</code><br/> <em> string </em> </td> <td> <p>The display name for the ceph users</p> </td> </tr> <tr> <td> <code>metadataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p>The metadata pool settings</p> </td> </tr> <tr> <td> <code>dataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p>The data pool settings</p> </td> </tr> <tr> <td> <code>sharedPools</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectSharedPoolsSpec\"> ObjectSharedPoolsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The pool information when configuring RADOS namespaces in existing pools.</p> </td> </tr> <tr> <td> <code>customEndpoints</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> <p>If this zone cannot be accessed from other peer Ceph clusters via the ClusterIP Service endpoint created by Rook, you must set this to the externally reachable endpoint(s). You may include the port in the definition. For example: “<a href=\"https://my-object-store.my-domain.net:443"\">https://my-object-store.my-domain.net:443”</a>. In many cases, you should set this to the endpoint of the ingress resource that makes the CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. The list can have one or more endpoints pointing to different RGW servers in the zone.</p> <p>If a CephObjectStore endpoint is omitted from this list, that object store’s gateways will not receive multisite replication data (see"
},
{
"data": "</td> </tr> <tr> <td> <code>preservePoolsOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve pools on object zone deletion</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.Status\"> Status </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephObjectZoneGroup\">CephObjectZoneGroup </h3> <div> <p>CephObjectZoneGroup represents a Ceph Object Store Gateway Zone Group</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephObjectZoneGroup</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectZoneGroupSpec\"> ObjectZoneGroupSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>realm</code><br/> <em> string </em> </td> <td> <p>The display name for the ceph users</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.Status\"> Status </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephRBDMirror\">CephRBDMirror </h3> <div> <p>CephRBDMirror represents a Ceph RBD Mirror</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>apiVersion</code><br/> string</td> <td> <code> ceph.rook.io/v1 </code> </td> </tr> <tr> <td> <code>kind</code><br/> string </td> <td><code>CephRBDMirror</code></td> </tr> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. 
</td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.RBDMirroringSpec\"> RBDMirroringSpec </a> </em> </td> <td> <br/> <br/> <table> <tr> <td> <code>count</code><br/> <em> int </em> </td> <td> <p>Count represents the number of rbd mirror instance to run</p> </td> </tr> <tr> <td> <code>peers</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirroringPeerSpec\"> MirroringPeerSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Peers represents the peers spec</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>The affinity to place the rgw pods (default is to place on any available node)</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.Labels\"> Labels </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>The resource requirements for the rbd mirror pods</p> </td> </tr> <tr> <td> <code>priorityClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PriorityClassName sets priority class on the rbd mirror pods</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.Status\"> Status </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.AMQPEndpointSpec\">AMQPEndpointSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.TopicEndpointSpec\">TopicEndpointSpec</a>) </p> <div> <p>AMQPEndpointSpec represent the spec of an AMQP endpoint of a Bucket Topic</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>uri</code><br/> <em> string </em> </td> <td> <p>The URI of the AMQP endpoint to push notification to</p> </td> </tr> <tr> <td> <code>exchange</code><br/> <em> string </em> </td> <td> <p>Name of the exchange that is used to route messages based on topics</p> </td> </tr> <tr> <td> <code>disableVerifySSL</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Indicate whether the server certificate is validated by the client or not</p> </td> </tr> <tr> <td> <code>ackLevel</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The ack level required for this topic (none/broker/routeable)</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.AddressRangesSpec\">AddressRangesSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NetworkSpec\">NetworkSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>public</code><br/> <em> <a href=\"#ceph.rook.io/v1.CIDRList\"> CIDRList </a> </em> </td> <td> <em>(Optional)</em> <p>Public defines a list of CIDRs to use for Ceph public network communication.</p> </td> </tr> <tr> <td> <code>cluster</code><br/> <em>"
},
{
"data": "href=\"#ceph.rook.io/v1.CIDRList\"> CIDRList </a> </em> </td> <td> <em>(Optional)</em> <p>Cluster defines a list of CIDRs to use for Ceph cluster network communication.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Annotations\">Annotations (<code>map[string]string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemMirroringSpec\">FilesystemMirroringSpec</a>, <a href=\"#ceph.rook.io/v1.GaneshaServerSpec\">GaneshaServerSpec</a>, <a href=\"#ceph.rook.io/v1.GatewaySpec\">GatewaySpec</a>, <a href=\"#ceph.rook.io/v1.MetadataServerSpec\">MetadataServerSpec</a>, <a href=\"#ceph.rook.io/v1.RBDMirroringSpec\">RBDMirroringSpec</a>, <a href=\"#ceph.rook.io/v1.RGWServiceSpec\">RGWServiceSpec</a>) </p> <div> <p>Annotations are annotations</p> </div> <h3 id=\"ceph.rook.io/v1.AnnotationsSpec\">AnnotationsSpec (<code>map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Annotations</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>AnnotationsSpec is the main spec annotation for all daemons</p> </div> <h3 id=\"ceph.rook.io/v1.BucketNotificationEvent\">BucketNotificationEvent (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.BucketNotificationSpec\">BucketNotificationSpec</a>) </p> <div> <p>BucketNotificationSpec represent the event type of the bucket notification</p> </div> <h3 id=\"ceph.rook.io/v1.BucketNotificationSpec\">BucketNotificationSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBucketNotification\">CephBucketNotification</a>) </p> <div> <p>BucketNotificationSpec represent the spec of a Bucket Notification</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>topic</code><br/> <em> string </em> </td> <td> <p>The name of the topic associated with this notification</p> </td> </tr> <tr> <td> <code>events</code><br/> <em> <a href=\"#ceph.rook.io/v1.BucketNotificationEvent\"> []BucketNotificationEvent </a> </em> </td> <td> <em>(Optional)</em> <p>List of events that should trigger the notification</p> </td> </tr> <tr> <td> <code>filter</code><br/> <em> <a href=\"#ceph.rook.io/v1.NotificationFilterSpec\"> NotificationFilterSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Spec of notification filter</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.BucketTopicSpec\">BucketTopicSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBucketTopic\">CephBucketTopic</a>) </p> <div> <p>BucketTopicSpec represent the spec of a Bucket Topic</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>objectStoreName</code><br/> <em> string </em> </td> <td> <p>The name of the object store on which to define the topic</p> </td> </tr> <tr> <td> <code>objectStoreNamespace</code><br/> <em> string </em> </td> <td> <p>The namespace of the object store on which to define the topic</p> </td> </tr> <tr> <td> <code>opaqueData</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Data which is sent in each event</p> </td> </tr> <tr> <td> <code>persistent</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Indication whether notifications to this endpoint are persistent or not</p> </td> </tr> <tr> <td> <code>endpoint</code><br/> <em> <a href=\"#ceph.rook.io/v1.TopicEndpointSpec\"> TopicEndpointSpec </a> </em> </td> <td> <p>Contains the endpoint spec of the topic</p> </td> 
</tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.BucketTopicStatus\">BucketTopicStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBucketTopic\">CephBucketTopic</a>) </p> <div> <p>BucketTopicStatus represents the Status of a CephBucketTopic</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>ARN</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The ARN of the topic generated by the RGW</p> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CIDR\">CIDR (<code>string</code> alias)</h3> <div> <p>An IPv4 or IPv6 network CIDR.</p> <p>This naive kubebuilder regex provides immediate feedback for some typos and for a common problem case where the range spec is forgotten (e.g., /24). Rook does in-depth validation in code.</p> </div> <h3 id=\"ceph.rook.io/v1.COSIDeploymentStrategy\">COSIDeploymentStrategy (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephCOSIDriverSpec\">CephCOSIDriverSpec</a>) </p> <div> <p>COSIDeploymentStrategy represents the strategy to use to deploy the Ceph COSI driver</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"Always"</p></td> <td><p>Always means the Ceph COSI driver will be deployed even if the object store is not present</p> </td> </tr><tr><td><p>"Auto"</p></td> <td><p>Auto means the Ceph COSI driver will be deployed automatically if object store is present</p> </td> </tr><tr><td><p>"Never"</p></td> <td><p>Never means the Ceph COSI driver will never deployed</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.CSICephFSSpec\">CSICephFSSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CSIDriverSpec\">CSIDriverSpec</a>) </p> <div> <p>CSICephFSSpec defines the settings for CephFS CSI driver.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>kernelMountOptions</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>KernelMountOptions defines the mount options for kernel mounter.</p> </td> </tr> <tr> <td> <code>fuseMountOptions</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>FuseMountOptions defines the mount options for ceph fuse mounter.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CSIDriverSpec\">CSIDriverSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>CSIDriverSpec defines CSI Driver settings applied per"
},
{
"data": "</div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>readAffinity</code><br/> <em> <a href=\"#ceph.rook.io/v1.ReadAffinitySpec\"> ReadAffinitySpec </a> </em> </td> <td> <em>(Optional)</em> <p>ReadAffinity defines the read affinity settings for CSI driver.</p> </td> </tr> <tr> <td> <code>cephfs</code><br/> <em> <a href=\"#ceph.rook.io/v1.CSICephFSSpec\"> CSICephFSSpec </a> </em> </td> <td> <em>(Optional)</em> <p>CephFS defines CSI Driver settings for CephFS driver.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Capacity\">Capacity </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephStatus\">CephStatus</a>) </p> <div> <p>Capacity is the capacity information of a Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>bytesTotal</code><br/> <em> uint64 </em> </td> <td> </td> </tr> <tr> <td> <code>bytesUsed</code><br/> <em> uint64 </em> </td> <td> </td> </tr> <tr> <td> <code>bytesAvailable</code><br/> <em> uint64 </em> </td> <td> </td> </tr> <tr> <td> <code>lastUpdated</code><br/> <em> string </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephBlockPoolRadosNamespace\">CephBlockPoolRadosNamespace </h3> <div> <p>CephBlockPoolRadosNamespace represents a Ceph BlockPool Rados Namespace</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephBlockPoolRadosNamespaceSpec\"> CephBlockPoolRadosNamespaceSpec </a> </em> </td> <td> <p>Spec represents the specification of a Ceph BlockPool Rados Namespace</p> <br/> <br/> <table> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The name of the CephBlockPoolRadosNamespaceSpec namespace. If not set, the default is the name of the CR.</p> </td> </tr> <tr> <td> <code>blockPoolName</code><br/> <em> string </em> </td> <td> <p>BlockPoolName is the name of Ceph BlockPool. Typically it’s the name of the CephBlockPool CR.</p> </td> </tr> </table> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephBlockPoolRadosNamespaceStatus\"> CephBlockPoolRadosNamespaceStatus </a> </em> </td> <td> <em>(Optional)</em> <p>Status represents the status of a CephBlockPool Rados Namespace</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephBlockPoolRadosNamespaceSpec\">CephBlockPoolRadosNamespaceSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPoolRadosNamespace\">CephBlockPoolRadosNamespace</a>) </p> <div> <p>CephBlockPoolRadosNamespaceSpec represents the specification of a CephBlockPool Rados Namespace</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The name of the CephBlockPoolRadosNamespaceSpec namespace. If not set, the default is the name of the CR.</p> </td> </tr> <tr> <td> <code>blockPoolName</code><br/> <em> string </em> </td> <td> <p>BlockPoolName is the name of Ceph BlockPool. 
Typically it’s the name of the CephBlockPool CR.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephBlockPoolRadosNamespaceStatus\">CephBlockPoolRadosNamespaceStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPoolRadosNamespace\">CephBlockPoolRadosNamespace</a>) </p> <div> <p>CephBlockPoolRadosNamespaceStatus represents the Status of Ceph BlockPool Rados Namespace</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>info</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephBlockPoolStatus\">CephBlockPoolStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPool\">CephBlockPool</a>) </p> <div> <p>CephBlockPoolStatus represents the mirroring status of Ceph Storage Pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>mirroringStatus</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirroringStatusSpec\"> MirroringStatusSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>mirroringInfo</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirroringInfoSpec\"> MirroringInfoSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>snapshotScheduleStatus</code><br/> <em> <a href=\"#ceph.rook.io/v1.SnapshotScheduleStatusSpec\"> SnapshotScheduleStatusSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>info</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> <tr> <td> <code>conditions</code><br/> <em> <a href=\"#ceph.rook.io/v1.Condition\"> []Condition </a> </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephCOSIDriverSpec\">CephCOSIDriverSpec </h3> <p> (<em>Appears"
},
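{
"data": "The CephBlockPoolRadosNamespace fields documented above (<code>name</code>, <code>blockPoolName</code>) translate into a very small manifest. The following is a sketch only: the metadata values and the referenced pool name are illustrative placeholders, not taken from the reference. <pre><code>apiVersion: ceph.rook.io/v1
kind: CephBlockPoolRadosNamespace
metadata:
  name: namespace-a        # illustrative; also used as the RADOS namespace name when spec.name is unset
  namespace: rook-ceph     # illustrative
spec:
  # name: namespace-a      # optional; defaults to the CR name
  blockPoolName: replicapool   # name of the CephBlockPool CR this namespace belongs to (illustrative)
</code></pre>"
},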
{
"data": "href=\"#ceph.rook.io/v1.CephCOSIDriver\">CephCOSIDriver</a>) </p> <div> <p>CephCOSIDriverSpec represents the specification of a Ceph COSI Driver</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>image</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Image is the container image to run the Ceph COSI driver</p> </td> </tr> <tr> <td> <code>objectProvisionerImage</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>ObjectProvisionerImage is the container image to run the COSI driver sidecar</p> </td> </tr> <tr> <td> <code>deploymentStrategy</code><br/> <em> <a href=\"#ceph.rook.io/v1.COSIDeploymentStrategy\"> COSIDeploymentStrategy </a> </em> </td> <td> <em>(Optional)</em> <p>DeploymentStrategy is the strategy to use to deploy the COSI driver.</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>Placement is the placement strategy to use for the COSI driver</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>Resources is the resource requirements for the COSI driver</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephClientStatus\">CephClientStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephClient\">CephClient</a>) </p> <div> <p>CephClientStatus represents the Status of Ceph Client</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>info</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephClusterHealthCheckSpec\">CephClusterHealthCheckSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>CephClusterHealthCheckSpec represent the healthcheck for Ceph daemons</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>daemonHealth</code><br/> <em> <a href=\"#ceph.rook.io/v1.DaemonHealthSpec\"> DaemonHealthSpec </a> </em> </td> <td> <em>(Optional)</em> <p>DaemonHealth is the health check for a given daemon</p> </td> </tr> <tr> <td> <code>livenessProbe</code><br/> <em> <a href=\"#ceph.rook.io/v1.*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec\"> map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec </a> </em> </td> <td> <em>(Optional)</em> <p>LivenessProbe allows changing the livenessProbe configuration for a given daemon</p> </td> </tr> <tr> <td> <code>startupProbe</code><br/> <em> <a href=\"#ceph.rook.io/v1.*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec\"> map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec </a> </em> </td> <td> <em>(Optional)</em> <p>StartupProbe allows changing the startupProbe configuration for a given daemon</p> </td> </tr> </tbody> </table> <h3 
id=\"ceph.rook.io/v1.CephDaemonsVersions\">CephDaemonsVersions </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephStatus\">CephStatus</a>) </p> <div> <p>CephDaemonsVersions show the current ceph version for different ceph daemons</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>mon</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>Mon shows Mon Ceph version</p> </td> </tr> <tr> <td> <code>mgr</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>Mgr shows Mgr Ceph version</p> </td> </tr> <tr> <td> <code>osd</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>Osd shows Osd Ceph version</p> </td> </tr> <tr> <td> <code>rgw</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>Rgw shows Rgw Ceph version</p> </td> </tr> <tr> <td> <code>mds</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>Mds shows Mds Ceph version</p> </td> </tr> <tr> <td> <code>rbd-mirror</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>RbdMirror shows RbdMirror Ceph version</p> </td> </tr> <tr> <td> <code>cephfs-mirror</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>CephFSMirror shows CephFSMirror Ceph version</p> </td> </tr> <tr> <td> <code>overall</code><br/> <em> map[string]int </em> </td> <td> <em>(Optional)</em> <p>Overall shows overall Ceph version</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephFilesystemStatus\">CephFilesystemStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephFilesystem\">CephFilesystem</a>) </p> <div> <p>CephFilesystemStatus represents the status of a Ceph Filesystem</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>snapshotScheduleStatus</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemSnapshotScheduleStatusSpec\"> FilesystemSnapshotScheduleStatusSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>info</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> <p>Use only info and put mirroringStatus in it?</p> </td> </tr> <tr> <td> <code>mirroringStatus</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemMirroringInfoSpec\"> FilesystemMirroringInfoSpec </a> </em> </td> <td> <em>(Optional)</em> <p>MirroringStatus is the filesystem mirroring status</p> </td> </tr> <tr> <td> <code>conditions</code><br/> <em> <a href=\"#ceph.rook.io/v1.Condition\"> []Condition </a> </em> </td> <td> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpec\">CephFilesystemSubVolumeGroupSpec </h3> <p> (<em>Appears"
},
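{
"data": "As a sketch of how CephCOSIDriverSpec maps onto a manifest: the image references and the <code>deploymentStrategy</code> value below are assumptions, since the excerpt above does not list the allowed COSIDeploymentStrategy values or any default images. <pre><code>apiVersion: ceph.rook.io/v1
kind: CephCOSIDriver
metadata:
  name: ceph-cosi-driver           # illustrative
  namespace: rook-ceph             # illustrative
spec:
  image: quay.io/ceph/cosi:latest  # placeholder COSI driver image
  objectProvisionerImage: gcr.io/k8s-staging-sig-storage/objectstorage-sidecar:latest  # placeholder sidecar image
  deploymentStrategy: Auto         # assumed value; consult the COSIDeploymentStrategy documentation
  resources:
    limits:
      memory: 128Mi                # illustrative
</code></pre>"
},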
{
"data": "href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroup\">CephFilesystemSubVolumeGroup</a>) </p> <div> <p>CephFilesystemSubVolumeGroupSpec represents the specification of a Ceph Filesystem SubVolumeGroup</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The name of the subvolume group. If not set, the default is the name of the subvolumeGroup CR.</p> </td> </tr> <tr> <td> <code>filesystemName</code><br/> <em> string </em> </td> <td> <p>FilesystemName is the name of Ceph Filesystem SubVolumeGroup volume name. Typically it’s the name of the CephFilesystem CR. If not coming from the CephFilesystem CR, it can be retrieved from the list of Ceph Filesystem volumes with <code>ceph fs volume ls</code>. To learn more about Ceph Filesystem abstractions see <a href=\"https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-volumes-and-subvolumes\">https://docs.ceph.com/en/latest/cephfs/fs-volumes/#fs-volumes-and-subvolumes</a></p> </td> </tr> <tr> <td> <code>pinning</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpecPinning\"> CephFilesystemSubVolumeGroupSpecPinning </a> </em> </td> <td> <em>(Optional)</em> <p>Pinning configuration of CephFilesystemSubVolumeGroup, reference <a href=\"https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups\">https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups</a> only one out of (export, distributed, random) can be set at a time</p> </td> </tr> <tr> <td> <code>quota</code><br/> <em> k8s.io/apimachinery/pkg/api/resource.Quantity </em> </td> <td> <em>(Optional)</em> <p>Quota size of the Ceph Filesystem subvolume group.</p> </td> </tr> <tr> <td> <code>dataPoolName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpecPinning\">CephFilesystemSubVolumeGroupSpecPinning </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpec\">CephFilesystemSubVolumeGroupSpec</a>) </p> <div> <p>CephFilesystemSubVolumeGroupSpecPinning represents the pinning configuration of SubVolumeGroup</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>export</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>distributed</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>random,</code><br/> <em> float64 </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephFilesystemSubVolumeGroupStatus\">CephFilesystemSubVolumeGroupStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroup\">CephFilesystemSubVolumeGroup</a>) </p> <div> <p>CephFilesystemSubVolumeGroupStatus represents the Status of Ceph Filesystem SubVolumeGroup</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>info</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> 
<em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephHealthMessage\">CephHealthMessage </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephStatus\">CephStatus</a>) </p> <div> <p>CephHealthMessage represents the health message of a Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>severity</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>message</code><br/> <em> string </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephNetworkType\">CephNetworkType (<code>string</code> alias)</h3> <div> <p>CephNetworkType should be “public” or “cluster”. Allow any string so that over-specified legacy clusters do not break on CRD update.</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"cluster"</p></td> <td></td> </tr><tr><td><p>"public"</p></td> <td></td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.CephStatus\">CephStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterStatus\">ClusterStatus</a>) </p> <div> <p>CephStatus is the details health of a Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>health</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>details</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephHealthMessage\"> map[string]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.CephHealthMessage </a> </em> </td> <td> </td> </tr> <tr> <td> <code>lastChecked</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>lastChanged</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>previousHealth</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>capacity</code><br/> <em> <a href=\"#ceph.rook.io/v1.Capacity\"> Capacity </a> </em> </td> <td> </td> </tr> <tr> <td> <code>versions</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephDaemonsVersions\"> CephDaemonsVersions </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>fsid</code><br/> <em> string </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephStorage\">CephStorage </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterStatus\">ClusterStatus</a>) </p> <div> <p>CephStorage represents flavors of Ceph Cluster Storage</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>deviceClasses</code><br/> <em> <a href=\"#ceph.rook.io/v1.DeviceClasses\"> []DeviceClasses </a> </em> </td> <td> </td> </tr> <tr> <td> <code>osd</code><br/> <em> <a href=\"#ceph.rook.io/v1.OSDStatus\"> OSDStatus </a> </em> </td> <td> </td> </tr> <tr> <td> <code>deprecatedOSDs</code><br/> <em> mapint </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CephVersionSpec\">CephVersionSpec </h3> <p> (<em>Appears"
},
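{
"data": "A minimal sketch of a CephFilesystemSubVolumeGroup built from the spec fields documented above; the filesystem, quota, and pool values are illustrative, and only one pinning key (export, distributed, or random) may be set at a time. <pre><code>apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubVolumeGroup
metadata:
  name: group-a            # illustrative
  namespace: rook-ceph     # illustrative
spec:
  # name: group-a          # optional; defaults to the CR name
  filesystemName: myfs     # name of the CephFilesystem CR (illustrative)
  pinning:
    distributed: 1         # exactly one of export / distributed / random
  quota: 10Gi              # resource.Quantity (illustrative)
  dataPoolName: myfs-data0 # optional non-default data pool (illustrative)
</code></pre>"
},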
{
"data": "href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>CephVersionSpec represents the settings for the Ceph version that Rook is orchestrating.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>image</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Image is the container image used to launch the ceph daemons, such as quay.io/ceph/ceph:<tag> The full list of images can be found at <a href=\"https://quay.io/repository/ceph/ceph?tab=tags\">https://quay.io/repository/ceph/ceph?tab=tags</a></p> </td> </tr> <tr> <td> <code>allowUnsupported</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to allow unsupported versions (do not set to true in production)</p> </td> </tr> <tr> <td> <code>imagePullPolicy</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pullpolicy-v1-core\"> Kubernetes core/v1.PullPolicy </a> </em> </td> <td> <em>(Optional)</em> <p>ImagePullPolicy describes a policy for if/when to pull a container image One of Always, Never, IfNotPresent.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CleanupConfirmationProperty\">CleanupConfirmationProperty (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CleanupPolicySpec\">CleanupPolicySpec</a>) </p> <div> <p>CleanupConfirmationProperty represents the cleanup confirmation</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"yes-really-destroy-data"</p></td> <td><p>DeleteDataDirOnHostsConfirmation represents the validation to destroy dataDirHostPath</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.CleanupPolicySpec\">CleanupPolicySpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>CleanupPolicySpec represents a Ceph Cluster cleanup policy</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>confirmation</code><br/> <em> <a href=\"#ceph.rook.io/v1.CleanupConfirmationProperty\"> CleanupConfirmationProperty </a> </em> </td> <td> <em>(Optional)</em> <p>Confirmation represents the cleanup confirmation</p> </td> </tr> <tr> <td> <code>sanitizeDisks</code><br/> <em> <a href=\"#ceph.rook.io/v1.SanitizeDisksSpec\"> SanitizeDisksSpec </a> </em> </td> <td> <em>(Optional)</em> <p>SanitizeDisks represents way we sanitize disks</p> </td> </tr> <tr> <td> <code>allowUninstallWithVolumes</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>AllowUninstallWithVolumes defines whether we can proceed with the uninstall if they are RBD images still present</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ClientSpec\">ClientSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephClient\">CephClient</a>) </p> <div> <p>ClientSpec represents the specification of a Ceph Client</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>caps</code><br/> <em> map[string]string </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ClusterSpec\">ClusterSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephCluster\">CephCluster</a>) </p> <div> <p>ClusterSpec represents the specification of Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> 
<code>cephVersion</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephVersionSpec\"> CephVersionSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The version information that instructs Rook to orchestrate a particular version of Ceph.</p> </td> </tr> <tr> <td> <code>storage</code><br/> <em> <a href=\"#ceph.rook.io/v1.StorageScopeSpec\"> StorageScopeSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for available storage in the cluster and how it should be used</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.AnnotationsSpec\"> AnnotationsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.LabelsSpec\"> LabelsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.PlacementSpec\"> PlacementSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations).</p> </td> </tr> <tr> <td> <code>network</code><br/> <em> <a href=\"#ceph.rook.io/v1.NetworkSpec\"> NetworkSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Network related configuration</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"#ceph.rook.io/v1.ResourceSpec\"> ResourceSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Resources set resource requests and limits</p> </td> </tr> <tr> <td> <code>priorityClassNames</code><br/> <em> <a href=\"#ceph.rook.io/v1.PriorityClassNamesSpec\"> PriorityClassNamesSpec </a> </em> </td> <td> <em>(Optional)</em> <p>PriorityClassNames sets priority classes on components</p> </td> </tr> <tr> <td> <code>dataDirHostPath</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The path on the host where config and data can be persisted</p> </td> </tr> <tr> <td> <code>skipUpgradeChecks</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>SkipUpgradeChecks defines if an upgrade should be forced even if one of the check fails</p> </td> </tr> <tr> <td> <code>continueUpgradeAfterChecksEvenIfNotHealthy</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>ContinueUpgradeAfterChecksEvenIfNotHealthy defines if an upgrade should continue even if PGs are not clean</p> </td> </tr> <tr> <td> <code>waitTimeoutForHealthyOSDInMinutes</code><br/> <em>"
},
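{
"data": "CephVersionSpec and CleanupPolicySpec from the tables above sit inside a CephCluster spec. The sketch below keeps the image placeholder form shown in the table and leaves the cleanup confirmation empty, since the documented <code>yes-really-destroy-data</code> value blocks orchestration and should only be set when deletion is imminent; all other values are illustrative. <pre><code>apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph              # illustrative
  namespace: rook-ceph         # illustrative
spec:
  cephVersion:
    image: quay.io/ceph/ceph:<tag>   # substitute a published Ceph image tag
    allowUnsupported: false          # do not set to true in production
    imagePullPolicy: IfNotPresent
  dataDirHostPath: /var/lib/rook     # host path where config and data are persisted (illustrative)
  cleanupPolicy:
    confirmation: \"\"                 # set to yes-really-destroy-data only when deleting the cluster
    allowUninstallWithVolumes: false
</code></pre>"
},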
{
"data": "</em> </td> <td> <em>(Optional)</em> <p>WaitTimeoutForHealthyOSDInMinutes defines the time the operator would wait before an OSD can be stopped for upgrade or restart. If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one if <code>continueUpgradeAfterChecksEvenIfNotHealthy</code> is <code>false</code>. If <code>continueUpgradeAfterChecksEvenIfNotHealthy</code> is <code>true</code>, then operator would continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won’t be applied if <code>skipUpgradeChecks</code> is <code>true</code>. The default wait timeout is 10 minutes.</p> </td> </tr> <tr> <td> <code>upgradeOSDRequiresHealthyPGs</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs are clean. If set to <code>true</code> OSD upgrade process won’t start until PGs are healthy. This configuration will be ignored if <code>skipUpgradeChecks</code> is <code>true</code>. Default is false.</p> </td> </tr> <tr> <td> <code>disruptionManagement</code><br/> <em> <a href=\"#ceph.rook.io/v1.DisruptionManagementSpec\"> DisruptionManagementSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for configuring disruption management.</p> </td> </tr> <tr> <td> <code>mon</code><br/> <em> <a href=\"#ceph.rook.io/v1.MonSpec\"> MonSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for mon related options</p> </td> </tr> <tr> <td> <code>crashCollector</code><br/> <em> <a href=\"#ceph.rook.io/v1.CrashCollectorSpec\"> CrashCollectorSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for the crash controller</p> </td> </tr> <tr> <td> <code>dashboard</code><br/> <em> <a href=\"#ceph.rook.io/v1.DashboardSpec\"> DashboardSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Dashboard settings</p> </td> </tr> <tr> <td> <code>monitoring</code><br/> <em> <a href=\"#ceph.rook.io/v1.MonitoringSpec\"> MonitoringSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Prometheus based Monitoring settings</p> </td> </tr> <tr> <td> <code>external</code><br/> <em> <a href=\"#ceph.rook.io/v1.ExternalSpec\"> ExternalSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Whether the Ceph Cluster is running external to this Kubernetes cluster mon, mgr, osd, mds, and discover daemons will not be created for external clusters.</p> </td> </tr> <tr> <td> <code>mgr</code><br/> <em> <a href=\"#ceph.rook.io/v1.MgrSpec\"> MgrSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A spec for mgr related options</p> </td> </tr> <tr> <td> <code>removeOSDsIfOutAndSafeToRemove</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Remove the OSD that is out and safe to remove only if this option is true</p> </td> </tr> <tr> <td> <code>cleanupPolicy</code><br/> <em> <a href=\"#ceph.rook.io/v1.CleanupPolicySpec\"> CleanupPolicySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Indicates user intent when deleting a cluster; blocks orchestration and should not be set if cluster deletion is not imminent.</p> </td> </tr> <tr> <td> <code>healthCheck</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephClusterHealthCheckSpec\"> CephClusterHealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Internal daemon healthchecks and liveness probe</p> </td> </tr> <tr> <td> <code>security</code><br/> <em> <a href=\"#ceph.rook.io/v1.SecuritySpec\"> SecuritySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Security represents security settings</p> </td> </tr> <tr> <td> 
<code>logCollector</code><br/> <em> <a href=\"#ceph.rook.io/v1.LogCollectorSpec\"> LogCollectorSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Logging represents loggings settings</p> </td> </tr> <tr> <td> <code>csi</code><br/> <em> <a href=\"#ceph.rook.io/v1.CSIDriverSpec\"> CSIDriverSpec </a> </em> </td> <td> <em>(Optional)</em> <p>CSI Driver Options applied per cluster.</p> </td> </tr> <tr> <td> <code>cephConfig</code><br/> <em> map[string]map[string]string </em> </td> <td> <em>(Optional)</em> <p>Ceph Config options</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ClusterState\">ClusterState (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterStatus\">ClusterStatus</a>) </p> <div> <p>ClusterState represents the state of a Ceph Cluster</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"Connected"</p></td> <td><p>ClusterStateConnected represents the Connected state of a Ceph Cluster</p> </td> </tr><tr><td><p>"Connecting"</p></td> <td><p>ClusterStateConnecting represents the Connecting state of a Ceph Cluster</p> </td> </tr><tr><td><p>"Created"</p></td> <td><p>ClusterStateCreated represents the Created state of a Ceph Cluster</p> </td> </tr><tr><td><p>"Creating"</p></td> <td><p>ClusterStateCreating represents the Creating state of a Ceph Cluster</p> </td> </tr><tr><td><p>"Error"</p></td> <td><p>ClusterStateError represents the Error state of a Ceph Cluster</p> </td> </tr><tr><td><p>"Updating"</p></td> <td><p>ClusterStateUpdating represents the Updating state of a Ceph Cluster</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.ClusterStatus\">ClusterStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephCluster\">CephCluster</a>) </p> <div> <p>ClusterStatus represents the status of a Ceph cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>state</code><br/> <em> <a href=\"#ceph.rook.io/v1.ClusterState\"> ClusterState </a> </em> </td> <td> </td> </tr> <tr> <td> <code>phase</code><br/> <em>"
},
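{
"data": "The upgrade-related ClusterSpec switches described above can be collected into a fragment like the following sketch; values marked as defaults are the ones the reference states, the rest are illustrative. <pre><code># fragment of a CephCluster spec
spec:
  skipUpgradeChecks: false                            # illustrative; true forces upgrades even if checks fail
  continueUpgradeAfterChecksEvenIfNotHealthy: false   # illustrative
  waitTimeoutForHealthyOSDInMinutes: 10               # documented default of 10 minutes
  upgradeOSDRequiresHealthyPGs: false                 # documented default
  removeOSDsIfOutAndSafeToRemove: false               # illustrative
</code></pre>"
},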
{
"data": "href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> </td> </tr> <tr> <td> <code>message</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>conditions</code><br/> <em> <a href=\"#ceph.rook.io/v1.Condition\"> []Condition </a> </em> </td> <td> </td> </tr> <tr> <td> <code>ceph</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephStatus\"> CephStatus </a> </em> </td> <td> </td> </tr> <tr> <td> <code>storage</code><br/> <em> <a href=\"#ceph.rook.io/v1.CephStorage\"> CephStorage </a> </em> </td> <td> </td> </tr> <tr> <td> <code>version</code><br/> <em> <a href=\"#ceph.rook.io/v1.ClusterVersion\"> ClusterVersion </a> </em> </td> <td> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ClusterVersion\">ClusterVersion </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterStatus\">ClusterStatus</a>) </p> <div> <p>ClusterVersion represents the version of a Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>image</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>version</code><br/> <em> string </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CompressionSpec\">CompressionSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ConnectionsSpec\">ConnectionsSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to compress the data in transit across the wire. The default is not set.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Condition\">Condition </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPoolStatus\">CephBlockPoolStatus</a>, <a href=\"#ceph.rook.io/v1.CephFilesystemStatus\">CephFilesystemStatus</a>, <a href=\"#ceph.rook.io/v1.ClusterStatus\">ClusterStatus</a>, <a href=\"#ceph.rook.io/v1.ObjectStoreStatus\">ObjectStoreStatus</a>, <a href=\"#ceph.rook.io/v1.Status\">Status</a>) </p> <div> <p>Condition represents a status condition on any Rook-Ceph Custom Resource.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>type</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> </td> </tr> <tr> <td> <code>status</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#conditionstatus-v1-core\"> Kubernetes core/v1.ConditionStatus </a> </em> </td> <td> </td> </tr> <tr> <td> <code>reason</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionReason\"> ConditionReason </a> </em> </td> <td> </td> </tr> <tr> <td> <code>message</code><br/> <em> string </em> </td> <td> </td> </tr> <tr> <td> <code>lastHeartbeatTime</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta\"> Kubernetes meta/v1.Time </a> </em> </td> <td> </td> </tr> <tr> <td> <code>lastTransitionTime</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#time-v1-meta\"> Kubernetes meta/v1.Time </a> </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ConditionReason\">ConditionReason (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a 
href=\"#ceph.rook.io/v1.Condition\">Condition</a>) </p> <div> <p>ConditionReason is a reason for a condition</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"ClusterConnected"</p></td> <td><p>ClusterConnectedReason is cluster connected reason</p> </td> </tr><tr><td><p>"ClusterConnecting"</p></td> <td><p>ClusterConnectingReason is cluster connecting reason</p> </td> </tr><tr><td><p>"ClusterCreated"</p></td> <td><p>ClusterCreatedReason is cluster created reason</p> </td> </tr><tr><td><p>"ClusterDeleting"</p></td> <td><p>ClusterDeletingReason is cluster deleting reason</p> </td> </tr><tr><td><p>"ClusterProgressing"</p></td> <td><p>ClusterProgressingReason is cluster progressing reason</p> </td> </tr><tr><td><p>"Deleting"</p></td> <td><p>DeletingReason represents when Rook has detected a resource object should be deleted.</p> </td> </tr><tr><td><p>"ObjectHasDependents"</p></td> <td><p>ObjectHasDependentsReason represents when a resource object has dependents that are blocking deletion.</p> </td> </tr><tr><td><p>"ObjectHasNoDependents"</p></td> <td><p>ObjectHasNoDependentsReason represents when a resource object has no dependents that are blocking deletion.</p> </td> </tr><tr><td><p>"ReconcileFailed"</p></td> <td><p>ReconcileFailed represents when a resource reconciliation failed.</p> </td> </tr><tr><td><p>"ReconcileStarted"</p></td> <td><p>ReconcileStarted represents when a resource reconciliation started.</p> </td> </tr><tr><td><p>"ReconcileSucceeded"</p></td> <td><p>ReconcileSucceeded represents when a resource reconciliation was successful.</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.ConditionType\">ConditionType (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPoolRadosNamespaceStatus\">CephBlockPoolRadosNamespaceStatus</a>, <a href=\"#ceph.rook.io/v1.CephBlockPoolStatus\">CephBlockPoolStatus</a>, <a href=\"#ceph.rook.io/v1.CephClientStatus\">CephClientStatus</a>, <a href=\"#ceph.rook.io/v1.CephFilesystemStatus\">CephFilesystemStatus</a>, <a href=\"#ceph.rook.io/v1.CephFilesystemSubVolumeGroupStatus\">CephFilesystemSubVolumeGroupStatus</a>, <a href=\"#ceph.rook.io/v1.ClusterStatus\">ClusterStatus</a>, <a href=\"#ceph.rook.io/v1.Condition\">Condition</a>, <a href=\"#ceph.rook.io/v1.ObjectStoreStatus\">ObjectStoreStatus</a>) </p> <div> <p>ConditionType represent a resource’s status</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"Connected"</p></td> <td><p>ConditionConnected represents Connected state of an object</p> </td> </tr><tr><td><p>"Connecting"</p></td> <td><p>ConditionConnecting represents Connecting state of an object</p> </td> </tr><tr><td><p>"Deleting"</p></td> <td><p>ConditionDeleting represents Deleting state of an object</p> </td> </tr><tr><td><p>"DeletionIsBlocked"</p></td> <td><p>ConditionDeletionIsBlocked represents when deletion of the object is blocked.</p> </td> </tr><tr><td><p>"Failure"</p></td> <td><p>ConditionFailure represents Failure state of an object</p> </td> </tr><tr><td><p>"Progressing"</p></td> <td><p>ConditionProgressing represents Progressing state of an object</p> </td> </tr><tr><td><p>"Ready"</p></td> <td><p>ConditionReady represents Ready state of an object</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.ConfigFileVolumeSource\">ConfigFileVolumeSource </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.KerberosConfigFiles\">KerberosConfigFiles</a>, <a 
href=\"#ceph.rook.io/v1.KerberosKeytabFile\">KerberosKeytabFile</a>, <a href=\"#ceph.rook.io/v1.SSSDSidecarAdditionalFile\">SSSDSidecarAdditionalFile</a>, <a href=\"#ceph.rook.io/v1.SSSDSidecarConfigFile\">SSSDSidecarConfigFile</a>) </p> <div> <p>Represents the source of a volume to mount. Only one of its members may be specified. This is a subset of the full Kubernetes API’s VolumeSource that is reduced to what is most likely to be useful for mounting config files/dirs into Rook"
},
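{
"data": "Condition, ConditionType, and ConditionReason above describe status rather than user input, so they surface in a resource's <code>status</code> block; a sketch of what that can look like for a CephCluster, using the documented Ready/Created/ClusterCreated values and an illustrative message. <pre><code># excerpt of a CephCluster status (illustrative)
status:
  state: Created               # ClusterState value documented above
  phase: Ready                 # ConditionType value documented above
  conditions:
    - type: Ready
      status: \"True\"
      reason: ClusterCreated   # ConditionReason value documented above
      message: Cluster created successfully   # illustrative
</code></pre>"
},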
{
"data": "</div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>hostPath</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#hostpathvolumesource-v1-core\"> Kubernetes core/v1.HostPathVolumeSource </a> </em> </td> <td> <em>(Optional)</em> <p>hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: <a href=\"https://kubernetes.io/docs/concepts/storage/volumes#hostpath\">https://kubernetes.io/docs/concepts/storage/volumes#hostpath</a></p> <hr /> </td> </tr> <tr> <td> <code>emptyDir</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#emptydirvolumesource-v1-core\"> Kubernetes core/v1.EmptyDirVolumeSource </a> </em> </td> <td> <em>(Optional)</em> <p>emptyDir represents a temporary directory that shares a pod’s lifetime. More info: <a href=\"https://kubernetes.io/docs/concepts/storage/volumes#emptydir\">https://kubernetes.io/docs/concepts/storage/volumes#emptydir</a></p> </td> </tr> <tr> <td> <code>secret</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretvolumesource-v1-core\"> Kubernetes core/v1.SecretVolumeSource </a> </em> </td> <td> <em>(Optional)</em> <p>secret represents a secret that should populate this volume. More info: <a href=\"https://kubernetes.io/docs/concepts/storage/volumes#secret\">https://kubernetes.io/docs/concepts/storage/volumes#secret</a></p> </td> </tr> <tr> <td> <code>persistentVolumeClaim</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaimvolumesource-v1-core\"> Kubernetes core/v1.PersistentVolumeClaimVolumeSource </a> </em> </td> <td> <em>(Optional)</em> <p>persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. 
More info: <a href=\"https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\">https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims</a></p> </td> </tr> <tr> <td> <code>configMap</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#configmapvolumesource-v1-core\"> Kubernetes core/v1.ConfigMapVolumeSource </a> </em> </td> <td> <em>(Optional)</em> <p>configMap represents a configMap that should populate this volume</p> </td> </tr> <tr> <td> <code>projected</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#projectedvolumesource-v1-core\"> Kubernetes core/v1.ProjectedVolumeSource </a> </em> </td> <td> <p>projected items for all in one resources secrets, configmaps, and downward API</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ConnectionsSpec\">ConnectionsSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NetworkSpec\">NetworkSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>encryption</code><br/> <em> <a href=\"#ceph.rook.io/v1.EncryptionSpec\"> EncryptionSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Encryption settings for the network connections.</p> </td> </tr> <tr> <td> <code>compression</code><br/> <em> <a href=\"#ceph.rook.io/v1.CompressionSpec\"> CompressionSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Compression settings for the network connections.</p> </td> </tr> <tr> <td> <code>requireMsgr2</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to require msgr2 (port 3300) even if compression or encryption are not enabled. If true, the msgr1 port (6789) will be disabled. Requires a kernel that supports msgr2 (kernel 5.11 or CentOS 8.4 or newer).</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.CrashCollectorSpec\">CrashCollectorSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>CrashCollectorSpec represents options to configure the crash controller</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>disable</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Disable determines whether we should enable the crash collector</p> </td> </tr> <tr> <td> <code>daysToRetain</code><br/> <em> uint </em> </td> <td> <em>(Optional)</em> <p>DaysToRetain represents the number of days to retain crash until they get pruned</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.DaemonHealthSpec\">DaemonHealthSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephClusterHealthCheckSpec\">CephClusterHealthCheckSpec</a>) </p> <div> <p>DaemonHealthSpec is a daemon health check</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>status</code><br/> <em> <a href=\"#ceph.rook.io/v1.HealthCheckSpec\"> HealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Status represents the health check settings for the Ceph health</p> </td> </tr> <tr> <td> <code>mon</code><br/> <em> <a href=\"#ceph.rook.io/v1.HealthCheckSpec\"> HealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Monitor represents the health check settings for the Ceph monitor</p> </td> </tr> <tr> <td> <code>osd</code><br/> <em> <a href=\"#ceph.rook.io/v1.HealthCheckSpec\"> HealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> <p>ObjectStorageDaemon represents the 
health check settings for the Ceph OSDs</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.DashboardSpec\">DashboardSpec </h3> <p> (<em>Appears"
},
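{
"data": "CrashCollectorSpec above is referenced from the CephCluster spec as the <code>crashCollector</code> field; a short fragment sketch with illustrative values: <pre><code># fragment of a CephCluster spec
spec:
  crashCollector:
    disable: false      # keep the crash collector enabled
    daysToRetain: 30    # prune crash reports older than 30 days (illustrative)
</code></pre>"
},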
{
"data": "href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>DashboardSpec represents the settings for the Ceph dashboard</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enabled determines whether to enable the dashboard</p> </td> </tr> <tr> <td> <code>urlPrefix</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>URLPrefix is a prefix for all URLs to use the dashboard with a reverse proxy</p> </td> </tr> <tr> <td> <code>port</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Port is the dashboard webserver port</p> </td> </tr> <tr> <td> <code>ssl</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>SSL determines whether SSL should be used</p> </td> </tr> <tr> <td> <code>prometheusEndpoint</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Endpoint for the Prometheus host</p> </td> </tr> <tr> <td> <code>prometheusEndpointSSLVerify</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to verify the ssl endpoint for prometheus. Set to false for a self-signed cert.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Device\">Device </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.Selection\">Selection</a>) </p> <div> <p>Device represents a disk to use in the cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>fullpath</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>config</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.DeviceClasses\">DeviceClasses </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephStorage\">CephStorage</a>) </p> <div> <p>DeviceClasses represents device classes of a Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.DisruptionManagementSpec\">DisruptionManagementSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>DisruptionManagementSpec configures management of daemon disruptions</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>managePodBudgets</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>This enables management of poddisruptionbudgets</p> </td> </tr> <tr> <td> <code>osdMaintenanceTimeout</code><br/> <em> time.Duration </em> </td> <td> <em>(Optional)</em> <p>OSDMaintenanceTimeout sets how many additional minutes the DOWN/OUT interval is for drained failure domains it only works if managePodBudgets is true. the default is 30 minutes</p> </td> </tr> <tr> <td> <code>pgHealthCheckTimeout</code><br/> <em> time.Duration </em> </td> <td> <em>(Optional)</em> <p>PGHealthCheckTimeout is the time (in minutes) that the operator will wait for the placement groups to become healthy (active+clean) after a drain was completed and OSDs came back up. Rook will continue with the next drain if the timeout exceeds. It only works if managePodBudgets is true. 
No values or 0 means that the operator will wait until the placement groups are healthy before unblocking the next drain.</p> </td> </tr> <tr> <td> <code>pgHealthyRegex</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PgHealthyRegex is the regular expression that is used to determine which PG states should be considered healthy. The default is <code>^(active\\+clean|active\\+clean\\+scrubbing|active\\+clean\\+scrubbing\\+deep)$</code></p> </td> </tr> <tr> <td> <code>manageMachineDisruptionBudgets</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Deprecated. This enables management of machinedisruptionbudgets.</p> </td> </tr> <tr> <td> <code>machineDisruptionBudgetNamespace</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Deprecated. Namespace to look for MDBs by the machineDisruptionBudgetController</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.EncryptionSpec\">EncryptionSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ConnectionsSpec\">ConnectionsSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network. The default is not set. Even if encryption is not enabled, clients still establish a strong initial authentication for the connection and data integrity is still validated with a crc check. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.EndpointAddress\">EndpointAddress </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.GatewaySpec\">GatewaySpec</a>) </p> <div> <p>EndpointAddress is a tuple that describes a single IP address or host name. This is a subset of Kubernetes’s v1.EndpointAddress.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>ip</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The IP of this endpoint. As a legacy behavior, this supports being given a DNS-addressable hostname as well.</p> </td> </tr> <tr> <td> <code>hostname</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The DNS-addressable Hostname of this endpoint. This field will be preferred over IP if both are given.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ErasureCodedSpec\">ErasureCodedSpec </h3> <p> (<em>Appears"
},
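{
"data": "DashboardSpec, EncryptionSpec, and CompressionSpec above map onto the CephCluster spec roughly as in the sketch below; the <code>connections</code> nesting under <code>network</code> is assumed, since the NetworkSpec field list is not part of this excerpt, and the port and prefix values are illustrative. <pre><code># fragment of a CephCluster spec
spec:
  dashboard:
    enabled: true
    ssl: true
    port: 8443                   # dashboard webserver port (illustrative)
    urlPrefix: /ceph-dashboard   # prefix when served behind a reverse proxy (illustrative)
  network:
    connections:                 # assumed NetworkSpec field name for ConnectionsSpec
      encryption:
        enabled: false           # encrypt data in transit when true
      compression:
        enabled: false           # compress data in transit when true
      requireMsgr2: false        # when true, msgr1 (port 6789) is disabled and msgr2-capable kernels are required
</code></pre>"
},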
{
"data": "href=\"#ceph.rook.io/v1.PoolSpec\">PoolSpec</a>) </p> <div> <p>ErasureCodedSpec represents the spec for erasure code in a pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>codingChunks</code><br/> <em> uint </em> </td> <td> <p>Number of coding chunks per object in an erasure coded storage pool (required for erasure-coded pool type). This is the number of OSDs that can be lost simultaneously before data cannot be recovered.</p> </td> </tr> <tr> <td> <code>dataChunks</code><br/> <em> uint </em> </td> <td> <p>Number of data chunks per object in an erasure coded storage pool (required for erasure-coded pool type). The number of chunks required to recover an object when any single OSD is lost is the same as dataChunks so be aware that the larger the number of data chunks, the higher the cost of recovery.</p> </td> </tr> <tr> <td> <code>algorithm</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The algorithm for erasure coding</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ExternalSpec\">ExternalSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>ExternalSpec represents the options supported by an external cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enable</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enable determines whether external mode is enabled or not</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FSMirroringSpec\">FSMirroringSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemSpec\">FilesystemSpec</a>) </p> <div> <p>FSMirroringSpec represents the setting for a mirrored filesystem</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enabled whether this filesystem is mirrored or not</p> </td> </tr> <tr> <td> <code>peers</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirroringPeerSpec\"> MirroringPeerSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Peers represents the peers spec</p> </td> </tr> <tr> <td> <code>snapshotSchedules</code><br/> <em> <a href=\"#ceph.rook.io/v1.SnapshotScheduleSpec\"> []SnapshotScheduleSpec </a> </em> </td> <td> <em>(Optional)</em> <p>SnapshotSchedules is the scheduling of snapshot for mirrored filesystems</p> </td> </tr> <tr> <td> <code>snapshotRetention</code><br/> <em> <a href=\"#ceph.rook.io/v1.SnapshotScheduleRetentionSpec\"> []SnapshotScheduleRetentionSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Retention is the retention policy for a snapshot schedule One path has exactly one retention policy. 
A policy can however contain multiple count-time period pairs in order to specify complex retention policies</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemMirrorInfoPeerSpec\">FilesystemMirrorInfoPeerSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemsSpec\">FilesystemsSpec</a>) </p> <div> <p>FilesystemMirrorInfoPeerSpec is the specification of a filesystem peer mirror</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>uuid</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>UUID is the peer unique identifier</p> </td> </tr> <tr> <td> <code>remote</code><br/> <em> <a href=\"#ceph.rook.io/v1.PeerRemoteSpec\"> PeerRemoteSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Remote are the remote cluster information</p> </td> </tr> <tr> <td> <code>stats</code><br/> <em> <a href=\"#ceph.rook.io/v1.PeerStatSpec\"> PeerStatSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Stats are the stat a peer mirror</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemMirroringInfo\">FilesystemMirroringInfo </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemMirroringInfoSpec\">FilesystemMirroringInfoSpec</a>) </p> <div> <p>FilesystemMirrorInfoSpec is the filesystem mirror status of a given filesystem</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>daemon_id</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>DaemonID is the cephfs-mirror name</p> </td> </tr> <tr> <td> <code>filesystems</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemsSpec\"> []FilesystemsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Filesystems is the list of filesystems managed by a given cephfs-mirror daemon</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemMirroringInfoSpec\">FilesystemMirroringInfoSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephFilesystemStatus\">CephFilesystemStatus</a>) </p> <div> <p>FilesystemMirroringInfo is the status of the pool mirroring</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>daemonsStatus</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemMirroringInfo\"> []FilesystemMirroringInfo </a> </em> </td> <td> <em>(Optional)</em> <p>PoolMirroringStatus is the mirroring status of a filesystem</p> </td> </tr> <tr> <td> <code>lastChecked</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChecked is the last time time the status was checked</p> </td> </tr> <tr> <td> <code>lastChanged</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChanged is the last time time the status last changed</p> </td> </tr> <tr> <td> <code>details</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Details contains potential status errors</p> </td> </tr> </tbody> </table> <h3"
},
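{
"data": "ErasureCodedSpec above is what an erasure-coded pool refers to; a sketch, assuming the PoolSpec key is named <code>erasureCoded</code> and that the CephBlockPool spec embeds PoolSpec (neither is shown in this excerpt): <pre><code>apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ec-pool            # illustrative
  namespace: rook-ceph     # illustrative
spec:
  erasureCoded:            # assumed PoolSpec field name
    dataChunks: 2          # chunks an object is split into; also the number needed to recover it
    codingChunks: 1        # number of OSDs that can be lost simultaneously before data is unrecoverable
</code></pre>"
},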
{
"data": "</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephFilesystemMirror\">CephFilesystemMirror</a>) </p> <div> <p>FilesystemMirroringSpec is the filesystem mirroring specification</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>The affinity to place the rgw pods (default is to place on any available node)</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.Labels\"> Labels </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>The resource requirements for the cephfs-mirror pods</p> </td> </tr> <tr> <td> <code>priorityClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PriorityClassName sets priority class on the cephfs-mirror pods</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemSnapshotScheduleStatusRetention\">FilesystemSnapshotScheduleStatusRetention </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemSnapshotSchedulesSpec\">FilesystemSnapshotSchedulesSpec</a>) </p> <div> <p>FilesystemSnapshotScheduleStatusRetention is the retention specification for a filesystem snapshot schedule</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>start</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Start is when the snapshot schedule starts</p> </td> </tr> <tr> <td> <code>created</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Created is when the snapshot schedule was created</p> </td> </tr> <tr> <td> <code>first</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>First is when the first snapshot schedule was taken</p> </td> </tr> <tr> <td> <code>last</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Last is when the last snapshot schedule was taken</p> </td> </tr> <tr> <td> <code>last_pruned</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastPruned is when the last snapshot schedule was pruned</p> </td> </tr> <tr> <td> <code>created_count</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>CreatedCount is total amount of snapshots</p> </td> </tr> <tr> <td> <code>pruned_count</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>PrunedCount is total amount of pruned snapshots</p> </td> </tr> <tr> <td> <code>active</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Active is whether the scheduled is active or not</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemSnapshotScheduleStatusSpec\">FilesystemSnapshotScheduleStatusSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephFilesystemStatus\">CephFilesystemStatus</a>) </p> <div> <p>FilesystemSnapshotScheduleStatusSpec is the status of the snapshot schedule</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> 
<tbody> <tr> <td> <code>snapshotSchedules</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemSnapshotSchedulesSpec\"> []FilesystemSnapshotSchedulesSpec </a> </em> </td> <td> <em>(Optional)</em> <p>SnapshotSchedules is the list of snapshots scheduled</p> </td> </tr> <tr> <td> <code>lastChecked</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChecked is the last time time the status was checked</p> </td> </tr> <tr> <td> <code>lastChanged</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChanged is the last time time the status last changed</p> </td> </tr> <tr> <td> <code>details</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Details contains potential status errors</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemSnapshotSchedulesSpec\">FilesystemSnapshotSchedulesSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemSnapshotScheduleStatusSpec\">FilesystemSnapshotScheduleStatusSpec</a>) </p> <div> <p>FilesystemSnapshotSchedulesSpec is the list of snapshot scheduled for images in a pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>fs</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Fs is the name of the Ceph Filesystem</p> </td> </tr> <tr> <td> <code>subvol</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Subvol is the name of the sub volume</p> </td> </tr> <tr> <td> <code>path</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Path is the path on the filesystem</p> </td> </tr> <tr> <td> <code>rel_path</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>schedule</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>retention</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemSnapshotScheduleStatusRetention\"> FilesystemSnapshotScheduleStatusRetention </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemSpec\">FilesystemSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephFilesystem\">CephFilesystem</a>) </p> <div> <p>FilesystemSpec represents the spec of a file system</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>metadataPool</code><br/> <em>"
},
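{
"data": "FilesystemMirroringSpec above is the specification of the CephFilesystemMirror resource; a minimal sketch with illustrative resource and priority-class values: <pre><code>apiVersion: ceph.rook.io/v1
kind: CephFilesystemMirror
metadata:
  name: my-fs-mirror       # illustrative
  namespace: rook-ceph     # illustrative
spec:
  priorityClassName: system-cluster-critical   # illustrative priority class for the cephfs-mirror pods
  resources:
    limits:
      memory: 256Mi        # illustrative
</code></pre>"
},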
{
"data": "href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p>The metadata pool settings</p> </td> </tr> <tr> <td> <code>dataPools</code><br/> <em> <a href=\"#ceph.rook.io/v1.NamedPoolSpec\"> []NamedPoolSpec </a> </em> </td> <td> <p>The data pool settings, with optional predefined pool name.</p> </td> </tr> <tr> <td> <code>preservePoolsOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve pools on filesystem deletion</p> </td> </tr> <tr> <td> <code>preserveFilesystemOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve the fs in the cluster on CephFilesystem CR deletion. Setting this to true automatically implies PreservePoolsOnDelete is true.</p> </td> </tr> <tr> <td> <code>metadataServer</code><br/> <em> <a href=\"#ceph.rook.io/v1.MetadataServerSpec\"> MetadataServerSpec </a> </em> </td> <td> <p>The mds pod info</p> </td> </tr> <tr> <td> <code>mirroring</code><br/> <em> <a href=\"#ceph.rook.io/v1.FSMirroringSpec\"> FSMirroringSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The mirroring settings</p> </td> </tr> <tr> <td> <code>statusCheck</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirrorHealthCheckSpec\"> MirrorHealthCheckSpec </a> </em> </td> <td> <p>The mirroring statusCheck</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.FilesystemsSpec\">FilesystemsSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemMirroringInfo\">FilesystemMirroringInfo</a>) </p> <div> <p>FilesystemsSpec is spec for the mirrored filesystem</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>filesystem_id</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>FilesystemID is the filesystem identifier</p> </td> </tr> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Name is name of the filesystem</p> </td> </tr> <tr> <td> <code>directory_count</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>DirectoryCount is the number of directories in the filesystem</p> </td> </tr> <tr> <td> <code>peers</code><br/> <em> <a href=\"#ceph.rook.io/v1.FilesystemMirrorInfoPeerSpec\"> []FilesystemMirrorInfoPeerSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Peers represents the mirroring peers</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.GaneshaRADOSSpec\">GaneshaRADOSSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NFSGaneshaSpec\">NFSGaneshaSpec</a>) </p> <div> <p>GaneshaRADOSSpec represents the specification of a Ganesha RADOS object</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>pool</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The Ceph pool used store the shared configuration for NFS-Ganesha daemons. This setting is deprecated, as it is internally required to be “.nfs”.</p> </td> </tr> <tr> <td> <code>namespace</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The namespace inside the Ceph pool (set by ‘pool’) where shared NFS-Ganesha config is stored. 
This setting is deprecated as it is internally set to the name of the CephNFS.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.GaneshaServerSpec\">GaneshaServerSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NFSGaneshaSpec\">NFSGaneshaSpec</a>) </p> <div> <p>GaneshaServerSpec represents the specification of a Ganesha Server</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>active</code><br/> <em> int </em> </td> <td> <p>The number of active Ganesha servers</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>The affinity to place the ganesha pods</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.Labels\"> Labels </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>Resources set resource requests and limits</p> </td> </tr> <tr> <td> <code>priorityClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PriorityClassName sets the priority class on the pods</p> </td> </tr> <tr> <td> <code>logLevel</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LogLevel set logging level</p> </td> </tr> <tr> <td> <code>hostNetwork</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether host networking is enabled for the Ganesha server. If not set, the network settings from the cluster CR will be applied.</p> </td> </tr> <tr> <td> <code>livenessProbe</code><br/> <em> <a href=\"#ceph.rook.io/v1.ProbeSpec\"> ProbeSpec </a> </em> </td> <td> <em>(Optional)</em> <p>A liveness-probe to verify that Ganesha server has valid run-time state. If LivenessProbe.Disabled is false and LivenessProbe.Probe is nil uses default probe.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.GatewaySpec\">GatewaySpec </h3> <p> (<em>Appears"
},
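As an illustrative sketch of how the GaneshaServerSpec fields above are typically set on a CephNFS resource (names and values are hypothetical; the deprecated `rados` settings are omitted):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs                   # hypothetical name
  namespace: rook-ceph
spec:
  server:                        # GaneshaServerSpec
    active: 2                    # number of active Ganesha servers
    priorityClassName: system-cluster-critical   # hypothetical priority class
    resources:                   # Kubernetes resource requests/limits
      limits:
        memory: "2Gi"
      requests:
        cpu: "500m"
        memory: "1Gi"
```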
{
"data": "href=\"#ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec</a>) </p> <div> <p>GatewaySpec represents the specification of Ceph Object Store Gateway</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>port</code><br/> <em> int32 </em> </td> <td> <em>(Optional)</em> <p>The port the rgw service will be listening on (http)</p> </td> </tr> <tr> <td> <code>securePort</code><br/> <em> int32 </em> </td> <td> <em>(Optional)</em> <p>The port the rgw service will be listening on (https)</p> </td> </tr> <tr> <td> <code>instances</code><br/> <em> int32 </em> </td> <td> <em>(Optional)</em> <p>The number of pods in the rgw replicaset.</p> </td> </tr> <tr> <td> <code>sslCertificateRef</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The name of the secret that stores the ssl certificate for secure rgw connections</p> </td> </tr> <tr> <td> <code>caBundleRef</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The name of the secret that stores custom ca-bundle with root and intermediate certificates.</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>The affinity to place the rgw pods (default is to place on any available node)</p> </td> </tr> <tr> <td> <code>disableMultisiteSyncTraffic</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>DisableMultisiteSyncTraffic, when true, prevents this object store’s gateways from transmitting multisite replication data. Note that this value does not affect whether gateways receive multisite replication traffic: see ObjectZone.spec.customEndpoints for that. If false or unset, this object store’s gateways will be able to transmit multisite replication data.</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.Labels\"> Labels </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>The resource requirements for the rgw pods</p> </td> </tr> <tr> <td> <code>priorityClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PriorityClassName sets priority classes on the rgw pods</p> </td> </tr> <tr> <td> <code>externalRgwEndpoints</code><br/> <em> <a href=\"#ceph.rook.io/v1.EndpointAddress\"> []EndpointAddress </a> </em> </td> <td> <em>(Optional)</em> <p>ExternalRgwEndpoints points to external RGW endpoint(s). 
Multiple endpoints can be given, but for stability of ObjectBucketClaims, we highly recommend that users give only a single external RGW endpoint that is a load balancer that sends requests to the multiple RGWs.</p> </td> </tr> <tr> <td> <code>service</code><br/> <em> <a href=\"#ceph.rook.io/v1.RGWServiceSpec\"> RGWServiceSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The configuration related to add/set on each rgw service.</p> </td> </tr> <tr> <td> <code>hostNetwork</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether host networking is enabled for the rgw daemon. If not set, the network settings from the cluster CR will be applied.</p> </td> </tr> <tr> <td> <code>dashboardEnabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether rgw dashboard is enabled for the rgw daemon. If not set, the rgw dashboard will be enabled.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.HTTPEndpointSpec\">HTTPEndpointSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.TopicEndpointSpec\">TopicEndpointSpec</a>) </p> <div> <p>HTTPEndpointSpec represent the spec of an HTTP endpoint of a Bucket Topic</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>uri</code><br/> <em> string </em> </td> <td> <p>The URI of the HTTP endpoint to push notification to</p> </td> </tr> <tr> <td> <code>disableVerifySSL</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Indicate whether the server certificate is validated by the client or not</p> </td> </tr> <tr> <td> <code>sendCloudEvents</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Send the notifications with the CloudEvents header: <a href=\"https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md\">https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md</a></p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.HealthCheckSpec\">HealthCheckSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.DaemonHealthSpec\">DaemonHealthSpec</a>, <a href=\"#ceph.rook.io/v1.MirrorHealthCheckSpec\">MirrorHealthCheckSpec</a>) </p> <div> <p>HealthCheckSpec represents the health check of an object store bucket</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>disabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>interval</code><br/> <em> <a href=\"https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration\"> Kubernetes"
},
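The GatewaySpec fields above correspond to the `gateway` section of a CephObjectStore. A minimal, hypothetical sketch (secret name and counts are placeholders, and the pool sections documented later in this reference are omitted):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store                 # hypothetical name
  namespace: rook-ceph
spec:
  gateway:                       # GatewaySpec
    port: 80                     # plain HTTP listener
    securePort: 443              # HTTPS listener, served with the cert below
    sslCertificateRef: my-store-tls    # hypothetical Secret holding the TLS certificate
    instances: 2                 # pods in the rgw replicaset
    disableMultisiteSyncTraffic: false # this store may transmit multisite data
```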
{
"data": "</a> </em> </td> <td> <em>(Optional)</em> <p>Interval is the internal in second or minute for the health check to run like 60s for 60 seconds</p> </td> </tr> <tr> <td> <code>timeout</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.HybridStorageSpec\">HybridStorageSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ReplicatedSpec\">ReplicatedSpec</a>) </p> <div> <p>HybridStorageSpec represents the settings for hybrid storage pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>primaryDeviceClass</code><br/> <em> string </em> </td> <td> <p>PrimaryDeviceClass represents high performance tier (for example SSD or NVME) for Primary OSD</p> </td> </tr> <tr> <td> <code>secondaryDeviceClass</code><br/> <em> string </em> </td> <td> <p>SecondaryDeviceClass represents low performance tier (for example HDDs) for remaining OSDs</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.IPFamilyType\">IPFamilyType (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NetworkSpec\">NetworkSpec</a>) </p> <div> <p>IPFamilyType represents the single stack Ipv4 or Ipv6 protocol.</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"IPv4"</p></td> <td><p>IPv4 internet protocol version</p> </td> </tr><tr><td><p>"IPv6"</p></td> <td><p>IPv6 internet protocol version</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.KafkaEndpointSpec\">KafkaEndpointSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.TopicEndpointSpec\">TopicEndpointSpec</a>) </p> <div> <p>KafkaEndpointSpec represent the spec of a Kafka endpoint of a Bucket Topic</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>uri</code><br/> <em> string </em> </td> <td> <p>The URI of the Kafka endpoint to push notification to</p> </td> </tr> <tr> <td> <code>useSSL</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Indicate whether to use SSL when communicating with the broker</p> </td> </tr> <tr> <td> <code>disableVerifySSL</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Indicate whether the server certificate is validated by the client or not</p> </td> </tr> <tr> <td> <code>ackLevel</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The ack level required for this topic (none/broker)</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.KerberosConfigFiles\">KerberosConfigFiles </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.KerberosSpec\">KerberosSpec</a>) </p> <div> <p>KerberosConfigFiles represents the source(s) from which Kerberos configuration should come.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>volumeSource</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConfigFileVolumeSource\"> ConfigFileVolumeSource </a> </em> </td> <td> <p>VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for Kerberos configuration files like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. 
The volume may contain multiple files, all of which will be loaded.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.KerberosKeytabFile\">KerberosKeytabFile </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.KerberosSpec\">KerberosSpec</a>) </p> <div> <p>KerberosKeytabFile represents the source(s) from which the Kerberos keytab file should come.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>volumeSource</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConfigFileVolumeSource\"> ConfigFileVolumeSource </a> </em> </td> <td> <p>VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the Kerberos keytab file like what is normally used to configure Volumes for a Pod. For example, a Secret or HostPath. There are two requirements for the source’s content: The config file must be mountable via <code>subPath: krb5.keytab</code>. For example, in a Secret, the data item must be named <code>krb5.keytab</code>, or <code>items</code> must be defined to select the key and give it path <code>krb5.keytab</code>. A HostPath directory must have the <code>krb5.keytab</code> file. The volume or config file must have mode 0600.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.KerberosSpec\">KerberosSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NFSSecuritySpec\">NFSSecuritySpec</a>) </p> <div> <p>KerberosSpec represents configuration for Kerberos.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>principalName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PrincipalName corresponds directly to NFS-Ganesha’s NFS_KRB5:PrincipalName config. In practice, this is the service prefix of the principal name. The default is “nfs”. This value is combined with (a) the namespace and name of the CephNFS (with a hyphen between) and (b) the Realm configured in the user-provided krb5.conf to determine the full principal name: <principalName>/<namespace>-<name>@<realm>. e.g., nfs/[email protected]. See <a href=\"https://github.com/nfs-ganesha/nfs-ganesha/wiki/RPCSECGSS\">https://github.com/nfs-ganesha/nfs-ganesha/wiki/RPCSECGSS</a> for more"
},
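HybridStorageSpec appears under ReplicatedSpec, so it is configured inside a pool's `replicated` section. A hypothetical CephBlockPool sketch, assuming the `hybridStorage` field name from the ReplicatedSpec table documented elsewhere in this reference:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: hybrid-pool              # hypothetical name
  namespace: rook-ceph
spec:
  replicated:
    size: 3
    hybridStorage:               # HybridStorageSpec
      primaryDeviceClass: ssd    # high-performance tier for the primary OSD
      secondaryDeviceClass: hdd  # lower-performance tier for the remaining OSDs
```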
{
"data": "</td> </tr> <tr> <td> <code>domainName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>DomainName should be set to the Kerberos Realm.</p> </td> </tr> <tr> <td> <code>configFiles</code><br/> <em> <a href=\"#ceph.rook.io/v1.KerberosConfigFiles\"> KerberosConfigFiles </a> </em> </td> <td> <em>(Optional)</em> <p>ConfigFiles defines where the Kerberos configuration should be sourced from. Config files will be placed into the <code>/etc/krb5.conf.rook/</code> directory.</p> <p>If this is left empty, Rook will not add any files. This allows you to manage the files yourself however you wish. For example, you may build them into your custom Ceph container image or use the Vault agent injector to securely add the files via annotations on the CephNFS spec (passed to the NFS server pods).</p> <p>Rook configures Kerberos to log to stderr. We suggest removing logging sections from config files to avoid consuming unnecessary disk space from logging to files.</p> </td> </tr> <tr> <td> <code>keytabFile</code><br/> <em> <a href=\"#ceph.rook.io/v1.KerberosKeytabFile\"> KerberosKeytabFile </a> </em> </td> <td> <em>(Optional)</em> <p>KeytabFile defines where the Kerberos keytab should be sourced from. The keytab file will be placed into <code>/etc/krb5.keytab</code>. If this is left empty, Rook will not add the file. This allows you to manage the <code>krb5.keytab</code> file yourself however you wish. For example, you may build it into your custom Ceph container image or use the Vault agent injector to securely add the file via annotations on the CephNFS spec (passed to the NFS server pods).</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.KeyManagementServiceSpec\">KeyManagementServiceSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreSecuritySpec\">ObjectStoreSecuritySpec</a>, <a href=\"#ceph.rook.io/v1.SecuritySpec\">SecuritySpec</a>) </p> <div> <p>KeyManagementServiceSpec represent various details of the KMS server</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>connectionDetails</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> <p>ConnectionDetails contains the KMS connection details (address, port etc)</p> </td> </tr> <tr> <td> <code>tokenSecretName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>TokenSecretName is the kubernetes secret containing the KMS token</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.KeyRotationSpec\">KeyRotationSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SecuritySpec\">SecuritySpec</a>) </p> <div> <p>KeyRotationSpec represents the settings for Key Rotation.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enabled represents whether the key rotation is enabled.</p> </td> </tr> <tr> <td> <code>schedule</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Schedule represents the cron schedule for key rotation.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.KeyType\">KeyType (<code>string</code> alias)</h3> <div> <p>KeyType type safety</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"exporter"</p></td> <td></td> </tr><tr><td><p>"cleanup"</p></td> <td></td> </tr><tr><td><p>"clusterMetadata"</p></td> <td></td> </tr><tr><td><p>"crashcollector"</p></td> <td></td> </tr><tr><td><p>"dashboard"</p></td> 
<td></td> </tr><tr><td><p>"mds"</p></td> <td></td> </tr><tr><td><p>"mgr"</p></td> <td></td> </tr><tr><td><p>"mon"</p></td> <td></td> </tr><tr><td><p>"arbiter"</p></td> <td></td> </tr><tr><td><p>"monitoring"</p></td> <td></td> </tr><tr><td><p>"osd"</p></td> <td></td> </tr><tr><td><p>"prepareosd"</p></td> <td></td> </tr><tr><td><p>"rgw"</p></td> <td></td> </tr><tr><td><p>"keyrotation"</p></td> <td></td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.Labels\">Labels (<code>map[string]string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemMirroringSpec\">FilesystemMirroringSpec</a>, <a href=\"#ceph.rook.io/v1.GaneshaServerSpec\">GaneshaServerSpec</a>, <a href=\"#ceph.rook.io/v1.GatewaySpec\">GatewaySpec</a>, <a href=\"#ceph.rook.io/v1.MetadataServerSpec\">MetadataServerSpec</a>, <a href=\"#ceph.rook.io/v1.RBDMirroringSpec\">RBDMirroringSpec</a>) </p> <div> <p>Labels are labels for a given daemon</p> </div> <h3 id=\"ceph.rook.io/v1.LabelsSpec\">LabelsSpec (<code>map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Labels</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>LabelsSpec is the main spec label for all daemons</p> </div> <h3 id=\"ceph.rook.io/v1.LogCollectorSpec\">LogCollectorSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>LogCollectorSpec is the logging spec</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enabled represents whether the log collector is enabled</p> </td> </tr> <tr> <td> <code>periodicity</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Periodicity is the periodicity of the log rotation.</p> </td> </tr> <tr> <td> <code>maxLogSize</code><br/> <em> k8s.io/apimachinery/pkg/api/resource.Quantity </em> </td> <td> <em>(Optional)</em> <p>MaxLogSize is the maximum size of the log per ceph daemon. Must be at least 1M.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MetadataServerSpec\">MetadataServerSpec </h3> <p> (<em>Appears
},
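The KerberosConfigFiles, KerberosKeytabFile, and KerberosSpec entries above map onto the `security.kerberos` section of a CephNFS resource. A hypothetical sketch, assuming a ConfigMap and a Secret as the pared-down volume sources (the realm, ConfigMap, and Secret names are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs                   # hypothetical name
  namespace: rook-ceph
spec:
  security:                      # NFSSecuritySpec
    kerberos:                    # KerberosSpec
      principalName: nfs         # service prefix of the principal name (default "nfs")
      domainName: EXAMPLE.NET    # hypothetical Kerberos realm
      configFiles:               # files are mounted under /etc/krb5.conf.rook/
        volumeSource:
          configMap:
            name: krb5-conf      # hypothetical ConfigMap with krb5 configuration
      keytabFile:                # placed at /etc/krb5.keytab
        volumeSource:
          secret:
            secretName: krb5-keytab   # hypothetical Secret with a "krb5.keytab" item
            defaultMode: 0600    # the keytab must have mode 0600
```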
{
"data": "href=\"#ceph.rook.io/v1.FilesystemSpec\">FilesystemSpec</a>) </p> <div> <p>MetadataServerSpec represents the specification of a Ceph Metadata Server</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>activeCount</code><br/> <em> int32 </em> </td> <td> <p>The number of metadata servers that are active. The remaining servers in the cluster will be in standby mode.</p> </td> </tr> <tr> <td> <code>activeStandby</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover. If false, standbys will still be available, but will not have a warm metadata cache.</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>The affinity to place the mds pods (default is to place on all available node) with a daemonset</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.Labels\"> Labels </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>The resource requirements for the mds pods</p> </td> </tr> <tr> <td> <code>priorityClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PriorityClassName sets priority classes on components</p> </td> </tr> <tr> <td> <code>livenessProbe</code><br/> <em> <a href=\"#ceph.rook.io/v1.ProbeSpec\"> ProbeSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>startupProbe</code><br/> <em> <a href=\"#ceph.rook.io/v1.ProbeSpec\"> ProbeSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MgrSpec\">MgrSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>MgrSpec represents options to configure a ceph mgr</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>count</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Count is the number of manager daemons to run</p> </td> </tr> <tr> <td> <code>allowMultiplePerNode</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>AllowMultiplePerNode allows to run multiple managers on the same node (not recommended)</p> </td> </tr> <tr> <td> <code>modules</code><br/> <em> <a href=\"#ceph.rook.io/v1.Module\"> []Module </a> </em> </td> <td> <em>(Optional)</em> <p>Modules is the list of ceph manager modules to enable/disable</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MirrorHealthCheckSpec\">MirrorHealthCheckSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemSpec\">FilesystemSpec</a>, <a href=\"#ceph.rook.io/v1.PoolSpec\">PoolSpec</a>) </p> <div> <p>MirrorHealthCheckSpec represents the health specification of a Ceph Storage Pool mirror</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>mirror</code><br/> <em> <a 
href=\"#ceph.rook.io/v1.HealthCheckSpec\"> HealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MirroringInfoSpec\">MirroringInfoSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPoolStatus\">CephBlockPoolStatus</a>) </p> <div> <p>MirroringInfoSpec is the status of the pool mirroring</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>PoolMirroringInfo</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolMirroringInfo\"> PoolMirroringInfo </a> </em> </td> <td> <p> (Members of <code>PoolMirroringInfo</code> are embedded into this type.) </p> <em>(Optional)</em> </td> </tr> <tr> <td> <code>lastChecked</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>lastChanged</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>details</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MirroringPeerSpec\">MirroringPeerSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FSMirroringSpec\">FSMirroringSpec</a>, <a href=\"#ceph.rook.io/v1.MirroringSpec\">MirroringSpec</a>, <a href=\"#ceph.rook.io/v1.RBDMirroringSpec\">RBDMirroringSpec</a>) </p> <div> <p>MirroringPeerSpec represents the specification of a mirror peer</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>secretNames</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> <p>SecretNames represents the Kubernetes Secret names to add rbd-mirror or cephfs-mirror peers</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MirroringSpec\">MirroringSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.PoolSpec\">PoolSpec</a>) </p> <div> <p>MirroringSpec represents the setting for a mirrored pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enabled whether this pool is mirrored or not</p> </td> </tr> <tr> <td> <code>mode</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Mode is the mirroring mode: either pool or image</p> </td> </tr> <tr> <td> <code>snapshotSchedules</code><br/> <em> <a href=\"#ceph.rook.io/v1.SnapshotScheduleSpec\"> []SnapshotScheduleSpec </a> </em> </td> <td> <em>(Optional)</em> <p>SnapshotSchedules is the scheduling of snapshot for mirrored images/pools</p> </td> </tr> <tr> <td> <code>peers</code><br/> <em>"
},
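MgrSpec appears on ClusterSpec, i.e. the manager section of a CephCluster resource. A hypothetical sketch; the `mgr` field name and the module name below are assumptions based on common Rook usage, not taken from the tables above:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph                # hypothetical name
  namespace: rook-ceph
spec:
  mgr:                           # MgrSpec (assumed ClusterSpec field name)
    count: 2                     # number of manager daemons
    allowMultiplePerNode: false  # keep mgrs on separate nodes
    modules:                     # []Module to enable or disable
      - name: pg_autoscaler      # illustrative module name
        enabled: true
```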
{
"data": "href=\"#ceph.rook.io/v1.MirroringPeerSpec\"> MirroringPeerSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Peers represents the peers spec</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MirroringStatusSpec\">MirroringStatusSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPoolStatus\">CephBlockPoolStatus</a>) </p> <div> <p>MirroringStatusSpec is the status of the pool mirroring</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>PoolMirroringStatus</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolMirroringStatus\"> PoolMirroringStatus </a> </em> </td> <td> <p> (Members of <code>PoolMirroringStatus</code> are embedded into this type.) </p> <em>(Optional)</em> <p>PoolMirroringStatus is the mirroring status of a pool</p> </td> </tr> <tr> <td> <code>lastChecked</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChecked is the last time time the status was checked</p> </td> </tr> <tr> <td> <code>lastChanged</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChanged is the last time time the status last changed</p> </td> </tr> <tr> <td> <code>details</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Details contains potential status errors</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Module\">Module </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.MgrSpec\">MgrSpec</a>) </p> <div> <p>Module represents mgr modules that the user wants to enable or disable</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Name is the name of the ceph manager module</p> </td> </tr> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enabled determines whether a module should be enabled or not</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MonSpec\">MonSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>MonSpec represents the specification of the monitor</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>count</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Count is the number of Ceph monitors</p> </td> </tr> <tr> <td> <code>allowMultiplePerNode</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>AllowMultiplePerNode determines if we can run multiple monitors on the same node (not recommended)</p> </td> </tr> <tr> <td> <code>failureDomainLabel</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>zones</code><br/> <em> <a href=\"#ceph.rook.io/v1.MonZoneSpec\"> []MonZoneSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Zones are specified when we want to provide zonal awareness to mons</p> </td> </tr> <tr> <td> <code>stretchCluster</code><br/> <em> <a href=\"#ceph.rook.io/v1.StretchClusterSpec\"> StretchClusterSpec </a> </em> </td> <td> <em>(Optional)</em> <p>StretchCluster is the stretch cluster specification</p> </td> </tr> <tr> <td> <code>volumeClaimTemplate</code><br/> <em> <a href=\"#ceph.rook.io/v1.VolumeClaimTemplate\"> VolumeClaimTemplate </a> </em> </td> <td> <em>(Optional)</em> <p>VolumeClaimTemplate is the PVC definition</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MonZoneSpec\">MonZoneSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.MonSpec\">MonSpec</a>, <a 
href=\"#ceph.rook.io/v1.StretchClusterSpec\">StretchClusterSpec</a>) </p> <div> <p>MonZoneSpec represents the specification of a zone in a Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Name is the name of the zone</p> </td> </tr> <tr> <td> <code>arbiter</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Arbiter determines if the zone contains the arbiter used for stretch cluster mode</p> </td> </tr> <tr> <td> <code>volumeClaimTemplate</code><br/> <em> <a href=\"#ceph.rook.io/v1.VolumeClaimTemplate\"> VolumeClaimTemplate </a> </em> </td> <td> <em>(Optional)</em> <p>VolumeClaimTemplate is the PVC template</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MonitoringSpec\">MonitoringSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>MonitoringSpec represents the settings for Prometheus based Ceph monitoring</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enabled determines whether to create the prometheus rules for the ceph cluster. If true, the prometheus types must exist or the creation will fail. Default is false.</p> </td> </tr> <tr> <td> <code>metricsDisabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled. If true, the prometheus mgr module and Ceph exporter are both disabled. Default is false.</p> </td> </tr> <tr> <td> <code>externalMgrEndpoints</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#endpointaddress-v1-core\"> []Kubernetes"
},
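The MirroringSpec and MirroringPeerSpec entries above are set under a pool's `mirroring` section, for example on a CephBlockPool. A hypothetical sketch; the `interval` field of SnapshotScheduleSpec and the peer Secret name are assumptions, and SnapshotScheduleSpec is documented elsewhere in this reference:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirrored-pool            # hypothetical name
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:                     # MirroringSpec
    enabled: true
    mode: image                  # "pool" or "image"
    snapshotSchedules:           # []SnapshotScheduleSpec
      - interval: 24h            # assumed field, see SnapshotScheduleSpec
    peers:                       # MirroringPeerSpec
      secretNames:
        - peer-cluster-token     # hypothetical bootstrap peer Secret
```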
{
"data": "</a> </em> </td> <td> <em>(Optional)</em> <p>ExternalMgrEndpoints points to an existing Ceph prometheus exporter endpoint</p> </td> </tr> <tr> <td> <code>externalMgrPrometheusPort</code><br/> <em> uint16 </em> </td> <td> <em>(Optional)</em> <p>ExternalMgrPrometheusPort Prometheus exporter port</p> </td> </tr> <tr> <td> <code>port</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Port is the prometheus server port</p> </td> </tr> <tr> <td> <code>interval</code><br/> <em> <a href=\"https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Duration\"> Kubernetes meta/v1.Duration </a> </em> </td> <td> <em>(Optional)</em> <p>Interval determines prometheus scrape interval</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.MultiClusterServiceSpec\">MultiClusterServiceSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NetworkSpec\">NetworkSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enable multiClusterService to export the mon and OSD services to peer cluster. Ensure that peer clusters are connected using an MCS API compatible application, like Globalnet Submariner.</p> </td> </tr> <tr> <td> <code>clusterID</code><br/> <em> string </em> </td> <td> <p>ClusterID uniquely identifies a cluster. It is used as a prefix to nslookup exported services. For example: <clusterid>.<svc>.<ns>.svc.clusterset.local</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NFSGaneshaSpec\">NFSGaneshaSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephNFS\">CephNFS</a>) </p> <div> <p>NFSGaneshaSpec represents the spec of an nfs ganesha server</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>rados</code><br/> <em> <a href=\"#ceph.rook.io/v1.GaneshaRADOSSpec\"> GaneshaRADOSSpec </a> </em> </td> <td> <em>(Optional)</em> <p>RADOS is the Ganesha RADOS specification</p> </td> </tr> <tr> <td> <code>server</code><br/> <em> <a href=\"#ceph.rook.io/v1.GaneshaServerSpec\"> GaneshaServerSpec </a> </em> </td> <td> <p>Server is the Ganesha Server specification</p> </td> </tr> <tr> <td> <code>security</code><br/> <em> <a href=\"#ceph.rook.io/v1.NFSSecuritySpec\"> NFSSecuritySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Security allows specifying security configurations for the NFS cluster</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NFSSecuritySpec\">NFSSecuritySpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NFSGaneshaSpec\">NFSGaneshaSpec</a>) </p> <div> <p>NFSSecuritySpec represents security configurations for an NFS server pod</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>sssd</code><br/> <em> <a href=\"#ceph.rook.io/v1.SSSDSpec\"> SSSDSpec </a> </em> </td> <td> <em>(Optional)</em> <p>SSSD enables integration with System Security Services Daemon (SSSD). SSSD can be used to provide user ID mapping from a number of sources. 
See <a href=\"https://sssd.io\">https://sssd.io</a> for more information about the SSSD project.</p> </td> </tr> <tr> <td> <code>kerberos</code><br/> <em> <a href=\"#ceph.rook.io/v1.KerberosSpec\"> KerberosSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Kerberos configures NFS-Ganesha to secure NFS client connections with Kerberos.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NamedBlockPoolSpec\">NamedBlockPoolSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBlockPool\">CephBlockPool</a>) </p> <div> <p>NamedBlockPoolSpec allows a block pool to be created with a non-default name. This is more specific than the NamedPoolSpec so we get schema validation on the allowed pool names that can be specified.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The desired name of the pool if different from the CephBlockPool CR name.</p> </td> </tr> <tr> <td> <code>PoolSpec</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p> (Members of <code>PoolSpec</code> are embedded into this type.) </p> <p>The core pool configuration</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NamedPoolSpec\">NamedPoolSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemSpec\">FilesystemSpec</a>) </p> <div> <p>NamedPoolSpec represents the named ceph pool spec</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name of the pool</p> </td> </tr> <tr> <td> <code>PoolSpec</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p> (Members of <code>PoolSpec</code> are embedded into this type.) </p> <p>PoolSpec represents the spec of ceph pool</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NetworkProviderType\">NetworkProviderType (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NetworkSpec\">NetworkSpec</a>) </p> <div> <p>NetworkProviderType defines valid network providers for Rook.</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>""</p></td> <td></td> </tr><tr><td><p>"host"</p></td> <td></td> </tr><tr><td><p>"multus"</p></td> <td></td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.NetworkSpec\">NetworkSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>NetworkSpec for Ceph includes backward compatibility code</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>provider</code><br/> <em> <a href=\"#ceph.rook.io/v1.NetworkProviderType\"> NetworkProviderType </a> </em> </td> <td> <em>(Optional)</em> <p>Provider is what provides network connectivity to the cluster e.g. “host” or"
},
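MonitoringSpec appears on ClusterSpec; in a CephCluster manifest it is usually expressed as a `monitoring` section. In the hypothetical sketch below, the `monitoring` field name, the endpoint IP, and the exporter port are assumptions; the nested field names come from the table above:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph                # hypothetical name
  namespace: rook-ceph
spec:
  monitoring:                    # MonitoringSpec (assumed ClusterSpec field name)
    enabled: true                # create Prometheus rules; the Prometheus types must exist
    metricsDisabled: false       # keep the mgr prometheus module and Ceph exporter enabled
    interval: 30s                # Prometheus scrape interval
    externalMgrEndpoints:        # only needed for an external Ceph mgr exporter
      - ip: 192.168.1.10         # hypothetical endpoint address
    externalMgrPrometheusPort: 9283   # assumed exporter port
```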
{
"data": "If the Provider is updated from being empty to “host” on a running cluster, then the operator will automatically fail over all the mons to apply the “host” network settings.</p> </td> </tr> <tr> <td> <code>selectors</code><br/> <em> map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.CephNetworkType]string </em> </td> <td> <em>(Optional)</em> <p>Selectors define NetworkAttachmentDefinitions to be used for Ceph public and/or cluster networks when the “multus” network provider is used. This config section is not used for other network providers.</p> <p>Valid keys are “public” and “cluster”. Refer to Ceph networking documentation for more: <a href=\"https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/\">https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/</a></p> <p>Refer to Multus network annotation documentation for help selecting values: <a href=\"https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation\">https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation</a></p> <p>Rook will make a best-effort attempt to automatically detect CIDR address ranges for given network attachment definitions. Rook’s methods are robust but may be imprecise for sufficiently complicated networks. Rook’s auto-detection process obtains a new IP address lease for each CephCluster reconcile. If Rook fails to detect, incorrectly detects, only partially detects, or if underlying networks do not support reusing old IP addresses, it is best to use the ‘addressRanges’ config section to specify CIDR ranges for the Ceph cluster.</p> <p>As a contrived example, one can use a theoretical Kubernetes-wide network for Ceph client traffic and a theoretical Rook-only network for Ceph replication traffic as shown: selectors: public: “default/cluster-fast-net” cluster: “rook-ceph/ceph-backend-net”</p> </td> </tr> <tr> <td> <code>addressRanges</code><br/> <em> <a href=\"#ceph.rook.io/v1.AddressRangesSpec\"> AddressRangesSpec </a> </em> </td> <td> <em>(Optional)</em> <p>AddressRanges specify a list of CIDRs that Rook will apply to Ceph’s ‘public_network’ and/or ‘cluster_network’ configurations. This config section may be used for the “host” or “multus” network providers.</p> </td> </tr> <tr> <td> <code>connections</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConnectionsSpec\"> ConnectionsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Settings for network connections such as compression and encryption across the wire.</p> </td> </tr> <tr> <td> <code>hostNetwork</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>HostNetwork to enable host network. 
If host networking is enabled or disabled on a running cluster, then the operator will automatically fail over all the mons to apply the new network settings.</p> </td> </tr> <tr> <td> <code>ipFamily</code><br/> <em> <a href=\"#ceph.rook.io/v1.IPFamilyType\"> IPFamilyType </a> </em> </td> <td> <em>(Optional)</em> <p>IPFamily is the single stack IPv6 or IPv4 protocol</p> </td> </tr> <tr> <td> <code>dualStack</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>DualStack determines whether Ceph daemons should listen on both IPv4 and IPv6</p> </td> </tr> <tr> <td> <code>multiClusterService</code><br/> <em> <a href=\"#ceph.rook.io/v1.MultiClusterServiceSpec\"> MultiClusterServiceSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Enable multiClusterService to export the Services between peer clusters</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Node\">Node </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.StorageScopeSpec\">StorageScopeSpec</a>) </p> <div> <p>Node is a storage nodes</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>config</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>Selection</code><br/> <em> <a href=\"#ceph.rook.io/v1.Selection\"> Selection </a> </em> </td> <td> <p> (Members of <code>Selection</code> are embedded into this type.) </p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NodesByName\">NodesByName (<code>[]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Node</code> alias)</h3> <div> <p>NodesByName implements an interface to sort nodes by name</p> </div> <h3 id=\"ceph.rook.io/v1.NotificationFilterRule\">NotificationFilterRule </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NotificationFilterSpec\">NotificationFilterSpec</a>) </p> <div> <p>NotificationFilterRule represent a single rule in the Notification Filter spec</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name of the metadata or tag</p> </td> </tr> <tr> <td> <code>value</code><br/> <em> string </em> </td> <td> <p>Value to filter on</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NotificationFilterSpec\">NotificationFilterSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.BucketNotificationSpec\">BucketNotificationSpec</a>) </p> <div> <p>NotificationFilterSpec represent the spec of a Bucket Notification filter</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>keyFilters</code><br/> <em>"
},
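The NetworkSpec entry above (including its inline `selectors` example) corresponds to the `network` section of a CephCluster. A hypothetical sketch reusing the selector values quoted in the description; the `network` field name and the `public`/`cluster` keys of AddressRangesSpec are assumptions, with AddressRangesSpec documented elsewhere in this reference:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph                # hypothetical name
  namespace: rook-ceph
spec:
  network:                       # NetworkSpec (assumed ClusterSpec field name)
    provider: multus             # NetworkProviderType: "", "host", or "multus"
    selectors:                   # NetworkAttachmentDefinitions, keys "public" and "cluster"
      public: default/cluster-fast-net      # values from the example in the description
      cluster: rook-ceph/ceph-backend-net
    addressRanges:               # AddressRangesSpec (fields assumed)
      public:
        - 192.168.100.0/24       # hypothetical CIDR
      cluster:
        - 192.168.200.0/24       # hypothetical CIDR
    ipFamily: IPv4               # IPFamilyType
```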
{
"data": "href=\"#ceph.rook.io/v1.NotificationKeyFilterRule\"> []NotificationKeyFilterRule </a> </em> </td> <td> <em>(Optional)</em> <p>Filters based on the object’s key</p> </td> </tr> <tr> <td> <code>metadataFilters</code><br/> <em> <a href=\"#ceph.rook.io/v1.NotificationFilterRule\"> []NotificationFilterRule </a> </em> </td> <td> <em>(Optional)</em> <p>Filters based on the object’s metadata</p> </td> </tr> <tr> <td> <code>tagFilters</code><br/> <em> <a href=\"#ceph.rook.io/v1.NotificationFilterRule\"> []NotificationFilterRule </a> </em> </td> <td> <em>(Optional)</em> <p>Filters based on the object’s tags</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.NotificationKeyFilterRule\">NotificationKeyFilterRule </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NotificationFilterSpec\">NotificationFilterSpec</a>) </p> <div> <p>NotificationKeyFilterRule represent a single key rule in the Notification Filter spec</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name of the filter - prefix/suffix/regex</p> </td> </tr> <tr> <td> <code>value</code><br/> <em> string </em> </td> <td> <p>Value to filter on</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.OSDStatus\">OSDStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephStorage\">CephStorage</a>) </p> <div> <p>OSDStatus represents OSD status of the ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>storeType</code><br/> <em> map[string]int </em> </td> <td> <p>StoreType is a mapping between the OSD backend stores and number of OSDs using these stores</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.OSDStore\">OSDStore </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.StorageScopeSpec\">StorageScopeSpec</a>) </p> <div> <p>OSDStore is the backend storage type used for creating the OSDs</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>type</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Type of backend storage to be used while creating OSDs. If empty, then bluestore will be used</p> </td> </tr> <tr> <td> <code>updateStore</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>UpdateStore updates the backend store for existing OSDs. 
It destroys each OSD one at a time, cleans up the backing disk and prepares the same OSD on that disk</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectEndpoints\">ObjectEndpoints </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreStatus\">ObjectStoreStatus</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>insecure</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>secure</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectHealthCheckSpec\">ObjectHealthCheckSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec</a>) </p> <div> <p>ObjectHealthCheckSpec represents the health check of an object store</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>readinessProbe</code><br/> <em> <a href=\"#ceph.rook.io/v1.ProbeSpec\"> ProbeSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>startupProbe</code><br/> <em> <a href=\"#ceph.rook.io/v1.ProbeSpec\"> ProbeSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectRealmSpec\">ObjectRealmSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephObjectRealm\">CephObjectRealm</a>) </p> <div> <p>ObjectRealmSpec represents the spec of an ObjectRealm</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>pull</code><br/> <em> <a href=\"#ceph.rook.io/v1.PullSpec\"> PullSpec </a> </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectSharedPoolsSpec\">ObjectSharedPoolsSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec</a>, <a href=\"#ceph.rook.io/v1.ObjectZoneSpec\">ObjectZoneSpec</a>) </p> <div> <p>ObjectSharedPoolsSpec represents object store pool info when configuring RADOS namespaces in existing pools.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>metadataPoolName</code><br/> <em> string </em> </td> <td> <p>The metadata pool used for creating RADOS namespaces in the object store</p> </td> </tr> <tr> <td> <code>dataPoolName</code><br/> <em> string </em> </td> <td> <p>The data pool used for creating RADOS namespaces in the object store</p> </td> </tr> <tr> <td> <code>preserveRadosNamespaceDataOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether the RADOS namespaces should be preserved on deletion of the object store</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectStoreHostingSpec\">ObjectStoreHostingSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec</a>) </p> <div> <p>ObjectStoreHostingSpec represents the hosting settings for the object store</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>dnsNames</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> <p>A list of DNS names in which the bucket can be accessed via virtual host path. These names need to be valid according to RFC-1123. Each domain requires wildcard support, such as an ingress load balancer. Do not include the wildcard itself in the list of hostnames (e.g. use “mystore.example.com”
},
{
"data": "instead of “*.mystore.example.com”). Add all hostnames including user-created Kubernetes Service endpoints to the list. CephObjectStore Service Endpoints and CephObjectZone customEndpoints are automatically added to the list. The feature is supported only for Ceph v18 and later versions.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectStoreSecuritySpec\">ObjectStoreSecuritySpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec</a>) </p> <div> <p>ObjectStoreSecuritySpec is spec to define security features like encryption</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>SecuritySpec</code><br/> <em> <a href=\"#ceph.rook.io/v1.SecuritySpec\"> SecuritySpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>s3</code><br/> <em> <a href=\"#ceph.rook.io/v1.KeyManagementServiceSpec\"> KeyManagementServiceSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The settings for supporting AWS-SSE:S3 with RGW</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephObjectStore\">CephObjectStore</a>) </p> <div> <p>ObjectStoreSpec represent the spec of a pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>metadataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The metadata pool settings</p> </td> </tr> <tr> <td> <code>dataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The data pool settings</p> </td> </tr> <tr> <td> <code>sharedPools</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectSharedPoolsSpec\"> ObjectSharedPoolsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The pool information when configuring RADOS namespaces in existing pools.</p> </td> </tr> <tr> <td> <code>preservePoolsOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve pools on object store deletion</p> </td> </tr> <tr> <td> <code>gateway</code><br/> <em> <a href=\"#ceph.rook.io/v1.GatewaySpec\"> GatewaySpec </a> </em> </td> <td> <em>(Optional)</em> <p>The rgw pod info</p> </td> </tr> <tr> <td> <code>zone</code><br/> <em> <a href=\"#ceph.rook.io/v1.ZoneSpec\"> ZoneSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The multisite info</p> </td> </tr> <tr> <td> <code>healthCheck</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectHealthCheckSpec\"> ObjectHealthCheckSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The RGW health probes</p> </td> </tr> <tr> <td> <code>security</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreSecuritySpec\"> ObjectStoreSecuritySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Security represents security settings</p> </td> </tr> <tr> <td> <code>allowUsersInNamespaces</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> <p>The list of allowed namespaces in addition to the object store namespace where ceph object store users may be created. Specify “*” to allow all namespaces, otherwise list individual namespaces that are to be allowed. This is useful for applications that need object store credentials to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. 
The default is empty.</p> </td> </tr> <tr> <td> <code>hosting</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectStoreHostingSpec\"> ObjectStoreHostingSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Hosting settings for the object store</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectStoreStatus\">ObjectStoreStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephObjectStore\">CephObjectStore</a>) </p> <div> <p>ObjectStoreStatus represents the status of a Ceph Object Store resource</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConditionType\"> ConditionType </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>message</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>endpoints</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectEndpoints\"> ObjectEndpoints </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>info</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>conditions</code><br/> <em> <a href=\"#ceph.rook.io/v1.Condition\"> []Condition </a> </em> </td> <td> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectStoreUserSpec\">ObjectStoreUserSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephObjectStoreUser\">CephObjectStoreUser</a>) </p> <div> <p>ObjectStoreUserSpec represent the spec of an Objectstoreuser</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>store</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The store the user will be created in</p> </td> </tr> <tr> <td> <code>displayName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The display name for the ceph users</p> </td> </tr> <tr> <td> <code>capabilities</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectUserCapSpec\"> ObjectUserCapSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>quotas</code><br/> <em>"
},
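Pulling together the ObjectSharedPoolsSpec, ObjectStoreHostingSpec, and ObjectStoreSpec fields documented above, a hypothetical CephObjectStore that reuses existing pools via RADOS namespaces and enables virtual-host-style bucket access might look like this (pool names, the extra namespace, and the DNS name are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store                 # hypothetical name
  namespace: rook-ceph
spec:
  sharedPools:                   # ObjectSharedPoolsSpec
    metadataPoolName: rgw-meta-pool    # hypothetical existing pool
    dataPoolName: rgw-data-pool        # hypothetical existing pool
    preserveRadosNamespaceDataOnDelete: true
  gateway:                       # GatewaySpec, see the earlier sketch
    port: 80
    instances: 1
  allowUsersInNamespaces:        # extra namespaces allowed for object store users
    - app-namespace              # hypothetical namespace
  hosting:                       # ObjectStoreHostingSpec (Ceph v18+ only)
    dnsNames:
      - mystore.example.com      # no wildcard entries in this list
```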
{
"data": "href=\"#ceph.rook.io/v1.ObjectUserQuotaSpec\"> ObjectUserQuotaSpec </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>clusterNamespace</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The namespace where the parent CephCluster and CephObjectStore are found</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectStoreUserStatus\">ObjectStoreUserStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephObjectStoreUser\">CephObjectStoreUser</a>) </p> <div> <p>ObjectStoreUserStatus represents the status Ceph Object Store Gateway User</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>info</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectUserCapSpec\">ObjectUserCapSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreUserSpec\">ObjectStoreUserSpec</a>) </p> <div> <p>Additional admin-level capabilities for the Ceph object store user</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>user</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write Ceph object store users. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>users</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write Ceph object store users. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>bucket</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write Ceph object store buckets. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>buckets</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write Ceph object store buckets. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>metadata</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write Ceph object store metadata. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>usage</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write Ceph object store usage. 
Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>zone</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write Ceph object store zones. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>roles</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write roles for user. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>info</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Admin capabilities to read/write information about the user. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>amz-cache</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Add capabilities for user to send request to RGW Cache API header. Documented in <a href=\"https://docs.ceph.com/en/quincy/radosgw/rgw-cache/#cache-api\">https://docs.ceph.com/en/quincy/radosgw/rgw-cache/#cache-api</a></p> </td> </tr> <tr> <td> <code>bilog</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Add capabilities for user to change bucket index logging. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>mdlog</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Add capabilities for user to change metadata logging. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>datalog</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Add capabilities for user to change data logging. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>user-policy</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Add capabilities for user to change user policies. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>oidc-provider</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Add capabilities for user to change oidc provider. Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> <tr> <td> <code>ratelimit</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Add capabilities for user to set rate limiter for user and bucket. 
Documented in <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities\">https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities</a></p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectUserQuotaSpec\">ObjectUserQuotaSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreUserSpec\">ObjectStoreUserSpec</a>) </p> <div> <p>ObjectUserQuotaSpec can be used to set quotas for the object store user to limit their usage. See the <a href=\"https://docs.ceph.com/en/latest/radosgw/admin/?#quota-management\">Ceph docs</a> for more</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>maxBuckets</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Maximum bucket limit for the ceph user</p> </td> </tr> <tr> <td> <code>maxSize</code><br/> <em> k8s.io/apimachinery/pkg/api/resource.Quantity </em> </td> <td> <em>(Optional)</em> <p>Maximum size limit of all objects across all the user’s buckets See <a href=\"https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity\">https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity</a> for more"
},
{
"data": "</td> </tr> <tr> <td> <code>maxObjects</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>Maximum number of objects across all the user’s buckets</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectZoneGroupSpec\">ObjectZoneGroupSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephObjectZoneGroup\">CephObjectZoneGroup</a>) </p> <div> <p>ObjectZoneGroupSpec represent the spec of an ObjectZoneGroup</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>realm</code><br/> <em> string </em> </td> <td> <p>The display name for the ceph users</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ObjectZoneSpec\">ObjectZoneSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephObjectZone\">CephObjectZone</a>) </p> <div> <p>ObjectZoneSpec represent the spec of an ObjectZone</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>zoneGroup</code><br/> <em> string </em> </td> <td> <p>The display name for the ceph users</p> </td> </tr> <tr> <td> <code>metadataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p>The metadata pool settings</p> </td> </tr> <tr> <td> <code>dataPool</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolSpec\"> PoolSpec </a> </em> </td> <td> <p>The data pool settings</p> </td> </tr> <tr> <td> <code>sharedPools</code><br/> <em> <a href=\"#ceph.rook.io/v1.ObjectSharedPoolsSpec\"> ObjectSharedPoolsSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The pool information when configuring RADOS namespaces in existing pools.</p> </td> </tr> <tr> <td> <code>customEndpoints</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> <p>If this zone cannot be accessed from other peer Ceph clusters via the ClusterIP Service endpoint created by Rook, you must set this to the externally reachable endpoint(s). You may include the port in the definition. For example: “<a href=\"https://my-object-store.my-domain.net:443"\">https://my-object-store.my-domain.net:443”</a>. In many cases, you should set this to the endpoint of the ingress resource that makes the CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. 
The list can have one or more endpoints pointing to different RGW servers in the zone.</p> <p>If a CephObjectStore endpoint is omitted from this list, that object store’s gateways will not receive multisite replication data (see CephObjectStore.spec.gateway.disableMultisiteSyncTraffic).</p> </td> </tr> <tr> <td> <code>preservePoolsOnDelete</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Preserve pools on object zone deletion</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PeerRemoteSpec\">PeerRemoteSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemMirrorInfoPeerSpec\">FilesystemMirrorInfoPeerSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>client_name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>ClientName is cephx name</p> </td> </tr> <tr> <td> <code>cluster_name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>ClusterName is the name of the cluster</p> </td> </tr> <tr> <td> <code>fs_name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>FsName is the filesystem name</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PeerStatSpec\">PeerStatSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemMirrorInfoPeerSpec\">FilesystemMirrorInfoPeerSpec</a>) </p> <div> <p>PeerStatSpec are the mirror stat with a given peer</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>failure_count</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>FailureCount is the number of mirroring failure</p> </td> </tr> <tr> <td> <code>recovery_count</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>RecoveryCount is the number of recovery attempted after failures</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PeersSpec\">PeersSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.PoolMirroringInfo\">PoolMirroringInfo</a>) </p> <div> <p>PeersSpec contains peer details</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>uuid</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>UUID is the peer UUID</p> </td> </tr> <tr> <td> <code>direction</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Direction is the peer mirroring direction</p> </td> </tr> <tr> <td> <code>site_name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>SiteName is the current site name</p> </td> </tr> <tr> <td> <code>mirror_uuid</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>MirrorUUID is the mirror UUID</p> </td> </tr> <tr> <td> <code>client_name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>ClientName is the CephX user used to connect to the peer</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Placement\">Placement </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephCOSIDriverSpec\">CephCOSIDriverSpec</a>, <a href=\"#ceph.rook.io/v1.FilesystemMirroringSpec\">FilesystemMirroringSpec</a>, <a href=\"#ceph.rook.io/v1.GaneshaServerSpec\">GaneshaServerSpec</a>, <a href=\"#ceph.rook.io/v1.GatewaySpec\">GatewaySpec</a>, <a href=\"#ceph.rook.io/v1.MetadataServerSpec\">MetadataServerSpec</a>, <a href=\"#ceph.rook.io/v1.RBDMirroringSpec\">RBDMirroringSpec</a>, <a href=\"#ceph.rook.io/v1.StorageClassDeviceSet\">StorageClassDeviceSet</a>) </p> <div> <p>Placement is the placement for an object</p> </div> <table> <thead> <tr> 
<th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>nodeAffinity</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#nodeaffinity-v1-core\"> Kubernetes"
},
{
"data": "</a> </em> </td> <td> <em>(Optional)</em> <p>NodeAffinity is a group of node affinity scheduling rules</p> </td> </tr> <tr> <td> <code>podAffinity</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podaffinity-v1-core\"> Kubernetes core/v1.PodAffinity </a> </em> </td> <td> <em>(Optional)</em> <p>PodAffinity is a group of inter pod affinity scheduling rules</p> </td> </tr> <tr> <td> <code>podAntiAffinity</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podantiaffinity-v1-core\"> Kubernetes core/v1.PodAntiAffinity </a> </em> </td> <td> <em>(Optional)</em> <p>PodAntiAffinity is a group of inter pod anti affinity scheduling rules</p> </td> </tr> <tr> <td> <code>tolerations</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#toleration-v1-core\"> []Kubernetes core/v1.Toleration </a> </em> </td> <td> <em>(Optional)</em> <p>The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator></p> </td> </tr> <tr> <td> <code>topologySpreadConstraints</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#topologyspreadconstraint-v1-core\"> []Kubernetes core/v1.TopologySpreadConstraint </a> </em> </td> <td> <em>(Optional)</em> <p>TopologySpreadConstraint specifies how to spread matching pods among the given topology</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PlacementSpec\">PlacementSpec (<code>map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Placement</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>PlacementSpec is the placement for core ceph daemons part of the CephCluster CRD</p> </div> <h3 id=\"ceph.rook.io/v1.PoolMirroringInfo\">PoolMirroringInfo </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.MirroringInfoSpec\">MirroringInfoSpec</a>) </p> <div> <p>PoolMirroringInfo is the mirroring info of a given pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>mode</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Mode is the mirroring mode</p> </td> </tr> <tr> <td> <code>site_name</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>SiteName is the current site name</p> </td> </tr> <tr> <td> <code>peers</code><br/> <em> <a href=\"#ceph.rook.io/v1.PeersSpec\"> []PeersSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Peers are the list of peer sites connected to that cluster</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PoolMirroringStatus\">PoolMirroringStatus </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.MirroringStatusSpec\">MirroringStatusSpec</a>) </p> <div> <p>PoolMirroringStatus is the pool mirror status</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>summary</code><br/> <em> <a href=\"#ceph.rook.io/v1.PoolMirroringStatusSummarySpec\"> PoolMirroringStatusSummarySpec </a> </em> </td> <td> <em>(Optional)</em> <p>Summary is the mirroring status summary</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PoolMirroringStatusSummarySpec\">PoolMirroringStatusSummarySpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.PoolMirroringStatus\">PoolMirroringStatus</a>) </p> <div> <p>PoolMirroringStatusSummarySpec is the summary 
output of the command</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>health</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Health is the mirroring health</p> </td> </tr> <tr> <td> <code>daemon_health</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>DaemonHealth is the health of the mirroring daemon</p> </td> </tr> <tr> <td> <code>image_health</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>ImageHealth is the health of the mirrored image</p> </td> </tr> <tr> <td> <code>states</code><br/> <em> <a href=\"#ceph.rook.io/v1.StatesSpec\"> StatesSpec </a> </em> </td> <td> <em>(Optional)</em> <p>States is the various state for all mirrored images</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PoolSpec\">PoolSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FilesystemSpec\">FilesystemSpec</a>, <a href=\"#ceph.rook.io/v1.NamedBlockPoolSpec\">NamedBlockPoolSpec</a>, <a href=\"#ceph.rook.io/v1.NamedPoolSpec\">NamedPoolSpec</a>, <a href=\"#ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec</a>, <a href=\"#ceph.rook.io/v1.ObjectZoneSpec\">ObjectZoneSpec</a>) </p> <div> <p>PoolSpec represents the spec of ceph pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>failureDomain</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The failure domain: osd/host/(region or zone if available) - technically also any type in the crush map</p> </td> </tr> <tr> <td> <code>crushRoot</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The root of the crush hierarchy utilized by the pool</p> </td> </tr> <tr> <td> <code>deviceClass</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The device class the OSD should set to for use in the pool</p> </td> </tr> <tr> <td> <code>compressionMode</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>DEPRECATED: use Parameters instead, e.g., Parameters[“compression_mode”] = “force” The inline compression mode in Bluestore OSD to set to (options are: none, passive, aggressive, force) Do NOT set a default value for kubebuilder as this will override the Parameters</p> </td> </tr> <tr> <td> <code>replicated</code><br/> <em> <a href=\"#ceph.rook.io/v1.ReplicatedSpec\"> ReplicatedSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The replication settings</p> </td> </tr> <tr> <td> <code>erasureCoded</code><br/> <em>"
},
{
"data": "href=\"#ceph.rook.io/v1.ErasureCodedSpec\"> ErasureCodedSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The erasure code settings</p> </td> </tr> <tr> <td> <code>parameters</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> <p>Parameters is a list of properties to enable on a given pool</p> </td> </tr> <tr> <td> <code>enableRBDStats</code><br/> <em> bool </em> </td> <td> <p>EnableRBDStats is used to enable gathering of statistics for all RBD images in the pool</p> </td> </tr> <tr> <td> <code>mirroring</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirroringSpec\"> MirroringSpec </a> </em> </td> <td> <p>The mirroring settings</p> </td> </tr> <tr> <td> <code>statusCheck</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirrorHealthCheckSpec\"> MirrorHealthCheckSpec </a> </em> </td> <td> <p>The mirroring statusCheck</p> </td> </tr> <tr> <td> <code>quotas</code><br/> <em> <a href=\"#ceph.rook.io/v1.QuotaSpec\"> QuotaSpec </a> </em> </td> <td> <em>(Optional)</em> <p>The quota settings</p> </td> </tr> <tr> <td> <code>application</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>The application name to set on the pool. Only expected to be set for rgw pools.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PriorityClassNamesSpec\">PriorityClassNamesSpec (<code>map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>PriorityClassNamesSpec is a map of priority class names to be assigned to components</p> </div> <h3 id=\"ceph.rook.io/v1.ProbeSpec\">ProbeSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.GaneshaServerSpec\">GaneshaServerSpec</a>, <a href=\"#ceph.rook.io/v1.MetadataServerSpec\">MetadataServerSpec</a>, <a href=\"#ceph.rook.io/v1.ObjectHealthCheckSpec\">ObjectHealthCheckSpec</a>) </p> <div> <p>ProbeSpec is a wrapper around Probe so it can be enabled or disabled for a Ceph daemon</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>disabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Disabled determines whether probe is disable or not</p> </td> </tr> <tr> <td> <code>probe</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#probe-v1-core\"> Kubernetes core/v1.Probe </a> </em> </td> <td> <em>(Optional)</em> <p>Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.PullSpec\">PullSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectRealmSpec\">ObjectRealmSpec</a>) </p> <div> <p>PullSpec represents the pulling specification of a Ceph Object Storage Gateway Realm</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>endpoint</code><br/> <em> string </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.QuotaSpec\">QuotaSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.PoolSpec\">PoolSpec</a>) </p> <div> <p>QuotaSpec represents the spec for quotas in a pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>maxBytes</code><br/> <em> uint64 </em> </td> <td> <em>(Optional)</em> <p>MaxBytes represents the quota in bytes Deprecated in favor of MaxSize</p> </td> </tr> <tr> <td> <code>maxSize</code><br/> <em> string </em> 
</td> <td> <em>(Optional)</em> <p>MaxSize represents the quota in bytes as a string</p> </td> </tr> <tr> <td> <code>maxObjects</code><br/> <em> uint64 </em> </td> <td> <em>(Optional)</em> <p>MaxObjects represents the quota in objects</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.RBDMirroringSpec\">RBDMirroringSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephRBDMirror\">CephRBDMirror</a>) </p> <div> <p>RBDMirroringSpec represents the specification of an RBD mirror daemon</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>count</code><br/> <em> int </em> </td> <td> <p>Count represents the number of rbd mirror instance to run</p> </td> </tr> <tr> <td> <code>peers</code><br/> <em> <a href=\"#ceph.rook.io/v1.MirroringPeerSpec\"> MirroringPeerSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Peers represents the peers spec</p> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> <p>The affinity to place the rgw pods (default is to place on any available node)</p> </td> </tr> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <em>(Optional)</em> <p>The annotations-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>labels</code><br/> <em> <a href=\"#ceph.rook.io/v1.Labels\"> Labels </a> </em> </td> <td> <em>(Optional)</em> <p>The labels-related configuration to add/set on each Pod related object.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>The resource requirements for the rbd mirror pods</p> </td> </tr> <tr> <td> <code>priorityClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>PriorityClassName sets priority class on the rbd mirror pods</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.RGWServiceSpec\">RGWServiceSpec </h3> <p> (<em>Appears"
},
{
"data": "href=\"#ceph.rook.io/v1.GatewaySpec\">GatewaySpec</a>) </p> <div> <p>RGWServiceSpec represent the spec for RGW service</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>annotations</code><br/> <em> <a href=\"#ceph.rook.io/v1.Annotations\"> Annotations </a> </em> </td> <td> <p>The annotations-related configuration to add/set on each rgw service. nullable optional</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ReadAffinitySpec\">ReadAffinitySpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CSIDriverSpec\">CSIDriverSpec</a>) </p> <div> <p>ReadAffinitySpec defines the read affinity settings for CSI driver.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>enabled</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Enables read affinity for CSI driver.</p> </td> </tr> <tr> <td> <code>crushLocationLabels</code><br/> <em> []string </em> </td> <td> <em>(Optional)</em> <p>CrushLocationLabels defines which node labels to use as CRUSH location. This should correspond to the values set in the CRUSH map.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ReplicatedSpec\">ReplicatedSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.PoolSpec\">PoolSpec</a>) </p> <div> <p>ReplicatedSpec represents the spec for replication in a pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>size</code><br/> <em> uint </em> </td> <td> <p>Size - Number of copies per object in a replicated storage pool, including the object itself (required for replicated pool type)</p> </td> </tr> <tr> <td> <code>targetSizeRatio</code><br/> <em> float64 </em> </td> <td> <em>(Optional)</em> <p>TargetSizeRatio gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity</p> </td> </tr> <tr> <td> <code>requireSafeReplicaSize</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>RequireSafeReplicaSize if false allows you to set replica 1</p> </td> </tr> <tr> <td> <code>replicasPerFailureDomain</code><br/> <em> uint </em> </td> <td> <em>(Optional)</em> <p>ReplicasPerFailureDomain the number of replica in the specified failure domain</p> </td> </tr> <tr> <td> <code>subFailureDomain</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>SubFailureDomain the name of the sub-failure domain</p> </td> </tr> <tr> <td> <code>hybridStorage</code><br/> <em> <a href=\"#ceph.rook.io/v1.HybridStorageSpec\"> HybridStorageSpec </a> </em> </td> <td> <em>(Optional)</em> <p>HybridStorage represents hybrid storage tier settings</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ResourceSpec\">ResourceSpec (<code>map[string]k8s.io/api/core/v1.ResourceRequirements</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> <p>ResourceSpec is a collection of ResourceRequirements that describes the compute resource requirements</p> </div> <h3 id=\"ceph.rook.io/v1.SSSDSidecar\">SSSDSidecar </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SSSDSpec\">SSSDSpec</a>) </p> <div> <p>SSSDSidecar represents configuration when SSSD is run in a sidecar.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>image</code><br/> <em> string </em> </td> <td> <p>Image defines the container image that should be used for the SSSD sidecar.</p> </td> </tr> <tr> <td> 
<code>sssdConfigFile</code><br/> <em> <a href=\"#ceph.rook.io/v1.SSSDSidecarConfigFile\"> SSSDSidecarConfigFile </a> </em> </td> <td> <em>(Optional)</em> <p>SSSDConfigFile defines where the SSSD configuration should be sourced from. The config file will be placed into <code>/etc/sssd/sssd.conf</code>. If this is left empty, Rook will not add the file. This allows you to manage the <code>sssd.conf</code> file yourself however you wish. For example, you may build it into your custom Ceph container image or use the Vault agent injector to securely add the file via annotations on the CephNFS spec (passed to the NFS server pods).</p> </td> </tr> <tr> <td> <code>additionalFiles</code><br/> <em> <a href=\"#ceph.rook.io/v1.SSSDSidecarAdditionalFile\"> []SSSDSidecarAdditionalFile </a> </em> </td> <td> <em>(Optional)</em> <p>AdditionalFiles defines any number of additional files that should be mounted into the SSSD sidecar. These files may be referenced by the sssd.conf config file.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>Resources allow specifying resource requests/limits on the SSSD sidecar container.</p> </td> </tr> <tr> <td> <code>debugLevel</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>DebugLevel sets the debug level for SSSD. If unset or set to 0, Rook does nothing. Otherwise, this may be a value between 1 and 10. See SSSD docs for more info: <a href=\"https://sssd.io/troubleshooting/basics.html#sssd-debug-logs\">https://sssd.io/troubleshooting/basics.html#sssd-debug-logs</a></p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SSSDSidecarAdditionalFile\">SSSDSidecarAdditionalFile </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SSSDSidecar\">SSSDSidecar</a>) </p> <div> <p>SSSDSidecarAdditionalFile represents the source from which additional files for the SSSD configuration should come and are made available.</p>"
},
{
"data": "</div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>subPath</code><br/> <em> string </em> </td> <td> <p>SubPath defines the sub-path in <code>/etc/sssd/rook-additional/</code> where the additional file(s) will be placed. Each subPath definition must be unique and must not contain ‘:’.</p> </td> </tr> <tr> <td> <code>volumeSource</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConfigFileVolumeSource\"> ConfigFileVolumeSource </a> </em> </td> <td> <p>VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the additional file(s) like what is normally used to configure Volumes for a Pod. Fore example, a ConfigMap, Secret, or HostPath. Each VolumeSource adds one or more additional files to the SSSD sidecar container in the <code>/etc/sssd/rook-additional/<subPath></code> directory. Be aware that some files may need to have a specific file mode like 0600 due to requirements by SSSD for some files. For example, CA or TLS certificates.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SSSDSidecarConfigFile\">SSSDSidecarConfigFile </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SSSDSidecar\">SSSDSidecar</a>) </p> <div> <p>SSSDSidecarConfigFile represents the source(s) from which the SSSD configuration should come.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>volumeSource</code><br/> <em> <a href=\"#ceph.rook.io/v1.ConfigFileVolumeSource\"> ConfigFileVolumeSource </a> </em> </td> <td> <p>VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the SSSD configuration file like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. There are two requirements for the source’s content: The config file must be mountable via <code>subPath: sssd.conf</code>. For example, in a ConfigMap, the data item must be named <code>sssd.conf</code>, or <code>items</code> must be defined to select the key and give it path <code>sssd.conf</code>. A HostPath directory must have the <code>sssd.conf</code> file. 
The volume or config file must have mode 0600.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SSSDSpec\">SSSDSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.NFSSecuritySpec\">NFSSecuritySpec</a>) </p> <div> <p>SSSDSpec represents configuration for System Security Services Daemon (SSSD).</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>sidecar</code><br/> <em> <a href=\"#ceph.rook.io/v1.SSSDSidecar\"> SSSDSidecar </a> </em> </td> <td> <em>(Optional)</em> <p>Sidecar tells Rook to run SSSD in a sidecar alongside the NFS-Ganesha server in each NFS pod.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SanitizeDataSourceProperty\">SanitizeDataSourceProperty (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SanitizeDisksSpec\">SanitizeDisksSpec</a>) </p> <div> <p>SanitizeDataSourceProperty represents a sanitizing data source</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"random"</p></td> <td><p>SanitizeDataSourceRandom uses `shred’s default entropy source</p> </td> </tr><tr><td><p>"zero"</p></td> <td><p>SanitizeDataSourceZero uses /dev/zero as sanitize source</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.SanitizeDisksSpec\">SanitizeDisksSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CleanupPolicySpec\">CleanupPolicySpec</a>) </p> <div> <p>SanitizeDisksSpec represents a disk sanitizing specification</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>method</code><br/> <em> <a href=\"#ceph.rook.io/v1.SanitizeMethodProperty\"> SanitizeMethodProperty </a> </em> </td> <td> <em>(Optional)</em> <p>Method is the method we use to sanitize disks</p> </td> </tr> <tr> <td> <code>dataSource</code><br/> <em> <a href=\"#ceph.rook.io/v1.SanitizeDataSourceProperty\"> SanitizeDataSourceProperty </a> </em> </td> <td> <em>(Optional)</em> <p>DataSource is the data source to use to sanitize the disk with</p> </td> </tr> <tr> <td> <code>iteration</code><br/> <em> int32 </em> </td> <td> <em>(Optional)</em> <p>Iteration is the number of pass to apply the sanitizing</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SanitizeMethodProperty\">SanitizeMethodProperty (<code>string</code> alias)</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SanitizeDisksSpec\">SanitizeDisksSpec</a>) </p> <div> <p>SanitizeMethodProperty represents a disk sanitizing method</p> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"complete"</p></td> <td><p>SanitizeMethodComplete will sanitize everything on the disk</p> </td> </tr><tr><td><p>"quick"</p></td> <td><p>SanitizeMethodQuick will sanitize metadata only on the disk</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.SecuritySpec\">SecuritySpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>, <a href=\"#ceph.rook.io/v1.ObjectStoreSecuritySpec\">ObjectStoreSecuritySpec</a>) </p> <div> <p>SecuritySpec is security spec to include various security items such as kms</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>kms</code><br/> <em> <a href=\"#ceph.rook.io/v1.KeyManagementServiceSpec\"> KeyManagementServiceSpec </a> </em> </td> <td> <em>(Optional)</em> <p>KeyManagementService is the main Key Management option</p> </td> </tr> <tr> <td> <code>keyRotation</code><br/> 
<em> <a href=\"#ceph.rook.io/v1.KeyRotationSpec\"> KeyRotationSpec </a> </em> </td> <td> <em>(Optional)</em> <p>KeyRotation defines options for Key Rotation.</p> </td> </tr> </tbody> </table> <h3"
},
{
"data": "</h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.Node\">Node</a>, <a href=\"#ceph.rook.io/v1.StorageScopeSpec\">StorageScopeSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>useAllDevices</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to consume all the storage devices found on a machine</p> </td> </tr> <tr> <td> <code>deviceFilter</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>A regular expression to allow more fine-grained selection of devices on nodes across the cluster</p> </td> </tr> <tr> <td> <code>devicePathFilter</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>A regular expression to allow more fine-grained selection of devices with path names</p> </td> </tr> <tr> <td> <code>devices</code><br/> <em> <a href=\"#ceph.rook.io/v1.Device\"> []Device </a> </em> </td> <td> <em>(Optional)</em> <p>List of devices to use as storage devices</p> </td> </tr> <tr> <td> <code>volumeClaimTemplates</code><br/> <em> <a href=\"#ceph.rook.io/v1.VolumeClaimTemplate\"> []VolumeClaimTemplate </a> </em> </td> <td> <em>(Optional)</em> <p>PersistentVolumeClaims to use as storage</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SnapshotSchedule\">SnapshotSchedule </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SnapshotSchedulesSpec\">SnapshotSchedulesSpec</a>) </p> <div> <p>SnapshotSchedule is a schedule</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>interval</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Interval is the interval in which snapshots will be taken</p> </td> </tr> <tr> <td> <code>start_time</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>StartTime is the snapshot starting time</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SnapshotScheduleRetentionSpec\">SnapshotScheduleRetentionSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FSMirroringSpec\">FSMirroringSpec</a>) </p> <div> <p>SnapshotScheduleRetentionSpec is a retention policy</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>path</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Path is the path to snapshot</p> </td> </tr> <tr> <td> <code>duration</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Duration represents the retention duration for a snapshot</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SnapshotScheduleSpec\">SnapshotScheduleSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.FSMirroringSpec\">FSMirroringSpec</a>, <a href=\"#ceph.rook.io/v1.MirroringSpec\">MirroringSpec</a>) </p> <div> <p>SnapshotScheduleSpec represents the snapshot scheduling settings of a mirrored pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>path</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Path is the path to snapshot, only valid for CephFS</p> </td> </tr> <tr> <td> <code>interval</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Interval represent the periodicity of the snapshot.</p> </td> </tr> <tr> <td> <code>startTime</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>StartTime indicates when to start the snapshot</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SnapshotScheduleStatusSpec\">SnapshotScheduleStatusSpec </h3> <p> (<em>Appears 
on:</em><a href=\"#ceph.rook.io/v1.CephBlockPoolStatus\">CephBlockPoolStatus</a>) </p> <div> <p>SnapshotScheduleStatusSpec is the status of the snapshot schedule</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>snapshotSchedules</code><br/> <em> <a href=\"#ceph.rook.io/v1.SnapshotSchedulesSpec\"> []SnapshotSchedulesSpec </a> </em> </td> <td> <em>(Optional)</em> <p>SnapshotSchedules is the list of snapshots scheduled</p> </td> </tr> <tr> <td> <code>lastChecked</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChecked is the last time the status was checked</p> </td> </tr> <tr> <td> <code>lastChanged</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>LastChanged is the last time the status changed</p> </td> </tr> <tr> <td> <code>details</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Details contains potential status errors</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.SnapshotSchedulesSpec\">SnapshotSchedulesSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.SnapshotScheduleStatusSpec\">SnapshotScheduleStatusSpec</a>) </p> <div> <p>SnapshotSchedulesSpec is the list of snapshot schedules for images in a pool</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>pool</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Pool is the pool name</p> </td> </tr> <tr> <td> <code>namespace</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Namespace is the RADOS namespace the image is part of</p> </td> </tr> <tr> <td> <code>image</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Image is the mirrored image</p> </td> </tr> <tr> <td> <code>items</code><br/> <em> <a href=\"#ceph.rook.io/v1.SnapshotSchedule\"> []SnapshotSchedule </a> </em> </td> <td> <em>(Optional)</em> <p>Items is the list of schedule times for a given snapshot</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.StatesSpec\">StatesSpec </h3> <p> (<em>Appears"
},
{
"data": "href=\"#ceph.rook.io/v1.PoolMirroringStatusSummarySpec\">PoolMirroringStatusSummarySpec</a>) </p> <div> <p>StatesSpec are rbd images mirroring state</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>starting_replay</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>StartingReplay is when the replay of the mirroring journal starts</p> </td> </tr> <tr> <td> <code>replaying</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Replaying is when the replay of the mirroring journal is on-going</p> </td> </tr> <tr> <td> <code>syncing</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Syncing is when the image is syncing</p> </td> </tr> <tr> <td> <code>stopping_replay</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>StopReplaying is when the replay of the mirroring journal stops</p> </td> </tr> <tr> <td> <code>stopped</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Stopped is when the mirroring state is stopped</p> </td> </tr> <tr> <td> <code>unknown</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Unknown is when the mirroring state is unknown</p> </td> </tr> <tr> <td> <code>error</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>Error is when the mirroring state is errored</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.Status\">Status </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.CephBucketNotification\">CephBucketNotification</a>, <a href=\"#ceph.rook.io/v1.CephFilesystemMirror\">CephFilesystemMirror</a>, <a href=\"#ceph.rook.io/v1.CephNFS\">CephNFS</a>, <a href=\"#ceph.rook.io/v1.CephObjectRealm\">CephObjectRealm</a>, <a href=\"#ceph.rook.io/v1.CephObjectZone\">CephObjectZone</a>, <a href=\"#ceph.rook.io/v1.CephObjectZoneGroup\">CephObjectZoneGroup</a>, <a href=\"#ceph.rook.io/v1.CephRBDMirror\">CephRBDMirror</a>) </p> <div> <p>Status represents the status of an object</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>phase</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>observedGeneration</code><br/> <em> int64 </em> </td> <td> <em>(Optional)</em> <p>ObservedGeneration is the latest generation observed by the controller.</p> </td> </tr> <tr> <td> <code>conditions</code><br/> <em> <a href=\"#ceph.rook.io/v1.Condition\"> []Condition </a> </em> </td> <td> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.StorageClassDeviceSet\">StorageClassDeviceSet </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.StorageScopeSpec\">StorageScopeSpec</a>) </p> <div> <p>StorageClassDeviceSet is a storage class device set</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>Name is a unique identifier for the set</p> </td> </tr> <tr> <td> <code>count</code><br/> <em> int </em> </td> <td> <p>Count is the number of devices in this set</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core\"> Kubernetes core/v1.ResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>placement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>preparePlacement</code><br/> <em> <a href=\"#ceph.rook.io/v1.Placement\"> Placement </a> 
</em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>config</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> <p>Provider-specific device configuration</p> </td> </tr> <tr> <td> <code>volumeClaimTemplates</code><br/> <em> <a href=\"#ceph.rook.io/v1.VolumeClaimTemplate\"> []VolumeClaimTemplate </a> </em> </td> <td> <p>VolumeClaimTemplates is a list of PVC templates for the underlying storage devices</p> </td> </tr> <tr> <td> <code>portable</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Portable represents OSD portability across the hosts</p> </td> </tr> <tr> <td> <code>tuneDeviceClass</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>TuneSlowDeviceClass Tune the OSD when running on a slow Device Class</p> </td> </tr> <tr> <td> <code>tuneFastDeviceClass</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>TuneFastDeviceClass Tune the OSD when running on a fast Device Class</p> </td> </tr> <tr> <td> <code>schedulerName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>Scheduler name for OSD pod placement</p> </td> </tr> <tr> <td> <code>encrypted</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> <p>Whether to encrypt the deviceSet</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.StorageScopeSpec\">StorageScopeSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ClusterSpec\">ClusterSpec</a>) </p> <div> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>nodes</code><br/> <em> <a href=\"#ceph.rook.io/v1.Node\"> []Node </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>useAllNodes</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>onlyApplyOSDPlacement</code><br/> <em> bool </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>config</code><br/> <em> map[string]string </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>Selection</code><br/> <em> <a href=\"#ceph.rook.io/v1.Selection\"> Selection </a> </em> </td> <td> <p> (Members of <code>Selection</code> are embedded into this type.) </p> </td> </tr> <tr> <td> <code>storageClassDeviceSets</code><br/> <em> <a href=\"#ceph.rook.io/v1.StorageClassDeviceSet\"> []StorageClassDeviceSet </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>store</code><br/> <em>"
},
{
"data": "href=\"#ceph.rook.io/v1.OSDStore\"> OSDStore </a> </em> </td> <td> <em>(Optional)</em> </td> </tr> <tr> <td> <code>flappingRestartIntervalHours</code><br/> <em> int </em> </td> <td> <em>(Optional)</em> <p>FlappingRestartIntervalHours defines the time for which the OSD pods, that failed with zero exit code, will sleep before restarting. This is needed for OSD flapping where OSD daemons are marked down more than 5 times in 600 seconds by Ceph. Preventing the OSD pods to restart immediately in such scenarios will prevent Rook from marking OSD as <code>up</code> and thus peering of the PGs mapped to the OSD. User needs to manually restart the OSD pod if they manage to fix the underlying OSD flapping issue before the restart interval. The sleep will be disabled if this interval is set to 0.</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.StoreType\">StoreType (<code>string</code> alias)</h3> <div> </div> <table> <thead> <tr> <th>Value</th> <th>Description</th> </tr> </thead> <tbody><tr><td><p>"bluestore"</p></td> <td><p>StoreTypeBlueStore is the bluestore backend storage for OSDs</p> </td> </tr><tr><td><p>"bluestore-rdr"</p></td> <td><p>StoreTypeBlueStoreRDR is the bluestore-rdr backed storage for OSDs</p> </td> </tr></tbody> </table> <h3 id=\"ceph.rook.io/v1.StretchClusterSpec\">StretchClusterSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.MonSpec\">MonSpec</a>) </p> <div> <p>StretchClusterSpec represents the specification of a stretched Ceph Cluster</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>failureDomainLabel</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>FailureDomainLabel the failure domain name (e,g: zone)</p> </td> </tr> <tr> <td> <code>subFailureDomain</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>SubFailureDomain is the failure domain within a zone</p> </td> </tr> <tr> <td> <code>zones</code><br/> <em> <a href=\"#ceph.rook.io/v1.MonZoneSpec\"> []MonZoneSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Zones is the list of zones</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.TopicEndpointSpec\">TopicEndpointSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.BucketTopicSpec\">BucketTopicSpec</a>) </p> <div> <p>TopicEndpointSpec contains exactly one of the endpoint specs of a Bucket Topic</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>http</code><br/> <em> <a href=\"#ceph.rook.io/v1.HTTPEndpointSpec\"> HTTPEndpointSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Spec of HTTP endpoint</p> </td> </tr> <tr> <td> <code>amqp</code><br/> <em> <a href=\"#ceph.rook.io/v1.AMQPEndpointSpec\"> AMQPEndpointSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Spec of AMQP endpoint</p> </td> </tr> <tr> <td> <code>kafka</code><br/> <em> <a href=\"#ceph.rook.io/v1.KafkaEndpointSpec\"> KafkaEndpointSpec </a> </em> </td> <td> <em>(Optional)</em> <p>Spec of Kafka endpoint</p> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.VolumeClaimTemplate\">VolumeClaimTemplate </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.MonSpec\">MonSpec</a>, <a href=\"#ceph.rook.io/v1.MonZoneSpec\">MonZoneSpec</a>, <a href=\"#ceph.rook.io/v1.Selection\">Selection</a>, <a href=\"#ceph.rook.io/v1.StorageClassDeviceSet\">StorageClassDeviceSet</a>) </p> <div> <p>VolumeClaimTemplate is a simplified version of K8s corev1’s PVC. 
It has no type meta or status.</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>metadata</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#objectmeta-v1-meta\"> Kubernetes meta/v1.ObjectMeta </a> </em> </td> <td> <em>(Optional)</em> <p>Standard object’s metadata. More info: <a href=\"https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\">https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata</a></p> Refer to the Kubernetes API documentation for the fields of the <code>metadata</code> field. </td> </tr> <tr> <td> <code>spec</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeclaimspec-v1-core\"> Kubernetes core/v1.PersistentVolumeClaimSpec </a> </em> </td> <td> <em>(Optional)</em> <p>spec defines the desired characteristics of a volume requested by a pod author. More info: <a href=\"https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims\">https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims</a></p> <br/> <br/> <table> <tr> <td> <code>accessModes</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumeaccessmode-v1-core\"> []Kubernetes core/v1.PersistentVolumeAccessMode </a> </em> </td> <td> <em>(Optional)</em> <p>accessModes contains the desired access modes the volume should have. More info: <a href=\"https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1\">https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1</a></p> </td> </tr> <tr> <td> <code>selector</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta\"> Kubernetes meta/v1.LabelSelector </a> </em> </td> <td> <em>(Optional)</em> <p>selector is a label query over volumes to consider for binding.</p> </td> </tr> <tr> <td> <code>resources</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumeresourcerequirements-v1-core\"> Kubernetes core/v1.VolumeResourceRequirements </a> </em> </td> <td> <em>(Optional)</em> <p>resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: <a href=\"https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources\">https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources</a></p> </td> </tr> <tr> <td> <code>volumeName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>volumeName is the binding reference to the PersistentVolume backing this claim.</p> </td> </tr> <tr> <td> <code>storageClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>storageClassName is the name of the StorageClass required by the claim. More info:"
},
{
"data": "href=\"https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1\">https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1</a></p> </td> </tr> <tr> <td> <code>volumeMode</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#persistentvolumemode-v1-core\"> Kubernetes core/v1.PersistentVolumeMode </a> </em> </td> <td> <em>(Optional)</em> <p>volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.</p> </td> </tr> <tr> <td> <code>dataSource</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#typedlocalobjectreference-v1-core\"> Kubernetes core/v1.TypedLocalObjectReference </a> </em> </td> <td> <em>(Optional)</em> <p>dataSource field can be used to specify either: An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.</p> </td> </tr> <tr> <td> <code>dataSourceRef</code><br/> <em> <a href=\"https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#typedobjectreference-v1-core\"> Kubernetes core/v1.TypedObjectReference </a> </em> </td> <td> <em>(Optional)</em> <p>dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.</p> </td> </tr> <tr> <td> <code>volumeAttributesClassName</code><br/> <em> string </em> </td> <td> <em>(Optional)</em> <p>volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. 
If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it’s not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: <a href=\"https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass\">https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass</a> (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled.</p> </td> </tr> </table> </td> </tr> </tbody> </table> <h3 id=\"ceph.rook.io/v1.ZoneSpec\">ZoneSpec </h3> <p> (<em>Appears on:</em><a href=\"#ceph.rook.io/v1.ObjectStoreSpec\">ObjectStoreSpec</a>) </p> <div> <p>ZoneSpec represents a Ceph Object Store Gateway Zone specification</p> </div> <table> <thead> <tr> <th>Field</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td> <code>name</code><br/> <em> string </em> </td> <td> <p>RGW Zone the Object Store is in</p> </td> </tr> </tbody> </table> <hr/> <p><em> Generated with <code>gen-crd-api-reference-docs</code>. </em></p>"
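To make the user-related types above more concrete, here is a minimal sketch of a `CephObjectStoreUser` manifest that combines the `ObjectStoreUserSpec`, `ObjectUserCapSpec`, and `ObjectUserQuotaSpec` fields documented in this reference. The resource name, namespace, store name, and quota values are illustrative assumptions, not defaults.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: example-user          # hypothetical user name
  namespace: rook-ceph        # assumed namespace where the CephObjectStore runs
spec:
  store: my-store             # ObjectStoreUserSpec.store: the object store to create the user in (assumed name)
  displayName: "Example user" # ObjectStoreUserSpec.displayName
  capabilities:               # ObjectUserCapSpec: admin-level RGW capabilities
    user: "read"              # read-only access to the user admin API
    bucket: "read, write"     # read/write access to the bucket admin API
  quotas:                     # ObjectUserQuotaSpec
    maxBuckets: 10            # maximum bucket limit for this user
    maxSize: "10G"            # total size across all of the user's buckets (resource.Quantity)
    maxObjects: 100000        # total number of objects across all buckets
```

Applying such a manifest (for example with `kubectl apply -f`) should result in the operator creating the corresponding RGW user and reporting progress through the `ObjectStoreUserStatus` fields (`phase`, `info`, `observedGeneration`) described above.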
}
] |
{
"category": "Runtime",
"file_name": "specification.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(images-remote)= The CLI command can support several image servers and comes pre-configured with our own. See {ref}`image-servers` for an overview. <!-- Include start list remotes --> To see all configured remote servers, enter the following command: incus remote list Remote servers that use the are pure image servers. Servers that use the `incus` format are Incus servers, which either serve solely as image servers or might provide some images in addition to serving as regular Incus servers. See {ref}`image-server-types` for more information. <!-- Include end list remotes --> To list all remote images on a server, enter the following command: incus image list <remote>: You can filter the results. See {ref}`images-manage-filter` for instructions. How to add a remote depends on the protocol that the server uses. To add a simple streams server as a remote, enter the following command: incus remote add <remote_name> <URL> --protocol=simplestreams The URL must use HTTPS. <!-- Include start add remotes --> To add an Incus server as a remote, enter the following command: incus remote add <remote_name> <IP|FQDN|URL> [flags] Some authentication methods require specific flags (for example, use for OIDC authentication). See {ref}`server-authenticate` and {ref}`authentication` for more information. For example, enter the following command to add a remote through an IP address: incus remote add my-remote 192.0.2.10 You are prompted to confirm the remote server fingerprint and then asked for the token. <!-- Include end add remotes --> To reference an image, specify its remote and its alias or fingerprint, separated with a colon. For example: images:ubuntu/22.04 images:ubuntu/22.04 local:ed7509d7e83f (images-remote-default)= If you specify an image name without the name of the remote, the default image server is used. To see which server is configured as the default image server, enter the following command: incus remote get-default To select a different remote as the default image server, enter the following command: incus remote switch <remote_name>"
}
] |
{
"category": "Runtime",
"file_name": "images_remote.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The following is an overview of the different installation methods available. Kata Containers 3.0 rust runtime requires nested virtualization or bare metal. Check to see if your system is capable of running Kata Containers. Kata Containers 3.0 rust runtime currently runs on 64-bit systems supporting the following architectures: Notes: For other architectures, see https://github.com/kata-containers/kata-containers/issues/4320 | Architecture | Virtualization technology | |-|-| | `x86_64`| VT-x | | `aarch64` (\"`arm64`\")| Hyp | | Installation method | Description | Automatic updates | Use case | Availability ||-|-|--|-- | | | The preferred way to deploy the Kata Containers distributed binaries on a Kubernetes cluster | No! | Best way to give it a try on kata-containers on an already up and running Kubernetes cluster. | Yes | | | Kata packages provided by Linux distributions official repositories | yes | Recommended for most users. | No | | | Run a single command to install a full system | No! | For those wanting the latest release quickly. | No | | | Follow a guide step-by-step to install a working system | No! | For those who want the latest release with more control. | No | | | Build the software components manually | No! | Power users and developers only. | Yes | Follow the . `ToDo` `ToDo` `ToDo` Download `Rustup` and install `Rust` > Notes: > For Rust version, please set `RUSTVERSION` to the value of `languages.rust.meta.newest-version key` in or, if `yq` is available on your system, run `export RUSTVERSION=$(yq read versions.yaml languages.rust.meta.newest-version)`. Example for `x86_64` ``` $ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh $ source $HOME/.cargo/env $ rustup install ${RUST_VERSION} $ rustup default ${RUSTVERSION}-x8664-unknown-linux-gnu ``` Musl support for fully static binary Example for `x86_64` ``` $ rustup target add x86_64-unknown-linux-musl ``` install Example for musl 1.2.3 ``` $ curl -O https://git.musl-libc.org/cgit/musl/snapshot/musl-1.2.3.tar.gz $ tar vxf musl-1.2.3.tar.gz $ cd musl-1.2.3/ $ ./configure --prefix=/usr/local/ $ make && sudo make install ``` ``` $ git clone https://github.com/kata-containers/kata-containers.git $ cd kata-containers/src/runtime-rs $ make && sudo make install ``` After running the command above, the default config file `configuration.toml` will be installed under `/usr/share/defaults/kata-containers/`, the binary file `containerd-shim-kata-v2` will be installed under `/usr/local/bin/` . Follow the . Follow the . Follow the . Follow the ."
}
] |
{
"category": "Runtime",
"file_name": "kata-containers-3.0-rust-runtime-installation-guide.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "sidebar_label: etcd sidebar_position: 3 slug: /etcdbestpractices By default, etcd sets a of 2GB, which can support storing metadata of two million files. Adjusted via the `--quota-backend-bytes` option, do not exceed 8GB. By default, etcd will keep the modification history of all data until the amount of data exceeds the space quota and the service cannot be provided. It is recommended to add the following options to enable : ```` --auto-compaction-mode revision --auto-compaction-retention 1000000 ```` When the amount of data reaches the quota and cannot be written, the capacity can be reduced by manual compaction (`etcdctl compact`) and defragmentation (`etcdctl defrag`). It is strongly recommended to perform these operations on the nodes of the etcd cluster one by one, otherwise the entire etcd cluster may become unavailable. etcd provides strongly consistent read and write access, and all operations involve multi-machine transactions and disk data persistence. It is recommended to use high-performance SSD for deployment, otherwise it will affect the performance of the file system. For more hardware configuration suggestions, please refer to . If the etcd cluster has power-down protection, or other measures that can ensure that all nodes will not go down at the same time, you can also disable data synchronization and disk storage through the `--unsafe-no-fsync` option to reduce access latency and improve files system performance. At this time, if two nodes are down at the same time, there is a risk of data loss. It is recommended to build an independent etcd service in the Kubernetes environment for JuiceFS to use, instead of using the default etcd service in the cluster, to avoid affecting the stability of the Kubernetes cluster when the file system access pressure is high."
}
] |
{
"category": "Runtime",
"file_name": "etcd_best_practices.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The Quality-of-Service (QoS) scheduler performs egress-traffic management by prioritizing the transmission of the packets of different type services and subscribers based on the Service Level Agreements (SLAs). The QoS scheduler can be enabled on one or more NIC output interfaces depending upon the requirement. The QoS scheduler supports a number of scheduling and shaping levels which construct hierarchical-tree. The first level in the hierarchy is port (i.e. the physical interface) that constitutes the root node of the tree. The subsequent level is subport which represents the group of the users/subscribers. The individual user/subscriber is represented by the pipe at the next level. Each user can have different traffic type based on the criteria of specific loss rate, jitter, and latency. These traffic types are represented at the traffic-class level in the form of different traffic- classes. The last level contains number of queues which are grouped together to host the packets of the specific class type traffic. The QoS scheduler implementation requires flow classification, enqueue and dequeue operations. The flow classification is mandatory stage for HQoS where incoming packets are classified by mapping the packet fields information to 5-tuple (HQoS subport, pipe, traffic class, queue within traffic class, and color) and storing that information in mbuf sched field. The enqueue operation uses this information to determine the queue for storing the packet, and at this stage, if the specific queue is full, QoS drops the packet. The dequeue operation consists of scheduling the packet based on its length and available credits, and handing over the scheduled packet to the output interface. For more information on QoS Scheduler, please refer DPDK Programmer's Guide- http://dpdk.org/doc/guides/progguide/qosframework.html Following illustrates the default HQoS configuration for each 10GbE output port: Single subport (subport 0): Subport rate set to 100% of port rate Each of the 4 traffic classes has rate set to 100% of port rate 4K pipes per subport 0 (pipes 0 .. 4095) with identical configuration: Pipe rate set to 1/4K of port rate Each of the 4 traffic classes has rate set to 100% of pipe rate Within each traffic class, the byte-level WRR weights for the 4 queues are set to 1:1:1:1 ``` port { rate 1250000000 / Assuming 10GbE port / frame_overhead 24 /* Overhead fields per Ethernet frame: 7B (Preamble) + 1B (Start of Frame Delimiter (SFD)) + 4B (Frame Check Sequence (FCS)) + 12B (Inter Frame Gap (IFG)) */ mtu 1522 / Assuming Ethernet/IPv4 pkt (FCS not included) / nsubportsper_port 1 / Number of subports per output interface / npipesper_subport 4096 / Number of pipes (users/subscribers) / queue_sizes 64 64 64 64 /* Packet queue size for each traffic class. All queues within the same pipe traffic class have the same size. Queues from different pipes serving the same traffic class have the same"
},
{
"data": "/ } ``` ``` subport 0 { tb_rate 1250000000 / Subport level token bucket rate (bytes per second) / tb_size 1000000 / Subport level token bucket size (bytes) / tc0_rate 1250000000 / Subport level token bucket rate for traffic class 0 (bytes per second) / tc1_rate 1250000000 / Subport level token bucket rate for traffic class 1 (bytes per second) / tc2_rate 1250000000 / Subport level token bucket rate for traffic class 2 (bytes per second) / tc3_rate 1250000000 / Subport level token bucket rate for traffic class 3 (bytes per second) / tc_period 10 / Time interval for refilling the token bucket associated with traffic class (Milliseconds) / pipe 0 4095 profile 0 / pipes (users/subscribers) configured with pipe profile 0 / } ``` ``` pipe_profile 0 { tb_rate 305175 / Pipe level token bucket rate (bytes per second) / tb_size 1000000 / Pipe level token bucket size (bytes) / tc0_rate 305175 / Pipe level token bucket rate for traffic class 0 (bytes per second) / tc1_rate 305175 / Pipe level token bucket rate for traffic class 1 (bytes per second) / tc2_rate 305175 / Pipe level token bucket rate for traffic class 2 (bytes per second) / tc3_rate 305175 / Pipe level token bucket rate for traffic class 3 (bytes per second) / tc_period 40 / Time interval for refilling the token bucket associated with traffic class at pipe level (Milliseconds) / tc3oversubscriptionweight 1 / Weight traffic class 3 oversubscription / tc0wrrweights 1 1 1 1 / Pipe queues WRR weights for traffic class 0 / tc1wrrweights 1 1 1 1 / Pipe queues WRR weights for traffic class 1 / tc2wrrweights 1 1 1 1 / Pipe queues WRR weights for traffic class 2 / tc3wrrweights 1 1 1 1 / Pipe queues WRR weights for traffic class 3 / } ``` ``` red { tc0wredmin 48 40 32 / Minimum threshold for traffic class 0 queue (min_th) in number of packets / tc0wredmax 64 64 64 / Maximum threshold for traffic class 0 queue (max_th) in number of packets / tc0wredinvprob 10 10 10 /* Inverse of packet marking probability for traffic class 0 queue (maxp = 1 / maxpinv) */ tc0wredweight 9 9 9 / Traffic Class 0 queue weight / tc1wredmin 48 40 32 / Minimum threshold for traffic class 1 queue (min_th) in number of packets / tc1wredmax 64 64 64 / Maximum threshold for traffic class 1 queue (max_th) in number of packets / tc1wredinvprob 10 10 10 /* Inverse of packet marking probability for traffic class 1 queue (maxp = 1 / maxpinv) */ tc1wredweight 9 9 9 / Traffic Class 1 queue weight / tc2wredmin 48 40 32 / Minimum threshold for traffic class 2 queue (min_th) in number of packets / tc2wredmax 64 64 64 / Maximum threshold for traffic class 2 queue (max_th) in number of packets / tc2wredinvprob 10 10 10 /* Inverse of packet marking probability for traffic class 2 queue (maxp = 1 / maxpinv) */ tc2wredweight 9 9 9 / Traffic Class 2 queue weight / tc3wredmin 48 40 32 / Minimum threshold for traffic class 3 queue (min_th) in number of packets / tc3wredmax 64 64 64 / Maximum threshold for traffic class 3 queue (max_th) in number of packets / tc3wredinvprob 10 10 10 /* Inverse of packet marking probability for traffic class 3 queue (maxp = 1 / maxpinv) */ tc3wredweight 9 9 9 / Traffic Class 3 queue weight / } ``` The Hierarchical Quality-of-Service (HQoS) scheduler object could be seen as part of the logical NIC output interface. To enable HQoS on specific output interface, vpp startup.conf file has to be configured accordingly. The output interface that requires HQoS, should have \"hqos\" parameter specified in dpdk section. 
Another optional parameter \"hqos-thread\" has been defined which can be used to associate the output interface with specific hqos thread. In cpu section of the config file, \"corelist-hqos-threads\" is introduced to assign logical cpu cores to run the HQoS threads. A HQoS thread can run multiple HQoS objects each associated with different output interfaces. All worker threads instead of writing packets to NIC TX queue directly, write the packets to a software"
},
{
"data": "The hqos_threads read the software queues, and enqueue the packets to HQoS objects, as well as dequeue packets from HQOS objects and write them to NIC output interfaces. The worker threads need to be able to send the packets to any output interface, therefore, each HQoS object associated with NIC output interface should have software queues equal to worker threads count. Following illustrates the sample startup configuration file with 4x worker threads feeding 2x hqos threads that handle each QoS scheduler for 1x output interface. ``` dpdk { socket-mem 16384,16384 dev 0000:02:00.0 { num-rx-queues 2 hqos } dev 0000:06:00.0 { num-rx-queues 2 hqos } num-mbufs 1000000 } cpu { main-core 0 corelist-workers 1, 2, 3, 4 corelist-hqos-threads 5, 6 } ``` Each QoS scheduler instance is initialised with default parameters required to configure hqos port, subport, pipe and queues. Some of the parameters can be re-configured in run-time through CLI commands. Following commands can be used to configure QoS scheduler parameters. The command below can be used to set the subport level parameters such as token bucket rate (bytes per seconds), token bucket size (bytes), traffic class rates (bytes per seconds) and token update period (Milliseconds). ``` set dpdk interface hqos subport <interface> subport <subport_id> [rate <n>] [bktsize <n>] [tc0 <n>] [tc1 <n>] [tc2 <n>] [tc3 <n>] [period <n>] ``` For setting the pipe profile, following command can be used. ``` set dpdk interface hqos pipe <interface> subport <subportid> pipe <pipeid> profile <profile_id> ``` To assign QoS scheduler instance to the specific thread, following command can be used. ``` set dpdk interface hqos placement <interface> thread <n> ``` The command below is used to set the packet fields required for classifying the incoming packet. As a result of classification process, packet field information will be mapped to 5 tuples (subport, pipe, traffic class, pipe, color) and stored in packet mbuf. ``` set dpdk interface hqos pktfield <interface> id subport|pipe|tc offset <n> mask <hex-mask> ``` The DSCP table entries used for identifying the traffic class and queue can be set using the command below; ``` set dpdk interface hqos tctbl <interface> entry <mapval> tc <tcid> queue <queue_id> ``` The QoS Scheduler configuration can displayed using the command below. ``` vpp# show dpdk interface hqos TenGigabitEthernet2/0/0 Thread: Input SWQ size = 4096 packets Enqueue burst size = 256 packets Dequeue burst size = 220 packets Packet field 0: slab position = 0, slab bitmask = 0x0000000000000000 (subport) Packet field 1: slab position = 40, slab bitmask = 0x0000000fff000000 (pipe) Packet field 2: slab position = 8, slab bitmask = 0x00000000000000fc (tc) Packet field 2 tc translation table: ([Mapped Value Range]: tc/queue tc/queue"
},
{
"data": "Port: Rate = 1250000000 bytes/second MTU = 1514 bytes Frame overhead = 24 bytes Number of subports = 1 Number of pipes per subport = 4096 Packet queue size: TC0 = 64, TC1 = 64, TC2 = 64, TC3 = 64 packets Number of pipe profiles = 1 Subport 0: Rate = 120000000 bytes/second Token bucket size = 1000000 bytes Traffic class rate: TC0 = 120000000, TC1 = 120000000, TC2 = 120000000, TC3 = 120000000 bytes/second TC period = 10 milliseconds Pipe profile 0: Rate = 305175 bytes/second Token bucket size = 1000000 bytes Traffic class rate: TC0 = 305175, TC1 = 305175, TC2 = 305175, TC3 = 305175 bytes/second TC period = 40 milliseconds TC0 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 TC1 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 TC2 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 TC3 WRR weights: Q0 = 1, Q1 = 1, Q2 = 1, Q3 = 1 ``` The QoS Scheduler placement over the logical cpu cores can be displayed using below command. ``` vpp# show dpdk interface hqos placement Thread 5 (vpphqos-threads0 at lcore 5): TenGigabitEthernet2/0/0 queue 0 Thread 6 (vpphqos-threads1 at lcore 6): TenGigabitEthernet4/0/1 queue 0 ``` This section explains the available binary APIs for configuring QoS scheduler parameters in run-time. The following API can be used to set the pipe profile of a pipe that belongs to a given subport: ``` swinterfacesetdpdkhqospipe rx <intfc> | swif_index <id> subport <subport-id> pipe <pipe-id> profile <profile-id> ``` The data structures used for set the pipe profile parameter are as follows; ``` / \\\\brief DPDK interface HQoS pipe profile set request @param client_index - opaque cookie to identify the sender @param context - sender context, to match reply w/ request @param swifindex - the interface @param subport - subport ID @param pipe - pipe ID within its subport @param profile - pipe profile ID */ define swinterfacesetdpdkhqos_pipe { u32 client_index; u32 context; u32 swifindex; u32 subport; u32 pipe; u32 profile; }; / \\\\brief DPDK interface HQoS pipe profile set reply @param context - sender context, to match reply w/ request @param retval - request return code */ define swinterfacesetdpdkhqospipereply { u32 context; i32 retval; }; ``` The following API can be used to set the subport level parameters, for example- token bucket rate (bytes per seconds), token bucket size (bytes), traffic class rate (bytes per seconds) and tokens update period. ``` swinterfacesetdpdkhqossubport rx <intfc> | swif_index <id> subport <subport-id> [rate <n>] [bktsize <n>] [tc0 <n>] [tc1 <n>] [tc2 <n>] [tc3 <n>] [period <n>] ``` The data structures used for set the subport level parameter are as follows; ``` / \\\\brief DPDK interface HQoS subport parameters set request @param client_index - opaque cookie to identify the sender @param context - sender context, to match reply w/ request @param swifindex - the interface @param subport - subport ID @param tb_rate - subport token bucket rate (measured in bytes/second) @param tb_size - subport token bucket size (measured in credits) @param tc_rate - subport traffic class 0 .. 
3 rates (measured in bytes/second) @param tc_period - enforcement period for rates (measured in milliseconds) */ define swinterfacesetdpdkhqos_subport { u32 client_index; u32 context; u32 swifindex; u32 subport; u32 tb_rate; u32 tb_size; u32 tc_rate[4]; u32 tc_period; }; / \\\\brief DPDK interface HQoS subport parameters set reply @param context - sender context, to match reply w/ request @param retval - request return code */ define swinterfacesetdpdkhqossubportreply { u32 context; i32 retval; }; ``` The following API can be used set the"
}
] |
{
"category": "Runtime",
"file_name": "qos_doc.md",
"project_name": "FD.io",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Show load information ``` cilium-dbg loadinfo [flags] ``` ``` -h, --help help for loadinfo ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_loadinfo.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Incus supports intercepting some specific system calls from unprivileged containers. If they're considered to be safe, it executes them with elevated privileges on the host. Doing so comes with a performance impact for the syscall in question and will cause some work for Incus to evaluate the request and if allowed, process it with elevated privileges. Enabling of specific system call interception options is done on a per-container basis through container configuration options. The `mknod` and `mknodat` system calls can be used to create a variety of special files. Most commonly inside containers, they may be called to create block or character devices. Creating such devices isn't allowed in unprivileged containers as this is a very easy way to escalate privileges by allowing direct write access to resources like disks or memory. But there are files which are safe to create. For those, intercepting this syscall may unblock some specific workloads and allow them to run inside an unprivileged containers. The devices which are currently allowed are: overlayfs whiteout (char 0:0) `/dev/console` (char 5:1) `/dev/full` (char 1:7) `/dev/null` (char 1:3) `/dev/random` (char 1:8) `/dev/tty` (char 5:0) `/dev/urandom` (char 1:9) `/dev/zero` (char 1:5) All file types other than character devices are currently sent to the kernel as usual, so enabling this feature doesn't change their behavior at all. This can be enabled by setting `security.syscalls.intercept.mknod` to `true`. The `bpf` system call is used to manage eBPF programs in the kernel. Those can be attached to a variety of kernel subsystems. In general, loading of eBPF programs that are not trusted can be problematic as it can facilitate timing based attacks. Incus' eBPF support is currently restricted to programs managing devices cgroup entries. To enable it, you need to set both `security.syscalls.intercept.bpf` and `security.syscalls.intercept.bpf.devices` to true. The `mount` system call allows for mounting both physical and virtual file systems. By default, unprivileged containers are restricted by the kernel to just a handful of virtual and network file systems. To allow mounting physical file systems, system call interception can be used. Incus offers a variety of options to handle this. `security.syscalls.intercept.mount` is used to control the entire feature and needs to be turned on for any of the other options to work. `security.syscalls.intercept.mount.allowed` allows specifying a list of file systems which can be directly mounted in the"
},
{
"data": "This is the most dangerous option as it allows the user to feed data that is not trusted at the kernel. This can easily be used to crash the host system or to attack it. It should only ever be used in trusted environments. `security.syscalls.intercept.mount.shift` can be set on top of that so the resulting mount is shifted to the UID/GID map used by the container. This is needed to avoid everything showing up as `nobody`/`nogroup` inside of unprivileged containers. The much safer alternative to those is `security.syscalls.intercept.mount.fuse` which can be set to pairs of file-system name and FUSE handler. When this is set, an attempt at mounting one of the configured file systems will be transparently redirected to instead calling the FUSE equivalent of that file system. As this is all running as the caller, it avoids the entire issue around the kernel attack surface and so is generally considered to be safe, though you should keep in mind that any kind of system call interception makes for an easy way to overload the host system. The `sched_setscheduler` system call is used to manage process priority. Granting this may allow a user to significantly increase the priority of their processes, potentially taking a lot of system resources. It also allows access to schedulers like `SCHED_FIFO` which are generally considered to be flawed and can significantly impact overall system stability. This is why under normal conditions, only the real root user (or global `CAPSYSNICE`) would allow its use. The `setxattr` system call is used to set extended attributes on files. The attributes which are handled by this currently are: `trusted.overlay.opaque` (overlayfs directory whiteout) Note that because the mediation must happen on a number of character strings, there is no easy way at present to only intercept the few attributes we care about. As we only allow the attributes above, this may result in breakage for other attributes that would have been previously allowed by the kernel. This can be enabled by setting `security.syscalls.intercept.setxattr` to `true`. The `sysinfo` system call is used by some distributions instead of `/proc/` entries to report on resource usage. In order to provide resource usage information specific to the container, rather than the whole system, this syscall interception mode uses cgroup-based resource usage information to fill in the system call response."
}
] |
{
"category": "Runtime",
"file_name": "syscall-interception.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List fqdn cache contents ``` cilium-dbg fqdn cache list [flags] ``` ``` -e, --endpoint string List cache entries for a specific endpoint id -h, --help help for list -p, --matchpattern string List cache entries with FQDN that match matchpattern -o, --output string json| yaml| jsonpath='{}' -s, --source string List cache entries from a specific source (lookup, connection) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage fqdn proxy cache"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_fqdn_cache_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "CNI Genie is an add-on to open-source project and is designed to provide the following features: Multiple CNI plugins are available to users in runtime. The user can offer any of the available CNI plugins to containers upon creating them User-story: based on performance requirements, application requirements, workload placement requirements, the user could be interested to use different CNI plugins for different application groups Different CNI plugins are different in terms of need for port-mapping, NAT, tunneling, interrupting host ports/interfaces Multiple IP addresses can be injected into a single container making the container reachable across multiple networks User-story: in a serverless platform the Request Dispatcher container that receives requests from customers of all different tenants needs to be able to pass the request to the right tenant. As a result, is should be reachable on the networks of all tenants User-story: many Telecom vendors are adopting container technology. For a router/firewall application to run in a container, it needs to have multiple interfaces Upon creating a pod, the user can manually select the logical network, or multiple logical networks, that the pod should be added to If upon creating a pod no logical network is included in the yaml configuration, CNI Genie will automatically select one of the available CNI plugins CNI Genie maintains a list of KPIs for all available CNI plugins. Examples of such KPIs are occupancy rate, number of subnets, response times CNI Genie stores records of requests made to each CNI plugin for logging and auditing purposes and it can generate reports upon request Network policy Network access control Note: CNI Genie is NOT a routing solution! It gets IP addresses from various CNSs"
}
] |
{
"category": "Runtime",
"file_name": "INTRODUCTION.md",
"project_name": "CNI-Genie",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Inspect StateDB ``` -h, --help help for statedb ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Show contents of table \"bandwidth-qdiscs\" - Show contents of table \"devices\" - Dump StateDB contents as JSON - Show contents of table \"health\" - Show contents of table \"ipsets\" - Show contents of table \"l2-announce\" - Show contents of table \"node-addresses\" - Show contents of table \"routes\" - Show contents of table \"sysctl\""
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_statedb.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The results of metadata performance testing through are as follows: tool settings ``` bash TEST_PATH=/mnt/cfs/mdtest # mount point of CubeFS volume for CLIENTS in 1 2 4 8 # number of clients do mpirun --allow-run-as-root -np $CLIENTS --hostfile hfile01 mdtest -n 5000 -u -z 2 -i 3 -d $TEST_PATH; done ``` | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 769.908 | 3630.337 | 12777.619 | 20629.592 | | 2 Clients | 1713.038 | 7259.282 | 24064.052 | 36769.599 | | 4 Clients | 3723.993 | 14002.366 | 42976.837 | 61513.648 | | 8 Clients | 6681.783 | 23946.143 | 64191.38 | 93729.222 | | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 853.995 | 4012.404 | 15238.647 | 42028.845 | | 2 Clients | 1906.261 | 7967.688 | 28410.308 | 62338.506 | | 4 Clients | 3942.590 | 15601.799 | 46741.945 | 87411.655 | | 8 Clients | 7072.080 | 25183.092 | 69054.923 | 79459.091 | | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|-|-|--|--| | 1 Client | 462527.445 | 1736760.332 | 6194206.768 | 15509755.836 | | 2 Clients | 885454.335 | 3414538.352 | 12263175.104 | 24951003.498 | | 4 Clients | 1727030.782 | 6874284.765 | 24371306.250 | 10412238.894 | | 8 Clients | 1897588.214 | 7499219.744 | 25927923.646 | 4264896.279 | | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 706.453 | 3306.270 | 9647.176 | 10879.290 | | 2 Clients | 1601.891 | 6601.522 | 19181.965 | 20756.693 | | 4 Clients | 3369.911 | 13165.056 | 36158.061 | 41817.753 | | 8 Clients | 6312.911 | 22560.687 | 55801.062 | 76157.675 | | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 1220.322 | 4606.340 | 11715.457 | 23653.043 | | 2 Clients | 2360.133 | 8971.361 | 22097.206 | 41579.985 | | 4 Clients | 4547.021 | 17242.900 | 34965.471 | 61726.902 | | 8 Clients | 8809.381 | 20379.839 | 55363.389 | 71101.736 | | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 734.383 | 379.403 | 146.070 | 37.811 | | 2 Clients | 648.938 | 432.150 | 148.921 | 30.699 | | 4 Clients | 756.639 | 394.733 | 123.722 | 23.998 | | 8 Clients | 607.547 | 263.439 | 124.911 | 7.510 | | | 1 Process | 4 Processes | 16 Processes | 64 Processes | |--|--|-|--|--| | 1 Client | 552.706 | 83.197 | 23.823 | 5.747 | | 2 Clients | 448.557 | 84.633 | 24.037 | 5.137 | | 4 Clients | 453.520 | 85.636 | 23.490 | 5.233 | | 8 Clients | 429.920 | 83.449 | 23.777 | 1.667 |"
}
] |
{
"category": "Runtime",
"file_name": "meta.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Once you have pushed configuration JSON to `etcd`, you can start `flanneld`. If you published your config at the default location, you can start `flanneld` with no arguments. Flannel will acquire a subnet lease, configure its routes based on other leases in the overlay network and start routing packets. It will also monitor `etcd` for new members of the network and adjust the routes accordingly. After flannel has acquired the subnet and configured backend, it will write out an environment variable file (`/run/flannel/subnet.env` by default) with subnet address and MTU that it supports. For more information on checking the IP range for a specific host, see . Flanneld does not support running multiple networks from a single daemon (it did previously as an experimental feature). However, it does support running multiple daemons on the same host with different configurations. The `-subnet-file` and `-etcd-prefix` options should be used to \"namespace\" the different daemons. For example ``` flanneld -subnet-file /vxlan.env -etcd-prefix=/vxlan/network ``` Download a `flannel` binary. ```bash wget https://github.com/flannel-io/flannel/releases/latest/download/flanneld-amd64 && chmod +x flanneld-amd64 ``` Run the binary. ```bash sudo ./flanneld-amd64 # it will hang waiting to talk to etcd ``` Run `etcd`. Follow the instructions on the , or, if you have docker just do ```bash docker run --rm --net=host quay.io/coreos/etcd ``` Observe that `flannel` can now talk to `etcd`, but can't find any config. So write some config. Either get `etcdctl` from the , or use `docker` again. ```bash docker run --rm -e ETCDCTL_API=3 --net=host quay.io/coreos/etcd etcdctl put /coreos.com/network/config '{ \"Network\": \"10.5.0.0/16\", \"Backend\": {\"Type\": \"vxlan\"}}' ``` Now `flannel` is running, it has created a VXLAN tunnel device on the host and written a subnet config file ```bash cat /run/flannel/subnet.env FLANNEL_NETWORK=10.5.0.0/16 FLANNEL_SUBNET=10.5.72.1/24 FLANNEL_MTU=1450 FLANNEL_IPMASQ=false ``` Each time flannel is restarted, it will attempt to access the `FLANNEL_SUBNET` value written in this subnet config file. This prevents each host from needing to update its network information in case a host is unable to renew its lease before it expires (e.g. a host was restarting during the time flannel would normally renew its lease). The `FLANNELSUBNET` value is also only used if it is valid for the etcd network config. For instance, a `FLANNELSUBNET` value of `10.5.72.1/24` will not be used if the etcd network value is set to `10.6.0.0/16` since it is not within that network range. Subnet config value is `10.5.72.1/24` ```bash cat /run/flannel/subnet.env FLANNEL_NETWORK=10.5.0.0/16 FLANNEL_SUBNET=10.5.72.1/24 FLANNEL_MTU=1450 FLANNEL_IPMASQ=false ``` etcd network value is `10.6.0.0/16`. Since `10.5.72.1/24` is outside of this network, a new lease will be allocated. ```bash export ETCDCTL_API=3 etcdctl get"
},
{
"data": "{ \"Network\": \"10.6.0.0/16\", \"Backend\": {\"Type\": \"vxlan\"}} ``` Flannel uses the interface selected to register itself in the datastore. The important options are: `-iface string`: Interface to use (IP or name) for inter-host communication. `-public-ip string`: IP accessible by other nodes for inter-host communication. The combination of the defaults, the autodetection and these two flags ultimately result in the following being determined: An interface (used for MTU detection and selecting the VTEP MAC in VXLAN). An IP address for that interface. A public IP that can be used for reaching this node. In `host-gw` it should match the interface address. Please be aware of the following flannel runtime limitations. The datastore type cannot be changed. The backend type cannot be changed. (It can be changed if you stop all workloads and restart all flannel daemons.) You can change the subnetlen/subnetmin/subnetmax with a daemon restart. (Subnets can be changed with caution. If pods are already using IP addresses outside the new range they will stop working.) The clusterwide network range cannot be changed (without downtime). Docker daemon accepts `--bip` argument to configure the subnet of the docker0 bridge. It also accepts `--mtu` to set the MTU for docker0 and veth devices that it will be creating. Because flannel writes out the acquired subnet and MTU values into a file, the script starting Docker can source in the values and pass them to Docker daemon: ```bash source /run/flannel/subnet.env docker daemon --bip=${FLANNELSUBNET} --mtu=${FLANNELMTU} & ``` Systemd users can use `EnvironmentFile` directive in the `.service` file to pull in `/run/flannel/subnet.env` If you want to leave default docker0 network as it is and instead create a new network that will be using flannel you do so like this: ```bash source /run/flannel/subnet.env docker network create --attachable=true --subnet=${FLANNELSUBNET} -o \"com.docker.network.driver.mtu\"=${FLANNELMTU} flannel ``` Vagrant has a tendency to give the default interface (one with the default route) a non-unique IP (often 10.0.2.15). This causes flannel to register multiple nodes with the same IP. To work around this issue, use `--iface` option to specify the interface that has a unique IP. When running with a backend other than `udp`, the kernel is providing the data path with `flanneld` acting as the control plane. As such, `flanneld` can be restarted (even to do an upgrade) without disturbing existing flows. However in the case of `vxlan` backend, this needs to be done within a few seconds as ARP entries can start to timeout requiring the flannel daemon to refresh them. Also, to avoid interruptions during restart, the configuration must not be changed (e.g. VNI, --iface values)."
}
] |
{
"category": "Runtime",
"file_name": "running.md",
"project_name": "Flannel",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(network-ovn)= <!-- Include start OVN intro --> {abbr}`OVN (Open Virtual Network)` is a software-defined networking system that supports virtual network abstraction. You can use it to build your own private cloud. See for more information. <!-- Include end OVN intro --> The `ovn` network type allows to create logical networks using the OVN {abbr}`SDN (software-defined networking)`. This kind of network can be useful for labs and multi-tenant environments where the same logical subnets are used in multiple discrete networks. An Incus OVN network can be connected to an existing managed {ref}`network-bridge` or {ref}`network-physical` to gain access to the wider network. By default, all connections from the OVN logical networks are NATed to an IP allocated from the uplink network. See {ref}`network-ovn-setup` for basic instructions for setting up an OVN network. % Include content from ```{include} network_bridge.md :start-after: <!-- Include start MAC identifier note --> :end-before: <!-- Include end MAC identifier note --> ``` (network-ovn-options)= The following configuration key namespaces are currently supported for the `ovn` network type: `bridge` (L2 interface configuration) `dns` (DNS server and resolution configuration) `ipv4` (L3 IPv4 configuration) `ipv6` (L3 IPv6 configuration) `security` (network ACL configuration) `user` (free-form key/value for user metadata) ```{note} {{noteipaddresses_CIDR}} ``` The following configuration options are available for the `ovn` network type: Key | Type | Condition | Default | Description :-- | :-- | :-- | :-- | :-- `network` | string | - | - | Uplink network to use for external network access `bridge.hwaddr` | string | - | - | MAC address for the bridge `bridge.mtu` | integer | - | `1442` | Bridge MTU (default allows host to host Geneve tunnels) `dns.domain` | string | - | `incus` | Domain to advertise to DHCP clients and use for DNS resolution `dns.search` | string | - | - | Full comma-separated domain search list, defaulting to `dns.domain` value `dns.zone.forward` | string | - | - | Comma-separated list of DNS zone names for forward DNS records `dns.zone.reverse.ipv4` | string | - | - | DNS zone name for IPv4 reverse DNS records `dns.zone.reverse.ipv6` | string | - | - | DNS zone name for IPv6 reverse DNS records"
},
{
"data": "| string | standard mode | - (initial value on creation: `auto`) | IPv4 address for the bridge (use `none` to turn off IPv4 or `auto` to generate a new random unused subnet) (CIDR) `ipv4.dhcp` | bool | IPv4 address | `true` | Whether to allocate addresses using DHCP `ipv4.l3only` | bool | IPv4 address | `false` | Whether to enable layer 3 only mode. `ipv4.nat` | bool | IPv4 address | `false` (initial value on creation if `ipv4.address` is set to `auto`: `true`) | Whether to NAT `ipv4.nat.address` | string | IPv4 address | - | The source address used for outbound traffic from the network (requires uplink `ovn.ingress_mode=routed`) `ipv6.address` | string | standard mode | - (initial value on creation: `auto`) | IPv6 address for the bridge (use `none` to turn off IPv6 or `auto` to generate a new random unused subnet) (CIDR) `ipv6.dhcp` | bool | IPv6 address | `true` | Whether to provide additional network configuration over DHCP `ipv6.dhcp.stateful` | bool | IPv6 DHCP | `false` | Whether to allocate addresses using DHCP `ipv6.l3only` | bool | IPv6 DHCP stateful | `false` | Whether to enable layer 3 only mode. `ipv6.nat` | bool | IPv6 address | `false` (initial value on creation if `ipv6.address` is set to `auto`: `true`) | Whether to NAT `ipv6.nat.address` | string | IPv6 address | - | The source address used for outbound traffic from the network (requires uplink `ovn.ingress_mode=routed`) `security.acls` | string | - | - | Comma-separated list of Network ACLs to apply to NICs connected to this network `security.acls.default.egress.action`| string | `security.acls` | `reject` | Action to use for egress traffic that doesn't match any ACL rule `security.acls.default.egress.logged`| bool | `security.acls` | `false` | Whether to log egress traffic that doesn't match any ACL rule `security.acls.default.ingress.action` | string | `security.acls` | `reject` | Action to use for ingress traffic that doesn't match any ACL rule `security.acls.default.ingress.logged` | bool | `security.acls` | `false` | Whether to log ingress traffic that doesn't match any ACL rule `user.*` | string | - | - | User-provided free-form key/value pairs (network-ovn-features)= The following features are supported for the `ovn` network type: {ref}`network-acls` {ref}`network-forwards` {ref}`network-integrations` {ref}`network-zones` {ref}`network-ovn-peers` {ref}`network-load-balancers` ```{toctree} :maxdepth: 1 :hidden: Set up OVN </howto/networkovnsetup> Create routing relationships </howto/networkovnpeers> Configure network load balancers </howto/networkloadbalancers> ```"
}
] |
{
"category": "Runtime",
"file_name": "network_ovn.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage security policies ``` -h, --help help for policy ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Delete policy rules - Display policy node information - Import security policy in JSON format - Display cached information about selectors - Validate a policy - Wait for all endpoints to have updated to a given policy revision"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_policy.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Every project has certain philosophical conundrums, that its contributors have to grapple with. These may sound very straightforward at onset, but you know they go deeper when the same question keeps coming back. Following are some such conundrums and our current stand on it. As most philosophical answers start, the answer is \"it depends\". But here are some general guidelines. If the functionality is a straightforward extension of current functionality, like adding new API to or support for new CO, use the existing repository. If the functionality is experimental/poc/exploratory and you would like to seek feedback from the OpenEBS community, you can put it under . If the functionality has been accepted to make it into a release, but requires some work to make it part of existing repository like - , or , put it into a new repository, till it gets cooked. Typical examples are - when the functionality has to be packaged into its own container or runs as an independent service etc,. Follow the principles listed by There are several different ways in which Go projects are structured out there - for instance from (using a single repository for multiple binaries) to (with single binary - that acts as both cli, client, and server), and (with somewhere in between). With these choices, it also becomes difficult to pick a path forward, especially with the language that is strongly opinionated. After having tried a few ways, the current stance stands as follows: pkg will contain the utility packages (homegrown or wrappers around other libraries - like log helpers, network helpers etc.,) types that are defined as high level structs without dependency on any other types. These can be used for interacting with other systems (outside this repository) or between apps in this repository. internal will contain the first class citizens of the product - or what were called the main objects on which CRUDs are performed. cmd will contain the binaries (main package) or multiple applications, which will delegate the heavy lifting to their corresponding apps. The pkg and types are intentionally kept at the top level, in case they need to be moved their own repositories in the future. The order of the listing also roughly determines the dependencies or the import rules. For example, packages in cmd can import anything from above packages but not vice-versa."
}
] |
{
"category": "Runtime",
"file_name": "code-structuring.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> List all metrics for the operator ``` cilium-operator-azure metrics list [flags] ``` ``` -h, --help help for list -p, --match-pattern string Show only metrics whose names match matchpattern -o, --output string json| yaml| jsonpath='{}' -s, --server-address string Address of the operator API server (default \"localhost:9234\") ``` - Access metric status of the operator"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-azure_metrics_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Monitor Health Failure in a distributed system is to be expected. Ceph was designed from the ground up to deal with the failures of a distributed system. At the next layer, Rook was designed from the ground up to automate recovery of Ceph components that traditionally required admin intervention. Monitor health is the most critical piece of the equation that Rook actively monitors. If they are not in a good state, the operator will take action to restore their health and keep your cluster protected from disaster. The Ceph monitors (mons) are the brains of the distributed cluster. They control all of the metadata that is necessary to store and retrieve your data as well as keep it safe. If the monitors are not in a healthy state you will risk losing all the data in your system. Each monitor in a Ceph cluster has a static identity. Every component in the cluster is aware of the identity, and that identity must be immutable. The identity of a mon is its IP address. To have an immutable IP address in Kubernetes, Rook creates a K8s service for each monitor. The clusterIP of the service will act as the stable identity. When a monitor pod starts, it will bind to its podIP and it will expect communication to be via its service IP address. Multiple mons work together to provide redundancy by each keeping a copy of the metadata. A variation of the distributed algorithm Paxos is used to establish consensus about the state of the cluster. Paxos requires a super-majority of mons to be running in order to establish quorum and perform operations in the cluster. If the majority of mons are not running, quorum is lost and nothing can be done in the cluster. Most commonly a cluster will have three mons. This would mean that one mon could go down and allow the cluster to remain healthy. You would still have 2/3 mons running to give you consensus in the cluster for any operation. For highest availability, an odd number of mons is required. Fifty percent of mons will not be sufficient to maintain quorum. If you had two mons and one of them went down, you would have 1/2 of quorum. Since that is not a super-majority, the cluster would have to wait until the second mon is up again. Rook allows an even number of mons for higher"
},
{
"data": "See the if quorum is lost and to recover mon quorum from a single mon. The number of mons to create in a cluster depends on your tolerance for losing a node. If you have 1 mon zero nodes can be lost to maintain quorum. With 3 mons one node can be lost, and with 5 mons two nodes can be lost. Because the Rook operator will automatically start a new monitor if one dies, you typically only need three mons. The more mons you have, the more overhead there will be to make a change to the cluster, which could become a performance issue in a large cluster. Whatever the reason that a mon may fail (power failure, software crash, software hang, etc), there are several layers of mitigation in place to help recover the mon. It is always better to bring an existing mon back up than to failover to bring up a new mon. The Rook operator creates a mon with a Deployment to ensure that the mon pod will always be restarted if it fails. If a mon pod stops for any reason, Kubernetes will automatically start the pod up again. In order for a mon to support a pod/node restart, the mon metadata is persisted to disk, either under the `dataDirHostPath` specified in the CephCluster CR, or in the volume defined by the `volumeClaimTemplate` in the CephCluster CR. This will allow the mon to start back up with its existing metadata and continue where it left off even if the pod had to be re-created. Without this persistence, the mon cannot restart. If a mon is unhealthy and the K8s pod restart or liveness probe are not sufficient to bring a mon back up, the operator will make the decision to terminate the unhealthy monitor deployment and bring up a new monitor with a new identity. This is an operation that must be done while mon quorum is maintained by other mons in the cluster. The operator checks for mon health every 45 seconds. If a monitor is down, the operator will wait 10 minutes before failing over the unhealthy mon. These two intervals can be configured as parameters to the CephCluster CR (see below). If the intervals are too short, it could be unhealthy if the mons are failed over too"
},
{
"data": "If the intervals are too long, the cluster could be at risk of losing quorum if a new monitor is not brought up before another mon fails. ```yaml healthCheck: daemonHealth: mon: disabled: false interval: 45s timeout: 10m ``` If you want to force a mon to failover for testing or other purposes, you can scale down the mon deployment to 0, then wait for the timeout. Note that the operator may scale up the mon again automatically if the operator is restarted or if a full reconcile is triggered, such as when the CephCluster CR is updated. If the mon pod is in pending state and couldn't be assigned to a node (say, due to node drain), then the operator will wait for the timeout again before the mon failover. So the timeout waiting for the mon failover will be doubled in this case. To disable monitor automatic failover, the `timeout` can be set to `0`, if the monitor goes out of quorum Rook will never fail it over onto another node. This is especially useful for planned maintenance. Rook will create mons with pod names such as mon-a, mon-b, and mon-c. Let's say mon-b had an issue and the pod failed. ```console $ kubectl -n rook-ceph get pod -l app=rook-ceph-mon NAME READY STATUS RESTARTS AGE rook-ceph-mon-a-74dc96545-ch5ns 1/1 Running 0 9m rook-ceph-mon-b-6b9d895c4c-bcl2h 1/1 Error 2 9m rook-ceph-mon-c-7d6df6d65c-5cjwl 1/1 Running 0 8m ``` After a failover, you will see the unhealthy mon removed and a new mon added such as mon-d. A fully healthy mon quorum is now running again. ```console $ kubectl -n rook-ceph get pod -l app=rook-ceph-mon NAME READY STATUS RESTARTS AGE rook-ceph-mon-a-74dc96545-ch5ns 1/1 Running 0 19m rook-ceph-mon-c-7d6df6d65c-5cjwl 1/1 Running 0 18m rook-ceph-mon-d-9e7ea7e76d-4bhxm 1/1 Running 0 20s ``` From the toolbox we can verify the status of the health mon quorum: ```console $ ceph -s cluster: id: 35179270-8a39-4e08-a352-a10c52bb04ff health: HEALTH_OK services: mon: 3 daemons, quorum a,b,d (age 2m) mgr: a(active, since 12m) osd: 3 osds: 3 up (since 10m), 3 in (since 10m) [...] ``` Rook will automatically fail over the mons when the following settings are updated in the CephCluster CR: `spec.network.hostNetwork`: When enabled or disabled, Rook fails over all monitors, configuring them to enable or disable host networking. `spec.network.Provider` : When updated from being empty to \"host\", Rook fails over all monitors, configuring them to enable or disable host networking. `spec.network.multiClusterService`: When enabled or disabled, Rook fails over all monitors, configuring them to start (or stop) using service IPs compatible with the multi-cluster service."
}
] |
{
"category": "Runtime",
"file_name": "ceph-mon-health.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This guide is for running kube-router as the network provider for on premise and/or bare metal clusters outside of a cloud provider's environment. It assumes the initial cluster is bootstrapped and a networking provider needs configuration. All pod networking are allocated by kube-controller-manager. Kube-router provides service/pod networking, a network policy firewall, and a high performance based service proxy. The network policy firewall and service proxy are both optional but recommended. If you choose to run kube-router as daemonset, then both kube-apiserver and kubelet must be run with `--allow-privileged=true` option (see our ) Ensure your is configured to point its CNI configuration directory to `/etc/cni/net.d`. This is the default location for both `containerd` and `cri-o`, but can be set specifically if needed: Here is what the default containerd CNI plugin configuration looks like as of the writing of this document. The default containerd configuration can be retrieved using: ```sh containerd config default ``` ```toml [plugins] [plugins.\"io.containerd.grpc.v1.cri\".cni] bin_dir = \"/opt/cni/bin\" conf_dir = \"/etc/cni/net.d\" conf_template = \"\" ip_pref = \"\" maxconfnum = 1 ``` cri-o CRI configuration can be referenced via their If a previous CNI provider (e.g. weave-net, calico, or flannel) was used, remove old configurations from `/etc/cni/net.d` on each kubelet. If you choose to use kube-router for pod-to-pod network connectivity then needs to be configured to allocate pod CIDRs by passing the `--allocate-node-cidrs=true` flag and providing a `cluster-cidr` (e.g. by passing `--cluster-cidr=10.32.0.0/12`) For example: ```sh --allocate-node-cidrs=true --cluster-cidr=10.32.0.0/12 --service-cluster-ip-range=10.50.0.0/22 ``` This runs kube-router with pod/service networking, the network policy firewall, and service proxy to replace kube-proxy. The example command uses `10.32.0.0/12` as the pod CIDR address range and `https://cluster01.int.domain.com:6443` as the address. Please change these to suit your cluster. ```sh CLUSTERCIDR=10.32.0.0/12 \\ APISERVER=https://cluster01.int.domain.com:6443 \\ sh -c 'curl -s https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/generic-kuberouter-all-features.yaml | \\ sed -e \"s;%APISERVER%;$APISERVER;g\" -e \"s;%CLUSTERCIDR%;$CLUSTERCIDR;g\"' | \\ kubectl apply -f - ``` If was ever deployed to the cluster, then you need to remove it when running kube-router in this capacity or they will conflict with each other. Remove any previously running kube-proxy and all iptables rules it created. Start by deleting the kube-proxy daemonset: ```sh kubectl -n kube-system delete ds kube-proxy ``` Any iptables rules kube-proxy left around will also need to be cleaned up. 
This command might differ based on how kube-proxy was set up or configured: To clean up kube-proxy, we can do this with Docker, containerd, or CRI-O: ```sh docker run --privileged -v /lib/modules:/lib/modules --net=host registry.k8s.io/kube-proxy-amd64:v1.28.2 kube-proxy --cleanup ``` ```sh ctr images pull k8s.gcr.io/kube-proxy-amd64:v1.28.2 ctr run --rm --privileged --net-host --mount type=bind,src=/lib/modules,dst=/lib/modules,options=rbind:ro \\ registry.k8s.io/kube-proxy-amd64:v1.28.2 kube-proxy-cleanup kube-proxy --cleanup ``` ```sh crictl pull registry.k8s.io/kube-proxy-amd64:v1.28.2 crictl run --rm --privileged --net-host --mount type=bind,src=/lib/modules,dst=/lib/modules,options=rbind:ro registry.k8s.io/kube-proxy-amd64:v1.28.2 kube-proxy-cleanup kube-proxy --cleanup ``` This runs kube-router with pod/service networking and the network policy firewall. The service proxy is disabled. ```sh kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/generic-kuberouter.yaml ``` In this mode, kube-router relies on (or some other network service provider) to provide service networking. When the service proxy is disabled, kube-router will use to access the API server through its cluster IP. Service networking must therefore be set up before deploying kube-router. kube-router supports setting the log level via the command line flags -v or --v. To get maximal debug output from kube-router, start with `--v=3`"
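As a rough post-deployment check (a sketch, not part of the upstream guide), you can confirm that the kube-router DaemonSet is healthy and that a CNI configuration was written on the nodes; the `k8s-app=kube-router` label assumes the upstream manifests referenced above.

```shell
# Verify the DaemonSet rolled out and inspect recent logs.
kubectl -n kube-system get ds kube-router
kubectl -n kube-system logs -l k8s-app=kube-router --tail=20

# On any node, kube-router should have written a CNI config into the default directory.
ls /etc/cni/net.d/
```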
}
] |
{
"category": "Runtime",
"file_name": "generic.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Use JuiceFS on Hadoop Ecosystem sidebar_position: 3 slug: /hadoopjavasdk import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; JuiceFS provides by Hadoop Java SDK. Various applications in the Hadoop ecosystem can smoothly use JuiceFS to store data without changing the code. JuiceFS Hadoop Java SDK is compatible with Hadoop 2.x and Hadoop 3.x. As well as variety of components in Hadoop ecosystem. JuiceFS uses local \"User/UID\" and \"Group/GID\" mappings by default, and when used in a distributed environment, to avoid permission issues, please refer to synchronizes the \"User/UID\" and \"Group/GID\" that needs to be used to all Hadoop nodes. It is also possible to define a global user and group file to make all nodes in the cluster share the permission configuration. Please see for related configurations. You should first create at least one JuiceFS file system to provide storage for components related to the Hadoop ecosystem through the JuiceFS Java SDK. When deploying the Java SDK, specify the metadata engine address of the created file system in the configuration file. To create a file system, please refer to . :::note If you want to use JuiceFS in a distributed environment, when creating a file system, please plan the object storage and database to be used reasonably to ensure that they can be accessed by each node in the cluster. ::: Depending on the read and write load of computing tasks (such as Spark executor), JuiceFS Hadoop Java SDK may require an additional 4 * off-heap memory to speed up read and write performance. By default, it is recommended to configure at least 1.2GB of off-heap memory for compute tasks. JuiceFS Hadoop Java SDK is compiled with JDK 8 by default. If it needs to be used in a higher version of Java runtime (such as Java 17), the following options need to be added to the JVM parameters to allow the use of reflection API: ```shell --add-exports=java.base/sun.nio.ch=ALL-UNNAMED ``` For more information on the above option, please refer to . Please refer to the document to learn how to download the precompiled JuiceFS Hadoop Java SDK. :::note No matter which system environment the client is compiled for, the compiled JAR file has the same name and can only be deployed in the matching system environment. For example, when compiled in Linux, it can only be used in the Linux environment. In addition, since the compiled package depends on glibc, it is recommended to compile with a lower version system to ensure better compatibility. ::: Compilation depends on the following tools: 1.20+ JDK 8+ 3.3+ Git make GCC 5.4+ Clone the repository: ```shell git clone https://github.com/juicedata/juicefs.git ``` Enter the directory and compile: ```shell cd juicefs/sdk/java make ``` :::note If Ceph RADOS is used to store data, you need to install `librados-dev` first and [build `libjfs.so`]`. ::: ```shell cd juicefs/sdk/java make ceph ``` After the compilation, you can find the compiled `JAR` file in the `sdk/java/target` directory, including two versions: Contains third-party dependent packages: `juicefs-hadoop-X.Y.Z.jar` Does not include third-party dependent packages: `original-juicefs-hadoop-X.Y.Z.jar` It is recommended to use a version that includes third-party dependencies. The client used in the Windows environment needs to be obtained through cross-compilation on Linux or macOS. The compilation depends on , which needs to be installed"
},
{
"data": "The steps are the same as compiling on Linux or macOS. For example, on the Ubuntu system, install the `mingw-w64` package first to solve the dependency problem: ```shell sudo apt install mingw-w64 ``` Clone and enter the JuiceFS source code directory, execute the following code to compile: ```shell cd juicefs/sdk/java ``` ```shell make win ``` To enable each component of the Hadoop ecosystem to correctly identify JuiceFS, the following configurations are required: Place the compiled JAR file and `$JAVA_HOME/lib/tools.jar` into the `classpath` of the component. The installation paths of common big data platforms and components are shown in the table below. Put JuiceFS configurations into the configuration file of each Hadoop ecosystem component (usually `core-site.xml`), see for details. It is recommended to place the JAR file in a fixed location, and the other locations are called it through symbolic links. | Name | Installing Paths | |-|| | CDH | `/opt/cloudera/parcels/CDH/lib/hadoop/lib`<br></br>`/opt/cloudera/parcels/CDH/spark/jars`<br></br>`/var/lib/impala` | | HDP | `/usr/hdp/current/hadoop-client/lib`<br></br>`/usr/hdp/current/hive-client/auxlib`<br></br>`/usr/hdp/current/spark2-client/jars` | | Amazon EMR | `/usr/lib/hadoop/lib`<br></br>`/usr/lib/spark/jars`<br></br>`/usr/lib/hive/auxlib` | | Alibaba Cloud EMR | `/opt/apps/ecm/service/hadoop//package/hadoop/share/hadoop/common/lib`<br></br>`/opt/apps/ecm/service/spark//package/spark/jars`<br></br>`/opt/apps/ecm/service/presto//package/presto/plugin/hive-hadoop2`<br></br>`/opt/apps/ecm/service/hive//package/apache-hive/lib`<br></br>`/opt/apps/ecm/service/impala//package/impala/lib` | | Tencent Cloud EMR | `/usr/local/service/hadoop/share/hadoop/common/lib`<br></br>`/usr/local/service/presto/plugin/hive-hadoop2`<br></br>`/usr/local/service/spark/jars`<br></br>`/usr/local/service/hive/auxlib` | | UCloud UHadoop | `/home/hadoop/share/hadoop/common/lib`<br></br>`/home/hadoop/hive/auxlib`<br></br>`/home/hadoop/spark/jars`<br></br>`/home/hadoop/presto/plugin/hive-hadoop2` | | Baidu Cloud EMR | `/opt/bmr/hadoop/share/hadoop/common/lib`<br></br>`/opt/bmr/hive/auxlib`<br></br>`/opt/bmr/spark2/jars` | | Name | Installing Paths | |--|--| | Hadoop | `${HADOOPHOME}/share/hadoop/common/lib/`, `${HADOOPHOME}/share/hadoop/mapreduce/lib/` | | Spark | `${SPARK_HOME}/jars` | | Presto | `${PRESTO_HOME}/plugin/hive-hadoop2` | | Trino | `${TRINO_HOME}/plugin/hive` | | Flink | `${FLINK_HOME}/lib` | | StarRocks | `${StarRocksHOME}/fe/lib/`, `${StarRocksHOME}/be/lib/hadoop/common/lib` | Please refer to the following table to set the relevant parameters of the JuiceFS file system and write it into the configuration file, which is generally `core-site.xml`. | Configuration | Default Value | Description | |-||-| | `fs.jfs.impl` | `io.juicefs.JuiceFileSystem` | Specify the storage implementation to be used. By default, `jfs://` scheme is used. If you want to use different scheme (e.g. `cfs://`), just modify it to `fs.cfs.impl`. No matter what scheme you use, it is always access the data in JuiceFS. | | `fs.AbstractFileSystem.jfs.impl` | `io.juicefs.JuiceFS` | Specify the storage implementation to be used. By default, `jfs://` scheme is used. If you want to use different scheme (e.g. `cfs://`), just modify it to `fs.AbstractFileSystem.cfs.impl`. No matter what scheme you use, it is always access the data in JuiceFS. | | `juicefs.meta` | | Specify the metadata engine address of the pre-created JuiceFS file system. 
You can configure multiple file systems for the client at the same time through the format of `juicefs.{vol_name}.meta`. Refer to . | | Configuration | Default Value | Description | |||-| | `juicefs.cache-dir` | | Directory paths of local cache. Use colon to separate multiple paths. Also support wildcard in path. It's recommended create these directories manually and set `0777` permission so that different applications could share the cache data. | | `juicefs.cache-size` | 0 | Maximum size of local cache in MiB. The default value is 0, which means that caching is disabled. It's the total size when set multiple cache directories. | | `juicefs.cache-full-block` | `true` | Whether cache every read blocks, `false` means only cache random/small read blocks. | | `juicefs.free-space` | 0.1 | Min free space ratio of cache directory | | `juicefs.open-cache` | 0 | Open files cache timeout in seconds (0 means disable this feature) | | `juicefs.attr-cache` | 0 | Expire of attributes cache in seconds | |"
},
{
"data": "| 0 | Expire of file entry cache in seconds | | `juicefs.dir-entry-cache` | 0 | Expire of directory entry cache in seconds | | `juicefs.discover-nodes-url` | | Specify the node discovery API, the node list will be refreshed every 10 minutes. <br/><br/><ul><li>YARN: `yarn`</li><li>Spark Standalone: `http://spark-master:web-ui-port/json/`</li><li>Spark ThriftServer: `http://thrift-server:4040/api/v1/applications/`</li><li>Presto: `http://coordinator:discovery-uri-port/v1/service/presto/`</li><li>File system: `jfs://{VOLUME}/etc/nodes`, you need to create this file manually, and write the hostname of the node into this file line by line</li></ul> | | Configuration | Default Value | Description | |--||-| | `juicefs.max-uploads` | 20 | The max number of connections to upload | | `juicefs.max-deletes` | 10 | The max number of connections to delete | | `juicefs.get-timeout` | 5 | The max number of seconds to download an object | | `juicefs.put-timeout` | 60 | The max number of seconds to upload an object | | `juicefs.memory-size` | 300 | Total read/write buffering in MiB | | `juicefs.prefetch` | 1 | Prefetch N blocks in parallel | | `juicefs.upload-limit` | 0 | Bandwidth limit for upload in Mbps | | `juicefs.download-limit` | 0 | Bandwidth limit for download in Mbps | | `juicefs.io-retries` | 10 | Number of retries after network failure | | `juicefs.writeback` | `false` | Upload objects in background | | Configuration | Default Value | Description | |-||--| | `juicefs.bucket` | | Specify a different endpoint for object storage | | `juicefs.debug` | `false` | Whether enable debug log | | `juicefs.access-log` | | Access log path. Ensure Hadoop application has write permission, e.g. `/tmp/juicefs.access.log`. The log file will rotate automatically to keep at most 7 files. | | `juicefs.superuser` | `hdfs` | The super user | | `juicefs.supergroup` | `supergroup` | The super user group | | `juicefs.users` | `null` | The path of username and UID list file, e.g. `jfs://name/etc/users`. The file format is `<username>:<UID>`, one user per line. | | `juicefs.groups` | `null` | The path of group name, GID and group members list file, e.g. `jfs://name/etc/groups`. The file format is `<group-name>:<GID>:<username1>,<username2>`, one group per line. | | `juicefs.umask` | `null` | The umask used when creating files and directories (e.g. `0022`), default value is `fs.permissions.umask-mode`. | | `juicefs.push-gateway` | | address, format is `<host>:<port>`. | | `juicefs.push-auth` | | information, format is `<username>:<password>`. | | `juicefs.push-graphite` | | address, format is `<host>:<port>`. | | `juicefs.push-interval` | 10 | Metric push interval (in seconds) | | `juicefs.push-labels` | | Metric labels, format is `key1:value1;key2:value2`. | | `juicefs.fast-resolve` | `true` | Whether enable faster metadata lookup using Redis Lua script | | `juicefs.no-usage-report` | `false` | Whether disable usage reporting. JuiceFS only collects anonymous usage data (e.g. version number), no user or any sensitive data will be collected. | | `juicefs.no-bgjob` | `false` | Disable background jobs (clean-up, backup, etc.) | | `juicefs.backup-meta` | 3600 | Interval (in seconds) to automatically backup metadata in the object storage (0 means disable backup) | | `juicefs.backup-skip-trash` | `false` | Skip files and directories in trash when backup metadata. | | `juicefs.heartbeat` | 12 | Heartbeat interval (in seconds) between client and metadata engine. It's recommended that all clients use the same value. 
| | `juicefs.skip-dir-mtime` | 100ms | Minimal duration to modify parent dir mtime"
},
{
"data": "| When multiple JuiceFS file systems need to be used at the same time, all the above configuration items can be specified for a specific file system. You only need to put the file system name in the middle of the configuration item, such as `jfs1` and `jfs2` in the following example: ```xml <property> <name>juicefs.jfs1.meta</name> <value>redis://jfs1.host:port/1</value> </property> <property> <name>juicefs.jfs2.meta</name> <value>redis://jfs2.host:port/1</value> </property> ``` The following is a commonly used configuration example. Please replace the `{HOST}`, `{PORT}` and `{DB}` variables in the `juicefs.meta` configuration with actual values. ```xml <property> <name>fs.jfs.impl</name> <value>io.juicefs.JuiceFileSystem</value> </property> <property> <name>fs.AbstractFileSystem.jfs.impl</name> <value>io.juicefs.JuiceFS</value> </property> <property> <name>juicefs.meta</name> <value>redis://{HOST}:{PORT}/{DB}</value> </property> <property> <name>juicefs.cache-dir</name> <value>/data*/jfs</value> </property> <property> <name>juicefs.cache-size</name> <value>1024</value> </property> <property> <name>juicefs.access-log</name> <value>/tmp/juicefs.access.log</value> </property> ``` Please refer to the aforementioned configuration tables and add configuration parameters to the Hadoop configuration file `core-site.xml`. If you are using CDH 6, in addition to modifying `core-site`, you also need to modify `mapreduce.application.classpath` through the YARN service interface, adding: ```shell $HADOOPCOMMONHOME/lib/juicefs-hadoop.jar ``` In addition to modifying `core-site`, you also need to modify the configuration `mapreduce.application.classpath` through the MapReduce2 service interface and add it at the end (variables do not need to be replaced): ```shell /usr/hdp/${hdp.version}/hadoop/lib/juicefs-hadoop.jar ``` Add configuration parameters to `conf/flink-conf.yaml`. If you only use JuiceFS in Flink, you don't need to configure JuiceFS in the Hadoop environment, you only need to configure the Flink client. :::note Hudi supports JuiceFS since v0.10.0, please make sure you are using the correct version. ::: Please refer to to learn how to configure JuiceFS. It is possible to use Kafka Connect and HDFS Sink Connector and to store data on JuiceFS. First you need to add JuiceFS SDK to `classpath` in Kafka Connect, e.g., `/usr/share/java/confluentinc-kafka-connect-hdfs/lib`. While creating a Connect Sink task, configuration needs to be set up as follows: Specify `hadoop.conf.dir` as the directory that contains the configuration file `core-site.xml`. If it is not running in Hadoop environment, you can create a separate directory such as `/usr/local/juicefs/hadoop`, and then add the JuiceFS related configurations to `core-site.xml`. Specify `store.url` as a path starting with `jfs://`. For example: ```ini hadoop.conf.dir=/path/to/hadoop-conf store.url=jfs://path/to/store ``` JuiceFS can be used by HBase for HFile, but is not fast (low latency) enough for Write Ahead Log (WAL), because it take much longer time to persist data into object storage than memory of DataNode. It is recommended to deploy a small HDFS cluster to store WAL and HFile files to be stored on JuiceFS. 
Modify `hbase-site.xml`: ```xml title=\"hbase-site.xml\" <property> <name>hbase.rootdir</name> <value>jfs://{vol_name}/hbase</value> </property> <property> <name>hbase.wal.dir</name> <value>hdfs://{ns}/hbase-wal</value> </property> ``` In addition to modifying the above configurations, since the HBase cluster has already stored some data in ZooKeeper, in order to avoid conflicts, there are two solutions: Delete the old cluster Delete the znode (default `/hbase`) configured by `zookeeper.znode.parent` via the ZooKeeper client. :::note This operation will delete all data on this HBase cluster. ::: Use a new znode Keep the znode of the original HBase cluster so that it can be recovered later. Then configure a new value for `zookeeper.znode.parent`: ```xml title=\"hbase-site.xml\" <property> <name>zookeeper.znode.parent</name> <value>/hbase-jfs</value> </property> ``` When the following components need to access JuiceFS, they should be restarted. :::note Before restart, you need to confirm JuiceFS related configuration has been written to the configuration file of each component, usually you can find them in `core-site.xml` on the machine where the service of the component was"
},
{
"data": "::: | Components | Services | | - | -- | | Hive | HiveServer<br />Metastore | | Spark | ThriftServer | | Presto | Coordinator<br />Worker | | Impala | Catalog Server<br />Daemon | | HBase | Master<br />RegionServer | HDFS, Hue, ZooKeeper and other services don't need to be restarted. When `Class io.juicefs.JuiceFileSystem not found` or `No FilesSystem for scheme: jfs` exceptions was occurred after restart, reference . JuiceFS Hadoop Java SDK also has the same trash function as HDFS, which needs to be enabled by setting `fs.trash.interval` and `fs.trash.checkpoint.interval`, please refer to for more information. After the deployment of the JuiceFS Java SDK, the following methods can be used to verify the success of the deployment. ```bash hadoop fs -ls jfs://{JFS_NAME}/ ``` :::info The `JFS_NAME` is the volume name when you format JuiceFS file system. ::: ```sql CREATE TABLE IF NOT EXISTS person ( name STRING, age INT ) LOCATION 'jfs://{JFS_NAME}/tmp/person'; ``` Add Maven or Gradle dependencies: <Tabs> <TabItem value=\"maven\" label=\"Maven\"> ```xml <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-common</artifactId> <version>{HADOOP_VERSION}</version> <scope>provided</scope> </dependency> <dependency> <groupId>io.juicefs</groupId> <artifactId>juicefs-hadoop</artifactId> <version>{JUICEFSHADOOPVERSION}</version> <scope>provided</scope> </dependency> ``` </TabItem> <TabItem value=\"gradle\" label=\"Gradle\"> ```groovy dependencies { implementation 'org.apache.hadoop:hadoop-common:${hadoopVersion}' implementation 'io.juicefs:juicefs-hadoop:${juicefsHadoopVersion}' } ``` </TabItem> </Tabs> Use the following sample code to verify: <!-- autocorrect: false --> ```java package demo; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; public class JuiceFSDemo { public static void main(String[] args) throws Exception { Configuration conf = new Configuration(); conf.set(\"fs.jfs.impl\", \"io.juicefs.JuiceFileSystem\"); conf.set(\"juicefs.meta\", \"redis://127.0.0.1:6379/0\"); // JuiceFS metadata engine URL Path p = new Path(\"jfs://{JFSNAME}/\"); // Please replace \"{JFSNAME}\" with the correct value FileSystem jfs = p.getFileSystem(conf); FileStatus[] fileStatuses = jfs.listStatus(p); // Traverse JuiceFS file system and print file paths for (FileStatus status : fileStatuses) { System.out.println(status.getPath()); } } } ``` <!-- autocorrect: true --> Please see the documentation to learn how to collect and display JuiceFS monitoring metrics. Here are a series of methods to use the built-in stress testing tool of the JuiceFS client to test the performance of the client environment that has been successfully deployed. 
create ```shell hadoop jar juicefs-hadoop.jar nnbench create -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local ``` This command will create 10000 empty files open ```shell hadoop jar juicefs-hadoop.jar nnbench open -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local ``` This command will open 10000 files without reading data rename ```shell hadoop jar juicefs-hadoop.jar nnbench rename -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local ``` delete ```shell hadoop jar juicefs-hadoop.jar nnbench delete -files 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench -local ``` For reference | Operation | TPS | Latency (ms) | | | - | | | create | 644 | 1.55 | | open | 3467 | 0.29 | | rename | 483 | 2.07 | | delete | 506 | 1.97 | sequential write ```shell hadoop jar juicefs-hadoop.jar dfsio -write -size 20000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO -local ``` sequential read ```shell hadoop jar juicefs-hadoop.jar dfsio -read -size 20000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO -local ``` When run the cmd for the second time, the result may be much better than the first run. It's because the data was cached in memory, just clean the local disk cache. For reference | Operation | Throughput (MB/s) | | | -- | | write | 647 | | read | 111 | If the network bandwidth of the machine is relatively low, it can generally reach the network bandwidth bottleneck. The following command will start the MapReduce distributed task to test the metadata and IO"
},
{
"data": "During the test, it is necessary to ensure that the cluster has sufficient resources to start the required map tasks. Computing resources used in this test: Server: 4 cores and 32 GB memory, burst bandwidth 5Gbit/s x 3 Database: Alibaba Cloud Redis 5.0 Community 4G Master-Slave Edition create ```shell hadoop jar juicefs-hadoop.jar nnbench create -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench ``` 10 map task, each has 10 threads, each thread create 1000 empty file. 100000 files in total open ```shell hadoop jar juicefs-hadoop.jar nnbench open -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench ``` 10 map task, each has 10 threads, each thread open 1000 file. 100000 files in total rename ```shell hadoop jar juicefs-hadoop.jar nnbench rename -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench ``` 10 map task, each has 10 threads, each thread rename 1000 file. 100000 files in total delete ```shell hadoop jar juicefs-hadoop.jar nnbench delete -maps 10 -threads 10 -files 1000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/NNBench ``` 10 map task, each has 10 threads, each thread delete 1000 file. 100000 files in total For reference 10 threads | Operation | IOPS | Latency (ms) | | | - | | | create | 4178 | 2.2 | | open | 9407 | 0.8 | | rename | 3197 | 2.9 | | delete | 3060 | 3.0 | 100 threads | Operation | IOPS | Latency (ms) | | | - | | | create | 11773 | 7.9 | | open | 34083 | 2.4 | | rename | 8995 | 10.8 | | delete | 7191 | 13.6 | sequential write ```shell hadoop jar juicefs-hadoop.jar dfsio -write -maps 10 -size 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO ``` 10 map task, each task write 10000MB random data sequentially sequential read ```shell hadoop jar juicefs-hadoop.jar dfsio -read -maps 10 -size 10000 -baseDir jfs://{JFS_NAME}/tmp/benchmarks/DFSIO ``` 10 map task, each task read 10000MB random data sequentially For reference | Operation | Average throughput (MB/s) | Total Throughput (MB/s) | | | - | -- | | write | 198 | 1835 | | read | 124 | 1234 | The test dataset is 100GB in size, and both Parquet and ORC file formats are tested. This test only tests the first 10 queries. Spark Thrift JDBC/ODBC Server is used to start the Spark resident process and then submit the task via Beeline connection. | Node Category | Instance Type | CPU | Memory | Disk | Number | | - | - | | | - | | | Master | Alibaba Cloud ecs.r6.xlarge | 4 | 32GiB | System Disk: 100GiB | 1 | | Core | Alibaba Cloud ecs.r6.xlarge | 4 | 32GiB | System Disk: 100GiB<br />Data Disk: 500GiB Ultra Disk x 2 | 3 | ```shell ${SPARK_HOME}/sbin/start-thriftserver.sh \\ --master yarn \\ --driver-memory 8g \\ --executor-memory 10g \\ --executor-cores 3 \\ --num-executors 3 \\ --conf spark.locality.wait=100 \\ --conf spark.sql.crossJoin.enabled=true \\ --hiveconf hive.server2.thrift.port=10001 ``` The 2 data disks of Core node are mounted in the `/data01` and `/data02` directories, and `core-site.xml` is configured as follows: ```xml <property> <name>juicefs.cache-size</name> <value>200000</value> </property> <property> <name>juicefs.cache-dir</name> <value>/data*/jfscache</value> </property> <property> <name>juicefs.cache-full-block</name> <value>false</value> </property> <property> <name>juicefs.discover-nodes-url</name> <value>yarn</value> </property> <property> <name>juicefs.attr-cache</name> <value>3</value> </property> <property> <name>juicefs.entry-cache</name> <value>3</value> </property> <property>"
},
{
"data": "<value>3</value> </property> ``` The task submission command is as follows: ```shell ${SPARK_HOME}/bin/beeline -u jdbc:hive2://localhost:10001/${DATABASE} \\ -n hadoop \\ -f query{i}.sql ``` JuiceFS can use local disk as a cache to accelerate data access, the following data is the result (in seconds) after 4 runs using Redis and TiKV as the metadata engine of JuiceFS respectively. | Queries | JuiceFS (Redis) | JuiceFS (TiKV) | HDFS | | - | | -- | - | | q1 | 20 | 20 | 20 | | q2 | 28 | 33 | 26 | | q3 | 24 | 27 | 28 | | q4 | 300 | 309 | 290 | | q5 | 116 | 117 | 91 | | q6 | 37 | 42 | 41 | | q7 | 24 | 28 | 23 | | q8 | 13 | 15 | 16 | | q9 | 87 | 112 | 89 | | q10 | 23 | 24 | 22 | | Queries | JuiceFS (Redis) | JuiceFS (TiKV) | HDFS | | - | | -- | - | | q1 | 33 | 35 | 39 | | q2 | 28 | 32 | 31 | | q3 | 23 | 25 | 24 | | q4 | 273 | 284 | 266 | | q5 | 96 | 107 | 94 | | q6 | 36 | 35 | 42 | | q7 | 28 | 30 | 24 | | q8 | 11 | 12 | 14 | | q9 | 85 | 97 | 77 | | q10 | 24 | 28 | 38 | It means JAR file was not loaded, you can verify it by `lsof -p {pid} | grep juicefs`. You should check whether the JAR file was located properly, or other users have the read permission. Some Hadoop distribution also need to modify `mapred-site.xml` and put the JAR file location path to the end of the parameter `mapreduce.application.classpath`. It means JuiceFS Hadoop Java SDK was not configured properly, you need to check whether there is JuiceFS related configuration in the `core-site.xml` of the component configuration. JuiceFS also uses the \"User/Group\" method to manage file permissions, using local users and groups by default. In order to ensure the unified permissions of different nodes during distributed computing, you can configure global \"User/UID\" and \"Group/GID\" mappings through `juicefs.users` and `juicefs.groups` configurations. In the Hadoop application scenario, the functions similar to the HDFS trash are still retained. It needs to be explicitly enabled by `fs.trash.interval` and `fs.trash.checkpoint.interval` configurations, please refer to for more information. In HDFS, each data block will have information, which the computing engine uses to schedule the computing tasks as much as possible to the nodes where the data is stored. JuiceFS will calculate the corresponding `BlockLocation` for each data block through the consistent hashing algorithm, so that when the same data is read for the second time, the computing engine may schedule the computing task to the same node, and the data cached on the local disk during the first computing can be used to accelerate data access. This algorithm needs to know all the computing node information in advance. The `juicefs.discover-nodes-url` configuration is used to obtain these computing node information. Not supported. JuiceFS does not verify the validity of Kerberos users, but can use Kerberos-authenticated username."
}
] |
{
"category": "Runtime",
"file_name": "hadoop_java_sdk.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "```bash cfs-cli nodeset list ``` ```bash cfs-cli nodeset info [NODESET ID] ``` ```bash cfs-cli nodeset update [NODESET ID] [flags] ``` ```bash Flags: --dataNodeSelector string Set the node select policy(datanode) for specify nodeset -h, --help help for update --metaNodeSelector string Set the node select policy(metanode) for specify nodeset ```"
}
] |
{
"category": "Runtime",
"file_name": "nodeset.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "To build StratoVirt, make sure that Rust language environment and Cargo have already been installed. The recommended version of rustc is 1.64.0 or later, otherwise compilation may be failed. ```shell $ rustc --version rustc 1.64.0 ``` If you want to deploy rust environment, the following link will help you: <https://www.rust-lang.org/tools/install> With glibc, StratoVirt is linked dynamically. It's the default target to build StratoVirt. ```shell $ arch=`uname -m` $ rustup target add ${arch}-unknown-linux-gnu $ cargo build --workspace --bins --release --target ${arch}-unknown-linux-gnu ``` Now you can find StratoVirt binary file in `target/${arch}-unknown-linux-gnu/release/stratovirt`. StratoVirt can also be built using musl-libc toolchains. By this way, StratoVirt is linked statically and has no library dependencies. ```shell $ arch=`uname -m` $ rustup target add ${arch}-unknown-linux-musl $ cargo build --workspace --bins --release --target ${arch}-unknown-linux-musl ``` Now you can find StratoVirt static binary file in `target/${arch}-unknown-linux-musl/release/stratovirt`. For different scenarios, StratoVirt provides feature conditional compilation options based on the cargo `feature`. List of optional features: scream_alsa: enable virtual sound card with `ALSA` interface scream_pulseaudio: enable virtual sound card with `PulseAudio` interface usb_host: enable USB Host device usbcamerav4l2: enable USB camera with `v4l2` backend gtk: enable GTK display vnc: enable VNC display ramfb: enable ramfb display device virtio_gpu: enable virtio-gpu virtualized graphics card pvpanic: enable virtualized pvpanic pci device ```shell $ cargo build --workspace --bins --release --features \"scream_alsa\" ``` Stratovirt now can run on OpenHarmony OS(OHOS). Stratovirt, OHOS version, is compiled on x64, and relies on RUST cross compilation toolchain and SDK offered by OHOS. Before compiling, specify OHOS SDK path in environment variable OHOS_SDK. Some crates needed by StratoVirt now are not support OHOS platform, adapting is essential. Here is a command demo: ``` RUSTFLAGS=\"-C link-arg=--target=aarch64-linux-ohos -C linker={OHOS_SDK}/llvm/bin/clang\" cargo build --target aarch64-linux-ohos --features {FEATURES}\" ``` In order to build StratoVirt in containers, ensure that the docker software is installed. This can be checked with the following command: ```shell $ docker -v Docker version 18.09.0 ``` If you want to deploy a docker environment, the following link can help you: <https://docs.docker.com/get-docker/> Run the script under tools/buildstratovirtstatic directory to automatically run a docker container to build a statically linked StratoVirt. ```shell $ cd tools/buildstratovirtstatic $ sh buildstratovirtfromdocker.sh customimage_name ``` After the build is complete, you can find the statically linked binary StratoVirt in the path: `target/${arch}-unknown-linux-musl/release/stratovirt`."
}
] |
{
"category": "Runtime",
"file_name": "build_guide.md",
"project_name": "StratoVirt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "layout: global title: Running Presto on Iceberg Tables with Alluxio Presto has introduced support for in version 0.256. This document describes how to use Presto to query Iceberg tables through Alluxio. This document is currently experimental, and the information provided here is subject to change. In order to use Presto to query an Iceberg table, make sure you have a working setup of Presto, Hive Metastore and Alluxio, and Presto can access data through Alluxio's filesystem interface. If not, please refer to the on general Presto installation and configuration. Most of that guide apply for Iceberg workflows as well, and this document covers the specific instructions for working with Iceberg tables. All from the general Presto setup; Presto server, version 0.257 or later. Copy the Alluxio client jar located at `{{site.ALLUXIOCLIENTJAR_PATH}}` into Presto Iceberg connector's directory located at `${PRESTO_HOME}/plugin/iceberg/`. Then restart the Presto server: ```shell $ ${PRESTO_HOME}/bin/launcher restart ``` Also note that the same client jar file needs to be on Hive's classpath. If not, please refer to the on setting up Hive to work with Alluxio. Presto reads and writes an Iceberg table using the . To enable the Iceberg connector, create a catalog for Iceberg connector in Presto's installation directory as `${PRESTO_HOME}/etc/catalog/iceberg.properties`: ```properties connector.name=iceberg hive.metastore.uri=thrift://localhost:9083 ``` Change the Hive Metastore connection URI to match your setup. For demonstration purposes, we will create an example schema and an Iceberg table. Launch the Presto CLI client with the following command: ```shell $ ./presto --server localhost:8080 --catalog iceberg --debug ``` For more information on the client, please refer to this section on [querying tables using Presto] ({{ '/en/compute/Presto.html' | relativize_url }}#query-tables-using-presto). Note that the catalog is set to `iceberg` since we will be dealing with Iceberg tables. Run the following statements from the client: ```sql CREATE SCHEMA iceberg_test; USE iceberg_test; CREATE TABLE person (name varchar, age int, id int) WITH (location = 'alluxio://localhost:19998/person', format = 'parquet'); ``` Change the hostname and port in the Alluxio connection URI to match your setup. These statements create a schema `iceberg_test` and a table `person` at the directory `/person` in Alluxio filesystem, and with Parquet as the table's storage format. Insert one row of sample data into the newly created table: ```sql INSERT INTO person VALUES ('alice', 18, 1000); ``` Note: there was a bug in the write path of Presto's Iceberg connector, so insertion may fail. This issue has been resolved in Presto version 0.257 by . Now you can verify things are working by reading back the data from the table: ```sql SELECT * FROM person; ``` As well as examine the files in Alluxio: ```shell $ bin/alluxio fs ls /person drwxr-xr-x alluxio alluxio 10 PERSISTED 06-29-2021 16:24:02:007 DIR /person/metadata drwxr-xr-x alluxio alluxio 1 PERSISTED 06-29-2021 16:24:00:049 DIR /person/data $ bin/alluxio fs ls /person/data -rw-r--r-- alluxio alluxio 400 PERSISTED 06-29-2021 16:24:00:691 100% /person/data/6e6a451a-8f20-4d73-9ef6-ee48070dad27.parquet $ bin/alluxio fs ls /person/metadata -rw-r--r-- alluxio alluxio 1406 PERSISTED 06-29-2021 16:23:28:608 100% /person/metadata/00000-2fd982ae-2a81-44a8-a4db-505e9ba6c09d.metadata.json ... (snip) ``` You can see the metadata and data files of the Iceberg table have been created."
}
] |
{
"category": "Runtime",
"file_name": "Presto-Iceberg.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Velero Backup resource provides real-time progress of an ongoing backup by means of a Progress field in the CR. Velero Restore, on the other hand, only shows one of the phases (InProgress, Completed, PartiallyFailed, Failed) of the ongoing restore. In this document, we propose detailed progress reporting for Velero Restore. With the introduction of the proposed Progress field, Velero Restore CR will look like: ```yml apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore namespace: velero spec: [...] status: phase: InProgress progress: itemsRestored: 100 totalItems: 140 ``` Enable progress reporting for Velero Restore Estimate time to completion The current Restore CR lets users know whether a restore is in-progress or completed (failed/succeeded). While this basic piece of information is useful to the end user, there seems to be room for improvement in the user experience. The Restore CR can show detailed progress in terms of the number of resources restored so far and the total number of resources to be restored. This will be particularly useful for restores that run for a longer duration of time. Such progress reporting already exists for Velero Backup. This document proposes similar implementation for Velero Restore. We propose to divide the restore process in two steps. The first step will collect all the items to be restored from the backup tarball. It will apply the label selector and include/exclude rules on the resources / items and store them (preserving the priority order) in an in-memory data structure. The second step will read the collected items and restore them. A new struct will be introduced to store progress information: ```go type RestoreProgress struct { TotalItems int `json:\"totalItems,omitempty` ItemsRestored int `json:\"itemsRestored,omitempty` } ``` `RestoreStatus` will include the above struct: ```go type RestoreStatus struct { [...] Progress *RestoreProgress `json:\"progress,omitempty\"` } ``` Currently, the restore process works by looping through the resources in the backup tarball and restoring them one-by-one in the same pass: ```go func (ctx *context) execute(...) { [...] for _, resource := range getOrderedResources(...) { [...] for namespace, items := range resourceList.ItemsByNamespace { [...] for _, item := range items { [...] // restore item here w, e := restoreItem(...) } } } } ``` We propose to remove the call to `restoreItem()` in the inner most loop and instead store the item in a data structure. Once all the items are collected, we loop through the array of collected items and make a call to `restoreItem()`: ```go func (ctx *context) getOrderedResourceCollection(...) { collectedResources := []restoreResource for _, resource := range getOrderedResources(...) { [...] for namespace, items := range resourceList.ItemsByNamespace {"
},
{
"data": "collectedResource := restoreResource{} for _, item := range items { [...] // store item in a data structure collectedResource.itemsByNamespace[originalNamespace] = append(collectedResource.itemsByNamespace[originalNamespace], item) } } collectedResources.append(collectedResources, collectedResource) } return collectedResources } func (ctx *context) execute(...) { [...] // get all items resources := ctx.getOrderedResourceCollection(...) for _, resource := range resources { [...] for _, items := range resource.itemsByNamespace { [...] for _, item := range items { [...] // restore the item w, e := restoreItem(...) } } } [...] } ``` We introduce two new structs to hold the collected items: ```go type restoreResource struct { resource string itemsByNamespace maprestoreItem totalItems int } type restoreItem struct { targetNamespace string name string } ``` Each group resource is represented by `restoreResource`. The map `itemsByNamespace` is indexed by `originalNamespace`, and the values are list of `items` in the original namespace. `totalItems` is simply the count of all items which are present in the nested map of namespace and items. It is updated every time an item is added to the map. Each item represented by `restoreItem` has `name` and the resolved `targetNamespace`. The total number of items can be calculated by simply adding the number of total items present in the map of all resources. ```go totalItems := 0 for _, resource := range collectedResources { totalItems += resource.totalItems } ``` The additional items returned by the plugins will still be discovered at the time of plugin execution. The number of `totalItems` will be adjusted to include such additional items. As a result, the number of total items is expected to change whenever plugins execute: ```go i := 0 for _, resource := range resources { [...] for _, items := range resource.itemsByNamespace { [...] for _, item := range items { [...] // restore the item w, e := restoreItem(...) i++ // calculate the actual count of resources actualTotalItems := len(ctx.restoredItems) + (totalItems - i) } } } ``` The updates to the `progress` field in the CR can be sent on a channel as soon as an item is restored. A goroutine receiving update on that channel can make an `Update()` call to update the Restore CR. This will require us to pass an instance of `RestoresGetter` to the `kubernetesRestorer` struct. As an alternative, we have considered an approach which doesn't divide the restore process in two steps. With that approach, the total number of items will be read from the Backup CR. We will keep three counters, `totalItems`, `skippedItems` and `restoredItems`: ```yml status: phase: InProgress progress: totalItems: 100 skippedItems: 20 restoredItems: 79 ``` This approach doesn't require us to find the number of total items beforehand. Omitted Omitted TBD https://github.com/vmware-tanzu/velero/issues/21"
}
] |
{
"category": "Runtime",
"file_name": "restore-progress.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Ark Config definition\" layout: docs * * Heptio Ark defines its own Config object (a custom resource) for specifying Ark backup and cloud provider settings. When the Ark server is first deployed, it waits until you create a Config--specifically one named `default`--in the `heptio-ark` namespace. NOTE: There is an underlying assumption that you're running the Ark server as a Kubernetes deployment. If the `default` Config is modified, the server shuts down gracefully. Once the kubelet restarts the Ark server pod, the server then uses the updated Config values. A sample YAML `Config` looks like the following: ``` apiVersion: ark.heptio.com/v1 kind: Config metadata: namespace: heptio-ark name: default persistentVolumeProvider: name: aws config: region: us-west-2 backupStorageProvider: name: aws bucket: ark config: region: us-west-2 backupSyncPeriod: 60m gcSyncPeriod: 60m scheduleSyncPeriod: 1m restoreOnlyMode: false ``` The configurable parameters are as follows: | Key | Type | Default | Meaning | | | | | | | `persistentVolumeProvider` | CloudProviderConfig | None (Optional) | The specification for whichever cloud provider the cluster is using for persistent volumes (to be snapshotted), if any.<br><br>If not specified, Backups and Restores requesting PV snapshots & restores, respectively, are considered invalid. <br><br> NOTE: For Azure, your Kubernetes cluster needs to be version 1.7.2+ in order to support PV snapshotting of its managed disks. | | `persistentVolumeProvider/name` | String<br><br>(Ark natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.) | None (Optional) | The name of the cloud provider the cluster is using for persistent volumes, if any. | | `persistentVolumeProvider/config` | map, , and -specific configs or your provider's documentation.) | None (Optional) | Configuration keys/values to be passed to the cloud provider for persistent volumes. | | `backupStorageProvider` | CloudProviderConfig | Required Field | The specification for whichever cloud provider will be used to actually store the backups. | | `backupStorageProvider/name` | String<br><br>(Ark natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.) | Required Field | The name of the cloud provider that will be used to actually store the backups. | | `backupStorageProvider/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. | | `backupStorageProvider/config` | map, , and -specific configs or your provider's"
},
{
"data": "| None (Optional) | Configuration keys/values to be passed to the cloud provider for backup storage. | | `backupSyncPeriod` | metav1.Duration | 60m0s | How frequently Ark queries the object storage to make sure that the appropriate Backup resources have been created for existing backup files. | | `gcSyncPeriod` | metav1.Duration | 60m0s | How frequently Ark queries the object storage to delete backup files that have passed their TTL. | | `scheduleSyncPeriod` | metav1.Duration | 1m0s | How frequently Ark checks its Schedule resource objects to see if a backup needs to be initiated. | | `resourcePriorities` | []string | `[namespaces, persistentvolumes, persistentvolumeclaims, secrets, configmaps, serviceaccounts, limitranges]` | An ordered list that describes the order in which Kubernetes resource objects should be restored (also specified with the `<RESOURCE>.<GROUP>` format.<br><br>If a resource is not in this list, it is restored after all other prioritized resources. | | `restoreOnlyMode` | bool | `false` | When RestoreOnly mode is on, functionality for backups, schedules, and expired backup deletion is turned off. Restores are made from existing backup files in object storage. | (Or other S3-compatible storage) | Key | Type | Default | Meaning | | | | | | | `region` | string | Empty | Example: \"us-east-1\"<br><br>See for the full list.<br><br>Queried from the AWS S3 API if not provided. | | `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. | | `s3Url` | string | Required field for non-AWS-hosted storage| Example: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Ark can already generate it from `region`, and `bucket`. This field is primarily for local storage services like Minio.| | `kmsKeyId` | string | Empty | Example: \"502b409c-4da1-419f-a16e-eif453b3i49f\" or \"alias/`<KMS-Key-Alias-Name>`\"<br><br>Specify an id or alias to enable encryption of the backups stored in S3. Only works with AWS S3 and may require explicitly granting key usage rights.| | Key | Type | Default | Meaning | | | | | | | `region` | string | Required Field | Example: \"us-east-1\"<br><br>See for the full list. | No parameters required. No parameters required. No parameters required. | Key | Type | Default | Meaning | | | | | | | `apiTimeout` | metav1.Duration | 2m0s | How long to wait for an Azure API request to complete before timeout. |"
}
] |
{
"category": "Runtime",
"file_name": "config-definition.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "SRV records can be used to declare the backend nodes; just use the `-srv-domain` flag. ``` dig SRV etcd.tcp.confd.io ``` ``` ... ;; ANSWER SECTION: etcd.tcp.confd.io. 300 IN SRV 1 100 4001 etcd.confd.io. ``` ``` confd -backend etcd -srv-domain confd.io ``` ``` dig SRV consul.tcp.confd.io ``` ``` ... ;; ANSWER SECTION: consul.tcp.confd.io. 300 IN SRV 1 100 8500 consul.confd.io. ``` ``` confd -backend consul -srv-domain confd.io ``` By default the `scheme` is set to http; change it with the `-scheme` flag. ``` confd -scheme https -srv-domain confd.io ``` Both the SRV domain and scheme can be configured in the confd configuration file. See the for more details."
}
] |
{
"category": "Runtime",
"file_name": "dns-srv-records.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This document describes some of the details of the Kata Agent Policy documents auto-generated by the `genpolicy` tool. See for general information about the Kata Agent Policy generation tool. See for an introduction to typical Kata Agent Policy document contents. The name of the Kata Agent Policy package must be `agent_policy`: ``` package agent_policy ``` For an introduction to Policy default values, see . `genpolicy` copies the default values from into the auto-generated Policy. Therefore, all Policies generated using the same `rules.rego` file are using the same default values. Some of the requests are always allowed by the auto-generated Policy. Those requests have a default value of `true` and there aren't any rules associated with them. Examples: ``` default CreateSandboxRequest := true default DestroySandboxRequest := true ``` Other requests have a default value of `false` and there is at least one associated with these requests that allows such requests depending on the request input parameters. Examples: ``` default CopyFileRequest := false default CreateContainerRequest := false ``` For an introduction to Policy rules, see . `genpolicy` copies the rules from into the auto-generated Policy. Therefore, all Policies generated using the same `rules.rego` file are using the same rules. For additional details about the `genpolicy` rules see . Unlike the and , the Policy data is specific to each input `YAML` file. `genpolicy` generates Policy data that the can use to allow some of the Kata Agent requests depending on the input data of these requests. Any unexpected requests are rejected by the Policy. This section provides details about the used by the Policy documents auto-generated by the `genpolicy` tool. `CopyFile` requests are rejected by the auto-generated Policy unless the destination path of the file being copied matches at least one regular expression from `policydata.requestdefaults.CopyFileRequest`. By default, there is a single regex in `policydata.requestdefaults.CopyFileRequest`, copied by `genpolicy` from : ``` policy_data := { ... \"request_defaults\": { ... \"CopyFileRequest\": [ \"^$(cpath)/\" ], ... }, ... } ``` The tool defines `$(cpath)` by copying its value from the same settings file into Policy's `policy_data.common.cpath`: ``` common := { ... \"cpath\": \"/run/kata-containers/shared/containers\", ... } ``` Therefore, by default the auto-generated Policy allows the Host to copy any files under `/run/kata-containers/shared/containers` and rejects any other `CopyFile` requests. A user can alter this behavior by using a custom settings file including a different `policydata.requestdefaults.CopyFileRequest` field value, instead of using the default from `genpolicy-settings.json`. Most of the rules from are applicable to the `CreateContainer` request, because: The inputs of `CreateContainer` are very complex - e.g., see the Spec data structure from the . Those complex inputs could allow a buggy or malicious Host to alter the intended behavior of user's Kubernetes (K8s) pods. For example, the Host could try to start in a confidential containers K8s pod a different container image than the image specified by user's `YAML`. Therefore, the Policy used by each pod has to verify that all the container images being used are exactly those that were referenced by the input `YAML` at the time when the Policy was created. The auto-generated Policy data contains descriptions of the data structure corresponding to every container referenced by user's `YAML` file. 
For example, if `genpolicy` creates a Policy corresponding to a `Pod` that just starts a `busybox` shell, the tool will generate two `OCI` data structures in the Policy - one for the K8s `pause` container and another for the `busybox` shell. Example: ``` policy_data := { \"containers\": [ { \"OCI\": { \"Version\":"
},
{
"data": "\"Process\": { \"Terminal\": false, \"User\": { \"UID\": 65535, \"GID\": 65535, \"AdditionalGids\": [], \"Username\": \"\" }, \"Args\": [ \"/pause\" ], ... }, ... }, ... }, { \"OCI\": { \"Version\": \"1.1.0-rc.1\", \"Process\": { \"Terminal\": false, \"User\": { \"UID\": 0, \"GID\": 0, \"AdditionalGids\": [], \"Username\": \"\" }, \"Args\": [ \"/bin/sh\" ], ... }, ... }, ... } ], ... ``` The auto-generated Policy rules allow the creation of any container that matches at least one of the OCI Policy data structures. Warning The auto-generated Policy doesn't keep track of which containers are already running in a pod. Therefore, in the example above the Kata Shim could start two shell containers instead of just one shell in a single pod - as long as both of these containers match the Policy data for user's shell container. Following are examples of auto-generated Policy rules that check some of the `CreateContainer` input `OCI Spec` data structure fields: The `Version` fields of the `OCI` Policy data and of the input `CreateContainer` data should match. The container `OCI.Root.Readonly` field from the Policy and the input data should have the same value. Each annotation of the container being created should match an annotation from the Policy data. Warning Container creation is allowed even if some of the Policy data annotations *are not present in the input `OCI` data annotations. The auto-generated Policy just checks that those annotations that are present* in the input `OCI` are allowed by the Policy data. Verify that the values of the following annotations are consistent with the Policy data: `io.katacontainers.pkg.oci.bundle_path` `io.katacontainers.pkg.oci.container_type` `io.kubernetes.cri.container-name` `io.kubernetes.cri.container-type` `io.kubernetes.cri.sandbox-log-directory` `io.kubernetes.cri.sandbox-id` `io.kubernetes.cri.sandbox-name` `io.kubernetes.cri.sandbox-namespace` `nerdctl/network-namespace` The input `OCI.Linux.Namespaces` information matches the Policy. All the Policy `OCI.Linux.MaskedPaths` paths are present in the input `MaskedPaths` too. Warning The input `OCI.Linux.MaskedPaths` is allowed by the auto-generated Policy to include *more* paths than the Policy data. But, if a path is masked by the Policy's `oci.Linux.MaskedPaths`, a `CreateContainer` request is rejected if its input data doesn't mask the same path. All the Policy `OCI.Linux.ReadonlyPaths` paths are present either in the input `ReadonlyPaths` or the input `MaskedPaths`. Warning the input `ReadonlyPaths` can contain *more* paths than the Policy `ReadonlyPaths`, but if the Policy designates a path as `Readonly` then that path must be designated either as `Readonly` or `Masked` by the `CreateContainer` input data. The `Args`, `Cwd`, `NoNewPrivileges`, `Env` and other `OCI.Process` input field values are consistent with the Policy. The input `OCI.Root.Path` matches the Policy data. The input `OCI.Mounts` are allowed by Policy. `Storages` is another input field of Kata Agent's `CreateContainer` requests. The `Storages` Policy data for each container gets generated by `genpolicy` based on: The container images referenced by user's `YAML` file. Any `volumes` and `volumeMounts` information that might be present in user's `YAML` file. The `volumes` data from . Example of `Storages` data from an auto-generated Policy file: ``` policy_data := { \"containers\": [ ... { \"OCI\": { ... 
}, \"storages\": [ { \"driver\": \"blk\", \"driver_options\": [], \"source\": \"\", \"fstype\": \"tar\", \"options\": [ \"$(hash0)\" ], \"mount_point\": \"$(layer0)\", \"fs_group\": null }, { \"driver\": \"blk\", \"driver_options\": [], \"source\": \"\", \"fstype\": \"tar\", \"options\": [ \"$(hash1)\" ], \"mount_point\": \"$(layer1)\", \"fs_group\": null }, { \"driver\": \"overlayfs\", \"driver_options\": [], \"source\": \"\", \"fstype\": \"fuse3.kata-overlay\", \"options\": [ \"2c342a137e693c7898aec36da1047f191dc7c1687e66198adacc439cf4adf379:2570e3a19e1bf20ddda45498a9627f61555d2d6c01479b9b76460b679b27d552\", \"8568c70c0ccfe0051092e818da769111a59882cd19dd799d3bca5ffa82791080:b643b6217748983830b26ac14a35a3322dd528c00963eaadd91ef55f513dc73f\" ], \"mount_point\": \"$(cpath)/$(bundle-id)\", \"fs_group\": null }, { \"driver\": \"local\", \"driver_options\": [], \"source\": \"local\", \"fstype\": \"local\", \"options\": [ \"mode=0777\" ], \"mount_point\": \"^$(cpath)/$(sandbox-id)/local/data$\", \"fs_group\": null }, { \"driver\": \"ephemeral\", \"driver_options\": [], \"source\": \"tmpfs\", \"fstype\": \"tmpfs\", \"options\": [], \"mount_point\": \"^/run/kata-containers/sandbox/ephemeral/data2$\", \"fs_group\": null } ], ... } ], ... } ``` In this example, the corresponding `CreateContainer` request input is expected to include the following Kata Containers `Storages`: Corresponding to container image layer 0. Corresponding to container image layer"
},
{
"data": "Corresponding to the `overlay` of the container images. For the `data` volume of the `YAML` example below. For the `data2` volume of the `YAML` example below. ```yaml apiVersion: v1 kind: Pod metadata: name: persistent spec: ... containers: ... volumeMounts: mountPath: /busy1 name: data mountPath: /busy2 name: data2 volumes: name: data emptyDir: {} name: data2 emptyDir: medium: Memory ``` `genpolicy` auto-generates the Policy `overlay` layer storage data structure. That structure provides some of the information used to validate the integrity of each container image referenced by user's `YAML` file: An ordered collection of layer IDs. For each layer ID, a `dm-verity` root hash value. Each container image layer is exposed by the Kata Shim to the Guest VM as a `dm-verity` protected block storage device. If the `CreateContainer` input layer IDs and `dm-verity` root hashes match those from the Policy: The Kata Agent uses the IDs and root hashes and to mount the container image layer storage devices. The Guest kernel ensures the integrity of the container image, by checking the `dm-verity` information of each layer. `ExecProcess` requests are rejected by the auto-generated Policy unless: They correspond to an `exec` K8s `livenessProbe`, `readinessProbe` or `startupProbe`, or They correspond to the `policydata.requestdefaults.ExecProcessRequest` data from Policy. Given this example `genpolicy` input `YAML` file as input: ```yaml apiVersion: v1 kind: Pod metadata: name: exec-test spec: containers: ... command: /bin/sh env: name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP readinessProbe: exec: command: echo \"Ready ${POD_IP}!\" failureThreshold: 1 periodSeconds: 5 timeoutSeconds: 10 ``` the tool generates the following `ExecProcessRequest` Policy data: ``` policy_data := { \"containers\": [ ... { \"OCI\": { ... }, \"storages\": [ ... ], \"exec_commands\": [ \"echo Ready ${POD_IP}!\", ] } ] } ``` An `ExecProcess` request is allowed by the auto-generated Policy if its command line matches at least one entry from the `commands` and/or the `regex` fields of the `policydata.requestdefaults.ExecProcessRequest` data structure. The `commands` and the `regex` entries get copied by `genpolicy` from . By default there are no such entries (both the `commands` and the `regex` collections are empty), so no `ExecProcess` requests are allowed by these two collections. A user that wants to allow some of the `ExecProcess` requests can specify a modified copy of `genpolicy-settings.json` as parameter to `genpolicy`. Warning The `commands` are easier to use, but require specifying the full command line being allowed by the Policy. The `regex` entries are more flexible - because a single entry can allow multiple `ExecProcess` command lines - but are easier to misuse e.g., by users that are not regular expression experts. Examples of `policydata.requestdefaults.ExecProcessRequest.commands` entries: ``` policy_data := { ... \"request_defaults\": { ... \"ExecProcessRequest\": { \"commands\": [ \"/bin/bash\", \"/bin/myapp -p1 -p2\" ], \"regex\": [] }, ... } } ``` Examples of `policydata.requestdefaults.ExecProcessRequest.regex` entries: ``` policy_data := { ... \"request_defaults\": { ... \"ExecProcessRequest\": { \"commands\": [], \"regex\": [ \"^/bin/sh -x -c echo hostName \\\\| nc -v -t -w 2 externalname-service [0-9]+$\", \"^/bin/sh -x -c echo hostName \\\\| nc -v -t -w 2 [0-9]+\\\\.[0-9]+\\\\.[0-9]+\\\\.[0-9]+ [0-9]+$\" ] }, ... 
} } ``` `ReadStream` requests are rejected by the default auto-generated Policy. A user can allow the Kata Containers Shim to read the `stdout`/`stderr` streams of the Guest VM containers by allowing these requests using a modified `genpolicy-settings.json` - e.g., ``` policy_data := { ... \"request_defaults\": { ... \"ReadStreamRequest\": true, ... } } ``` By default, `WriteStream` requests are rejected by the auto-generated Policy. A user can allow the Kata Containers Shim to send input to the `stdin` of Guest VM containers by allowing these requests using a modified `genpolicy-settings.json` - e.g., ``` policy_data := { ... \"request_defaults\": { ... \"WriteStreamRequest\": true, ... } } ```"
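As a concrete illustration of the settings discussed in this section, a locally modified `genpolicy-settings.json` might carry a `request_defaults` fragment like the sketch below. The field names mirror the Policy samples shown above and the command lines are the ones used earlier in this section, but the exact layout of the settings file can differ between Kata releases, so treat this as an assumption to check against your copy of the file rather than an authoritative schema:

```json
{
  "request_defaults": {
    "ExecProcessRequest": {
      "commands": ["/bin/bash", "/bin/myapp -p1 -p2"],
      "regex": []
    },
    "ReadStreamRequest": true,
    "WriteStreamRequest": false
  }
}
```

With overrides like these (other settings keys omitted), the generated Policy would admit the listed `ExecProcess` command lines and `ReadStream` requests while still rejecting `WriteStream` requests.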
}
] |
{
"category": "Runtime",
"file_name": "genpolicy-auto-generated-policy-details.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Monitoring and Data Visualization sidebar_position: 3 description: This guide will help you understand the monitoring metrics provided by JuiceFS, and how to visualize these metrics using Prometheus and Grafana. JuiceFS offers a suite of monitoring metrics, and this document outlines how to collect these metrics and visualize them with a monitoring system similar to the one depicted in the following image using Prometheus and Grafana. The setup process is as follows: Configure Prometheus to scrape JuiceFS monitoring metrics. Configure Grafana to read the monitoring data from Prometheus. Use the official JuiceFS Grafana dashboard template to display the monitoring metrics. :::tip This document uses open-source versions of Grafana and Prometheus for examples. ::: After mounting JuiceFS, it will automatically expose Prometheus-formatted metrics at `http://localhost:9567/metrics`. To observe the state changes of various metrics over a time range, you'll need to set up Prometheus and configure it to periodically scrape and save these metrics. The process for collecting metrics may vary slightly depending on the mount method or access type (such as FUSE mount, CSI Driver, S3 Gateway, Hadoop SDK, etc.). For detailed instructions, see . For example, here's how you might configure Prometheus for a common FUSE mount: If you haven't already set up Prometheus, follow the . Edit your `prometheus.yml` configuration file and add a new scrape configuration under `scrape_configs`. Define the JuiceFS client metrics address: ```yaml {20-22} global: scrape_interval: 15s evaluation_interval: 15s alerting: alertmanagers: static_configs: targets: rule_files: scrape_configs: job_name: \"prometheus\" static_configs: targets: [\"localhost:9090\"] job_name: \"juicefs\" static_configs: targets: [\"localhost:9567\"] ``` Start the Prometheus service: ```shell ./prometheus --config.file=prometheus.yml ``` Visit `http://localhost:9090` to see the Prometheus interface. Once Prometheus begins scraping JuiceFS metrics, the next step is to set up Grafana to read from Prometheus. If you haven't yet installed Grafana, follow the . In Grafana, create a new data source of type Prometheus: Name: A name that helps you identify the data source, such as the name of the file system. URL: The Prometheus data API endpoint, typically `http://localhost:9090`. JuiceFS's official Grafana dashboard templates can be found in the Grafana Dashboard repository and can be imported directly into Grafana via the URL `https://grafana.com/grafana/dashboards/20794/` or by using the ID `20794`. Here's what the official JuiceFS Grafana dashboard might look like: For different types of JuiceFS Client, metrics data is handled slightly differently. When the JuiceFS file system is mounted via the command, you can collect monitoring metrics via the address `http://localhost:9567/metrics`, or you can customize it via the `--metrics` option. For example: ```shell juicefs mount --metrics localhost:9567 ... ``` You can view these monitoring metrics using the command line tool: ```shell curl http://localhost:9567/metrics ``` In addition, the root directory of each JuiceFS file system has a hidden file called `.stats`, through which you can also view monitoring metrics. For example (assuming here that the path to the mount point is `/jfs`): ```shell cat /jfs/.stats ``` :::tip If you want to view the metrics in real-time, you can use the command. ::: See . :::note This feature needs to run JuiceFS client version 0.17.1 and above. 
::: The JuiceFS S3 Gateway will provide monitoring metrics at the address `http://localhost:9567/metrics` by default, or you can customize it with the `--metrics` option. For example: ```shell juicefs gateway --metrics localhost:9567 ... ``` If you are deploying JuiceFS S3 Gateway, you can refer to the Prometheus configuration in the section to collect monitoring metrics (the difference is mainly in the regular expression for the label `__meta_kubernetes_pod_label_app_kubernetes_io_name`),"
},
{
"data": "```yaml {6-8} scrape_configs: job_name: 'juicefs-s3-gateway' kubernetessdconfigs: role: pod relabel_configs: sourcelabels: [metakubernetespodlabelappkubernetesioname] action: keep regex: juicefs-s3-gateway sourcelabels: [address_] action: replace regex: ([^:]+)(:\\d+)? replacement: $1:9567 targetlabel: address_ sourcelabels: [metakubernetespodnode_name] target_label: node action: replace ``` enables users to quickly deploy and manage Prometheus in Kubernetes. With the `ServiceMonitor` CRD provided by Prometheus Operator, scrape configuration can be automatically generated. For example (assuming that the `Service` of the JuiceFS S3 Gateway is deployed in the `kube-system` namespace): ```yaml apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: juicefs-s3-gateway spec: namespaceSelector: matchNames: kube-system selector: matchLabels: app.kubernetes.io/name: juicefs-s3-gateway endpoints: port: metrics ``` For more information on Prometheus Operator, please refer to the . supports reporting monitoring metrics to and . Report metrics to Pushgateway: ```xml <property> <name>juicefs.push-gateway</name> <value>host:port</value> </property> ``` At the same time, the frequency of reporting metrics can be modified through the `juicefs.push-interval` configuration. The default is to report once every 10 seconds. :::info According to the suggestion of , it is required to set `honor_labels: true` in the Prometheus's . It is important to note that the timestamp of the metrics scraped by Prometheus from Pushgateway is not the time when the JuiceFS Hadoop Java SDK reported it, but the time when it scraped. For details, please refer to . By default, Pushgateway will only save metrics in memory. If you need to persist metrics to disk, you can specify the file path for saving by the `--persistence.file` option and the frequency of saving to the file with the `--persistence.interval` option (by default, the metrics will be saved every 5 minutes). ::: :::note Each process using JuiceFS Hadoop Java SDK will have a unique metric, and Pushgateway will always remember all the collected metrics. This may cause the continuous accumulation of metrics and taking up too much memory, and it will also make Prometheus scraping metrics slow. Therefore, it is recommended to clean up metrics on Pushgateway regularly. For this, the following command can help. Clearing the metrics will not affect the running JuiceFS Hadoop Java SDK to continuously report data. Note that the `--web.enable-admin-api` option must be specified when Pushgateway is started, and the following command will clear all monitoring metrics in Pushgateway. ```bash curl -X PUT http://host:9091/api/v1/admin/wipe ``` ::: For more information about Pushgateway, please check . Report metrics to Graphite: ```xml <property> <name>juicefs.push-graphite</name> <value>host:port</value> </property> ``` At the same time, the frequency of reporting metrics can be modified through the `juicefs.push-interval` configuration. The default is to report every 10 seconds. For all configurations supported by JuiceFS Hadoop Java SDK, please refer to . :::note This feature needs to run JuiceFS client version 1.0.0 and above. ::: JuiceFS support to use Consul as registration center for metrics API. The default Consul address is `127.0.0.1:8500`. You could customize the address through `--consul` option, e.g.: ```shell juicefs mount --consul 1.2.3.4:8500 ... 
``` When the Consul address is configured, the configuration of the `--metrics` option is not needed, and JuiceFS will automatically configure the metrics URL according to its own network and port conditions. If `--metrics` is set at the same time, it will first try to listen on the configured metrics URL. For each service registered to Consul, the service name is always `juicefs`, and the format of the service ID is `<IP>:<mount-point>`, for example: `127.0.0.1:/tmp/jfs`. The meta of each service contains two keys, `hostname` and `mountpoint`; the corresponding values represent the host name and the path of the mount point respectively. In particular, the `mountpoint` value for the S3 Gateway is always `s3gateway`. After successfully registering with Consul, you need to add a new configuration to `prometheus.yml` and fill in the `services` with `juicefs`. Refer to ."
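For example, a minimal `prometheus.yml` fragment using Consul service discovery might look like the sketch below. The `consul_sd_configs` block is standard Prometheus configuration; the `__meta_consul_service_metadata_*` relabeling lines assume the `hostname` and `mountpoint` keys mentioned above are exposed as Consul service metadata, so confirm the label names against your Prometheus version before relying on them:

```yaml
scrape_configs:
  - job_name: "juicefs"
    consul_sd_configs:
      # Consul agent that the JuiceFS clients registered with
      - server: "127.0.0.1:8500"
        services: ["juicefs"]
    relabel_configs:
      # Carry the mount point and host name over as regular labels
      - source_labels: [__meta_consul_service_metadata_mountpoint]
        target_label: mount_point
      - source_labels: [__meta_consul_service_metadata_hostname]
        target_label: instance_hostname
```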
}
] |
{
"category": "Runtime",
"file_name": "monitoring.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Firecracker is an open source Virtual Machine Monitor (VMM) that enables secure, multi-tenant, minimal-overhead execution of container and function workloads. Firecracker was built by developers at Amazon Web Services to enable services such as and to improve resource utilization and customer experience, while providing the security and isolation required of public cloud infrastructure. Firecracker started from Chromium OS's Virtual Machine Monitor, , an open source VMM written in Rust. Today, crosvm and Firecracker have diverged to serve very different customer needs. is an open source community where we collaborate with the crosvm maintainers and other groups and individuals to build and share quality Rust virtualization components. When we launched Lambda in November of 2014, we were focused on providing a secure experience. At launch we used per-customer EC2 instances to provide strong security and isolation between customers. As Lambda grew, we saw the need for technology to provide a highly secure, flexible, and efficient runtime environment for services like Lambda and Fargate. Using our experience building isolated EC2 instances with hardware virtualization technology, we started an effort to build a VMM that was tailored to integrate with container ecosystems. The Firecracker VMM is built to be processor agnostic. Intel, AMD and 64 bit ARM processors are supported for production workloads. You can find more details . Yes. Firecracker is integrated with , (via ), and containerd via . We welcome contributions that enable Firecracker to integrate naturally with the container ecosystem and provide more choices in how container workloads are isolated. Firecracker is an that is purpose-built for running serverless functions and containers safely and efficiently, and nothing more. Firecracker is written in Rust, provides a minimal required device model to the guest operating system while excluding non-essential functionality (only 6 emulated devices are available: virtio-net, virtio-balloon, virtio-block, virtio-vsock, serial console, and a minimal keyboard controller used only to stop the microVM). This, along with a streamlined kernel loading process enables a \\< 125 ms startup time and a \\< 5 MiB memory footprint. The Firecracker process also provides a RESTful control API, handles resource rate limiting for microVMs, and provides a microVM metadata service to enable the sharing of configuration data between the host and guest. Firecracker supports Linux host and guest operating systems as well as guests. Currently supported host/guest kernel versions can be found in the . Firecracker is licensed under the Apache License, version 2.0, allowing you to freely use, copy, and distribute your changes under the terms of your choice. . Crosvm code sections are licensed under a that also allows you to use, copy, and distribute your changes under the terms of your choice. Firecracker is an AWS open source project that encourages contributions from customers and the developer"
},
{
"data": "Any contribution is welcome as long as it aligns with our . You can learn more about how to contribute in . You can chat with others in the community on the . The Firecracker owns project maintainer responsibilities, permissions to merge pull requests, and the ability to create new Firecracker releases. Guest operating systems must be built for the same CPU architecture as the host on which it will run. Firecracker does not support running microVMs on any architecture other than the one the host is running on. In other words, running an OS built for a `x86_64` on an `aarch64` system will not work, and vice versa. Initrds are only recently supported in Firecracker. If your release predates issue being resolved, please update. In order to debug the issue, check the response of the `InstanceStart` API request. Possible responses: Error: Submit a new issue with the label \"Support: Failure\". Success: If the boot was successful, you should get a response with 204 as the status code. If you have no output in the console, most likely you will have to update the kernel command line. By default, Firecracker starts with the serial console disabled for boot time performance reasons. Example of a kernel valid command line that enables the serial console (which goes in the `boot_args` field of the `/boot-source` Firecracker API resource): ```console console=ttyS0 reboot=k panic=1 pci=off nomodule ``` The `ip=` boot param in the linux kernel only actually supports configuring a single interface. Multiple interfaces can be set up in Firecracker using the API, but guest IP configuration at boot time through boot arguments can only be done for a single interface. The canonical solution is to use NTP in your guests. However, if you want to run Firecracker at scale, we suggest using a PTP emulated device as the guest's NTP time source so as to minimize network traffic and resource overhead. With this solution the guests will constantly update time to stay in sync with host wall-clock. They do so using cheap para-virtualized calls into kvm ptp instead of actual network NTP traffic. To be able to do this you need to have a guest kernel compiled with `KVM_PTP` support: ```console CONFIGPTP1588_CLOCK=y CONFIGPTP1588CLOCKKVM=y ``` Our already has these included. Now `/dev/ptp0` should be available in the guest. Next you need to configure `/dev/ptp0` as a NTP time source. For example when using `chrony`: Add `refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0` to the chrony conf file (`/etc/chrony/chrony.conf`) Restart the `chrony` daemon. You can see more info about the `refclock` parameters . Adjust them according to your needs. The relatively high FD usage is expected and correct. Firecracker heavily relies on event file descriptors to drive device emulation. There is no relation between the numbering of the `/network-interface` API calls and the number of the network interface in the"
},
{
"data": "Rather, it is usually the order of network interface creation that determines the number in the guest (but this depends on the distribution). For example, when you create two network interfaces by calling `/network-interfaces/1` and then `/network-interfaces/0`, it may result in this mapping: ```console /network-interfaces/1 -> eth0 /network-interfaces/0 -> eth1 ``` Firecracker does not implement ACPI and PM devices, therefore operations like gracefully rebooting or powering off the guest are supported in unconventional ways. Running the `poweroff` or `halt` commands inside a Linux guest will bring it down but Firecracker process remains unaware of the guest shutdown so it lives on. Running the `reboot` command in a Linux guest will gracefully bring down the guest system and also bring a graceful end to the Firecracker process. On `x86_64` systems, issuing a `SendCtrlAltDel` action command through the Firecracker API will generate a `Ctrl + Alt + Del` keyboard event in the guest which triggers a behavior identical to running the `reboot` command. This is, however, not supported on `aarch64` systems. Check out our . If you see errors like ... ```console [<TIMESTAMP>] fc_vmm: page allocation failure: order:6, mode:0x140c0c0 (GFPKERNEL|GFPCOMP|GFP_ZERO), nodemask=(null) [<TIMESTAMP>] fcvmm cpuset=<GUID> memsallowed=0 ``` ... then your host is running out of memory. KVM is attempting to do an allocation of 2^`order` bytes (in this case, 6) and there aren't sufficient contiguous pages. Possible mitigations are: Reduce memory pressure on the host. Maybe the host has memory but it's too fragmented for the kernel to use. The allocation above of order 6 means the kernel could not find 2^6 consecutive pages. One way to mitigate memory fragmentation is to for `vm.minfreekbytes` Or investigate other Passing an optional command line parameter, `--config-file`, to the Firecracker process allows this type of configuration. This parameter must be the path to a file that contains the JSON specification that will be used to configure and start the microVM. One example of such file can be found at `tests/framework/vm_config.json`. If the Firecracker process exits with `12` exit code (`Out of memory` error), the root cause is that there is not enough memory on the host to be used by the Firecracker microVM. If the microVM was not configured in terms of memory size through an API request, the host needs to meet the minimum requirement in terms of free memory size, namely 128 MB of free memory which the microVM defaults to. This may be related to \"We are seeing page allocation failures ...\" above. To validate, run this: ```sh sudo dmesg | grep \"page allocation failure\" ``` If another hypervisor like VMware or VirtualBox is running on the host and locks `/dev/kvm`, Firecracker process will fail to start with \"Resource busy\" error. This issue can be resolved by terminating the other hypervisor running on the host, and allowing Firecracker to start."
}
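As a rough sketch of the `--config-file` JSON mentioned in the FAQ above, a minimal microVM specification could look like the following. The exact schema varies between Firecracker releases, and the kernel/rootfs paths here are placeholders, so compare it with `tests/framework/vm_config.json` in your checkout before using it:

```json
{
  "boot-source": {
    "kernel_image_path": "vmlinux.bin",
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 2,
    "mem_size_mib": 1024
  }
}
```

Assuming such a file, the microVM would then typically be launched with something like `firecracker --api-sock /tmp/firecracker.socket --config-file vm_config.json`.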
] |
{
"category": "Runtime",
"file_name": "FAQ.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "While you are welcome to provide your own organization, typically a Cobra-based application will follow the following organizational structure: ``` appName/ cmd/ add.go your.go commands.go here.go main.go ``` In a Cobra app, typically the main.go file is very bare. It serves one purpose: initializing Cobra. ```go package main import ( \"{pathToYourApp}/cmd\" ) func main() { cmd.Execute() } ``` Cobra-CLI is its own program that will create your application and add any commands you want. It's the easiest way to incorporate Cobra into your application. For complete details on using the Cobra generator, please refer to To manually implement Cobra you need to create a bare main.go file and a rootCmd file. You will optionally provide additional commands as you see fit. Cobra doesn't require any special constructors. Simply create your commands. Ideally you place this in app/cmd/root.go: ```go var rootCmd = &cobra.Command{ Use: \"hugo\", Short: \"Hugo is a very fast static site generator\", Long: `A Fast and Flexible Static Site Generator built with love by spf13 and friends in Go. Complete documentation is available at https://gohugo.io/documentation/`, Run: func(cmd *cobra.Command, args []string) { // Do Stuff Here }, } func Execute() { if err := rootCmd.Execute(); err != nil { fmt.Fprintln(os.Stderr, err) os.Exit(1) } } ``` You will additionally define flags and handle configuration in your init() function. For example cmd/root.go: ```go package cmd import ( \"fmt\" \"os\" \"github.com/spf13/cobra\" \"github.com/spf13/viper\" ) var ( // Used for flags. cfgFile string userLicense string rootCmd = &cobra.Command{ Use: \"cobra-cli\", Short: \"A generator for Cobra based Applications\", Long: `Cobra is a CLI library for Go that empowers applications. This application is a tool to generate the needed files to quickly create a Cobra application.`, } ) // Execute executes the root command. func Execute() error { return rootCmd.Execute() } func init() { cobra.OnInitialize(initConfig) rootCmd.PersistentFlags().StringVar(&cfgFile, \"config\", \"\", \"config file (default is $HOME/.cobra.yaml)\") rootCmd.PersistentFlags().StringP(\"author\", \"a\", \"YOUR NAME\", \"author name for copyright attribution\") rootCmd.PersistentFlags().StringVarP(&userLicense, \"license\", \"l\", \"\", \"name of license for the project\") rootCmd.PersistentFlags().Bool(\"viper\", true, \"use Viper for configuration\") viper.BindPFlag(\"author\", rootCmd.PersistentFlags().Lookup(\"author\")) viper.BindPFlag(\"useViper\", rootCmd.PersistentFlags().Lookup(\"viper\")) viper.SetDefault(\"author\", \"NAME HERE <EMAIL ADDRESS>\") viper.SetDefault(\"license\", \"apache\") rootCmd.AddCommand(addCmd) rootCmd.AddCommand(initCmd) } func initConfig() { if cfgFile != \"\" { // Use config file from the flag. viper.SetConfigFile(cfgFile) } else { // Find home directory. home, err := os.UserHomeDir() cobra.CheckErr(err) // Search config in home directory with name \".cobra\" (without extension). viper.AddConfigPath(home) viper.SetConfigType(\"yaml\") viper.SetConfigName(\".cobra\") } viper.AutomaticEnv() if err := viper.ReadInConfig(); err == nil { fmt.Println(\"Using config file:\", viper.ConfigFileUsed()) } } ``` With the root command you need to have your main function execute it. Execute should be run on the root for clarity, though it can be called on any command. In a Cobra app, typically the main.go file is very bare. It serves one purpose: to initialize Cobra. 
```go package main import ( \"{pathToYourApp}/cmd\" ) func main() { cmd.Execute() } ``` Additional commands can be defined and typically are each given their own file inside of the cmd/ directory. If you wanted to create a version command you would create cmd/version.go and populate it with the following: ```go package cmd import ( \"fmt\" \"github.com/spf13/cobra\" ) func init() { rootCmd.AddCommand(versionCmd) } var versionCmd = &cobra.Command{ Use: \"version\", Short: \"Print the version number of Hugo\", Long: `All software has versions. This is Hugo's`, Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Hugo Static Site Generator v0.9 -- HEAD\") }, } ``` A command may have subcommands which in turn may have other subcommands. This is achieved by using"
`AddCommand`.
},
{
"data": "In some cases, especially in larger applications, each subcommand may be defined in its own go package. The suggested approach is for the parent command to use `AddCommand` to add its most immediate subcommands. For example, consider the following directory structure: ```text cmd root.go sub1 sub1.go sub2 leafA.go leafB.go sub2.go main.go ``` In this case: The `init` function of `root.go` adds the command defined in `sub1.go` to the root command. The `init` function of `sub1.go` adds the command defined in `sub2.go` to the sub1 command. The `init` function of `sub2.go` adds the commands defined in `leafA.go` and `leafB.go` to the sub2 command. This approach ensures the subcommands are always included at compile time while avoiding cyclic references. If you wish to return an error to the caller of a command, `RunE` can be used. ```go package cmd import ( \"fmt\" \"github.com/spf13/cobra\" ) func init() { rootCmd.AddCommand(tryCmd) } var tryCmd = &cobra.Command{ Use: \"try\", Short: \"Try and possibly fail at something\", RunE: func(cmd *cobra.Command, args []string) error { if err := someFunc(); err != nil { return err } return nil }, } ``` The error can then be caught at the execute function call. Flags provide modifiers to control how the action command operates. Since the flags are defined and used in different locations, we need to define a variable outside with the correct scope to assign the flag to work with. ```go var Verbose bool var Source string ``` There are two different approaches to assign a flag. A flag can be 'persistent', meaning that this flag will be available to the command it's assigned to as well as every command under that command. For global flags, assign a flag as a persistent flag on the root. ```go rootCmd.PersistentFlags().BoolVarP(&Verbose, \"verbose\", \"v\", false, \"verbose output\") ``` A flag can also be assigned locally, which will only apply to that specific command. ```go localCmd.Flags().StringVarP(&Source, \"source\", \"s\", \"\", \"Source directory to read from\") ``` By default, Cobra only parses local flags on the target command, and any local flags on parent commands are ignored. By enabling `Command.TraverseChildren`, Cobra will parse local flags on each command before executing the target command. ```go command := cobra.Command{ Use: \"print [OPTIONS] [COMMANDS]\", TraverseChildren: true, } ``` You can also bind your flags with : ```go var author string func init() { rootCmd.PersistentFlags().StringVar(&author, \"author\", \"YOUR NAME\", \"Author name for copyright attribution\") viper.BindPFlag(\"author\", rootCmd.PersistentFlags().Lookup(\"author\")) } ``` In this example, the persistent flag `author` is bound with `viper`. Note: the variable `author` will not be set to the value from config, when the `--author` flag is provided by user. More in . Flags are optional by default. If instead you wish your command to report an error when a flag has not been set, mark it as required: ```go rootCmd.Flags().StringVarP(&Region, \"region\", \"r\", \"\", \"AWS region (required)\") rootCmd.MarkFlagRequired(\"region\") ``` Or, for persistent flags: ```go rootCmd.PersistentFlags().StringVarP(&Region, \"region\", \"r\", \"\", \"AWS region (required)\") rootCmd.MarkPersistentFlagRequired(\"region\") ``` If you have different flags that must be provided together (e.g. 
if they provide the `--username` flag they MUST provide the `--password` flag as well) then Cobra can enforce that requirement: ```go rootCmd.Flags().StringVarP(&u, \"username\", \"u\", \"\", \"Username (required if password is set)\") rootCmd.Flags().StringVarP(&pw, \"password\", \"p\", \"\", \"Password (required if username is set)\") rootCmd.MarkFlagsRequiredTogether(\"username\", \"password\") ``` You can also prevent different flags from being provided together if they represent mutually exclusive options such as specifying an output format as either `--json` or `--yaml` but never both: ```go rootCmd.Flags().BoolVar(&ofJson, \"json\", false, \"Output in JSON\") rootCmd.Flags().BoolVar(&ofYaml, \"yaml\", false, \"Output in YAML\")"
rootCmd.MarkFlagsMutuallyExclusive(\"json\",
},
{
"data": "\"yaml\") ``` In both of these cases: both local and persistent flags can be used NOTE: the group is only enforced on commands where every flag is defined a flag may appear in multiple groups a group may contain any number of flags Validation of positional arguments can be specified using the `Args` field of `Command`. The following validators are built in: Number of arguments: `NoArgs` - report an error if there are any positional args. `ArbitraryArgs` - accept any number of args. `MinimumNArgs(int)` - report an error if less than N positional args are provided. `MaximumNArgs(int)` - report an error if more than N positional args are provided. `ExactArgs(int)` - report an error if there are not exactly N positional args. `RangeArgs(min, max)` - report an error if the number of args is not between `min` and `max`. Content of the arguments: `OnlyValidArgs` - report an error if there are any positional args not specified in the `ValidArgs` field of `Command`, which can optionally be set to a list of valid values for positional args. If `Args` is undefined or `nil`, it defaults to `ArbitraryArgs`. Moreover, `MatchAll(pargs ...PositionalArgs)` enables combining existing checks with arbitrary other checks. For instance, if you want to report an error if there are not exactly N positional args OR if there are any positional args that are not in the `ValidArgs` field of `Command`, you can call `MatchAll` on `ExactArgs` and `OnlyValidArgs`, as shown below: ```go var cmd = &cobra.Command{ Short: \"hello\", Args: cobra.MatchAll(cobra.ExactArgs(2), cobra.OnlyValidArgs), Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Hello, World!\") }, } ``` It is possible to set any custom validator that satisfies `func(cmd *cobra.Command, args []string) error`. For example: ```go var cmd = &cobra.Command{ Short: \"hello\", Args: func(cmd *cobra.Command, args []string) error { // Optionally run one of the validators provided by cobra if err := cobra.MinimumNArgs(1)(cmd, args); err != nil { return err } // Run the custom validation logic if myapp.IsValidColor(args[0]) { return nil } return fmt.Errorf(\"invalid color specified: %s\", args[0]) }, Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Hello, World!\") }, } ``` In the example below, we have defined three commands. Two are at the top level and one (cmdTimes) is a child of one of the top commands. In this case the root is not executable, meaning that a subcommand is required. This is accomplished by not providing a 'Run' for the 'rootCmd'. We have only defined one flag for a single command. More documentation about flags is available at https://github.com/spf13/pflag ```go package main import ( \"fmt\" \"strings\" \"github.com/spf13/cobra\" ) func main() { var echoTimes int var cmdPrint = &cobra.Command{ Use: \"print [string to print]\", Short: \"Print anything to the screen\", Long: `print is for printing anything back to the screen. For many years people have printed back to the screen.`, Args: cobra.MinimumNArgs(1), Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Print: \" + strings.Join(args, \" \")) }, } var cmdEcho = &cobra.Command{ Use: \"echo [string to echo]\", Short: \"Echo anything to the screen\", Long: `echo is for echoing anything back. 
Echo works a lot like print, except it has a child command.`, Args: cobra.MinimumNArgs(1), Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Echo: \" + strings.Join(args, \" \")) }, } var cmdTimes = &cobra.Command{ Use: \"times [string to echo]\", Short: \"Echo anything to the screen more times\", Long: `echo things multiple times back to the user by providing a count and a string.`, Args: cobra.MinimumNArgs(1), Run: func(cmd *cobra.Command, args []string) { for i := 0; i < echoTimes; i++ { fmt.Println(\"Echo: \" + strings.Join(args, \" \")) } }, }"
cmdTimes.Flags().IntVarP(&echoTimes,
},
{
"data": "\"times\", \"t\", 1, \"times to echo the input\") var rootCmd = &cobra.Command{Use: \"app\"} rootCmd.AddCommand(cmdPrint, cmdEcho) cmdEcho.AddCommand(cmdTimes) rootCmd.Execute() } ``` For a more complete example of a larger application, please checkout . Cobra automatically adds a help command to your application when you have subcommands. This will be called when a user runs 'app help'. Additionally, help will also support all other commands as input. Say, for instance, you have a command called 'create' without any additional configuration; Cobra will work when 'app help create' is called. Every command will automatically have the '--help' flag added. The following output is automatically generated by Cobra. Nothing beyond the command and flag definitions are needed. $ cobra-cli help Cobra is a CLI library for Go that empowers applications. This application is a tool to generate the needed files to quickly create a Cobra application. Usage: cobra-cli [command] Available Commands: add Add a command to a Cobra Application completion Generate the autocompletion script for the specified shell help Help about any command init Initialize a Cobra Application Flags: -a, --author string author name for copyright attribution (default \"YOUR NAME\") --config string config file (default is $HOME/.cobra.yaml) -h, --help help for cobra-cli -l, --license string name of license for the project --viper use Viper for configuration Use \"cobra-cli [command] --help\" for more information about a command. Help is just a command like any other. There is no special logic or behavior around it. In fact, you can provide your own if you want. Cobra supports grouping of available commands in the help output. To group commands, each group must be explicitly defined using `AddGroup()` on the parent command. Then a subcommand can be added to a group using the `GroupID` element of that subcommand. The groups will appear in the help output in the same order as they are defined using different calls to `AddGroup()`. If you use the generated `help` or `completion` commands, you can set their group ids using `SetHelpCommandGroupId()` and `SetCompletionCommandGroupId()` on the root command, respectively. You can provide your own Help command or your own template for the default command to use with the following functions: ```go cmd.SetHelpCommand(cmd *Command) cmd.SetHelpFunc(f func(*Command, []string)) cmd.SetHelpTemplate(s string) ``` The latter two will also apply to any children commands. When the user provides an invalid flag or invalid command, Cobra responds by showing the user the 'usage'. You may recognize this from the help above. That's because the default help embeds the usage as part of its output. $ cobra-cli --invalid Error: unknown flag: --invalid Usage: cobra-cli [command] Available Commands: add Add a command to a Cobra Application completion Generate the autocompletion script for the specified shell help Help about any command init Initialize a Cobra Application Flags: -a, --author string author name for copyright attribution (default \"YOUR NAME\") --config string config file (default is $HOME/.cobra.yaml) -h, --help help for cobra-cli -l, --license string name of license for the project --viper use Viper for configuration Use \"cobra [command] --help\" for more information about a command. You can provide your own usage function or template for Cobra to use. 
Like help, the function and template are overridable through public methods: ```go cmd.SetUsageFunc(f func(*Command) error) cmd.SetUsageTemplate(s string) ``` Cobra adds a top-level '--version' flag if the Version field is set on the root command. Running an application with the '--version' flag will print the version to stdout using the version template. The template can be customized using the `cmd.SetVersionTemplate(s string)`"
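For instance, a root command that opts into the automatic `--version` flag could be declared as in the sketch below; the version string and template text are illustrative placeholders, not values taken from this guide:

```go
var rootCmd = &cobra.Command{
    Use:     "app",
    Short:   "An example application",
    Version: "1.2.3", // setting Version enables the automatic --version flag
}

func init() {
    // Optional: customize what --version prints.
    rootCmd.SetVersionTemplate("app version {{.Version}}\n")
}
```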
},
{
"data": "It is possible to run functions before or after the main `Run` function of your command. The `PersistentPreRun` and `PreRun` functions will be executed before `Run`. `PersistentPostRun` and `PostRun` will be executed after `Run`. The `Persistent*Run` functions will be inherited by children if they do not declare their own. These functions are run in the following order: `PersistentPreRun` `PreRun` `Run` `PostRun` `PersistentPostRun` An example of two commands which use all of these features is below. When the subcommand is executed, it will run the root command's `PersistentPreRun` but not the root command's `PersistentPostRun`: ```go package main import ( \"fmt\" \"github.com/spf13/cobra\" ) func main() { var rootCmd = &cobra.Command{ Use: \"root [sub]\", Short: \"My root command\", PersistentPreRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PersistentPreRun with args: %v\\n\", args) }, PreRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PreRun with args: %v\\n\", args) }, Run: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd Run with args: %v\\n\", args) }, PostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PostRun with args: %v\\n\", args) }, PersistentPostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PersistentPostRun with args: %v\\n\", args) }, } var subCmd = &cobra.Command{ Use: \"sub [no options!]\", Short: \"My subcommand\", PreRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd PreRun with args: %v\\n\", args) }, Run: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd Run with args: %v\\n\", args) }, PostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd PostRun with args: %v\\n\", args) }, PersistentPostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd PersistentPostRun with args: %v\\n\", args) }, } rootCmd.AddCommand(subCmd) rootCmd.SetArgs([]string{\"\"}) rootCmd.Execute() fmt.Println() rootCmd.SetArgs([]string{\"sub\", \"arg1\", \"arg2\"}) rootCmd.Execute() } ``` Output: ``` Inside rootCmd PersistentPreRun with args: [] Inside rootCmd PreRun with args: [] Inside rootCmd Run with args: [] Inside rootCmd PostRun with args: [] Inside rootCmd PersistentPostRun with args: [] Inside rootCmd PersistentPreRun with args: [arg1 arg2] Inside subCmd PreRun with args: [arg1 arg2] Inside subCmd Run with args: [arg1 arg2] Inside subCmd PostRun with args: [arg1 arg2] Inside subCmd PersistentPostRun with args: [arg1 arg2] ``` Cobra will print automatic suggestions when \"unknown command\" errors happen. This allows Cobra to behave similarly to the `git` command when a typo happens. For example: ``` $ hugo srever Error: unknown command \"srever\" for \"hugo\" Did you mean this? server Run 'hugo --help' for usage. ``` Suggestions are automatically generated based on existing subcommands and use an implementation of . Every registered command that matches a minimum distance of 2 (ignoring case) will be displayed as a suggestion. If you need to disable suggestions or tweak the string distance in your command, use: ```go command.DisableSuggestions = true ``` or ```go command.SuggestionsMinimumDistance = 1 ``` You can also explicitly set names for which a given command will be suggested using the `SuggestFor` attribute. 
This allows suggestions for strings that are not close in terms of string distance, but make sense in your set of commands but for which you don't want aliases. Example: ``` $ kubectl remove Error: unknown command \"remove\" for \"kubectl\" Did you mean this? delete Run 'kubectl help' for usage. ``` Cobra can generate documentation based on subcommands, flags, etc. Read more about it in the . Cobra can generate a shell-completion file for the following shells: bash, zsh, fish, PowerShell. If you add more information to your commands, these completions can be amazingly powerful and flexible. Read more about it in . Cobra makes use of the shell-completion system to define a framework allowing you to provide Active Help to your users. Active Help are messages (hints, warnings, etc) printed as the program is being used. Read more about it in ."
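As a small sketch of the `SuggestFor` attribute mentioned above (the command name and the suggested-for strings are made up for illustration):

```go
var deleteCmd = &cobra.Command{
    Use:        "delete",
    Short:      "Delete a resource",
    SuggestFor: []string{"remove", "rm"}, // typing "remove" or "rm" will suggest "delete"
    Run:        func(cmd *cobra.Command, args []string) {},
}
```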
}
] |
{
"category": "Runtime",
"file_name": "user_guide.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "slug: /comparison/juicefsvss3fs description: This document compares S3FS and JuiceFS, examining their product positioning, architecture, caching, and features. is an open source tool developed in C++ that mounts S3 object storage locally via FUSE for read and write access as a local disk. In addition to Amazon S3, it supports all S3 API-compatible object stores. While both S3FS and JuiceFS share the basic functionality of mounting object storage buckets locally via FUSE and using them through POSIX interfaces, they differ significantly in functional details and technical implementation. S3FS is a utility that allows users to mount object storage buckets locally and read and write in a way that the users used to. It targets general use scenarios that are not sensitive to performance and network latency. JuiceFS is a distributed file system with a unique approach to data management and a series of technical optimizations for high performance, reliability, and security. It primarily addresses the storage needs of large volumes of data. S3FS does not do special optimization for files. It acts as an access channel between local and object storage, allowing the same content to be seen on the local mount point and the object storage browser. This makes it easy to use cloud storage locally. On the other hand, with this simple architecture, retrieving, reading, and writing files with S3FS require direct interaction with the object store, and network latency can impact strongly on performance and user experience. JuiceFS uses a architecture that separates data and metadata. Files are split into data blocks according to specific rules before being uploaded to object storage, and the corresponding metadata is stored in a separate database. The advantage of this is that retrieval of files and modification of metadata such as file names can directly interact with the database with a faster response, bypassing the network latency impact of interacting with the object store. In addition, when processing large files, although S3FS can solve the problem of transferring large files by uploading them in chunks, the nature of object storage dictates that appending files requires rewriting the entire object. For large files of tens or hundreds of gigabytes or even terabytes, repeated uploads waste a lot of time and bandwidth resources. JuiceFS avoids such problems by splitting individual files into chunks locally according to specific rules (default 4MiB) before uploading, regardless of their size. The rewriting and appending operations will eventually become new data blocks instead of modifying already generated data blocks. This greatly reduces the waste of time and bandwidth resources. For a detailed description of the JuiceFS architecture, refer to the"
},
{
"data": "S3FS supports disk caching, but it is disabled by default. Local caching can be enabled by specifying a cache path with `-o use_cache`. When caching is enabled, any file reads or writes will be written to the cache before the operation is actually performed. S3FS detects data changes via MD5 to ensure data correctness and reduce duplicate file downloads. Since all operations involved with S3FS require interactions with S3, whether the cache is enabled or not impacts significantly on its application experience. S3FS does not limit the cache capacity by default, which may cause the cache to fill up the disk when working with large buckets. You need to define the reserved disk space by `-o ensure_diskfree`. In addition, S3FS does not have a cache expiration and cleanup mechanism, so users need to manually clean up the cache periodically. Once the cache space is full, uncached file operations need to interact directly with the object storage, which will impact large file handling. JuiceFS uses a completely different caching approach than S3FS. First, JuiceFS guarantees data consistency. Secondly, JuiceFS defines a default disk cache usage limit of 100GiB, which can be freely adjusted by users as needed, and by default ensures that no more space is used when disk free space falls below 10%. When the cache usage limit reaches the upper limit, JuiceFS will automatically do cleanup using an LRU-like algorithm to ensure that cache is always available for subsequent read and write operations. For more information on JuiceFS caching, see the . | Comparison basis | S3FS | JuiceFS | ||-|-| | Data Storage | S3 | S3, other object storage, WebDAV, local disk | | Metadata Storage | No | Database | | Operating System | Linux, macOS | Linux, macOS, Windows | | Access Interface | POSIX | POSIX, HDFS API, S3 Gateway and CSI Driver | | POSIX Compatibility | Partially compatible | Fully compatible | | Shared Mounts | Supports but does not guarantee data integrity and consistency | Guarantee strong consistency | | Local Cache | | | | Symbol Links | | | | Standard Unix Permissions | | | | Strong Consistency | | | | Extended Attributes | | | | Hard Links | | | | File Chunking | | | | Atomic Operations | | | | Data Compression | | | | Client-side Encryption | | | | Development Language | C++ | Go | | Open Source License | GPL v2.0 | Apache License 2.0 | , , and are all derivatives based on S3FS and have essentially the same functional features and usage as S3FS."
}
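To make the caching differences concrete, the hedged sketch below shows how each tool is commonly pointed at a local cache; the option names and units are believed current but can differ between versions, so check `s3fs --help` and `juicefs mount --help` before copying:

```shell
# S3FS: enable a local cache directory and keep ~10 GiB of disk free (value in MB)
s3fs mybucket /mnt/s3 -o use_cache=/var/cache/s3fs -o ensure_diskfree=10240

# JuiceFS: set the cache directory and cap its size (value in MiB; 102400 is the 100 GiB default)
juicefs mount redis://127.0.0.1/1 /mnt/jfs --cache-dir /var/jfsCache --cache-size 102400
```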
] |
{
"category": "Runtime",
"file_name": "juicefs_vs_s3fs.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Rook is a large growing community of storage providers, contributors, and users. The Rook community has adopted this security disclosure and response policy to ensure we responsibly handle critical issues. This policy is adapted from the policies of the following CNCF projects: The Rook community credits and appreciates the example and security best practices that they have published openly. A third party security audit was performed in December 2019 by [Trail of Bits](https://www.trailofbits.com/). The full security report has been published and is [available for download](https://drive.google.com/file/d/1rOwrwYmBUpLUm6W5J5rhXvdVit818hWJ/view?usp=sharing). Security vulnerabilities should be handled quickly and sometimes privately. The primary goal of this process is to reduce the total time users are vulnerable to publicly known exploits. The Product Security Team (PST) is responsible for organizing the entire response including internal communication and external disclosure. The initial Product Security Team will consist of the set of maintainers that volunteered. Every beta or stable storage provider MUST have a representative on the PST. : for any security concerns. Received by Product Security Team members, and used by this Team to discuss security issues and fixes. : for early private information on Security patch releases. See below how Rook distributors can apply for this list. If you find a security vulnerability or any security related issues, please DO NOT file a public issue. Do not create a GitHub issue. Instead, send your report privately to . Security reports are greatly appreciated and we will publicly thank you for it. Please provide as much information as possible, so we can react quickly. For instance, that could include: Description of the location and potential impact of the vulnerability A detailed description of the steps required to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us) Whatever else you think we might need to identify the source of this vulnerability, and possibly even a suggested fix for the vulnerability as well If you know of a publicly disclosed security vulnerability please IMMEDIATELY email to inform the Product Security Team (PST) about the vulnerability so we start the patch, release, and communication process. If possible the PST will ask the person making the public report if the issue can be handled via a private disclosure process (for example if the full exploit details have not yet been published). If the reporter denies the request for private disclosure, the PST will move swiftly with the fix and release process. In extreme cases you can ask GitHub to delete the issue but this generally isn't necessary and is unlikely to make a public disclosure less damaging. For each vulnerability a member of the PST will volunteer to lead coordination with the \"Fix Team\" and is responsible for sending disclosure emails to the rest of the community. This lead will be referred to as the \"Fix Lead.\" The role of Fix Lead should rotate round-robin across the PST. Note that given the current size of the Rook community it is likely that the PST is the same as the \"Fix"
},
{
"data": "The PST may decide to bring in additional contributors for added expertise depending on the area of the code that contains the vulnerability. All of the timelines below are suggestions and assume a Private Disclosure. If the Team is dealing with a Public Disclosure all timelines become ASAP. If the fix relies on another upstream project's disclosure timeline, that will adjust the process as well. We will work with the upstream project to fit their timeline and best protect our users. These steps should be completed within the first 24 hours of disclosure. The Fix Lead will work quickly to identify relevant engineers from the affected projects and packages and CC those engineers into the disclosure thread. These selected developers are the Fix Team. The Fix Lead will get the Fix Team access to private security repos to develop the fix. These steps should be completed within the 1-7 days of Disclosure. The Fix Lead and the Fix Team will create a using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Fix Lead makes the final call on the calculated CVSS; it is better to move quickly than making the CVSS perfect. The Fix Team will notify the Fix Lead that work on the fix branch is complete once there are LGTMs on all commits in the private repo from one or more maintainers. If the CVSS score is under 4.0 ([a low severity score](https://www.first.org/cvss/specification-document#i5)) the Fix Team can decide to slow the release process down in the face of holidays, developer bandwidth, etc. These decisions must be discussed on the mailing list. With the Fix Development underway the Rook Security Team needs to come up with an overall communication plan for the wider community. This Disclosure process should begin after the Team has developed a fix or mitigation so that a realistic timeline can be communicated to users. Disclosure of Forthcoming Fix to Users (Completed within 1-7 days of Disclosure) The Fix Lead will create a GitHub issue in Rook project to inform users that a security vulnerability has been disclosed and that a fix will be made available, with an estimation of the Release Date. It will include any mitigating steps users can take until a fix is available. The communication to users should be actionable. They should know when to block time to apply patches, understand exact mitigation steps, etc. Optional Fix Disclosure to Private Distributors List (Completed within 1-14 days of Disclosure): The Fix Lead will make a determination with the help of the Fix Team if an issue is critical enough to require early disclosure to distributors. Generally this Private Distributor Disclosure process should be reserved for remotely exploitable or privilege escalation issues. Otherwise, this process can be skipped. The Fix Lead will email the patches to [email protected] so distributors can prepare their own release to be available to users on the day of the issue's announcement. Distributors should read about the to find out the requirements for being added to this"
},
{
"data": "What if a distributor breaks embargo?* The PST will assess the damage and may make the call to release earlier or continue with the plan. When in doubt push forward and go public ASAP. Fix Release Day (Completed within 1-21 days of Disclosure) The Fix Team will selectively choose all needed commits from the Master branch in order to create a new release on top of the current last version released. Release process will be as usual. The Fix Lead will request a CVE from and include the CVSS and release details. The Fix Lead will inform all users, devs and integrators, now that everything is public, announcing the new releases, the CVE number, and the relevant merged PRs to get wide distribution and user action. As much as possible this email should be actionable and include links on how to apply the fix to user's environments; this can include links to external distributor documentation. This list is intended to be used primarily to provide actionable information to multiple distributor projects at once. This list is not intended for individuals to find out about security issues. The information members receive on [email protected] must not be made public, shared, nor even hinted at anywhere beyond the need-to-know within your specific team except with the list's explicit approval. This holds true until the public disclosure date/time that was agreed upon by the list. Members of the list and others may not use the information for anything other than getting the issue fixed for your respective distribution's users. Before any information from the list is shared with respective members of your team required to fix said issue, they must agree to the same terms and only find out information on a need-to-know basis. In the unfortunate event you share the information beyond what is allowed by this policy, you must urgently inform the mailing list of exactly what information leaked and to whom. If you continue to leak information and break the policy outlined here, you will be removed from the list. This is a team effort. As a member of the list you must carry some water. This could be in the form of the following: Review and/or test the proposed patches and point out potential issues with them (such as incomplete fixes for the originally reported issues, additional issues you might notice, and newly introduced bugs), and inform the list of the work done even if no issues were encountered. Help draft emails to the public disclosure mailing list. Help with release notes. To be eligible for the [email protected] mailing list, your distribution should: Be an active distributor of Rook. Have a user base not limited to your own organization. Have a publicly verifiable track record up to present day of fixing security issues. Not be a downstream or rebuild of another distributor. Be a participant and active contributor in the community. Accept the that is outlined above. Have someone already on the list vouch for the person requesting membership on behalf of your distribution. New membership requests are sent to . In the body of your request please specify how you qualify and fulfill each criterion listed in ."
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "MinIO supports encrypting config, IAM assets with KMS provided keys. If the KMS is not enabled, MinIO will store the config, IAM data as plain text erasure coded in its backend. MinIO supports two ways of encrypting IAM and configuration data. You can either use KES - together with an external KMS - or, much simpler, set the env. variable `MINIOKMSSECRET_KEY` and start/restart the MinIO server. For more details about KES and how to set it up refer to our . Instead of configuring an external KMS you can start with a single key by setting the env. variable `MINIOKMSSECRET_KEY`. It expects the following format: ```sh MINIOKMSSECRET_KEY=<key-name>:<base64-value> ``` First generate a 256 bit random key via: ```sh $ cat /dev/urandom | head -c 32 | base64 - OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw= ``` Now, you can set `MINIOKMSSECRET_KEY` like this: ```sh export MINIOKMSSECRET_KEY=my-minio-key:OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw= ``` You can choose an arbitrary name for the key - instead of `my-minio-key`. Please note that losing the `MINIOKMSSECRET_KEY` will cause data loss since you will not be able to decrypt the IAM/configuration data anymore. For distributed MinIO deployments, specify the same `MINIOKMSSECRET_KEY` for each MinIO server process. At any point in time you can switch from `MINIOKMSSECRET_KEY` to a full KMS deployment. You just need to import the generated key into KES - for example via the KES CLI once you have successfully setup KES: ```sh kes key create my-minio-key OSMM+vkKUTCvQs9YL/CVMIMt43HFhkUpqJxTmGl6rYw= ``` For instructions on setting up KES, see the For instructions on using KES for encrypting the MinIO backend, follow the . The SSE-S3 configuration setup also supports MinIO KMS backend encryption. Why is this change needed? Before, there were two separate mechanisms - S3 objects got encrypted using a KMS, if present, and the IAM / configuration data got encrypted with the root credentials. Now, MinIO encrypts IAM / configuration and S3 objects with a KMS, if present. This change unified the key-management aspect within MinIO. The unified KMS-based approach has several advantages: Key management is now centralized. There is one way to change or rotate encryption keys. There used to be two different mechanisms - one for regular S3 objects and one for IAM data. Reduced server startup time. For IAM encryption with the root credentials, MinIO had to use a memory-hard function (Argon2) that (on purpose) consumes a lot of memory and CPU. The new KMS-based approach can use a key derivation function that is orders of magnitudes cheaper w.r.t. memory and"
},
{
"data": "Root credentials can now be changed easily. Before, a two-step process was required to change the cluster root credentials since they were used to en/decrypt the IAM data. So, both - the old and new credentials - had to be present at the same time during a rotation and the old credentials had to be removed once the rotation completed. This process is now gone. The root credentials can now be changed easily. Does this mean I need an enterprise KMS setup to run MinIO (securely)? No, MinIO does not depend on any third-party KMS provider. You have three options here: Run MinIO without a KMS. In this case all IAM data will be stored in plain-text. Run MinIO with a single secret key. MinIO supports a static cryptographic key that can act as minimal KMS. With this method all IAM data will be stored encrypted. The encryption key has to be passed as environment variable. Run MinIO with KES (minio/kes) in combination with any supported KMS as secure key store. For example, you can run MinIO + KES + Hashicorp Vault. What about an exiting MinIO deployment? Can I just upgrade my cluster? Yes, MinIO will try to transparently migrate any existing IAM data and either stores it in plaintext (no KMS) or re-encrypts using the KMS. Is this change backward compatible? Will it break my setup? This change is not backward compatible for all setups. In particular, the native Hashicorp Vault integration - which has been deprecated already - won't be supported anymore. KES is now mandatory if a third-party KMS should be used. Further, since the configuration data is encrypted with the KMS, the KMS configuration itself can no longer be stored in the MinIO config file and instead must be provided via environment variables. If you have set your KMS configuration using e.g. the `mc admin config` commands you will need to adjust your deployment. Even though this change is backward compatible we do not expect that it affects the vast majority of deployments in any negative way. Will an upgrade of an existing MinIO cluster impact the SLA of the cluster or will it even cause downtime? No, an upgrade should not cause any downtime. However, on the first startup - since MinIO will attempt to migrate any existing IAM data - the boot process may take slightly longer, but may not be visibly noticeable. Once the migration has completed, any subsequent restart should be as fast as before or even faster."
}
] |
{
"category": "Runtime",
"file_name": "IAM.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "We follow the , and use the corresponding tooling. For the purposes of the aforementioned guidelines, controller-runtime counts as a \"library project\", but otherwise follows the guidelines exactly. For release branches, we generally tend to support backporting one (1) major release (`release-{X-1}` or `release-0.{Y-1}`), but may go back further if the need arises and is very pressing (e.g. security updates). Note the . Particularly: We DO guarantee Kubernetes REST API compatibility -- if a given version of controller-runtime stops working with what should be a supported version of Kubernetes, this is almost certainly a bug. We DO NOT guarantee any particular compatibility matrix between kubernetes library dependencies (client-go, apimachinery, etc); Such compatibility is infeasible due to the way those libraries are versioned."
}
] |
{
"category": "Runtime",
"file_name": "VERSIONING.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage the multicast groups. ``` -h, --help help for group ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage multicast BPF programs - Add a multicast group. - Delete a multicast group. - List the multicast groups."
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_multicast_group.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for zsh Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: echo \"autoload -U compinit; compinit\" >> ~/.zshrc To load completions in your current shell session: source <(cilium-agent completion zsh) To load completions for every new session, execute once: cilium-agent completion zsh > \"${fpath[1]}/_cilium-agent\" cilium-agent completion zsh > $(brew --prefix)/share/zsh/site-functions/_cilium-agent You will need to start a new shell for this setup to take effect. ``` cilium-agent completion zsh [flags] ``` ``` -h, --help help for zsh --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-agent_completion_zsh.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Cobra will follow a steady release cadence. Non breaking changes will be released as minor versions quarterly. Patch bug releases are at the discretion of the maintainers. Users can expect security patch fixes to be released within relatively short order of a CVE becoming known. For more information on security patch fixes see the CVE section below. Releases will follow . Users tracking the Master branch should expect unpredictable breaking changes as the project continues to move forward. For stability, it is highly recommended to use a release. We will maintain two major releases in a moving window. The N-1 release will only receive bug fixes and security updates and will be dropped once N+1 is released. Deprecation of Go versions or dependent packages will only occur in major releases. To reduce the change of this taking users by surprise, any large deprecation will be preceded by an announcement in the and an Issue on Github. Maintainers will make every effort to release security patches in the case of a medium to high severity CVE directly impacting the library. The speed in which these patches reach a release is up to the discretion of the maintainers. A low severity CVE may be a lower priority than a high severity one. Cobra maintainers will use GitHub issues and the as the primary means of communication with the community. This is to foster open communication with all users and contributors. Breaking changes are generally allowed in the master branch, as this is the branch used to develop the next release of Cobra. There may be times, however, when master is closed for breaking changes. This is likely to happen as we near the release of a new version. Breaking changes are not allowed in release branches, as these represent minor versions that have already been released. These version have consumers who expect the APIs, behaviors, etc, to remain stable during the lifetime of the patch stream for the minor release. Examples of breaking changes include: Removing or renaming exported constant, variable, type, or function. Updating the version of critical libraries such as `spf13/pflag`, `spf13/viper` etc... Some version updates may be acceptable for picking up bug fixes, but maintainers must exercise caution when reviewing. There may, at times, need to be exceptions where breaking changes are allowed in release branches. These are at the discretion of the project's maintainers, and must be carefully considered before merging. Maintainers will ensure the Cobra test suite utilizes the current supported versions of Golang. Changes to this document and the contents therein are at the discretion of the maintainers. None of the contents of this document are legally binding in any way to the maintainers or the users."
}
] |
{
"category": "Runtime",
"file_name": "CONDUCT.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Piraeus Datastore consists of several different components. Each component runs as a separate Deployment, DaemonSet or plain Pod. All Pods are labelled according to their component. You can check the Pods running in your own cluster by running the following `kubectl` command: ``` $ kubectl get pods '-ocustom-columns=NAME:.metadata.name,COMPONENT:.metadata.labels.app\\.kubernetes\\.io/component' NAME COMPONENT ha-controller-vd82w ha-controller linstor-controller-6c8f8dc47-cm8hr linstor-controller linstor-csi-controller-59b9968b86-ftl76 linstor-csi-controller linstor-csi-node-hcmk9 linstor-csi-node linstor-satellite-k8s-10.test-66687 linstor-satellite piraeus-operator-controller-manager-6dcfcb4568-6jntp piraeus-operator piraeus-operator-gencert-59449cb449-nzg6z piraeus-operator-gencert ``` The Piraeus Operator creates and maintains the other components, except for the `piraeus-operator-gencert` component. Along with deploying the needed Kubernetes resources, it maintains the LINSTOR Cluster state by: Registering satellites Creating storage pools Maintaining node labels The \"generate certificate\" Pod creates and maintains the TLS key and certificate used by the Piraeus Operator. The TLS secret, named `webhook-server-cert`, is needed to start the Piraeus Operator Pod. In addition, this Pod keeps the `ValidatingWebhookConfiguration` for the Piraeus Operator up-to-date. The Piraeus Operator needs a TLS certificate to serve the validation endpoint for the custom resources it maintains. Historically, TLS certificates where created by `cert-manager`. This was removed to reduce the number of dependencies for deploying Piraeus Datastore. The LINSTOR Controller is responsible for resource placement, resource configuration, and orchestration of any operational processes that require a view of the whole cluster. It maintains a database of all the configuration information for the whole cluster, stored as Kubernetes objects. The LINSTOR Controller connects to the LINSTOR Satellites and sends them instructions for achieving the desired cluster state. It provides an external API used by LINSTOR CSI and Piraeus Operator to change the cluster state. The LINSTOR Satellite service runs on each node. It acts as the local configuration agent for LINSTOR managed storage. It is stateless and receives all the information it needs from the LINSTOR Controller. Satellites are started as DaemonSets, managed by the Piraeus"
},
{
"data": "We deploy one DaemonSet per node, this enables having per-node configuration and customization of the Satellite Pods. Satellites interact with the host operating system directly and are deployed as privileged containers. Integration with the host operating system also leads to two noteworthy interactions with [Linux namespaces]: Any DRBD device will inherit the network namespace of the Satellite Pods. Unless the Satellites are using host networking, DRBD will not be able to replicate data without a running Satellite Pod. See the [host networking guide] for more information. The Satellite process is spawned in a separate UTS namespace: this allows us to keep control of the hostname reported to DRBD tools, even when the Pod is using a generated name from the DaemonSet. Thus, DRBD connections will always use the Kubernetes node name. The use of separate UTS namespace should be completely transparent to users: running `kubectl exec ...` on a satellite Pod will drop you into this namespace, enabling you to run `drbdadm` commands as expected. The LINSTOR CSI Controller Pod creates, modifies and deletes volumes and snapshots. It translates the state of Kubernetes resources (`StorageClass`, `PersistentVolumeClaims`, `VolumeSnapshots`) into their equivalent in LINSTOR. The LINSTOR CSI Node Pods execute mount and unmount operations. Mount and Unmount are initiated by kubelet before starting a Pod with a Piraeus volume. They are deployed as a DaemonSet on every node in the cluster by default. There needs to be a LINSTOR Satellite running on the same node as a CSI Node Pod. The Piraeus High Availability Controller will speed up the fail-over process for stateful workloads using Piraeus for storage. It is deployed on every node in the cluster and listens for DRBD events to detect storage failures on other nodes. It evicts Pods when it detects that the storage on their node is inaccessible."
}
] |
{
"category": "Runtime",
"file_name": "components.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Plugins\" layout: docs Velero has a plugin architecture that allows users to add their own custom functionality to Velero backups & restores without having to modify/recompile the core Velero binary. To add custom functionality, users simply create their own binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code to expose the plugin implementations to Velero. This binary is added to a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access. Multiple plugins, of any type, can be implemented in this binary. A fully-functional is provided to serve as a convenient starting point for plugin authors. When naming your plugin, keep in mind that the name needs to conform to these rules: have two parts separated by '/' none of the above parts can be empty the prefix is a valid DNS subdomain name a plugin with the same name cannot already exist ``` example.io/azure 1.2.3.4/5678 example-with-dash.io/azure ``` You will need to give your plugin(s) a name when registering them by calling the appropriate `RegisterX` function: <https://github.com/vmware-tanzu/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60> Velero currently supports the following kinds of plugins: Object Store - persists and retrieves backups, backup logs and restore logs Volume Snapshotter - creates volume snapshots (during backup) and restores volumes from snapshots (during restore) Backup Item Action - executes arbitrary logic for individual items prior to storing them in a backup file Restore Item Action - executes arbitrary logic for individual items prior to restoring them into a cluster Velero provides a that can be used by plugins to log structured information to the main Velero server log or per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`, plugins will also emit debug-level logs. See the for an example of how to use the logger within your plugin. Velero uses a ConfigMap-based convention for providing configuration to plugins. If your plugin needs to be configured at runtime, define a ConfigMap like the following: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: my-plugin-config namespace: velero labels: velero.io/plugin-config: \"\" <fully-qualified-plugin-name>: <plugin-type> data: ``` Then, in your plugin's implementation, you can read this ConfigMap to fetch the necessary configuration. See the for an example of this -- in particular, the `getPluginConfig(...)` function."
}
] |
{
"category": "Runtime",
"file_name": "plugins.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Target version: 1.1 Rook was designed for storage consumption in the same Kubernetes cluster as the clients who are consuming the storage. However, this scenario is not always sufficient. Another common scenario is when Ceph is running in an \"external\" cluster from the clients. There are a number of reasons for this scenario: Centralized Ceph management in a single cluster with multiple Kubernetes clusters that need to consume storage. Customers already have a Ceph cluster running not in a K8s environment, likely deployed with Ansible, ceph-deploy, or even manually. They should be able to consume this storage from Kubernetes. Fully independent storage for another level of isolation from their K8s compute nodes. This scenario can technically also be accomplished in a single Kubernetes cluster through labels, taints, and tolerations. | | | ||| | Local Cluster | The cluster where clients are running that have a need to connect to the Ceph storage. Must be a Kubernetes/OpenShift cluster. | | External Cluster | The cluster where Ceph Mons, Mgr, OSDs, and MDS are running, which might have been deployed with Rook, Ansible, or any other method. | Requirements for clients in the local cluster to connect to the external cluster include: At least one mon endpoint where the connection to the cluster can be established Admin keyring for managing the cluster Network connectivity from a local cluster to the mons, mgr, osds, and mds of the external cluster: mon: for the operator to watch the mons that are in quorum mon/osd: for client access mgr: for dashboard access mds: for shared filesystem access When the Rook operator is started, initially it is not aware of any clusters. When the admin creates the operator, they will want to configure the operator differently depending on if they want to configure a local Rook cluster, or an external cluster. If external cluster management is required, the differences are: The Rook Discover DaemonSet would not be necessary. Its purpose is to detect local devices, which is only needed for OSD configuration. Side note: If a local cluster, the discover DaemonSet could be delayed starting until the first cluster is started. There is no need for the discovery until the first cluster is created. The Security Context Constraints (SCC) would not require all the privileges of a local cluster. These privileges are only required by mon and/or osd daemon pods, which are not running in the local cluster. `allowPrivilegedContainer` `allowHostDirVolumePlugin` `allowHostPID` `allowHostIPC` `allowHostPorts` The CSI driver is agnostic of whether Ceph is running locally or externally. The core requirement of the CSI driver is the list of mons and the keyring with which to connect. This metadata is required whether the cluster is local or external. The Rook operator will need to keep this metadata updated throughout the lifetime of the CSI driver. The CSI driver will be installed and configured by the Rook operator, similarly to any Rook cluster. The advantages of this approach instead of a standalone ceph-csi for external clusters include: Provide a consistent experience across any Kubernetes/OpenShift deployment Rook can install, configure, and update the Ceph CSI"
},
{
"data": "Admins don't have to worry about the CSI driver. Question: How would Rook behave in the case where the admin deployed ceph-csi standalone as well as Rook? It seems reasonable not to support this, although it's not clear if there would actually be conflicts between the two. The flex driver would also be agnostic of the cluster for the same reasons, but we wont need to worry about the flex driver going forward. In order for Rook to provide the storage to clients in the local cluster, the CephCluster CRD will be created in order for the operator to provide local management of the external cluster. There are several differences needed for the operator to be aware of an external cluster. Before the CephCluster CRD is created, some metadata must be initialized in local configmaps/secrets to allow the local cluster to manage the external cluster. mon endpoint(s) and admin keyring The mon, mgr, and osd daemons will not be managed by the local Rook operator. These daemons must be created and managed by the external cluster. The operator will make a \"best effort\" to keep the list of mons updated. If the mons change in the external cluster, the list of mons must be updated in the local cluster. The operator will need to query the Ceph status periodically (perhaps every minute). If there is a change to the mons, the operator will update the local configmaps/secrets.\\ If the local operator fails to see changes to the external mons, perhaps because it is down, the mon list could become stale. In that case, the admin will need to update the list similarly to how it was initialized when the local cluster was first created. The operator will update the cluster crd with the following status fields: Timestamp of the last successful time querying the mons Timestamp of the last attempt to query the mons Success/Failure message indicating the result of the last check The first bullet point above requires an extra manual configuration step by the cluster admin from what they need in a typical Rook cluster. The other items above will be handled automatically by the Rook operator. The extra step involves exporting metadata from the external cluster and importing it to the local cluster: The admin creates a yaml file with the needed resources from the external cluster (ideally we would provide a helper script to help automate this task): Save the mon list and admin keyring Load the yaml file into the local cluster `kubectl create -f <config.yaml>` The CephCluster CRD will have a new property \"external\" to indicate whether the cluster is external. If true, the local operator will implement the described behavior. Other CRDs such as CephBlockPool, CephFilesystem, and CephObjectStore do not need this property since they all belong to the cluster and will effectively inherit the external property. ```yaml kind: CephCluster spec: external: true ``` The mgr modules, including the dashboard, would be running in the external cluster. Any configuration that happens through the dashboard would depend on the orchestration modules in that external"
},
{
"data": "With the rook-ceph cluster created, the CSI driver integration will cover the Block (RWO) storage and no additional management is needed. When a pool CRD is created in the local cluster, the operator will create the pool in the external cluster. The pool settings will only be applied the first time the pool is created and should be skipped thereafter. The ownership and lifetime of the pool will belong to the external cluster. The local cluster should not apply pool settings to overwrite the settings defined in the external cluster. If the pool CRD is deleted from the local cluster, the pool will not be deleted in the external cluster. A shared filesystem must only be created in the external cluster. Clients in the local cluster can connect to the MDS daemons in the external cluster. The same instance of CephFS cannot have MDS daemons in different clusters. The MDS daemons must exist in the same cluster for a given filesystem. When the CephFilesystem CRD is created in the local cluster, Rook will ignore the request and print an error to the log. An object store can be created that will start RGW daemons in the local cluster. When the CephObjectStore CRD is created in the local cluster, the local Rook operator does the following: Create the metadata and data pools in the external cluster (if they don't exist yet) Create a realm, zone, and zone group in the external cluster (if they don't exist yet) Start the RGW daemon in the local cluster Local s3 clients will connect to the local RGW endpoints Question: Should we generate a unique name so an object store of the same name cannot be shared with the external cluster? Or should we allow sharing of the object store between the two clusters if the CRD has the same name? If the admin wants to create independent object stores, they could simply create them with unique CRD names. Assuming the object store can be shared with the external cluster, similarly to pools, the owner of the object store is the external cluster. If the local cluster attempts to change the pool settings such as replication, they will be ignored. Rook already creates and injects service monitoring configuration, consuming what the ceph-mgr prometheus exporter module generates. This enables the capability of a Kubernetes cluster to gather metrics from the external cluster and feed them in Prometheus. The idea is to allow Rook-Ceph to connect to an external ceph-mgr prometheus module exporter. Enhance external cluster script: the script tries to discover the list of managers IP addresses if provided by the user, the list of ceph-mgr IPs in the script is accepted via the new `--prometheus-exporter-endpoint` flag Add a new entry in the monitoring spec of the `CephCluster` CR: ```go // ExternalMgrEndpoints point to existing Ceph prometheus exporter endpoints ExternalMgrEndpoints []v1.EndpointAddress `json:\"externalMgrEndpoints,omitempty\"` } ``` So the CephCluster CR will look like: ```yaml monitoring: enabled: true externalMgrEndpoints: ip: \"192.168.0.2\" ip: \"192.168.0.3\" ``` Configure monitoring as part of `configureExternalCephCluster()` method Create a new metric Service Create an Endpoint resource based out of the IP addresses either discovered or provided by the user in the script"
}
] |
{
"category": "Runtime",
"file_name": "ceph-external-cluster.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Antrea ships a zip archive with OVS binaries for Windows. The binaries are hosted on the antrea.io website and updated as needed. This file documents the procedure to upload a new version of the OVS binaries. The archive is served from AWS S3, and therefore access to the Antrea S3 account is required for this procedure. We assume that you have already built the OVS binaries (if a custom built is required), or retrieved them from the official OVS build pipelines. The binaries must be built in Release mode for acceptable performance. Name the zip archive appropriately: `ovs-<OVS VERSION>[-antrea.<BUILD NUM>]-win64.zip` the format for `<OVS VERSION>` is `<MAJOR>.<MINOR>.<PATCH>`, with no `v` prefix. the `-antrea.<BUILD NUM>` component is optional but must be provided if this is not the official build for the referenced OVS version. `<BUILD NUM>` starts at 1 and is incremented for every new upload corresponding to that OVS version. Generate the SHA256 checksum for the archive. place yourself in the directory containing the archive. run `sha256sum -b <NAME>.zip > <NAME>.zip.sha256`, where `<NAME>` is determined by the previous step. Upload the archive and SHA256 checksum file to the `ovs/` folder in the `downloads.antrea.io` S3 bucket. As you upload the files, grant public read access to them (you can also do it after the upload with the `Make public` action). Validate both public links: `https://downloads.antrea.io/ovs/<NAME>.zip` `https://downloads.antrea.io/ovs/<NAME>.zip.sha256` Update the Antrea Windows documentation and helper scripts as needed, e.g. `hack/windows/Install-OVS.ps1`."
}
] |
{
"category": "Runtime",
"file_name": "updating-ovs-windows.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "`Upcall` is a direct communication tool between VMM and guest developed upon `vsock`. The server side of the `upcall` is a driver in guest kernel (kernel patches are needed for this feature) and it'll start to serve the requests after the kernel starts. And the client side is in Dragonball VMM , it'll be a thread that communicates with `vsock` through `uds`. We want to keep the lightweight of the VM through the implementation of the `upcall`. We define specific operations in the device manager service (one of the services in `upcall` we developed) to perform device hotplug / hot-unplug including vCPU hotplug, `virtio-mmio` hotplug, and memory hotplug. We have accomplished device hotplug / hot-unplug directly through `upcall` in order to avoid the virtualization of ACPI to minimize virtual machines overhead. And there could be many other uses if other services are implemented. `Upcall` needs a server in the guest kernel which will be several kernel patches for the `upcall` server itself and different services registered in the `upcall` server. It's currently tested on upstream Linux kernel 5.10. To make it easy for users to use, we have open-source the `upcall` guest patches in and develop `upcall` support in . You could use following command to download the upstream kernel (currently Dragonball uses 5.10.25) and put the `upcall` patches and other Kata patches into kernel code. `sh build-kernel.sh -e -t dragonball -f setup` `-e` here means experimental, mainly because `upcall` patches are not in upstream Linux kernel. `-t dragonball` is for specifying hypervisor type `-f` is for generating `.config` file After this command, the kernel code with `upcall` and related `.config` file are all set up in the directory `kata-linux-dragonball-experimental-5.10.25- to build and use this guest kernel. Also, a client-side is also needed in VMM. Dragonball has already open-source the way to implement `upcall` client and Dragonball compiled with `dbs-upcall` feature will enable Dragonball client side."
}
] |
{
"category": "Runtime",
"file_name": "upcall.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "| Case ID | Title | Priority | Smoke | Status | Other | ||-|-|-||-| | I00001 | Add, delete, modify, and query CIDR subnet | p1 | | done | | | I00002 | Add and remove IP for CIDR subnet | p1 | | done | | | I00003 | Automatically create, scale, restart, and delete of ippools for different types of controllers | p1 | | done | | | I00004 | The excludeIPs in the subnet will not be used by pools created automatically or manually. | p2 | | done | | | I00005 | If routes and gateway are modified for CIDR, how the manually created ippools are affected? | p3 | | done | | | I00006 | Multiple automatic creation and recycling of ippools, eventually the free IPs in the subnet should be restored to its initial state | p1 | true | done | | | I00007 | Multiple manual creation and recycling of ippools, eventually the free IPs in the subnet should be restored to its initial state | p1 | true | done | | | I00008 | Scale up and down the number of deployment several times to see if the state of the ippools is eventually stable | p1 | | done | | | I00009 | Create 100 Subnets with the same `Subnet.spec` and see if only one succeeds in the end | p1 | | done | | | I00010 | Create 100 IPPools with the same `IPPool.spec` and see if only one succeeds in the end | p1 | | done | | | I00011 | The subnet automatically creates an ippool and allocates IP, and should consider reservedIP | p1 | | done | | | I00012 | SpiderSubnet supports multiple interfaces | p1 | | done | | | I00013 | SpiderSubnet supports automatic IP assignment for Pods | p2 | | done | | | I00014 | The default subnet should support different controller types | p2 | | done | | | I00015 | Applications with the same name and type can use the reserved IPPool. | p2 | | done | | | I00016 | Applications with the same name and different types cannot use the reserved IPPool. | p3 | | done | | | I00017 | Ability to create fixed IPPools for applications with very long names | p3 | | done | | | I00018 | automatic IPPool IPs are not modifiable | p3 | | done | | | I00019 | Change the annotation ipam.spidernet.io/ippool-reclaim to true and the reserved IPPool will be reclaimed. | p3 | | done | | | I00020 | Redundant IPs for automatic IPPool, which cannot be used by other applications | p3 | | done | | | I00021 | Pod works correctly when multiple NICs are specified by annotations for applications of the same name | p3 | | done | | | I00022 | Dirty data in the subnet should be recycled. | p3 | | done | | | I00023 | SpiderSubnet feature doesn't support orphan pod | p2 | | done | | | I00024 | The Pod would not be setup and no auto Pools created when the SpiderSubnet AutoPool feature is disabled | p2 | | done | |"
}
] |
{
"category": "Runtime",
"file_name": "subnet.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Below are list of adopters of the `ocicrypt` library or supports use of OCI encrypted images: Below are the list of projects that are in the process of adopting support:"
}
] |
{
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "If any part of the flannel project has bugs or documentation mistakes, please let us know by . Before creating a bug report, please check that an issue reporting the same problem does not already exist. To make the bug report accurate and easy to understand, please try to create bug reports that are: Specific. Include as much details as possible: which version, what environment, what configuration, etc. Reproducible. Include the steps to reproduce the problem. We understand some issues might be hard to reproduce, please includes the steps that might lead to the problem. Isolated. Please try to isolate and reproduce the bug with minimum dependencies. It would significantly slow down the speed to fix a bug if too many dependencies are involved in a bug report. Debugging external systems that rely on flannel is out of scope, but we are happy to provide guidance in the right direction or help with using flannel itself. Unique. Do not duplicate an existing bug report. Scoped. One bug per report. Do not follow up with another bug inside one report. It may be worthwhile to read before creating a bug report. We might ask for further information to locate a bug. A duplicated bug report will be closed. ``` bash $ kill -QUIT $PID ``` ``` bash $ flannel --version ```"
}
] |
{
"category": "Runtime",
"file_name": "reporting_bugs.md",
"project_name": "Flannel",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This guide shows you how to configure the DRBD Module Loader when using a HTTP Proxy. To complete this guide, you should be familiar with: editing `LinstorSatelliteConfiguration` resources. using the `kubectl` command line tool to access the Kubernetes cluster. We will use environment variables to configure the proxy, this tells the drbd-module-loader component to use the proxy for outgoing communication. Configure the sample below according to your environment and apply the configuration using `kubectl apply -f filename.yml`. This sample configuration assumes that a HTTP proxy is reacheable at `http://10.0.0.1:3128`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: http-proxy spec: podTemplate: spec: initContainers: name: drbd-module-loader env: name: HTTP_PROXY value: http://10.0.0.1:3128 # Add your proxy connection here name: HTTPS_PROXY value: http://10.0.0.1:3128 # Add your proxy connection here name: NO_PROXY value: localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12 # Add internal IP ranges and domains here ```"
}
] |
{
"category": "Runtime",
"file_name": "http-proxy.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "CubeFS uses JSON as the format of the configuration file. | Configuration Item | Type | Description | Required | Default Value | |:|:-|:--|:-|:--| | role | string | The role of the process, the value can only be master | Yes | | | ip | string | Host IP address | Yes | | | listen | string | Port number on which the HTTP service listens | Yes | | | prof | string | Golang pprof port number | Yes | | | id | string | Distinguish different master nodes | Yes | | | peers | string | Raft replication group member information | Yes | | | logDir | string | Directory for storing log files | Yes | | | logLevel | string | Log level | No | error | | retainLogs | string | How many Raft logs to keep. | Yes | | | walDir | string | Directory for storing Raft WAL logs. | Yes | | | storeDir | string | Directory for storing RocksDB data. This directory must exist. If the directory does not exist, the service cannot be started. | Yes | | | clusterName | string | Cluster name | Yes | | | ebsAddr | string | Address of the erasure coding subsystem. This must be configured when using the erasure coding"
},
{
"data": "| No | | | exporterPort | int | Port for Prometheus to obtain monitoring data | No | | | consulAddr | string | Consul registration address, used by Prometheus exporter | No | | | metaNodeReservedMem | string | Reserved memory size for metadata nodes, in bytes | No | 1073741824 | | heartbeatPort | string | Raft heartbeat communication port | No | 5901 | | replicaPort | string | Raft data transmission port | No | 5902 | | nodeSetCap | string | Capacity of NodeSet | No | 18 | | missingDataPartitionInterval | string | If no heartbeat is received during this time period, the replica is considered lost, in seconds | No | 24h | | dataPartitionTimeOutSec | string | If no heartbeat is received during this time period, the replica is considered not alive, in seconds | No | 10min | | numberOfDataPartitionsToLoad | string | Maximum number of data partitions to check at a time | No | 40 | | secondsToFreeDataPartitionAfterLoad | string | After how many seconds to start releasing the memory occupied by the loaded data partition task | No | 300 | | tickInterval | string | Timer interval for checking heartbeat and election timeout, in milliseconds | No | 500 | | electionTick | string | How many times the timer is reset before the election times out | No | 5 | | bindIp | bool | Whether to listen for connections only on the host IP | No | false | | faultDomain | bool | Whether to enable fault domain | No | false | | faultDomainBuildAsPossible | bool | Whether to still try to build a nodeSetGroup as much as possible if the number of available fault domains is less than the expected number | No | false | | faultDomainGrpBatchCnt | string | Number of available fault domains | No | 3 | | dpNoLeaderReportIntervalSec | string | How often to report when data partitions has no leader, unit: s | No | 60 | | mpNoLeaderReportIntervalSec | string | How often to report when meta partitions has no leader, unit: s | No | 60 | | maxQuotaNumPerVol | string | Maximum quota number per volume | No | 100 | | volForceDeletion | bool | the non-empty volume can be deleted directly or not | No | true | | volDeletionDentryThreshold | int | if the non-empty volume can't be deleted directly , this param define a threshold , only volumes with a dentry count that is less than or equal to the threshold can be deleted | No | 0 | | enableLogPanicHook | bool | (Experimental) Hook `panic` function to flush log before executing `panic` | No |false | | enableDirectDeleteVol | bool | to control the support for delayed volume deletion. `true``, will delete volume directly| No |true | ``` json { \"role\": \"master\", \"id\":\"1\", \"ip\": \"127.0.0.1\", \"listen\": \"17010\", \"prof\":\"17020\", \"peers\": \"1:127.0.0.1:17010,2:127.0.0.2:17010,3:127.0.0.3:17010\", \"retainLogs\":\"20000\", \"logDir\": \"/cfs/master/log\", \"logLevel\":\"info\", \"walDir\":\"/cfs/master/data/wal\", \"storeDir\":\"/cfs/master/data/store\", \"exporterPort\": 9500, \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"clusterName\":\"cubefs01\", \"metaNodeReservedMem\": \"1073741824\" } ```"
}
] |
{
"category": "Runtime",
"file_name": "master.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<! Verify first that your issue/request is not already reported on GitHub. Also test if the latest release, and devel branch are affected too. Always add information AFTER of these html comments. --> <! Pick one below and delete the rest --> Bug Report Feature Idea Documentation Report <! Insert below this comment the name of the servicetype. --> <! Paste verbatim output from \"openio --version\" between quotes below --> ``` ``` <! Paste verbatim output from \"cat /etc/oio/sds.conf.d/NAMESPACE\" between quotes below --> ``` ``` <! Paste verbatim output from \"cat /etc/os-release\" between quotes below --> ``` ``` <! Explain the problem briefly --> <! For bugs, show exactly how to reproduce the problem, using a minimal test-case. For new features, show how the feature would be used. --> ```bash ``` <! You can also paste gist.github.com links for larger files --> <! What did you expect to happen when running the steps above? --> <! What actually happened? --> <! Paste verbatim command output between quotes below --> ``` ```"
}
] |
{
"category": "Runtime",
"file_name": "ISSUE_TEMPLATE.md",
"project_name": "OpenIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Direct Tools Rook is designed with Kubernetes design principles from the ground up. This topic is going to escape the bounds of Kubernetes storage and show you how to use block and file storage directly from a pod without any of the Kubernetes magic. The purpose of this topic is to help you quickly test a new configuration, although it is not meant to be used in production. All of the benefits of Kubernetes storage including failover, detach, and attach will not be available. If your pod dies, your mount will die with it. To test mounting your Ceph volumes, start a pod with the necessary mounts. An example is provided in the examples test directory: ```console kubectl create -f deploy/examples/direct-mount.yaml ``` After the pod is started, connect to it like this: ```console kubectl -n rook-ceph get pod -l app=rook-direct-mount $ kubectl -n rook-ceph exec -it <pod> bash ``` After you have created a pool as described in the topic, you can create a block image and mount it directly in a pod. This example will show how the Ceph rbd volume can be mounted in the direct mount pod. Create the . Create a volume image (10MB): ```console rbd create replicapool/test --size 10 rbd info replicapool/test rbd feature disable replicapool/test fast-diff deep-flatten object-map ``` Map the block volume and format it and mount it: ```console rbd map replicapool/test lsblk | grep rbd mkfs.ext4 -m0 /dev/rbd0 mkdir /tmp/rook-volume mount /dev/rbd0 /tmp/rook-volume ``` Write and read a file: ```console echo \"Hello Rook\" > /tmp/rook-volume/hello cat /tmp/rook-volume/hello ``` Unmount the volume and unmap the kernel device: ```console umount /tmp/rook-volume rbd unmap /dev/rbd0 ``` After you have created a filesystem as described in the topic, you can mount the filesystem from multiple pods. The the other topic you may have mounted the filesystem already in the registry pod. Now we will mount the same filesystem in the Direct Mount pod. This is just a simple way to validate the Ceph filesystem and is not recommended for production Kubernetes pods. Follow to start a pod with the necessary mounts and then proceed with the following commands after connecting to the pod. ```console mkdir /tmp/registry monendpoints=$(grep monhost /etc/ceph/ceph.conf | awk '{print $3}') my_secret=$(grep key /etc/ceph/keyring | awk '{print $3}') mount -t ceph -o mdsnamespace=myfs,name=admin,secret=$mysecret $mon_endpoints:/ /tmp/registry df -h ``` Now you should have a mounted filesystem. If you have pushed images to the registry you will see a directory called `docker`. ```console ls /tmp/registry ``` Try writing and reading a file to the shared filesystem. ```console echo \"Hello Rook\" > /tmp/registry/hello cat /tmp/registry/hello rm -f /tmp/registry/hello ``` To unmount the shared filesystem from the Direct Mount Pod: ```console umount /tmp/registry rmdir /tmp/registry ``` No data will be deleted by unmounting the filesystem."
}
] |
{
"category": "Runtime",
"file_name": "direct-tools.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Backported CORS filter. #489 (#493) #503 Add OPTIONS in Webservice Fixed duplicate compression in dispatch. #449 Added check on writer to prevent compression of response twice. #447 Enable content encoding on Handle and ServeHTTP (#446) List available representations in 406 body (#437) Convert to string using rune() (#443) 405 Method Not Allowed must have Allow header (#436) add field allowedMethodsWithoutContentType (#424) support describing response headers (#426) fix openapi examples (#425) merge v3 fix (#422) fix WriteError return value (#415) allow prefix and suffix in path variable expression (#414) support google custome verb (#413) fix panic in Response.WriteError if err == nil fix issue #400 , parsing mime type quality Route Builder added option for contentEncodingEnabled (#398) Avoid return of 415 Unsupported Media Type when request body is empty (#396) Reduce allocations in per-request methods to improve performance (#395) Fix issue with default responses and invalid status code 0. (#393) add per Route content encoding setting (overrides container setting) add Request.QueryParameters() add json-iterator (via build tag) disable vgo module (until log is moved) add vgo module add JSONNewDecoderFunc to allow custom JSON Decoder usage (go 1.10+) Make JSR 311 routing and path param processing consistent Adding description to RouteBuilder.Reads() Update example for Swagger12 and OpenAPI added route condition functions using `.If(func)` in route building. solved issue #304, make operation names unique [IMPORTANT] For swagger users, change your import statement to: swagger \"github.com/emicklei/go-restful-swagger12\" moved swagger 1.2 code to go-restful-swagger12 created TAG 2.0.0 remove defer request body close expose Dispatch for testing filters and Routefunctions swagger response model cannot be array created TAG 1.0.0 (API change) Remove code related to caching request content. Removes SetCacheReadEntity(doCache bool) Default change! now use CurlyRouter (was RouterJSR311) Default change! no more caching of request content Default change! do not recover from panics fix the DefaultRequestContentType feature take the qualify factor of the Accept header mediatype into account when deciding the contentype of the response add constructors for custom entity accessors for xml and json rename new WriteStatusAnd... to WriteHeaderAnd... for consistency fixed problem with changing Header after WriteHeader (issue 235) changed behavior of WriteHeader (immediate write) and WriteEntity (no status write) added support for custom EntityReaderWriters. add support for reading entities from compressed request content use sync.Pool for compressors of http response and request body add Description to Parameter for documentation in Swagger UI add configurable logging if not specified, the Operation is derived from the Route function expose Parameter creation functions make trace logger an interface fix OPTIONSFilter customize rendering of ServiceError JSR311 router now handles wildcards add Notes to Route (api add) PrettyPrint per response. (as proposed in #167) (api add) ApiVersion(.) for documentation in Swagger UI (api change) struct fields tagged with \"description\" show up in Swagger UI (api change) ReturnsError -> Returns (api add)"
},
{
"data": "for DRY use of RouteBuilder fix swagger nested structs sort Swagger response messages by code (api add) ReturnsError allows you to document Http codes in swagger fixed problem with greedy CurlyRouter (api add) Access-Control-Max-Age in CORS add tracing functionality (injectable) for debugging purposes support JSON parse 64bit int fix empty parameters for swagger WebServicesUrl is now optional for swagger fixed duplicate AccessControlAllowOrigin in CORS (api change) expose ServeMux in container (api add) added AllowedDomains in CORS (api add) ParameterNamed for detailed documentation (api add) expose constructor of Request for testing. (api add) ParameterNamed gives access to a Parameter definition and its data (for further specification). (api add) SetCacheReadEntity allow scontrol over whether or not the request body is being cached (default true for compatibility reasons). (api add) CORS can be configured with a list of allowed domains (api add) Route path parameters can use wildcard or regular expressions. (requires CurlyRouter) (api add) Request now provides information about the matched Route, see method SelectedRoutePath (api change) renamed parameter constants (go-lint checks) (api add) support for CloseNotify, see http://golang.org/pkg/net/http/#CloseNotifier (api change) Write* methods in Response now return the error or nil. added example of serving HTML from a Go template. fixed comparing Allowed headers in CORS (is now case-insensitive) (api add) Response knows how many bytes are written to the response body. (api add) RecoverHandler(handler RecoverHandleFunction) to change how panic recovery is handled. Default behavior is to log and return a stacktrace. This may be a security issue as it exposes sourcecode information. (api add) Response knows what HTTP status has been written (api add) Request can have attributes (map of string->interface, also called request-scoped variables (api change) Router interface simplified Implemented CurlyRouter, a Router that does not use|allow regular expressions in paths add OPTIONS support add CORS support fixed some reported issues (see github) (api change) deprecated use of WriteError; use WriteErrorString instead (fix) v1.0.1 tag: fix Issue 111: WriteErrorString (api add) Added implementation Container: a WebServices collection with its own http.ServeMux allowing multiple endpoints per program. Existing uses of go-restful will register their services to the DefaultContainer. (api add) the swagger package has be extended to have a UI per container. if panic is detected then a small stack trace is printed (thanks to runner-mei) (api add) WriteErrorString to Response Important API changes: (api remove) package variable DoNotRecover no longer works ; use restful.DefaultContainer.DoNotRecover(true) instead. (api remove) package variable EnableContentEncoding no longer works ; use restful.DefaultContainer.EnableContentEncoding(true) instead. (api add) Added support for response encoding (gzip and deflate(zlib)). This feature is disabled on default (for backwards compatibility). Use restful.EnableContentEncoding = true in your initialization to enable this feature. (improve) DoNotRecover option, moved request body closer, improved ReadEntity (api change) removed Dispatcher interface, hide PathExpression changed receiver names of type functions to be more idiomatic Go (optimize) Cache the RegExp compilation of Paths. 
(api add) Added support for request/response filter functions (api add) Added feature to change the default Http Request Dispatch function (travis cline) (api change) Moved Swagger Webservice to swagger package (see example restful-user) See https://github.com/emicklei/go-restful/commits Initial commit"
}
] |
{
"category": "Runtime",
"file_name": "CHANGES.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Kubernetes is being enhanced to support Snapshots using native API. This is being done in phases: Phase 1: The API is supported via Snapshot Operators using CRDs as addon functionality. For more details refer to the design and examples at: kubernetes-incubator/external-storage/snapshots. Phase 2: The API will be added directly into the Kubernetes API - by 1.11/1. Phase 3: CSI will include the Snapshot API This document describes the support of Snapshot API for OpenEBS volumes using the Phase 1 implementation using the Snapshot Operators. At a very high level the feature works as follows: Cluster Administrator will have to launch the Snapshot Operators i. snapshot-controller: responsible for managing snapshots. ii. snapshot-pv-provisioner: responsible for dynamically creating a clone from snapshots. Both users and admins might create/delete snapshots, using Snapshot CRs that refer to the PVC. i. Create: The user creates a VolumeSnapshot referencing a persistent volume claim bound to a persistent volume The snapshot-controller fulfils the VolumeSnapshot by creating a snapshot using the volume plugins. A new object VolumeSnapshotData is created to represent the actual snapshot binding the VolumeSnapshot with the on-disk snapshot. ii. List: The user is able to list all the VolumeSnapshot objects in the namespace iii. Delete: The user deletes the VolumeSnapshot The controller removes the on-disk snapshot. Note: snapshots have no notion of \"reclaim policy\" - there is no way to recover the deleted snapshot. The controller removes the VolumeSnapshotData object. After snapshots are taken, users might use them to create new volumes using the snapshot, that was previously taken. i. Promote snapshot to PV (or Clone PV using a snapshot): The user creates a persistent volume claim referencing the snapshot object in the annotation. The PVC must belong to a StorageClass using the external volume snapshot provisioner. Note: the special annotation might get replaced by a dedicated attribute of the PersistentVolumeClaim in the future. The snapshot-pv-provisioner will use the VolumeSnapshotData object to create a persistent volume using the corresponding volume snapshot plugin. The PVC is bound to the newly created PV containing the data from the snapshot. The snapshot operation is a no-op for volume plugins that do not support snapshots via an API call (i.e. non-cloud storage). The snapshot objects are namespaced: i. Users should only get access to the snapshots belonging to their namespaces. For this aspect, snapshot objects should be in user namespace. Admins might want to choose to expose the snapshots they created to some users who have access to those volumes. ii. There are use cases that data from snapshots taken from one namespace need to be accessible by users in another namespace. iii. For security purpose, if a snapshot object is created by a user, kubernetes should prevent other users duplicating this object in a different namespace if they happen to use the same snapshot name. iv. There might be some existing snapshots taken by admins/users and they want to use those snapshots through kubernetes API interface. The volume snapshot controller maintains two data structures (ActualStateOfWorld and DesiredStateOfWorld) and periodically reconciles the two. The data structures are being update by the API server event handlers. If a new VolumeSnapshot is added, the add handler adds it to the DesiredStateOfWorld (DSW) If a VolumeSnapshot is deleted, the delete handler removes it from the DSW. 
Reconciliation loop in the controller For every VolumeSnapshot in the ActualStateOfWorld (ASW) find the corresponding VolumeSnapshot in the"
},
{
"data": "If such a snapshot does not exist, start a snapshot deletion operation: Determine the correct volume snapshot plugin to use from the VolumeSnapshotData referenced by the VolumeSnapshot Create a delete operation: only one such operation is allowed to exist for the given VolumeSnapshot andVolumeSnapshotData pair The operation is an asynchronous function using the volume plugin to delete the actual snapshot in the back-end. When the plugin finishes deleting the snapshot, delete the VolumeSnapshotData referencing it and remove theVolumeSnapshot reference from the ASW For every VolumeSnapshot in the DSW find the corresponding VolumeSnapshot in the ASW. If such a snapshot does not exist, start a snapshot creation operation: Determine the correct volume snapshot plugin to use from the VolumeSnapshotData referenced by theVolumeSnapshot Create a volume snapshot creation operation: only one such operation is allowed to exist for the given VolumeSnapshot and VolumeSnapshotData pair. The operation is an asynchronous function using the volume plugin to create the actual snapshot in the back-end. When the plugin finishes creating the snapshot a new VolumeSnapshotData is created holding a reference to the actual volume snapshot. For every snapshot present in the ASW and DSW find its VolumeSnapshotData and verify the bi-directional binding is correct: if not, update the VolumeSnapshotData reference. The following diagram describes the different components involved in supporting Snapshots. We can see how a VolumeSnapshot binds to a VolumeSnapshotData resource. This is analogous to PersistentVolumeClaims and PersistentVolumes. We can also see that VolumeSnapshotData references the actual snapshot taken by the volume provider, in the same way to how a PersistentVolume references the physical volume backing it. The Kubernetes design proposal for Snapshot provides a valid use case on how Snapshots and Clone (Promoting a Snapshot to PV) can be used for restoring data for a MySQL database (Ref: https://github.com/openebs/litmus/issues/53). In addition, we would also like to use this feature for CI/CD pipeline as follows: (Ref: https://github.com/openebs/litmus/issues/51) Felix is a DevOps admin who is responsible for maintaining Staging Databases for a large enterprise corporation with 400+ developers working on 200+ applications. The Staging database contains a pruned (for user information) and is constantly updated with production data. When developers make some data schema changes, they would like to test them out on the Staging setup with real data before pushing the changes for Review. The staging database PV, PVC and the associated application are created in a separate namespace called staging. Only Felix has access to this namespace. He creates snapshots of the production database volume. Along with creating the snapshots, he appends some information into the snapshots that will be helpful for developers like the: like the version of the applications that are running in the staging database when this snapshot was taken. Each developer has their own namespace. For example Simon, runs his development application in dev-simon-app namespace. The cluster admin authorize Simon to access (read/get) the snapshots from the staging setup. Simon gets the list of snapshots that are available. Picks up the snapshot or snapshots that are best suited for testing his application. Simon creates a PVC / PV with the select snapshot and launches his applications with modified changes on it. 
Simon then runs the integration tests on his application which is now accessing production like data - which helps him to identify issues with different types of data and running at scale. After completing the tests, Simon deletes the application and the associated cloned volumes. (Ref: https://github.com/openebs/litmus/issues/52) Tim is a DevOps engineer at a large Retail store who is responsible for running a complex build pipeline that involves several"
},
{
"data": "The microservices that implement a order and supply management functionalities - store the states in a set of common datastores. The Jenkins CI pipeline simulates real world interactions with the system that begin with simulating customers placing the orders to the backend systems optimizing the supply and delivery of these orders to the customers. Tim has setup the Job execution pipeline in such a way that, if there are failures, the developers can back trace the state of the database and the logs associated with each stage. The build (or job) logs are saved onto OpenEBS PV, say Logs PV The datastores are created on OpenEBS Volumes, say Datastore PVs. At the end of each job, either on success of failure, snapshots are taken of the Logs PV and the Datastore PVs. When there is a build failure, the volume snapshot information is sent to all the developers whose service were running when the job was getting executed. Each developer can bring up their own debug session in their namespace by creating a environment with cloned volumes. Either they re-run the tests manually by going back to the previous state with higher debug level or analyze the currently available data that is causing the issue. (Ref : https://github.com/openebs/openebs/issues/440) In Data Science Projects, it is common to have a Data Retriever pod that downloads data from externals sources. Once the data is made available, snapshot will be created on this data volume. Other projects/people can clone from this snapshot and access the data either in read-only or read-write. User should be able to perform all snapshot operations on the OpenEBS volumes using the API exposed from Kubernetes User should be able to perform snapshot operations on all types of OpenEBS volumes that support snapshots like Jiva, cstor. Consistency Group: This will be supported by higher level operators or meta-controllers that will send snapshot request for multiple volumes at the same time. Scheduled Snapshots. This will be supported by higher level operators or meta-controllers that will make use the native API supported in this design Offline Backup/Restore. This design doesnt include pushing the snapshot to backup - outside of the volume for later restore. Syncing the snapshots with the actual state on the Storage. For example, the K8s can have a snapshot data, which was removed from the storage system either manually using a different CLI or due to an irrecoverable disk failure. Some kind of a scheduled job can be run to make sure that the snapshot states are still valid w.r.t to the state on the storage backend. Only OpenEBS volumes are used in the Kubernetes Cluster. This is a limitation of the snapshot provisioner that only accepts a generic provisioner key: volumesnapshot.external-storage.k8s.io/snapshot-promoter. This is being addressed in the upcoming CSI based snapshot interface. Snapshots taken at the storage side dont guarantee consistency from the application side. For generating application consistent snapshots, it is recommended to pause the application before taking the snapshots. For jiva based OpenEBS Volumes: creating a clone from snapshot can take longer depending on the data size. The kubectl describe pvc can show a lot of failed attempts to connect to the cloned volume. The recommended approach in this case will be to create a PVC first and then latter associated once it is ready to a Pod/Deployment. Delete snapshot will not delete the actual snapshot on the jiva backend. 
This could lead to issues if the user is trying to create-delete-create snapshot with the same"
},
{
"data": "To avoid this issue, the snapname provided by the user will be suffixed with a unique GUID before creating the snap on the backend. Since snapshots are not deleted, volumes with a large number of snapshots can run into an out-of-space issue. If the initial sync fails, for example because the clone volume (replica) restarts, the sync will be resumed from the beginning. Say, for a 2TB volume, 1.8TB was synced before the clone volume restarted; after the restart, the clone volume will reset its sync status. The CSI Spec has been recently updated to include the snapshot related operations; these will eventually be supported through the OpenEBS CSI drivers. The cloned volumes share the same properties as the source volume, unless the PVC overrides them. This feature will work only after the maya-apiserver is upgraded to the 0.6 version that includes the API changes. While creating snapshots will work for volumes with an older version (0.5.x jiva), the clone operation will only work for volumes at version 0.6 or higher. Snapshots taken directly on the volumes using mayactl (without kubectl) will not be available for clone operations. (https://github.com/openebs/openebs/issues/1416) The following diagram depicts the different components involved in snapshot management. Interaction with the snapshots is via kubectl. The user pushes the intent for creating, deleting or promoting a snapshot to PV with YAMLs. These intent YAMLs are loaded into the Kubernetes etcd by the kube-apiserver. The openebs-snap-controller watches for these YAMLs and proceeds with performing the snapshot or clone operations. The new and modified components are as follows: [New] openebs-snap-controller, which extends the Kubernetes Snapshot provisioner and controller: snapshot-controller: is responsible for creating and watching for the CRDs - VolumeSnapshot and VolumeSnapshotData. It watches for the creation of the VolumeSnapshot CR and invokes the maya-apiserver API to manage snapshots. snapshot-provisioner: is responsible for watching for a PVC request for creating a PV from a snapshot and invokes the maya-apiserver API to create a new persistent volume via dynamic provisioning or delete a PV created from a snapshot. [Modified] maya-apiserver API is extended to support: Snapshot - Create, List, Delete API. Volume Create is extended to take additional parameters like Source Volume and Source Snapshot to allow for creating a cloned volume from a snapshot. [Modified] jiva-controller and replica APIs are enhanced to create a volume from a Source Volume and Source Snapshot. The jiva volume will create a replica that will sync the data from the source volume and revert to the specified snapshot. The administrator will install the openebs-snapshot-controller using the helm charts or via the openebs-operator.yaml. Below is the deployment spec for creating the above-mentioned openebs-snapshot-controller. ```yaml kind: Deployment apiVersion: apps/v1beta1 metadata: name: openebs-snapshot-controller namespace: default spec: replicas: 1 strategy: type: Recreate template: metadata: labels: app: openebs-snapshot-controller spec: serviceAccountName: snapshot-controller-runner containers: name: snapshot-controller image: openebs/snapshot-controller:0. imagePullPolicy: Always name: snapshot-provisioner image: openebs/snapshot-provisioner:0. imagePullPolicy: Always ``` The namespace and serviceAccountName will be as per the helm chart installation or depending on the openebs-operator.yaml. 
When running the openebs-snapshot-controller independently, the following ServiceAccount can be created to grant the required permissions for creating and watching the CRDs. ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: snapshot-controller-runner namespace: default apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: snapshot-controller-role namespace: default rules: apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\", \"list\", \"delete\"] apiGroups: [\"\"] resources: [\"persistentvolumes\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"delete\"] apiGroups: [\"\"] resources: [\"persistentvolumeclaims\"] verbs: [\"get\", \"list\", \"watch\", \"update\"] apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"list\", \"watch\"] apiGroups: [\"\"] resources: [\"events\"] verbs: [\"list\", \"watch\", \"create\", \"update\", \"patch\"] apiGroups: [\"apiextensions.k8s.io\"] resources: [\"customresourcedefinitions\"] verbs: [\"create\", \"list\", \"watch\", \"delete\"] apiGroups: [\"volumesnapshot.external-storage.k8s.io\"] resources: [\"volumesnapshots\", \"volumesnapshotdatas\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"] apiGroups: [\"\"] resources: [\"services\"] verbs: [\"get\"] kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1"
},
{
"data": "metadata: name: snapshot-controller namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: snapshot-controller-role subjects: kind: ServiceAccount name: snapshot-controller-runner namespace: default ``` The RBAC rules above are the ones required by the Snapshot Provisioner and Controller to interact with Kubernetes for snapshot management and dynamic volume provisioning. The access to Services (the last rule above) is required for reaching the maya-apiserver. An additional StorageClass is created to support the promotion of a Snapshot to a PV, as required by the Snapshot Operator design. The promoter StorageClass can be defined as below; here the provisioner field in the spec defines which provisioner should be used and what parameters should be passed to that provisioner when dynamic provisioning is invoked. ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: openebs-snapshot-promoter provisioner: volumesnapshot.external-storage.k8s.io/snapshot-promoter ``` User will trigger the creation of the snapshot by applying a VolumeSnapshot object. The openebs-snapshot-controller receives the request and will invoke the maya-apiserver API to create the snapshot. Here are additional details as the request traverses through the different components: User will load the following VolumeSnapshot object to create a snapshot: ```yaml apiVersion: volumesnapshot.external-storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo namespace: default spec: persistentVolumeClaimName: demo-vol1-claim ``` Snapshot Controller will process the VolumeSnapshot request and will start the following process: Does the housekeeping of creating the VolumeSnapshotData object and fetching the associated PV for this PVC. Based on the PV type, in this case (iscsi), invokes the openebs driver-specific logic to create a snapshot of the persistent volume referenced by the PVC. Generate a unique snapname to be sent to the storage. The name will be generated as <pv-name>_<snapname>_<current_nano_timestamp>. Including the pv_name and snapname will allow tools like mayactl to correlate the snapshots taken at the storage with the corresponding Kubernetes PV and Snapshot objects. The driver plugin (in this case the openebs one) will call the maya-apiserver create snapshot API - which in turn will propagate the snapshot request to the Jiva Replicas via the associated Jiva Controller. The snapshot API on maya-apiserver is invoked with: Unique Snapshot name (which is pv name + snapshot name + timestamp) Volume Name In case of any errors, the errors are percolated up the stack from the replica up to the snapshot-controller, which will set the VolumeSnapshot Condition to an errored state. Once a snapshot is created successfully, Snapshot Controller will update the snapshot ID/timestamp on the VolumeSnapshotData API object and also update its VolumeSnapshot and VolumeSnapshotData status fields. The snapshot ID is the unique snapname generated above. This way K8s can link the user-provided name with the actual name used to create the snap. The user is expected to get the current status of the snapshot by querying for the VolumeSnapshot object that was loaded. User can list the available snapshots using `kubectl get volumesnapshot`. The volumesnapshot objects are fetched directly from the K8s configuration store. 
Since there are no calls made to maya-apiserver, it is possible that the information returned from the k8s configuration store can be stale, if the snapshots were deleted from maya-apiserver or jiva directly. User can delete a snapshot using `kubectl delete volumesnapshot <snapshot-name>`. The volumesnapshot object is deleted from the K8s configuration store. Since there are no calls made to maya-apiserver, the snapshot continues to exist on the storage side. User can view the details of a snapshot using `kubectl describe volumesnapshot <snapshot-name>`. The volumesnapshot details stored in the volumesnapshot and volumesnapshotdata objects are shown to the user. This call displays the status of the volumesnapshot conditions."
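To make the flow concrete, a typical interaction could look like the following (assuming the VolumeSnapshot spec above is saved as snapshot.yaml; only the commands are shown, output is omitted):

```console
$ kubectl apply -f snapshot.yaml
$ kubectl get volumesnapshot
$ kubectl describe volumesnapshot snapshot-demo
$ kubectl delete volumesnapshot snapshot-demo
```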
},
{
"data": "Since there are no calls made to maya-apiserver, the usage details of snapshots, like how much space is occupied or whether snapshots are used by clones (references), are not shown. User can create a cloned volume from a previously taken snapshot by creating a new PVC. The PVC should reference the snapshot and the StorageClass should point to snapshot-promoter: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: demo-snap-vol-claim namespace: default annotations: snapshot.alpha.kubernetes.io/snapshot: snapshot-demo spec: storageClassName: snapshot-promoter accessModes: [ \"ReadWriteOnce\" ] resources: requests: storage: 5Gi ``` Snapshot-promoter is like any other external-storage provisioner that processes a PVC request by dynamically creating a PV and binding the PV to the PVC. The workflow for creating the volume from a snapshot is as described below: snapshot-provisioner: Snapshot provisioner - OpenEBS Plugin will send a create volume API request to maya-apiserver for creating a new volume using the given snapshot. The Plugin will pass the following details to maya-apiserver: Clone Volume Name Clone Volume Namespace Clone Volume Capacity (this should be equal to the source volume) Source Volume Name Source Volume Namespace Source Volume Snapshot Unique Name OpenEBS Plugin will return errors if the Clone Operation couldn't be initiated or if the details of the cloned volume could not be obtained due to errors interacting with maya-apiserver or the source controller. The snapshot provisioner will mark the PVC as unbound and keep retrying till the user deletes the PVC. On successfully creating the Cloned Volume, the OpenEBS Plugin will return a PV object that will be bound to the PVC. Note that this is just the initiation of the Cloned Volume and the sync can take longer. During the sync, the kubelet can keep trying to connect to the PV object and keep failing. maya-apiserver: maya-apiserver will validate the request by checking ReplicaType==clone (the type is set while making the clone request from the snapshot provisioner, along with some other reference details); if true, it then invokes the extended clone volume APIs of maya-apiserver. This extended volume API has some more details which are specific to invoking the jiva clone APIs: Source jiva controller IP Snapshot name (to be cloned) New PV name (to be provisioned) Type of replica (i.e. clone) maya-apiserver will fetch the details like the source volume controller IP and the source volume storage class. maya-apiserver will create the cloned volume by setting the same properties as the source volume (source storage class) when they are not overridden by the configuration in the clone PVC. Service and Controller Deployments are untouched. Replica Deployment will be provided with two additional parameters - clone ip and source snapshot. Jiva: New jiva controller is started as per the Deployment spec. As part of the startup, the controller will wait for its replica to register. Once the replica is registered, the controller will wait until the replica's clone status is updated to \"completed\" or \"na\" - before making the volume available for accepting IO. New jiva replicas are created as per the Deployment spec, which includes type=clone, source controller ip and source snapshot. If the type is set as clone, this replica will contact the source controller and identify a replica that can be used for copying over the data from the source replica to this replica. 
The code is split under external-storage/openebs/pkg, which contains the common code that is used by both openebs provisioner and the snapshot controller and provisioner. The code specific to snapshot controller and provisioner are embedded into the external-storage/snapshot So both folders - external-storage/openebs/pkg and external-storage/snapshot are built and 3 containers are created and pushed to"
},
{
"data": "openebs/openebs-k8s-provisioner openebs/snapshot-controller openebs/snapshot-provisioner Update the CI scripts to include helm based installation that includes openebs-snapshot-controller with ci-tag. Update the openebs-operator.yaml, kubernetes helm charts and the openebs helm charts with the openebs-snapshot-controller deployment specs. Snapshot Controller will be able to contact K8s Cluster using inClusterConfig or outOfCluster, secure or insecure ports. Snapshot Controller will be able to contact the maya-apiserver when running in default or non-default namespace. OpenEBS Snapshot Controller Plugin will be registered with the Snapshot Controller to be invoked for iSCSI volumes and will implement the required interfaces. Since iSCSI is a generic volume type - within the implementation, there will be checks to ensure that only OpenEBS volumes are operated by the plugin. Create Snapshot is called with PV object and a set of values that includes snapshot name. On receiving the request, do the following: Extract the PV Name and Snapname from the input parameters and generate a unique snapshot name: <pvname><snapname><timestamp> Validate that the PV belongs to OpenEBS Validate that the name generated can be within 255 characters. Call the maya-apiserver with pv-name and snapshot name. Return either success or failure based on the response from maya-apsierver. User should be able to view the details failure/success messages by performing a kubectl describe <snapshot> The failure responses: i. Unsupported PV. Only PVs provisioned by OpenEBS are handled ii. Snapshot unique name is longer than 255 characters. Only up to 255 characters are supported. iii. Unable to contact K8s server iv. Unable to contact maya-apiserver v. <Failure messages from maya-apiserver will be relayed back> Timeout response ( max time = 1 min. no-retries) i. Failed to take snapshot. Took longer than a minute to take snapshot. Delete Snapshot with the snapshot data and the PV object. On receiving this request do the following: Returns success. <Delete is currently unsupported> Describe Snapshot with the snapshot data. On receiving this request do the following: Fetch the last status - error/success and return them. <Partially implemented> Find Snapshot with the tags that include snapname. On receiving this request do the following: Returns nil. \"Find is currently unsupported\" Snapshot Provisioner will be able to contact K8s Cluster using inClusterConfig or outOfCluster, secure or insecure ports. Snapshot Provisioner will be able to contact the maya-apiserver when running in default or non-default namespace. OpenEBS Snapshot Provisioner Plugin will be registered with the Snapshot Controller to be invoked for iSCSI volumes and will implement the required interfaces. Since iSCSI is a generic volume type - within the implementation, there will be checks to ensure that only OpenEBS volumes are operated by the plugin. Snapshot Restore (or Clone Volume) API is invoked by passing the following arguments: SnapshotData Object PVC Object PV Name Parameters On receiving the request perform the following: Validate that the SnapshotData objects belongs to OpenEBS Volumes Extract the StorageClass associated with the Source PV Call the maya-apiserver create volume by passing the following details: i. Clone Volume Name ii. Clone Volume Namespace iii. Clone Volume Capacity (this should be equal to the source volume) iv. Source Volume Name v. Source Volume Namespace vi. 
Source Volume Snapshot Unique Name On failure conditions a nil object is returned along with the error. On success a iSCSI PV object is created with the details returned from the maya-apiserver. The failure responses: i. Unable to contact the K8s API server ii. Unable to contact maya-apiserver iii. Unable to access the Source Volume details iv. Errors returned from the maya-apiserver No retries or timeout implemented within this"
},
{
"data": "The caller of this API, snapshot-provisioner will keep retrying the creation of PV till the PVC object is deleted by the user or a PVC is successfully created. Create Snapshot API: mayactl and openebs-snapshot-controller should be able to contact maya-apiserver for creating a snapshot. maya-apiserver will have to identify the storage engine associated with Volume and delegate the create snapshot request to the storage engine. This API takes two parameters: Volume Name Snapshot Name Validation and Error responses: 500: Snapshot Name - is within 255 characters long 500: Unable to Contact the K8s server 404: Volume Name - doesnt reference a valid Volume. 500: Unable to contact the Storage Engine 500: Volume Status - is offline and snapshot cant be taken on the Volume. 500: Snapshot Name - is unique on the volume. This validation is delegated to the Storage Engine to verify 500: Error response from the Storage Engine 500: Timed out waiting for snapshot operation to complete Success Response: Only if the storage engine returns a success Clone Volume API: maya-apiserver Create Volume API is extended to include source volume and source snapshot details. The following modifications will be done to the Create Volume API, when it detects the presence of source volume and source snapshot: Fetch the Storage Class and Controller IP associated with the Source Volume. Replace the incoming Storage Class with the Storage Class of the Source Volume Process the Create Volume Request as earlier, by fetching the required details and applying the parameter override logic - PVC > Storage Class > ENV > Defaults. During the Replica Creation, include the following additional parameters: i. Controller IP ii. Source Snapshot The source volume and source snapshot details will be saved as annotations onto the Volume Objects. Validations and Error Responses: 500: Unable to contact K8s server to fetch Source Volume Details 404: Source Volume doesnt exist 500: Unable to retrieve the Storage class associated with the Volume 500: Unable to retrieve the Storage Volume - Controller IP 500: Insufficient space on the storage pool to create the clone. 500: <Error messages> that come up during the processing of the Volume Request. 500: Unable to deploy the cloned Volume in the target namespace Success: Cloned Volume has been successfully deployed. maya-apiserver should only allow Snapshot and Clone operations on PVs that have the snapshot capabilities enabled (via Policies) and if the associated backend storage engine supports it. The maya-apiserver will retrieve the StorageClass associated with the Source PV and check if Clones can be supported. The operations will be supported for jiva volumes with version 0.6 or higher. In future this code change changed to read a configuration on the storage class, or the PV that indicates whether snapshots are supported: config.openebs.io/clones: enabled|disabled. Create Snapshot API An error message will be returned by the controller if the volume is in offline state w.r.t to the replicas. IO will be quiesced and Snapshot will be taken on all the available replicas. If some of the replicas are offline, when they come back online, they will sync data and snapshots from the available replica(s) List Replica The replica status should included details like - initial sync status along with other details that is currently provides. The clone replica should not be visible in the source volume - list replica response. 
At startup, the controller will check for the clone status of the replicas: if the clone status is either NA or completed, the Replica will be registered and IO will be served to the replica. Create Snapshot API At startup the replica will: Register itself with its"
},
{
"data": "If replicaType is not passed as clone, the clone status is set as na in the volume metadata file. If replicaType is passed as clone, clone status is fetched from the volume metadata file. If clone status is not set as completed, it is set to inProgress and the clone process is triggered and managed by the replica: Get the replica list from the source controller. Find a replica in RW mode(let's call it the source replica) Get the list of snapshots(snapshot chain) present at the source replica. If the snapshot to be cloned is not present error out. Set the rebuilding flag at the target replica. Sync all the snapshots taken till the required snapshot. Clear the rebuilding flag at the target replica. On completing the sync process, the clone status is updated to completed. The replica should indicate that sync is in progress. At the beginning of the sync, the amount of time required to perform the sync operation should be estimated and the current progress should be made available in the replica status. The sync process will be restarted in the following cases: Unable to fetch the source replica Restart of the clone replica Connection failures during the sync The sync process will return failures when: Snapshot is not available at the source replica The failure error messages should be accessible via the replica status API mayactl volume info : should display if the volume is a cloned volume and the status of the cloned volume. In addition to the values displayed for the regular volumes, it should also display clone specific details like: Source Volume Source Snapshot Name Initial Sync Status - in progress or Online or failed mayactl clone : should allow the user to create a new volume. The details for this command will be: clone volume name clone volume size Source volume name Source volume size Unit test needs to be added to cover the negative test cases.[WIP] Integration tests will be carried out on travis or any other K8s cluster. The prerequisites are to have admin privileges to the cluster and access to helm. Bring up jiva controller and replica using docker containers. Verify that a clone can be created from a snapshot when source volume exists. Verify that once the initial sync is completed, and cloned volume is online. A restart of the cloned replica will not trigger (restart) of sync again. Verify that source volume - List Replica doesnt show any clone replicas while the sync is in progress Verify that the clone volume - List Replica provides the status of the replica, which includes the clone status. Verify that controller returns an valid error response when the replicas are not available to process the snapshot. Verify that if the snapshot to be cloned is not present at the source, an error message is generated and the volume creation is aborted. Verify that a proper error message is displayed when the replica is unable to contact the source controller or if the source replicas are unavailable. Verify that a proper error message is displayed when there is not enough space to perform the sync. Verify that snapshots are synced with the offline replica after it becomes online. Verify that clone can be created from a snapshot on a degraded source volume. Verify that if the clone replica restarts during initial sync, after the clone replica restarted the initial sync is triggered"
},
{
"data": "Verify that if the source replica from where the data is being received is restarted, the sync is restarted by the clone replica with another available replica or when the replica becomes online. Verify that source volume (replica) doesnt have multiple sync ongoing sync requests to the same clone (replica) Verify k8s cluster is running and helm is installed Running openebs maya-apiserver and provisioner using helm charts The test operations will be triggered via mayactl Verify that snapshot can be taken on a volume and a clone is created from the volume. Verify the boundary condition for the name - the snapshot should be created with a name up to 255 chars. Verify that snapshot with the same name is not created. Verify that the cloned volume is getting deleted even if the initial sync is in progress. For example if the volume is of high data size. And if the user is associated a PVC that is still in sync with a Pod, then kubelet can run out of retry attempts and the user will try to go and delete it. Verify k8s cluster is running and helm is installed Running openebs maya-apiserver and provisioner using helm charts Running deployment for snapshot-controller which runs snapshot-controller and snapshot-provisioner as containers in single POD. Create snapshot-promoter storage class. Verify that a new PV can be created using a snapshot of existing PV. Provision PVC demo-vol and mount in a busybox application. Busybox app will write/create some files in a mountpath dir. Create a snapshot of PVC demo-vol. Provision new PVC snap-demo-vol using the above created snapshot Mount PVC snap-demo-vol volume to a new application pod Perform validation/Md5sum check on data files to verify data integrity. Verify that user is provided with proper messages if a Volume cant be created using a snapshot due to unavailability of the snapshot or other services. Verify that snapshot create request is not retried when the elapsed time is greater than 60 sec. Verify that user is provided with proper error/status messages on success and failure of the snapshot creation. The failure cases are: K8s is not reachable or unable to get the SC associated with the PVC maya-apiserver is not reachable. A max timeout of 60 sec. maya-apiserver is unable to contact the volume controller for taking snapshots. PVC or PV on which the snapshot was requested no longer exists Verify that snapshot promote - or clone volume operation is resilient against the following failures. There has to be a reconciliation loop that keeps retrying till the clone volume is created: maya-apiserver is not reachable K8s is not reachable or unable to get the SC associated with the PVC maya-apiserver is unable to contact the volume controller for taking snapshots. Source PV object is in degraded or offline state or enters a degraded or offline state while the clone volume operation is in progress. Clone PV object (replica) gets restarted or rescheduled to another node during initial sync. Clone PV object (controller) gets restarted or rescheduled to another node during initial sync. Verify that clone volume operation returns a valid error/status message that can be seen from kubectl. Some of the irrecoverable failures are: PV on which the snapshot was taken doesnt support clone - like jiva with version less than 0.6 PV on which the snapshot was requested no longer exists PV on which the snapshot was taken and clone was in progress got deleted. There is not enough space to create a clone. Verify that a failed operation is not being indefinitely"
},
{
"data": "Verify that user provided snapshots are converted into unique name when sent to the maya-apiserver. Ideally this should be handled by K8s validations, but there could be a scenario where the backend already has a snapshot with that name. Verify that snapshot operations are performed only on PV with StorageClasses that have snapshot policy enabled. Verify that Snapshot Provisioner and snapshot controller will be able to contact K8s Cluster using inClusterConfig or outOfClusterConfig, secure or insecure ports. Verify that the OpenEBS snapshots are interoperable in a environment that has other volume types like glusterfs, that also supports snapshots. Make sure that the snapshot requests are routed correctly to the respective snapshot provisioners. Verify the operations at scale - creating snapshots on multiple PVs at the same time. Typically required for issuing snapshots on a group of volumes belonging to the same application. Verify periodic backup and restore on any given volume, i.e., creation of multiple snapshots on the same PV, with clones created on each snapshot Verify that creating multiple clones using a snapshot works without any errors. Measure the impact on the IO on the source volume, while the clone is in progress. Perform relative performance/benchmark tests to note the difference. Verify that snapshots are taken within the tolerable limits of the application under non-stress and stress conditions. The snapshot operation should not result in the volume becoming offline/unavailable since the application will be quieced during snapshot creation. Verify the usage reporting of actual used and total capacity is not affected due to high number of snapshots being retained. Verify that RBAC rules are applied and sure that a user can issue snapshots on only the PVCs that belong in his/her authorized namespace. Similarly PVs can be created only on the authorized snapshots. Verify that user can create a snapshot in his namespace - which is different than that of the namespace where openebs-snapshot-controller is running. Verify that the cloned volume comes up with the same number of replicas as the source volume. In this case, the PVC doesnt override any properties. Verify that the cloned volume comes up with properties overridden from the PVC. For example, the cloned volume can be set to have one replica as opposed to 3 replicas on the source volume. Along with replica, other configurable properties should be validated. Verify that openebs-snapshot-controller can be deployed in highly available mode by modifying the values (replica, affinity, etc.,) associated with the openebs-snapshot-controller in the helm or openebs-operator.yaml. Verify that only one snapshot request is processed when there are multiple openebs-snapshot-controller running for HA. Snapshot & clone creation across Kubernetes versions (n -> (n-2) : say, 1.10.x to 1.8.x) Snapshot & clone creation on OpenShift platform (CentOS) Snapshots with filesystems (ext4, xfs) supported by provisioner Snapshot creation with inflight/ongoing I/O (verify data sync at the time of snap creation) Stricter data-integrity tests using : a. FIO where patterns are read with data_verification flags enabled from the clone (https://fio.readthedocs.io/en/latest/fio_doc.html#verification) b. Different application DI tests (each application has its own latency/sync timeouts etc.,) The tests will involve using an app-specific DI checker utility against the clone. 
Subject following components to chaos tests (failures / restarts / network delays). The expectation being in each case that : (a) The snapshot object, clone pvc reflects appropriate status (b) User data is maintained. maya-apiserver, provisioner, snapshot operator pod, source volume controller, source volume replicas, clone volume replicas (syncing), clone volume controller Kubernetes master failure/recovery Kubernetes nodes failure/recovery Source replica disk failures Clone replica disk failures Note : The integration tests also cover the unreachable"
},
{
"data": "Here chaos tools will be employed to randomly cause these failures followed by recovery. (Coding) Enhance the jiva controller and replica to allow for cloning volumes from snapshot (Coding) Enhance the maya-apiserver volume create API to process the clone parameters - source volume and source snapshot (Coding) Implement the openebs-snapshot-controller - snapshot controller and snapshot provisioner extensions. (Coding) Update the helm charts in openebs repo and kubernetes/stable with openebs-snapshot-controller (Coding) Update the openebs-operator with the openebs-snapshot-controller deployment. (E2E) Verify successful infrastructure setup and the basic backup(snap)-recovery(clone) workflow for a MySQL application with test database content across commonly used kubernetes versions (1.8, 1.9. 1.10) and operating systems (CentOS, Ubuntu), with the snapshot operator and application running on their respective namespaces. (8, 12) (Coding) mayactl info should show the status of the clone volume. (Coding) mayactl clone should be available. (Coding) Failure conditions are handled in the snapshot and clone operations in Jiva, maya-apiserver and the openebs-snapshot-controller with user-friendly error messages (E2E) Run backup and recovery workflows with data integrity checks on multiple applications on the cluster with active data traffic. (Refer Alices job to sync to remote backup server) (2, 4, 15, 16) (candidate for staging workload) (E2E) Run periodic backup recovery jobs on a given PostgreSQL DB with prometheus monitoring setup to get volume metrics. Ensure snapshots are discarded once backed-up. (3, 5, 6) (E2E) Run application backup and recovery workflows on Openshift (14) (E2E) Perform tool-based data-integrity validation on cloned data (17) (E2E/IT) Verify creation of snapshots on authorized PVCs & clones on authorized snapshots (same namespace) (7, 8) (E2E) Verify snapshot and clone workflows with highly available snapshot operator (11) (Litmus) Litmus test jobs are available to demonstrate the use cases with MySQL restore. (Coding) A PV which has snapshots should not be deleted unless all the associated cloned volumes are deleted. This may require changes to the openebs-provisioner. (Coding) Make the jiva resilient against component failures (Coding) Make the maya-apiserver and openebs-snapshot-controller resilient against components failures. (Coding) The snapshot and clone operations should only be allowed, if the StorageClass associated with the PVC has enabled them. (Coding) maya-apiserver should check for the available space of the storage pool before creating a replica on a node. (Coding) Snapshot info/get should provide usage details like the space used and the volumes (clones) referring to this snapshot. (E2E) Obtain relative benchmark of I/O performance on volumes upon clone creation with variables as : Cloud VM instance type (resources), Data size, Clone count, Workload type (4) (E2E/IT) Verify cloned volumes can : a) Inherit source volume properties by default b) Override source volume properties (E2E) Verify snapshot interoperability on clusters running other storage engines. (1) (E2E) Verify snapshot interoperability with multiple volume filesystems (15) (E2E) Resiliency tests for component failures (17) Snapshot scheduling using meta-controller. Provide snapshot operations through CSI Plugin Snapshot state validation using meta-controller. Make sure there are no stale snapshots. 
Similarly, there may be a need to create snapshot objects for the snapshots that exist on the storage side. Jiva - Snapshot delete should delete from the storage (Jiva replica). Error and Log messages are done via L10n and i18n norms. Depends on the maya-apiserver and other components already providing a framework for L10n and i18n. mayactl should be able to confirm if the user has set up the snapshot provisioners correctly. For example, it should throw a warning if the storage class required to create clones is missing, or if the required custom resource objects are missing. Maya : https://github.com/openebs/maya/pull/283 External storage: https://github.com/openebs/external-storage/pull/37 Jiva: https://github.com/openebs/jiva/pull/48 OpenEBS: https://github.com/openebs/openebs/pull/1405"
}
] |
{
"category": "Runtime",
"file_name": "openebs-jiva-snapshot-design.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "A major goal of containerd is to create a system wherein content can be used for executing containers. In order to execute on that flow, containerd requires content and to manage it. This document describes how content flows into containerd, how it is managed, and where it exists at each stage in the process. We use an example of going from a known image to explore the flow of content. Content exists in several areas in the containerd lifecycle: OCI registry, for example or containerd content store, under containerd's local storage space, for example, on a standard Linux installation at `/var/lib/containerd/io.containerd.content.v1.content` snapshots, under containerd's local storage space, for example, on a standard Linux installation at `/var/lib/containerd/io.containerd.snapshotter.v1.<type>`. For an overlayfs snapshotter, that would be at `/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs` In order to create a container, the following must occur: The image and all its content must be loaded into the content store. This normally happens via download from the OCI registry, but you can load content in directly as well. Committed snapshots must be created from each layer of content for the image. An active snapshot must be created on top of the final layer of content for the image. A container now can be created, with its root filesystem as the active snapshot. The rest of this document looks at the content in each area in detail, and how they relate to one another. Images in a registry normally are stored in the following format. An \"image\" is comprised of a JSON document known as a descriptor. A descriptor always contains an element, `mediaType`, which tells us which type it is. It is one of two options: a \"manifest\", which lists the hashes of the config file for running the image as a container, and the binary data layers that create the filesystem for the image an \"index\", which lists the hashes of manifests, one per platform, where a platform is a combination of architecture (e.g. amd64 or arm64) and operating system (e.g. linux) The purpose of an index is to allow us to pick which manifest matches our target platform. To convert an image reference, such as `redis:5.0.9`, from a registry to actual on-disk storage, we: Retrieve the descriptor (JSON document) for the image Determine from the `mediaType` if the descriptor is a manifest or an index: If the descriptor is an index, find in it the platform (architecture+os) that represents the platform on which we want to run the container, use that hash to retrieve the manifest If the descriptor already is a manifest, continue For each element in the manifest - the config and one or more layers - use the hash listed to retrieve the components and save them We use our example image, `redis:5.0.9`, to clarify the process. 
When we first resolve `redis:5.0.9`, we get the following JSON document: ```json { \"manifests\": [ { \"digest\": \"sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 1572 }, { \"digest\": \"sha256:aeb53f8db8c94d2cd63ca860d635af4307967aa11a2fdead98ae0ab3a329f470\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm\", \"os\": \"linux\", \"variant\": \"v5\" }, \"size\": 1573 }, { \"digest\": \"sha256:17dc42e40d4af0a9e84c738313109f3a95e598081beef6c18a05abb57337aa5d\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm\", \"os\": \"linux\", \"variant\": \"v7\" }, \"size\": 1573 }, { \"digest\": \"sha256:613f4797d2b6653634291a990f3e32378c7cfe3cdd439567b26ca340b8946013\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 1573 }, { \"digest\": \"sha256:ee0e1f8d8d338c9506b0e487ce6c2c41f931d1e130acd60dc7794c3a246eb59e\", \"mediaType\":"
},
{
"data": "\"platform\": { \"architecture\": \"386\", \"os\": \"linux\" }, \"size\": 1572 }, { \"digest\": \"sha256:1072145f8eea186dcedb6b377b9969d121a00e65ae6c20e9cd631483178ea7ed\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"mips64le\", \"os\": \"linux\" }, \"size\": 1572 }, { \"digest\": \"sha256:4b7860fcaea5b9bbd6249c10a3dc02a5b9fb339e8aef17a542d6126a6af84d96\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"ppc64le\", \"os\": \"linux\" }, \"size\": 1573 }, { \"digest\": \"sha256:d66dfc869b619cd6da5b5ae9d7b1cbab44c134b31d458de07f7d580a84b63f69\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"s390x\", \"os\": \"linux\" }, \"size\": 1573 } ], \"mediaType\": \"application/vnd.docker.distribution.manifest.list.v2+json\", \"schemaVersion\": 2 } ``` The descriptor above, towards the end, shows that the `mediaType` is a \"manifest.list\", or in OCI parlance, an index. It has an array field called `manifests`, each element of which lists one platform and the hash of the manifest for that platform. The \"platform\" is a combination of \"architecture\" and \"os\". Since we will be running on the common linux on amd64, we look for an entry in `manifests` that has a `platform` entry as follows: ```json \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" } ``` This is the first one in the list, and it has the hash of `sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b`. We then retrieve the item with that hash, specifically `docker.io/library/redis@sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b` This gives us the manifest for the image on linux/amd64: ```json { \"schemaVersion\": 2, \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"config\": { \"mediaType\": \"application/vnd.docker.container.image.v1+json\", \"size\": 7648, \"digest\": \"sha256:987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a\" }, \"layers\": [ { \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\", \"size\": 27092228, \"digest\": \"sha256:bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90\" }, { \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\", \"size\": 1732, \"digest\": \"sha256:1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94\" }, { \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\", \"size\": 1417672, \"digest\": \"sha256:5999b99cee8f2875d391d64df20b6296b63f23951a7d41749f028375e887cd05\" }, { \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\", \"size\": 7348264, \"digest\": \"sha256:bfee6cb5fdad6b60ec46297f44542ee9d8ac8f01c072313a51cd7822df3b576f\" }, { \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\", \"size\": 98, \"digest\": \"sha256:fd36a1ebc6728807cbb1aa7ef24a1861343c6dc174657721c496613c7b53bd07\" }, { \"mediaType\": \"application/vnd.docker.image.rootfs.diff.tar.gzip\", \"size\": 409, \"digest\": \"sha256:97481c7992ebf6f22636f87e4d7b79e962f928cdbe6f2337670fa6c9a9636f04\" } ] } ``` The `mediaType` tell us that this is a \"manifest\", and it fits the correct format: one `config`, whose hash is `sha256:987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a` one or more `layers`; in this example, there are 6 layers Each of these elements - the index, the manifests, the config file and each of the layers - is 
stored separately in the registry, and is downloaded independently. When content is loaded into containerd's content store, it stores them very similarly to how the registry does. Each component is stored in a file whose name is the hash of it. Continuing our redis example, if we do `client.Pull()` or `ctr pull`, we will get the following in our content store: `sha256:2a9865e55c37293b71df051922022898d8e4ec0f579c9b53a0caee1b170bc81c` - the index `sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b` - the manifest for `linux/amd64` `sha256:987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a` - the config `sha256:97481c7992ebf6f22636f87e4d7b79e962f928cdbe6f2337670fa6c9a9636f04` - layer 0 `sha256:5999b99cee8f2875d391d64df20b6296b63f23951a7d41749f028375e887cd05` - layer 1 `sha256:bfee6cb5fdad6b60ec46297f44542ee9d8ac8f01c072313a51cd7822df3b576f` - layer 2 `sha256:fd36a1ebc6728807cbb1aa7ef24a1861343c6dc174657721c496613c7b53bd07` - layer 3 `sha256:bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90` - layer 4 `sha256:1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94` - layer 5 If we look in our content store, we see exactly these (I filtered and sorted to make it easier to read): ```console $ tree /var/lib/containerd/io.containerd.content.v1.content/blobs /var/lib/containerd/io.containerd.content.v1.content/blobs sha256 2a9865e55c37293b71df051922022898d8e4ec0f579c9b53a0caee1b170bc81c 9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b 987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a 97481c7992ebf6f22636f87e4d7b79e962f928cdbe6f2337670fa6c9a9636f04 5999b99cee8f2875d391d64df20b6296b63f23951a7d41749f028375e887cd05 bfee6cb5fdad6b60ec46297f44542ee9d8ac8f01c072313a51cd7822df3b576f fd36a1ebc6728807cbb1aa7ef24a1861343c6dc174657721c496613c7b53bd07 bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90 1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94 ``` We can see the same thing if we use the containerd interface. Again, we sorted it for consistent easier viewing. 
```console $ ctr content ls DIGEST SIZE AGE LABELS sha256:2a9865e55c37293b71df051922022898d8e4ec0f579c9b53a0caee1b170bc81c 1.862kB 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/gc.ref.content.m.0=sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b,containerd.io/gc.ref.content.m.1=sha256:aeb53f8db8c94d2cd63ca860d635af4307967aa11a2fdead98ae0ab3a329f470,containerd.io/gc.ref.content.m.2=sha256:17dc42e40d4af0a9e84c738313109f3a95e598081beef6c18a05abb57337aa5d,containerd.io/gc.ref.content.m.3=sha256:613f4797d2b6653634291a990f3e32378c7cfe3cdd439567b26ca340b8946013,containerd.io/gc.ref.content.m.4=sha256:ee0e1f8d8d338c9506b0e487ce6c2c41f931d1e130acd60dc7794c3a246eb59e,containerd.io/gc.ref.content.m.5=sha256:1072145f8eea186dcedb6b377b9969d121a00e65ae6c20e9cd631483178ea7ed,containerd.io/gc.ref.content.m.6=sha256:4b7860fcaea5b9bbd6249c10a3dc02a5b9fb339e8aef17a542d6126a6af84d96,containerd.io/gc.ref.content.m.7=sha256:d66dfc869b619cd6da5b5ae9d7b1cbab44c134b31d458de07f7d580a84b63f69 sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b 1.572kB 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/gc.ref.content.config=sha256:987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a,containerd.io/gc.ref.content.l.0=sha256:bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90,containerd.io/gc.ref.content.l.1=sha256:1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94,containerd.io/gc.ref.content.l.2=sha256:5999b99cee8f2875d391d64df20b6296b63f23951a7d41749f028375e887cd05,containerd.io/gc.ref.content.l.3=sha256:bfee6cb5fdad6b60ec46297f44542ee9d8ac8f01c072313a51cd7822df3b576f,containerd.io/gc.ref.content.l.4=sha256:fd36a1ebc6728807cbb1aa7ef24a1861343c6dc174657721c496613c7b53bd07,containerd.io/gc.ref.content.l.5=sha256:97481c7992ebf6f22636f87e4d7b79e962f928cdbe6f2337670fa6c9a9636f04 sha256:987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a 7.648kB 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/gc.ref.snapshot.overlayfs=sha256:33bd296ab7f37bdacff0cb4a5eb671bcb3a141887553ec4157b1e64d6641c1cd sha256:97481c7992ebf6f22636f87e4d7b79e962f928cdbe6f2337670fa6c9a9636f04 409B 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:d442ae63d423b4b1922875c14c3fa4e801c66c689b69bfd853758fde996feffb sha256:5999b99cee8f2875d391d64df20b6296b63f23951a7d41749f028375e887cd05 1.418MB 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:223b15010c47044b6bab9611c7a322e8da7660a8268949e18edde9c6e3ea3700 sha256:bfee6cb5fdad6b60ec46297f44542ee9d8ac8f01c072313a51cd7822df3b576f 7.348MB 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:b96fedf8ee00e59bf69cf5bc8ed19e92e66ee8cf83f0174e33127402b650331d sha256:fd36a1ebc6728807cbb1aa7ef24a1861343c6dc174657721c496613c7b53bd07 98B 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:aff00695be0cebb8a114f8c5187fd6dd3d806273004797a00ad934ec9cd98212 sha256:bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90 27.09MB 19 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:d0fe97fa8b8cefdffcef1d62b65aba51a6c87b6679628a2b50fc6a7a579f764c sha256:1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94 1.732kB 20 minutes 
containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:832f21763c8e6b070314e619ebb9ba62f815580da6d0eaec8a1b080bd01575f7 ``` Note that each blob of content has several labels on it. This sub-section describes the labels. This is not intended to be a comprehensive overview of labels. For images pulled from remotes, the"
},
{
"data": "label is added to each blob of the image to indicate its source. ``` containerd.io/distribution.source.docker.io=library/redis ``` If the blob is shared by different repos in the same registry, the repo name will be appended: ``` containerd.io/distribution.source.docker.io=library/redis,myrepo/redis ``` We start with the layers themselves. These have only one label: `containerd.io/uncompressed`. These files are gzipped tar files; the value of the label gives the hash of them when uncompressed. You can get the same value by doing: ```console $ cat <file> | gunzip - | sha256sum - ``` For example: ```console $ cat /var/lib/containerd/io.containerd.content.v1.content/blobs/sha256/1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94 | gunzip - | sha256sum - 832f21763c8e6b070314e619ebb9ba62f815580da6d0eaec8a1b080bd01575f7 ``` That aligns precisely with the last layer: ``` sha256:1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94 1.732kB 20 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:832f21763c8e6b070314e619ebb9ba62f815580da6d0eaec8a1b080bd01575f7 ``` We have a single config layer, `sha256:987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a`. It has a label prefixed with `containerd.io/gc.ref.` indicating that it is a label that impacts garbage collection. In this case, the label is `containerd.io/gc.ref.snapshot.overlayfs` and has a value of `sha256:33bd296ab7f37bdacff0cb4a5eb671bcb3a141887553ec4157b1e64d6641c1cd`. This is used to connect this config to a snapshot. We will look at that shortly when we discuss snapshots. The labels on the manifest also begin with `containerd.io/gc.ref`, indicating that they are used to control garbage collection. A manifest has several \"children\". These normally are the config and the layers. We want to ensure that as long as the image remains around, i.e. the manifest, the children do not get garbage collected. Thus, we have labels referencing each child: `containerd.io/gc.ref.content.config` references the config `containerd.io/gc.ref.content.l.<index>` reference the layers In our example, the manifest is `sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b`, and the labels are as follows. ``` containerd.io/gc.ref.content.config=sha256:df57482065789980ee9445b1dd79ab1b7b3d1dc26b6867d94470af969a64c8e6 containerd.io/gc.ref.content.l.0=sha256:97481c7992ebf6f22636f87e4d7b79e962f928cdbe6f2337670fa6c9a9636f04 containerd.io/gc.ref.content.l.1=sha256:5999b99cee8f2875d391d64df20b6296b63f23951a7d41749f028375e887cd05 containerd.io/gc.ref.content.l.2=sha256:bfee6cb5fdad6b60ec46297f44542ee9d8ac8f01c072313a51cd7822df3b576f containerd.io/gc.ref.content.l.3=sha256:fd36a1ebc6728807cbb1aa7ef24a1861343c6dc174657721c496613c7b53bd07 containerd.io/gc.ref.content.l.4=sha256:bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90 containerd.io/gc.ref.content.l.5=sha256:1ed3521a5dcbd05214eb7f35b952ecf018d5a6610c32ba4e315028c556f45e94 ``` These are precisely those children of the manifest - the config and layers - that are stored in our content store. The labels on the index also begin with `containerd.io/gc.ref`, indicating that they are used to control garbage collection. An index has several \"children\", i.e. the manifests, one for each platform, as discussed above. We want to ensure that as long as the index remains around, the children do not get garbage collected. 
Thus, we have labels referencing each child, `containerd.io/gc.ref.content.m.<index>`. In our example, the index is `sha256:2a9865e55c37293b71df051922022898d8e4ec0f579c9b53a0caee1b170bc81c`, and the labels are as follows: ``` containerd.io/gc.ref.content.m.0=sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b containerd.io/gc.ref.content.m.1=sha256:aeb53f8db8c94d2cd63ca860d635af4307967aa11a2fdead98ae0ab3a329f470 containerd.io/gc.ref.content.m.2=sha256:17dc42e40d4af0a9e84c738313109f3a95e598081beef6c18a05abb57337aa5d containerd.io/gc.ref.content.m.3=sha256:613f4797d2b6653634291a990f3e32378c7cfe3cdd439567b26ca340b8946013 containerd.io/gc.ref.content.m.4=sha256:ee0e1f8d8d338c9506b0e487ce6c2c41f931d1e130acd60dc7794c3a246eb59e containerd.io/gc.ref.content.m.5=sha256:1072145f8eea186dcedb6b377b9969d121a00e65ae6c20e9cd631483178ea7ed containerd.io/gc.ref.content.m.6=sha256:4b7860fcaea5b9bbd6249c10a3dc02a5b9fb339e8aef17a542d6126a6af84d96 containerd.io/gc.ref.content.m.7=sha256:d66dfc869b619cd6da5b5ae9d7b1cbab44c134b31d458de07f7d580a84b63f69 ``` Notice that there are 8 children to the index, but all of them are for platforms other than ours, `linux/amd64`, and thus only one of them, `sha256:9bb13890319dc01e5f8a4d3d0c4c72685654d682d568350fd38a02b1d70aee6b` actually is in our content store. That doesn't hurt; it just means that the others will not be garbage collected either. Since they aren't there, they won't be removed. The content in the content store is immutable, but also in formats that often are unusable. For example, most container layers are in a tar-gzip format. One cannot simply mount a tar-gzip file. Even if one could, we want to leave our immutable content not only unchanged, but unchangeable, even by accident, i.e. immutable. In order to use it, we create snapshots of the content. The process is as follows: The snapshotter creates a snapshot from the parent. In the case of the first layer, that is blank. This is now an \"active\" snapshot. The diff applier, which has knowledge of the internal format of the layer blob, applies the layer blob to the active snapshot. The snapshotter commits the snapshot after the diff has been applied. This is now a \"committed\""
},
{
"data": "The committed snapshot is used as the parent for the next layer. Returning to our example, each layer will have a corresponding immutable snapshot layer. Recalling that our example has 6 layers, we expect to see 6 committed snapshots. The output has been sorted to make viewing easier; it matches the layers from the content store and manifest itself. ```console $ ctr snapshot ls KEY PARENT KIND sha256:d0fe97fa8b8cefdffcef1d62b65aba51a6c87b6679628a2b50fc6a7a579f764c Committed sha256:2ae5fa95c0fce5ef33fbb87a7e2f49f2a56064566a37a83b97d3f668c10b43d6 sha256:d0fe97fa8b8cefdffcef1d62b65aba51a6c87b6679628a2b50fc6a7a579f764c Committed sha256:a8f09c4919857128b1466cc26381de0f9d39a94171534f63859a662d50c396ca sha256:2ae5fa95c0fce5ef33fbb87a7e2f49f2a56064566a37a83b97d3f668c10b43d6 Committed sha256:aa4b58e6ece416031ce00869c5bf4b11da800a397e250de47ae398aea2782294 sha256:a8f09c4919857128b1466cc26381de0f9d39a94171534f63859a662d50c396ca Committed sha256:bc8b010e53c5f20023bd549d082c74ef8bfc237dc9bbccea2e0552e52bc5fcb1 sha256:aa4b58e6ece416031ce00869c5bf4b11da800a397e250de47ae398aea2782294 Committed sha256:33bd296ab7f37bdacff0cb4a5eb671bcb3a141887553ec4157b1e64d6641c1cd sha256:bc8b010e53c5f20023bd549d082c74ef8bfc237dc9bbccea2e0552e52bc5fcb1 Committed ``` Each snapshot has a parent, except for the root. It is a tree, or a stacked cake, starting with the first layer. This matches how the layers are built, as layers. The key, or name, for the snapshot does not match the hash from the content store. This is because the hash from the content store is the hash of the original content, in this case tar-gzipped. The snapshot expands it out into the filesystem to make it useful. It also does not match the uncompressed content, i.e. the tar file without gzip, and as given on the label `containerd.io/uncompressed`. Rather the name is the result of applying the layer to the previous one and hashing it. By that logic, the very root of the tree, the first layer, should have the same hash and name as the uncompressed value of the first layer blob. Indeed, it does. The root layer is `sha256:bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90 ` which, when uncompressed, has the value `sha256:d0fe97fa8b8cefdffcef1d62b65aba51a6c87b6679628a2b50fc6a7a579f764c`, which is the first layer in the snapshot, and also the label on that layer in the content store: ``` sha256:bb79b6b2107fea8e8a47133a660b78e3a546998fcf0427be39ac9a0af4a97e90 27.09MB 19 minutes containerd.io/distribution.source.docker.io=library/redis,containerd.io/uncompressed=sha256:d0fe97fa8b8cefdffcef1d62b65aba51a6c87b6679628a2b50fc6a7a579f764c ``` The final, or top, layer, is the point at which you would want to create an active snapshot to start a container. Thus, we would need to track it. This is exactly the label that is placed on the config. In our example, the config is at `sha256:987b553c835f01f46eb1859bc32f564119d5833801a27b25a0ca5c6b8b6e111a` and had the label `containerd.io/gc.ref.snapshot.overlayfs=sha256:33bd296ab7f37bdacff0cb4a5eb671bcb3a141887553ec4157b1e64d6641c1cd`. Looking at our snapshots, the value of the final layer of the stack is, indeed, that: ``` sha256:33bd296ab7f37bdacff0cb4a5eb671bcb3a141887553ec4157b1e64d6641c1cd sha256:bc8b010e53c5f20023bd549d082c74ef8bfc237dc9bbccea2e0552e52bc5fcb1 Committed ``` Note as well, that the label on the config in the content store starts with `containerd.io/gc.ref`. This is a garbage collection label. It is this label that keeps the garbage collector from removing the snapshot. 
Because the config has a reference to it, the top layer is \"protected\" from garbage collection. This layer, in turn, depends on the next layer down, so it is protected from collection, and so on until the root or base layer. With the above in place, we know how to create an active snapshot that is useful for the container. We simply need to create the active snapshot, passing it an ID and the parent, in this case the top layer of committed snapshots. Thus, the steps are: first, get the content into the content store, either by pulling it or by loading it in directly; second, unpack the image to create committed snapshots for each layer (alternatively, if you pull via the client, you can pass it an option to unpack while pulling); third, create an active snapshot (you can skip this step if you plan on creating a container, as you can pass it as an option to the next step); finally, create a container, optionally telling it to create the snapshot for you.
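To make these steps concrete, here is a minimal sketch using the containerd Go client; the socket path, namespace, image reference, and container/snapshot IDs are illustrative assumptions rather than values taken from the example above.
```go
package main

import (
	\"context\"
	\"log\"

	\"github.com/containerd/containerd\"
	\"github.com/containerd/containerd/namespaces\"
	\"github.com/containerd/containerd/oci\"
)

func main() {
	// Connect to containerd (socket path assumes a default installation).
	client, err := containerd.New(\"/run/containerd/containerd.sock\")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), \"default\")

	// Steps 1 and 2: pull the content into the content store and unpack it
	// into committed snapshots, one per layer.
	image, err := client.Pull(ctx, \"docker.io/library/redis:5.0.9\", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Steps 3 and 4: create a container and let containerd create the active
	// snapshot on top of the image's topmost committed snapshot.
	container, err := client.NewContainer(ctx, \"redis-example\",
		containerd.WithNewSnapshot(\"redis-example-snapshot\", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	log.Printf(\"created container %s\", container.ID())
}
```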
}
] |
{
"category": "Runtime",
"file_name": "content-flow.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Interactive Deployment menu_order: 20 search_type: Documentation Weave Net can be launched interactively on the command line, and as long as Docker is configured to start on boot, the network will survive host reboots without the use of a systemd. However, since launching Weave Net commands in interactive mode is not amenable to automation and configuration management, it is recommended that deploying Weave Net in this mode be reserved for exploration and evaluation only. On the initial peer: weave launch On the new peer: weave launch <extant peers> Where, `<extant peers>` indicates all peers in the network, initial and subsequently added, which have not been explicitly removed. It should include peers that are temporarily offline or stopped. To ensure that the new peer has joined the existing network, execute the following: weave prime Before adding any new peers, you must wait for this to complete. If this command waits and does not exit, it means that there is some issue (such as a network partition or failed peers) that is preventing a quorum from being reached - you will need to [address that](/site/troubleshooting.md) before moving on. A peer can be stopped temporarily with the following command: weave stop A temporarily stopped peer will remember IP address allocation information on the next `weave launch` but will forget any discovered peers or modifications to the initial peer list that were made with `weave connect` or `weave forget`. Note that if the host reboots, Docker automatically restarts the peer. On the peer to be removed: weave reset Then optionally on each remaining peer: weave forget <removed peer> This step is not mandatory, but it will eliminate log noise and spurious network traffic by stopping reconnection attempts and preventing further connection attempts after a restart."
}
] |
{
"category": "Runtime",
"file_name": "interactive.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Sync Accounts between Multiple Hosts sidebar_position: 7 slug: /syncaccountsbetweenmultiplehosts JuiceFS supports Unix file permission, you can manage permissions by directory or file granularity, just like a local file system. To provide users with an intuitive and consistent permission management experience (e.g. the files accessible by user A on host X should be accessible by the same user on host Y), the same user who wants to access JuiceFS should have the same UID and GID on all hosts. Here we provide a simple playbook to demonstrate how to ensure an account with same UID and GID on multiple hosts. :::note If you are using JuiceFS in Hadoop environment, besides sync accounts between multiple hosts, you can also specify a global user list and user group file. Please refer to for more information. ::: Select a host as a which can access all hosts using `ssh` with the same privileged account like `root` or other sudo account. Then, install Ansible on this host. Refer to for details. Create `account-sync/play.yaml` as follows: ```yaml hosts: all tasks: name: \"Ensure group {{ group }} with gid {{ gid }} exists\" group: name: \"{{ group }}\" gid: \"{{ gid }}\" state: present name: \"Ensure user {{ user }} with uid {{ uid }} exists\" user: name: \"{{ user }}\" uid: \"{{ uid }}\" group: \"{{ gid }}\" state: present ``` Create the Ansible inventory `hosts`, which contains IP addresses of all hosts that need to create account. Here we ensure an account `alice` with UID 1200 and group `staff` with GID 500 on 2 hosts: ```shell ~/account-sync$ cat hosts 172.16.255.163 172.16.255.180 ~/account-sync$ ansible-playbook -i hosts -u root --ssh-extra-args \"-o StrictHostKeyChecking=no\" \\ --extra-vars \"group=staff gid=500 user=alice uid=1200\" play.yaml PLAY [all] TASK [Gathering Facts] ok: [172.16.255.180] ok: [172.16.255.163] TASK [Ensure group staff with gid 500 exists] * ok:"
},
{
"data": "ok: [172.16.255.180] TASK [Ensure user alice with uid 1200 exists] * changed: [172.16.255.180] changed: [172.16.255.163] PLAY RECAP 172.16.255.163 : ok=3 changed=1 unreachable=0 failed=0 172.16.255.180 : ok=3 changed=1 unreachable=0 failed=0 ``` Now the new account `alice:staff` has been created on these 2 hosts. If the specified UID or GID has been allocated to another user or group on some hosts, the creation would fail. ```shell ~/account-sync$ ansible-playbook -i hosts -u root --ssh-extra-args \"-o StrictHostKeyChecking=no\" \\ --extra-vars \"group=ubuntu gid=1000 user=ubuntu uid=1000\" play.yaml PLAY [all] TASK [Gathering Facts] ok: [172.16.255.180] ok: [172.16.255.163] TASK [Ensure group ubuntu with gid 1000 exists] * ok: [172.16.255.163] fatal: [172.16.255.180]: FAILED! => {\"changed\": false, \"msg\": \"groupmod: GID '1000' already exists\\n\", \"name\": \"ubuntu\"} TASK [Ensure user ubuntu with uid 1000 exists] ok: [172.16.255.163] to retry, use: --limit @/home/ubuntu/account-sync/play.retry PLAY RECAP 172.16.255.163 : ok=3 changed=0 unreachable=0 failed=0 172.16.255.180 : ok=1 changed=0 unreachable=0 failed=1 ``` In the above example, the group ID 1000 has been allocated to another group on host `172.16.255.180`. So we should change the GID or delete the group with GID 1000 on host `172.16.255.180`, and then run the playbook again. :::caution If the UID / GID of an existing user is changed, the user may lose permissions to previously accessible files. For example: ```shell $ ls -l /tmp/hello.txt -rw-r--r-- 1 alice staff 6 Apr 26 21:43 /tmp/hello.txt $ id alice uid=1200(alice) gid=500(staff) groups=500(staff) ``` We change the UID of alice from 1200 to 1201 ```shell ~/account-sync$ ansible-playbook -i hosts -u root --ssh-extra-args \"-o StrictHostKeyChecking=no\" \\ --extra-vars \"group=staff gid=500 user=alice uid=1201\" play.yaml ``` Now we have no permission to remove this file as its owner is not alice: ```shell $ ls -l /tmp/hello.txt -rw-r--r-- 1 1200 staff 6 Apr 26 21:43 /tmp/hello.txt $ rm /tmp/hello.txt rm: remove write-protected regular file '/tmp/hello.txt'? y rm: cannot remove '/tmp/hello.txt': Operation not permitted ``` :::"
}
] |
{
"category": "Runtime",
"file_name": "sync_accounts_between_multiple_hosts.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document describes the various ways that weave as a binary gets launched, and where and how it processes options. The goal is to enable maintainers to modify functionality without having to reverse engineer the code to find all of the places you can start a weave binary and what it depends upon. `weaver` is the primary binary: it sets up the weave bridge to which containers are attached, manages IP address allocation, serves DNS requests and does many more things. Since version 2.0, `weaver` also bundles the Docker Plugin (legacy version) and Docker API Proxy which used to be shipped as separate containers. weaveutil is a binary which provides a number of functions for managing an existing weave network. It gets information about the network, attaches and detaches containers, etc. For the majority of interactions with an existing weave network, you will launch `weaveutil` in some manner or another. Almost all options can be passed as `--option` to `weaveutil`. These options are created in `prog/weaveutil/main.go` with a table mapping each command to a dedicated golang function. In addition to operating in normal mode, `weaveutil` has several additional operating modes. Before processing any commands, `weaveutil` checks the filename with which it was called. If it was `weave-ipam` it delegates responsibility to the `cniIPAM()` function in `prog/weaveutil/cni.go`, which, in turn, calls the standard CNI plugin function `cni.PluginMain()`, passing it weave's IPAM implementation from `plugin/ipam`. If it was `weave-net` it delegates responsibility to the `cniNet()` function in `prog/weaveutil/cni.go`, which, in turn, calls the standard CNI plugin function `cni.PluginMain()`, passing it weave's net plugin implementation from `plugin/net`. Finally, `weaveutil` is called by `weaver` when it needs to operate in a different network namespace. Go programs cannot safely switch namespaces; see this . The call chain is `weaver`->`nsenter`->`weaveutil`. Wrapping `weaver` and `weaveutil` is `weave`, a `sh` script that provides help information and calls `weaveutil` as relevant. weave-kube is an image with the weave binaries and a wrapper script installed. It is responsible for: Setting up the weave network Connecting to peers Copy the weave CNI-compatible binary plugin `weaveutil` to `/opt/cni/bin/weave-net` and weave config in"
},
{
"data": "Running the weave network policy controller (weave-npc) to implement kubernetes' `NetworkPolicy` Once installation is complete, each network-relevant change in a container leads `kubelet` to: Set certain environment variables to configure the CNI plugin, mostly related to the container ID Launch `/opt/cni/bin/weave-net` Pass the CNI config - the contents of `/etc/cni/net.d/10-weave.conflist` to `weave-net` Thus, when each container is changed, and `kubelet` calls weave as a CNI plugin, it really just is launching `weaveutil` as `weave-net`. The installation and setup of all of the above - and therefore the entrypoint to the weave-kube image - is the script `prog/weave-kube/launch.sh`. `launch.sh` does the following: Read configuration variables from the environment. When the documentation for `weave-kube` describes configuring the weave network by changing the environment variables in the daemonset in the `.yml` file, `launch.sh` reads these environment variables. Set up the config file at `/etc/cni/net.d/10-weave.conflist` Run the `weave` initialization script Run `weaver` with the correct configuration passed as command-line options To add new options to how weave should run with each invocation, you would do the following: determine to which `weave` command you want to add the option(s). `weave` normally is launched as `weave <command> <options> ...`. Add the option to `weave` script help for each `weave` command you wish to make it available. As of this writing, all of the commands and their options are listed in the function `usagenoexist()`. Add the option to `weave` script option processing for the `weave` command, under the `case $COMMAND in` in the main `weave` script. In the function for the `weave` command, determine if the command should be passed on to `weaveutil` or `weaver`. Pass the option on to `weaveutil` or `weaver`, as appropriate, in the format `--option`, in the function for the `weave` command. Add a command-line option `--option` to `weaveutil` or `weaver` as appropriate. If the option can be configured for CNI: have `prog/weave-kube/launch.sh` read it as an environment variable and set inside set a default for the environment variable in `prog/weave-kube/weave-daemonset-k8s-N.N.yaml` if it should be set via `weaver` globally on its one-time initialization invocation, pass it on in `launch.sh` if it needs to be set via `weaveutil` on each invocation of `weave-net`, have `launch.sh` save it as an option in the CNI config file `/etc/cni/net.d/10-weave.conflist` and then have the CNI code in `plugin/net/cni.go` in `weaveutil` read it and use where appropriate Document it!"
}
] |
{
"category": "Runtime",
"file_name": "entrypoints.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This guide shows you how to use the host network for DRBD replication. By default, DRBD will use the container network to replicate volume data. This ensures replication works on a wide range of clusters without further configuration. It also enables use of `NetworkPolicy` to block unauthorized access to DRBD traffic. Since the network interface of the Pod is tied to the lifecycle of the Pod, it also means DRBD will temporarily disrupt replication when the LINSTOR Satellite Pod is restarted. In contrast, using the host network for DRBD replication will cause replication to work independent of the LINSTOR Satellite Pod. The host network might also offer better performance than the container network. As a downside, you will have to manually ensure connectivity between Nodes on the relevant ports. To follow the steps in this guide, you should be familiar with editing `LinstorSatelliteConfiguration` resources. Switching from the default container network to host network is possible at any time. Existing DRBD resources will be reconfigured to use the host network interface. To configure the host network for the LINSTOR Satellite, apply the following configuration: ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: host-network spec: podTemplate: spec: hostNetwork: true ``` After the Satellite Pods are recreated, they will use the host network. Any existing DRBD resources are reconfigured to use the new IP Address instead. Switching back from host network to container network involves manually resetting the configured peer addresses used by DRBD. This can either be achieved by rebooting every node, or by manually resetting the address using `drbdadm`. First, you need to remove the `LinstorSatelliteConfiguration` that set `hostNetwork: true`: ``` $ kubectl delete linstorsatelliteconfigurations.piraeus.io host-network linstorsatelliteconfiguration.piraeus.io \"host-network\" deleted ``` Then, reboot each cluster node, either one by one or multiple at once. In general, replication will not work between rebooted nodes and non-rebooted nodes. The non-rebooted nodes will continue to use the host network addresses, which are generally not reachable from the container network. After all nodes are rebooted, all resources are configured to use the container network, and all DRBD connections should be connected again. During this procedure, ensure no new volumes or snapshots are created: otherwise the migration to the container network might not be applied to all resources. First, you need to temporarily stop all replication and suspend all DRBD volumes using `drbdadm suspend-io all`. The command is executed once on each LINSTOR Satellite Pod. ``` $ kubectl exec ds/linstor-satellite.node1.example.com -- drbdadm suspend-io all $ kubectl exec ds/linstor-satellite.node2.example.com -- drbdadm suspend-io all $ kubectl exec ds/linstor-satellite.node3.example.com -- drbdadm suspend-io all ``` Next, you will need to disconnect all DRBD connections on all nodes. ``` $ kubectl exec ds/linstor-satellite.node1.example.com -- drbdadm disconnect --force all $ kubectl exec ds/linstor-satellite.node2.example.com -- drbdadm disconnect --force all $ kubectl exec ds/linstor-satellite.node3.example.com -- drbdadm disconnect --force all ``` Now, we can safely reset all DRBD connection paths, which frees the connection to be moved to the container network. 
``` $ kubectl exec ds/linstor-satellite.node1.example.com -- drbdadm del-path all $ kubectl exec ds/linstor-satellite.node2.example.com -- drbdadm del-path all $ kubectl exec ds/linstor-satellite.node3.example.com -- drbdadm del-path all ``` Finally, removing the `LinstorSatelliteConfiguration` that set `hostNetwork: true` will trigger the creation of new LINSTOR Satellite Pods using the container network: ``` $ kubectl delete linstorsatelliteconfigurations.piraeus.io host-network linstorsatelliteconfiguration.piraeus.io \"host-network\" deleted ``` After the Pods are recreated and the LINSTOR Satellites are `Online`, the DRBD resource will be reconfigured and resume IO."
}
] |
{
"category": "Runtime",
"file_name": "drbd-host-networking.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "User namespaces is a feature of Linux that can be used to separate the user IDs and group IDs between the host and containers. It can provide a better isolation and security: the privileged user `root` in the container can be mapped to a non-privileged user on the host. rkt's implementation is based on systemd-nspawn. A pod can transparently use user IDs in the range 0-65535 and this range is mapped on the host to a high range chosen randomly. Before the pod is started, the ACIs are rendered to the filesystem and the owners of the files are set with `chown` in that high range. When starting several pods with user namespaces, they will each get a random UID range. Although very unlikely, it is possible that two distincts containers get the same UID range. If this happens, user namespaces will not provide any additional isolation between the two containers, exactly like when user namespaces are not used. The two containers will however still not use the same UID range as the host, so using user namespaces is better than not using them. In order to avoid collisions, it is planned to implement a locking mechanism so that two pods will always have a different UID range. The initial implementation works only with `--no-overlay`. Ideally, preparing a pod should not have to iterate over all files to call `chown`. It is planned to add kernel support for a mount option to shift the user IDs in the correct range (see ). It would make it work with overlayfs. When mounting a volume from the host into the pod, the ownership of the files is not shifted, so it makes volumes difficult if not impossible to use with user namespaces. The same kernel support should help here too ()."
}
] |
{
"category": "Runtime",
"file_name": "user-namespaces.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Status: Accepted During long-running restic backups/restores, there is no visibility into what (if anything) is happening, making it hard to know if the backup/restore is making progress or hung, how long the operation might take, etc. We should capture progress during restic operations and make it user-visible so that it's easier to reason about. This document proposes an approach for capturing progress of backup and restore operations and exposing this information to users. Provide basic visibility into restic operations to inform users about their progress. Capturing progress for non-restic backups and restores. (Omitted, see introduction) The `restic backup` command provides progress reporting to stdout in JSON format, which includes the completion percentage of the backup. This progress will be read on some interval and the PodVolumeBackup Custom Resource's (CR) status will be updated with this information. The `restic stats` command returns the total size of a backup. This can be compared with the total size the volume periodically to calculate the completion percentage of the restore. The PodVolumeRestore CR's status will be updated with this information. A new `Progress` field will be added to PodVolumeBackupStatus and PodVolumeRestoreStatus of type `PodVolumeOperationProgress`: ``` type PodVolumeOperationProgress struct { TotalBytes int64 BytesDone int64 } ``` restic added support for in 0.9.5. Our current images ship restic 0.9.4, and so the Dockerfile will be updated to pull the new version: https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/Dockerfile-velero#L21. With the `--json` flag, `restic backup` outputs single lines of JSON reporting the status of the backup: ``` {\"messagetype\":\"status\",\"percentdone\":0,\"totalfiles\":1,\"totalbytes\":21424504832} {\"messagetype\":\"status\",\"action\":\"scanfinished\",\"item\":\"\",\"duration\":0.219241873,\"datasize\":49461329920,\"metadatasize\":0,\"total_files\":10} {\"messagetype\":\"status\",\"percentdone\":0,\"totalfiles\":10,\"totalbytes\":49461329920,\"current_files\":[\"/file3\"]} {\"messagetype\":\"status\",\"percentdone\":0.0003815984736061056,\"totalfiles\":10,\"totalbytes\":49461329920,\"bytesdone\":18874368,\"currentfiles\":[\"/file1\",\"/file3\"]} {\"messagetype\":\"status\",\"percentdone\":0.0011765952936188255,\"totalfiles\":10,\"totalbytes\":49461329920,\"bytesdone\":58195968,\"currentfiles\":[\"/file1\",\"/file3\"]} {\"messagetype\":\"status\",\"percentdone\":0.0019503921984312064,\"totalfiles\":10,\"totalbytes\":49461329920,\"bytesdone\":96468992,\"currentfiles\":[\"/file1\",\"/file3\"]} {\"messagetype\":\"status\",\"percentdone\":0.0028089887640449437,\"totalfiles\":10,\"totalbytes\":49461329920,\"bytesdone\":138936320,\"currentfiles\":[\"/file1\",\"/file3\"]} ``` The will be updated to include the `--json` flag. The code to run the `restic backup` command (https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/pkg/controller/podvolumebackup_controller.go#L241) will be changed to include a Goroutine that reads from the command's stdout stream. The implementation of this will largely follow of this. The Goroutine will periodically read the stream (every 10 seconds) and get the last printed status line, which will be converted to JSON. If `bytesdone` is empty, restic has not finished scanning the volume and hasn't calculated the `totalbytes`. In this case, we will not update the PodVolumeBackup and instead will wait for the next iteration. 
Once we get a non-zero value for `bytesdone`, the `bytesdone` and `total_bytes` properties will be read and the PodVolumeBackup will be patched to update `status.Progress.BytesDone` and `status.Progress.TotalBytes` respectively. Once the backup has completed successfully, the PodVolumeBackup will be patched to set `status.Progress.BytesDone = status.Progress.TotalBytes`. This is done since the main thread may cause early termination of the Goroutine once the operation has finished, preventing a final update to the `BytesDone`"
},
{
"data": "The `restic stats <snapshot_id> --json` command provides information about the size of backups: ``` {\"totalsize\":10558111744,\"totalfile_count\":11} ``` Before beginning the restore operation, we can use the output of `restic stats` to get the total size of the backup. The PodVolumeRestore will be patched to set `status.Progress.TotalBytes` to the total size of the backup. The code to run the `restic restore` command will be changed to include a Goroutine that periodically (every 10 seconds) gets the current size of the volume. To get the current size of the volume, we will recursively walkthrough all files in the volume to accumulate the total size. The current total size is the number of bytes transferred so far and the PodVolumeRestore will be patched to update `status.Progress.BytesDone`. Once the restore has completed successfully, the PodVolumeRestore will be patched to set `status.Progress.BytesDone = status.Progress.TotalBytes`. This is done since the main thread may cause early termination of the Goroutine once the operation has finished, preventing a final update to the `BytesDone` property. The output that describes detailed information about and will be updated to calculate and display a completion percentage from `status.Progress.TotalBytes` and `status.Progress.BytesDone` if available. Can we assume that the volume we are restoring in will be empty? Can it contain other artefacts? Based on discussion in this PR, we are okay making the assumption that the PVC is empty and will proceed with the above proposed approach. If we cannot assume that the volume we are restoring into will be empty, we can instead use the output from `restic snapshot` to get the list of files in the backup. This can then be used to calculate the current total size of just those files in the volume, so that we avoid considering any other files unrelated to the backup. The above proposed approach is simpler than this one, as we don't need to keep track of each file in the backup, but this will be more robust if the volume could contain other files not included in the backup. It's possible that certain volume types may contain hidden files that could attribute to the total size of the volume, though these should be small enough that the BytesDone calculation will only be slightly inflated. Another option is to contribute progress reporting similar to `restic backup` for `restic restore` upstream. This may take more time, but would give us a more native view on the progress of a restore. There are several issues about this already in the restic repo (https://github.com/restic/restic/issues/426, https://github.com/restic/restic/issues/1154), and what looks like an abandoned attempt (https://github.com/restic/restic/pull/2003) which we may be able to pick up. N/A"
}
] |
{
"category": "Runtime",
"file_name": "restic-backup-and-restore-progress.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "StratoVirt controls VM's lifecycle and external api interface with in the current version. When running StratoVirt, you must create QMP in cmdline arguments as a management interface. StratoVirt supports UnixSocket-type QMP and TcpSocket-type QMP, you can set it by: ```shell -qmp unix:/path/to/api/socket,server,nowait ``` ```shell -qmp tcp:ip:port,server,nowait ``` Where, the information about 'server' and 'nowait' can be found in On top of that, monitor can be used to create QMP connection as well. The following commands can be used to create a monitor. Three properties can be set for monitor. id: unique device id. chardev: char device of monitor. mode: the model of monitor. NB: currently only \"control\" is supported. ```shell -chardev socket,path=/path/to/monitor/sock,id=chardev_id,server,nowait -mon chardev=chardevid,id=monitorid,mode=control ``` ```shell -chardev socket,host=ip,port=port,id=chardev_id,server,nowait -mon chardev=chardevid,id=monitorid,mode=control ``` After StratoVirt started, you can connect to StratoVirt's QMP and manage it by QMP. Several steps to connect QMP are showed as following: ```shell $ ncat -U /path/to/api/socket ``` ```shell $ ncat ip port ``` Once connection is built, you will receive a `greeting` message from StratoVirt. ```json { \"QMP\": { \"version\": { \"StratoVirt\": { \"micro\":1, \"minor\":0, \"major\":0 }, \"package\":\"\" }, \"capabilities\":[] } } ``` Now you can input QMP command to control StratoVirt. Add a block backend. `node-name` : the name of the block driver node, must be unique. `file` : the backend file information. `media` : indicate media type of the backend file. Possible values are `disk` or `cdrom`. If not set, default is `disk`. `cache` : if use direct io. `read-only` : if readonly. `driver` : the block image format. Possible values are `raw` or `qcow2`. If not set, default is `raw`. `aio` : the aio type of block device. Micro VM `node-name` in `blockdev-add` should be same as `id` in `device_add`. For `addr`, it start at `0x0` mapping in guest with `vda` on x86_64 platform, and start at `0x1` mapping in guest with `vdb` on aarch64 platform. For `driver`, only `raw` is supported. ```json -> { \"execute\": \"blockdev-add\", \"arguments\": { \"node-name\": \"drive-0\", \"file\": { \"driver\": \"file\", \"filename\": \"/path/to/block\", \"aio\": \"native\" }, \"cache\": { \"direct\": true }, \"read-only\": false } } <- { \"return\": {} } ``` Remove a block backend. `node-name` : the name of the block driver node. ```json -> { \"execute\": \"blockdev-del\", \"arguments\": { \"node-name\": \"drive-0\" } } <- { \"return\": {} } ``` Add a network backend. `id` : the device's ID, must be unique. `ifname` : the backend tap dev name. `fd` : the opened tap fd. `fds` : the opened tap fds. `queues` : the num of queues for multi-queue. `vhost` : whether to run as a vhost-net device. `vhostfd` : the vhost-net device fd. `vhostfds` : the vhost-net device fds. `chardev` : the chardev name for vhost-user net. Micro VM `id` in `netdevadd` should be same as `id` in `deviceadd`. For `addr`, it start at `0x0` mapping in guest with `eth0`. It does not support multi-queue. ```json -> { \"execute\": \"netdev_add\", \"arguments\": { \"id\": \"net-0\", \"ifname\": \"tap0\" } } <- { \"return\": {} } ``` Remove a network backend. `id` : the device's ID. ```json -> { \"execute\": \"netdev_del\", \"arguments\": { \"id\": \"net-0\" } } <- { \"return\": {} } ``` Add a camera backend. `id` : the device's ID, must be unique. 
`driver` : the backend camera type, eg. v4l2 or demo. `path` : the backend camera file's path, eg. /dev/video0 MicroVM is not supported. ```json -> { \"execute\": \"cameradev_add\", \"arguments\": { \"id\": \"cam-0\", \"driver\": \"v4l2\", \"path\": \"/dev/video0\" } } <- { \"return\": {} } ``` Remove a camera"
},
{
"data": "`id` : the device's ID. MicroVM is not supported. ```json -> { \"execute\": \"cameradev_del\", \"arguments\": { \"id\": \"cam-0\" } } <- { \"return\": {} } ``` Currently, It only supports Standard VM. Add a character device backend. `id` : the character device's ID, must be unique. `backend` : the chardev backend info. Standard VM `id` in `chardev-add` should be same as `id` in `netdev_add`. ```json -> { \"execute\": \"chardev-add\", \"arguments\": { \"id\": \"chardev_id\", \"backend\": { \"type\": \"socket\", \"data\": { \"addr\": { \"type\": \"unix\", \"data\": { \"path\": \"/path/to/socket\" } }, \"server\": false } } } } <- { \"return\": {} } ``` Remove a character device backend. `id` : the character device's ID. ```json -> { \"execute\": \"chardev-remove\", \"arguments\": { \"id\": \"chardev_id\" } } <- { \"return\": {} } ``` StratoVirt supports hot-plug virtio-blk and virtio-net devices with QMP. Standard VM supports hot-plug vfio and vhost-user net devices. Add a device. `id` : the device's ID, must be unique. `driver` : the name of the device's driver. `addr` : the address device insert into. `host` : the PCI device info in the system that contains domain, bus number, slot number and function number. `bus` : the bus device insert into. Only for Standard VM. `mac` : the mac of the net device. `netdev` : the backend of the net device. `drive` : the backend of the block device. `serial` : the serial of the block device. Standard VM Currently, the device can only be hot-plugged to the pcie-root-port device. Therefore, you need to configure the root port on the cmdline before starting the VM. Guest kernel config: CONFIGHOTPLUGPCI_PCIE=y You are not advised to hot plug/unplug devices during VM startup, shutdown or suspension, or when the VM is under high pressure. In this case, the driver in the VM may not respond to requests, causing VM exceptions. ```json -> { \"execute\": \"device_add\", \"arguments\": { \"id\": \"net-0\", \"driver\": \"virtio-net-mmio\", \"addr\": \"0x0\" } } <- { \"return\": {} } ``` Remove a device from a guest. `id` : the device's ID. The device is actually removed when you receive the DEVICE_DELETED event ```json -> { \"execute\": \"device_del\", \"arguments\": { \"id\": \"net-0\" } } <- { \"event\": \"DEVICE_DELETED\", \"data\": { \"device\": \"net-0\", \"path\": \"net-0\" }, \"timestamp\": { \"seconds\": 1614310541, \"microseconds\": 554250 } } <- { \"return\": {} } ``` With QMP, you can control VM's lifecycle by command `stop`, `cont`, `quit` and check VM state by `query-status`. Stop all guest VCPUs execution. ```json -> { \"execute\": \"stop\" } <- { \"event\": \"STOP\", \"data\": {}, \"timestamp\": { \"seconds\": 1583908726, \"microseconds\": 162739 } } <- { \"return\": {} } ``` Resume all guest VCPUs execution. ```json -> { \"execute\": \"cont\" } <- { \"event\": \"RESUME\", \"data\": {}, \"timestamp\": { \"seconds\": 1583908853, \"microseconds\": 411394 } } <- { \"return\": {} } ``` Reset all guest VCPUs execution. ```json -> { \"execute\": \"system_reset\" } <- { \"return\": {} } <- { \"event\": \"RESET\", \"data\": { \"guest\": true }, \"timestamp\": { \"seconds\": 1677381086, \"microseconds\": 432033 } } ``` Requests that a guest perform a powerdown operation. ```json -> { \"execute\": \"system_powerdown\" } <- { \"return\": {} } <- { \"event\": \"POWERDOWN\", \"data\": {}, \"timestamp\": { \"seconds\": 1677850193, \"microseconds\": 617907 } } ``` This command will cause StratoVirt process to exit gracefully. 
```json -> { \"execute\": \"quit\" } <- { \"return\": {} } <- { \"event\": \"SHUTDOWN\", \"data\": { \"guest\": false, \"reason\": \"host-qmp-quit\" }, \"timestamp\": { \"ds\": 1590563776, \"microseconds\": 519808 } } ``` Query the running status of all"
},
{
"data": "```json -> {\"execute\": \"query-status\"} <- {\"return\": { \"running\": true,\"singlestep\": false,\"status\": \"running\"}} ``` With QMP command you can set target memory size of guest and get memory size of guest. Set target memory size of guest. `value` : the memory size. ```json -> { \"execute\": \"balloon\", \"arguments\": { \"value\": 2147483648 } } <- { \"return\": {} } ``` Get memory size of guest. ```json -> { \"execute\": \"query-balloon\" } <- { \"return\": { \"actual\": 2147483648 } } ``` Take a snapshot of the VM into the specified directory. `uri` : template path. ```json -> { \"execute\": \"migrate\", \"arguments\": { \"uri\": \"file:path/to/template\" } } <- { \"return\": {} } ``` Get snapshot state. Now there are 5 states during snapshot: `None`: Resource is not prepared all. `Setup`: Resource is setup, ready to do snapshot. `Active`: In snapshot. `Completed`: Snapshot succeed. `Failed`: Snapshot failed. ```json -> { \"execute\": \"query-migrate\" } <- { \"return\": { \"status\": \"completed\" } } ``` Create disk internal snapshot. `device` - the valid block device. `name` - the snapshot name. ```json -> { \"execute\": \"blockdev-snapshot-internal-sync\", \"arguments\": { \"device\": \"disk0\", \"name\": \"snapshot1\" } } <- { \"return\": {} } ``` Delete disk internal snapshot. `device` - the valid block device. `name` - the snapshot name. ```json -> { \"execute\": \"blockdev-snapshot-delete-internal-sync\", \"arguments\": { \"device\": \"disk0\", \"name\": \"snapshot1\" } } <- { \"return\": { \"id\": \"1\", \"name\": \"snapshot0\", \"vm-state-size\": 0, \"date-sec\": 1000012, \"date-nsec\": 10, \"vm-clock-sec\": 100, vm-clock-nsec\": 20, \"icount\": 220414 } } ``` Query vcpu register value. `addr` : the register address. `vcpu` : vcpu id. The VM will pause during the query and then resume. Only aarch64 is supported now. ```json -> {\"execute\": \"query-vcpu-reg\", \"arguments\": {\"addr\": \"603000000013df1a\", \"vcpu\": 0}} <- {\"return\": \"348531C5\"} ``` Query the value of the guest physical address. `gpa` : the guest physical address. ```json -> {\"execute\": \"query-mem-gpa\", \"arguments\": {\"gpa\": \"13c4d1d00\" }} <- {\"return\": \"B9000001\"} ``` Query the display image of virtiogpu. Currently only stdvm and gtk supports. ```json -> { \"execute\": \"query-display-image\" } <- { \"return\": { \"fileDir\": \"/tmp/stratovirt-images\", \"isSuccess\": true } } ``` Query whether the trace state is enabled. `name` : Pattern used to match trace name. ```json -> { \"execute\": \"trace-get-state\", \"arguments\": { \"name\": \"trace_name\" } } <- { \"return\": [ { \"name\": \"trace_name\", \"state\": \"disabled\" } ] } ``` Set the state of trace. `name` : Pattern used to match trace name. `enable` : Whether to enable trace state. ```json -> { \"execute\": \"trace-set-state\", \"arguments\": { \"name\": \"trace_name\",\"enable\": true } } <- { \"return\": {} } ``` Receive a file descriptor via SCM rights and assign it a name. ```json -> { \"execute\": \"getfd\", \"arguments\": { \"fdname\": \"fd1\" } } <- { \"return\": {} } ``` Control if the scream device can use host's microphone record. `authorized` : \"on\" means scream can use host's microphone record, \"off\" opposites in meaning. ```json -> { \"execute\": \"switch-audio-record\", \"arguments\": { \"authorized\": \"on\" } } <- { \"return\": {} } ``` When some events happen, connected client will receive QMP events. 
Now StratoVirt supports these events: `SHUTDOWN`: Emitted when the virtual machine has shut down, indicating that StratoVirt is about to exit. `RESET`: Emitted when the virtual machine is reset. `STOP`: Emitted when the virtual machine is stopped. `RESUME`: Emitted when the virtual machine resumes execution. `POWERDOWN`: Emitted when the virtual machine is powered down. `DEVICE_DELETED`: Emitted whenever the device removal completion is acknowledged by the guest. `BALLOON_CHANGED`: Emitted when the virtual machine changes the actual BALLOON level. QMP uses a `leak bucket` to control QMP command flow. Currently, the QMP server accepts 100 commands per second."
}
] |
{
"category": "Runtime",
"file_name": "qmp.md",
"project_name": "StratoVirt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This page shows how to develop a \"Hello World\" application, build a \"Hello World\" image and run a \"Hello World\" container in a Kubernetes cluster. Note: this is an experimental and demonstrative guide. Please don't deploy it in product. You need to have a Kubernetes cluster and the nodes' hardware in the cluster must support Intel SGX. If you do not already have a cluster,you can create one following the documentation . Make sure you have one of the following operating systems: Ubuntu 18.04 server 64bits Develop a \"Hello World\" occlum application in an occlum SDK container. Build a \"Hello World\" image from the application. Run the \"Hello World\" Pod in Kubernetes cluster. Occlum supports running any executable binaries that are based on . It does not support Glibc. A good way to develop occlum applications is in an occlum SDK container. You can choose one suitable occlum SDK image from the list in , the version of the Occlum SDK image must be same as the occlum version listed in release page. Step 1. Apply the following yaml file ```yaml cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: labels: run: occlum-app-builder name: occlum-app-builder namespace: default spec: hostNetwork: true containers: command: sleep infinity image: docker.io/occlum/occlum:0.21.0-ubuntu18.04 imagePullPolicy: IfNotPresent securityContext: privileged: true name: occlum-app-builder EOF ``` This will create a Pod with image `docker.io/occlum/occlum:0.21.0-ubuntu18.04` and the filed `securityContext.privileged` should be set to `true` in order to build and push docker image in container.<br /> Step 2. Wait for the pod status to `Ready` It will take about one minute to create the pod, you need to check and wait for the pod status to `Ready`. Run command `kubectl get pod occlum-app-builder`, the output looks like this: ```bash $ kubectl get pod occlum-app-builder NAME READY STATUS RESTARTS AGE occlum-app-builder 1/1 Running 0 15s ``` Step 3. Login the occlum-app-builder container ```bash kubectl exec -it occlum-app-builder -c occlum-app-builder -- /bin/bash ``` Step 4. Install docker in the container Install docker following the . Note that the `systemd`is not installed in the container by default, so you can't manage docker service by `systemd`. Step 5. Start the docker service by the following command: ```bash nohup dockerd -b docker0 --storage-driver=vfs & ``` Step"
},
{
"data": "Make sure the docker service started Run command `docker ps`, the output should be like this: ``` $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` If you were to write an SGX Hello World project using some SGX SDK, the project would consist of hundreds of lines of code. And to do that, you have to spend a great deal of time to learn the APIs, the programming model, and the built system of the SGX SDK.<br />Thanks to Occlum, you can be freed from writing any extra SGX-aware code and only need to type some simple commands to protect your application with SGX transparently. Note that the version of Linux SGX software stack must be same with the one . Please run this command to check the version: ```shell /opt/intel/sgxsdk/bin/x64/sgx_sign -version ``` Step 1. Create a working directory in the container ```c mkdir /root/occlumworkspace && cd /root/occlumworkspace/ ``` Step 2. Write the \"Hello World\" code in C language ```c cat << EOF > /root/occlumworkspace/helloworld.c int main() { while(1){ printf(\"Hello World!\\n\"); fflush(stdout); sleep(5); } } EOF ``` Step 3. Compile the user program with the Occlum toolchain (e.g., `occlum-gcc`) ```bash occlum-gcc -o helloworld helloworld.c ``` Step 4. Initialize a directory as the Occlum context via `occlum init` ```bash mkdir occlumcontext && cd occlumcontext occlum init ``` The `occlum init` command creates the compile-time and run-time state of Occlum in the current working directory. The `occlum new` command does basically the same thing but in a new instance diretory. Each Occlum instance directory should be used for a single instance of an application; multiple applications or different instances of a single application should use different Occlum instances. Step 5. Generate a secure Occlum FS image and Occlum SGX enclave via `occlum build` ```bash cp ../hello_world image/bin/ occlum build ``` The content of the `image` directory is initialized by the `occlum init` command. The structure of the `image` directory mimics that of an ordinary UNIX FS, containing directories like `/bin`, `/lib`, `/root`, `/tmp`, etc. After copying the user program `hello_world` into `image/bin/`, the `image` directory is packaged by the `occlum build` command to generate a secure Occlum FS image as well as the Occlum SGX enclave. The FS image is integrity protected by default, if you want to protect the confidentiality and integrity with your own key, please check out . Step 6. Run the user program inside an SGX enclave via `occlum run` ``` occlum run /bin/hello_world ``` The `occlum run` command starts up an Occlum SGX enclave, which, behind the scene, verifies and loads the associated occlum FS image, spawns a new LibOS process to execute `/bin/hello_world`, and eventually prints the message. Step"
},
{
"data": "Write the Dockerfile ```dockerfile cat << EOF >Dockerfile FROM scratch ADD image / ENTRYPOINT [\"/bin/hello_world\"] EOF ``` It is recommended that you use the scratch as the base image. The scratch image is an empty image, it makes the docker image size small enough, which means a much smaller Trusted Computing Base (TCB) and attack surface. `ADD image /`add the occlum image directory into the root directory of the docker image, `ENTRYPOINT [\"/bin/helloworld\"]`set the command `/bin/helloworld`as the container entry point. Step 2. Build and push the \"Hello World\" image to your docker registry Build and push the image to your docker registry. For example, you create a docker repository named occlum-hello-world in namespace inclavarecontainers, then you can push the image to `docker.io/inclavarecontainers/occlum-hello-world:scratch`. ```dockerfile docker build -f \"Dockerfile\" -t \"docker.io/inclavarecontainers/occlum-hello-world:scratch\" . docker push \"docker.io/inclavarecontainers/occlum-hello-world:scratch\" ``` If you want to run the \"Hello World\" Container on off-cloud signing scheme, please modify configuration as following: ```bash sed -i 's/server/client/g' /etc/inclavare-containers/config.toml ``` Step 1. Create the \"Hello World\" Pod Exit from the occlum SDK container, apply the following yaml to create the \"Hello World\" Pod. ```yaml cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: labels: run: helloworld name: helloworld spec: runtimeClassName: rune containers: command: /bin/hello_world env: name: RUNE_CARRIER value: occlum image: docker.io/inclavarecontainers/occlum-hello-world:scratch imagePullPolicy: IfNotPresent name: helloworld workingDir: /var/run/rune EOF ``` Note: The field `runtimeClassName`should be set to `rune` which means the container will be handled by rune, specify the environment `RUNE_CARRIER`to `occlum`telling the `shim-rune` to create and run an occlum application.<br /> <br />You can also configure enclave through these environment variables | Environment Variable Name | Default Value | Other Value | | | | | | OCCLUMRELEASEENCLAVE | 0 (debug enclave) | 1 (product enclave) | | ENCLAVERUNTIMELOGLEVEL | \"info\" | \"trace\", \"debug\", \"warning\", \"error\", \"fatal\", \"panic\", \"off\" | | OCCLUMUSERSPACE_SIZE | 256MB | | | OCCLUMKERNELSPACEHEAPSIZE | 32MB | | | OCCLUMKERNELSPACESTACKSIZE | 1MB | | | OCCLUMMAXNUMOFTHREADS | 32 | | | OCCLUMPROCESSDEFAULTSTACKSIZE | 4MB | | | OCCLUMPROCESSDEFAULTHEAPSIZE | 32MB | | | OCCLUMPROCESSDEFAULTMMAPSIZE | 80MB | | | OCCLUMDEFAULTENV | OCCLUM=yes | | | OCCLUMUNTRUSTEDENV | EXAMPLE | | Step 2. Wait for the pod status to `Ready` ```yaml kubectl get pod helloworld ``` Step 3. Print the container's logs via `kubectl logs` Execute the command `kubectl logs -f helloworld`, a line \"Hello world\" will be printed on the terminal every 5 seconds. The output looks like this: ``` $ kubectl logs -f helloworld Hello World! Hello World! Hello World! ``` Use the following commands to delete the two pods `helloworld`and `occlum-app-builder` ```yaml kubectl delete pod helloworld kubectl delete pod occlum-app-builder ```"
}
] |
{
"category": "Runtime",
"file_name": "develop_and_deploy_hello_world_application_in_kubernetes_cluster.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "How to Submit Patches to the libseccomp-golang Project =============================================================================== https://github.com/seccomp/libseccomp-golang This document is intended to act as a guide to help you contribute to the libseccomp-golang project. It is not perfect, and there will always be exceptions to the rules described here, but by following the instructions below you should have a much easier time getting your work merged with the upstream project. A number of tests and lint related recipes are provided in the Makefile, if you want to run the standard regression tests, you can execute the following: In order to use it, the 'golangci-lint' tool is needed, which can be found at: https://github.com/golangci/golangci-lint Any submissions which add functionality, or significantly change the existing code, should include additional tests to verify the proper operation of the proposed changes. At the top of every patch you should include a description of the problem you are trying to solve, how you solved it, and why you chose the solution you implemented. If you are submitting a bug fix, it is also incredibly helpful if you can describe/include a reproducer for the problem in the description as well as instructions on how to test for the bug and verify that it has been fixed. The sign-off is a simple line at the end of the patch description, which certifies that you wrote it or otherwise have the right to pass it on as an open-source patch. The \"Developer's Certificate of Origin\" pledge is taken from the Linux Kernel and the rules are pretty simple: Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified"
},
{
"data": "(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. ... then you just add a line to the bottom of your patch description, with your real name, saying: Signed-off-by: Random J Developer <[email protected]> You can add this to your commit description in `git` with `git commit -s` The libseccomp project accepts both GitHub pull requests and patches sent via the mailing list. GitHub pull requests are preferred. This sections below explain how to contribute via either method. Please read each step and perform all steps that apply to your chosen contribution method. Depending on how you decided to work with the libseccomp code base and what tools you are using there are different ways to generate your patch(es). However, regardless of what tools you use, you should always generate your patches using the \"unified\" diff/patch format and the patches should always apply to the libseccomp source tree using the following command from the top directory of the libseccomp sources: If you are not using git, stacked git (stgit), or some other tool which can generate patch files for you automatically, you may find the following command helpful in generating patches, where \"libseccomp.orig/\" is the unmodified source code directory and \"libseccomp/\" is the source code directory with your changes: When in doubt please generate your patch and try applying it to an unmodified copy of the libseccomp sources; if it fails for you, it will fail for the rest of us. Finally, you will need to email your patches to the mailing list so they can be reviewed and potentially merged into the main libseccomp repository. When sending patches to the mailing list it is important to send your email in text form, no HTML mail please, and ensure that your email client does not mangle your patches. It should be possible to save your raw email to disk and apply it directly to the libseccomp source code; if that fails then you likely have a problem with your email client. When in doubt try a test first by sending yourself an email with your patch and attempting to apply the emailed patch to the libseccomp repository; if it fails for you, it will fail for the rest of us trying to test your patch and include it in the main libseccomp repository. See if you've never done this before."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The Rook project in under . We accept contributions via GitHub pull requests. This document outlines some of the conventions related to development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted. By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the file for details. Contributors sign-off that they adhere to these requirements by adding a Signed-off-by line to commit messages. For example: ``` This is my commit message Signed-off-by: Random J Developer <[email protected]> ``` Git even has a -s command line option to append this automatically to your commit message: ```console git commit -s -m 'This is my commit message' ``` If you have already made a commit and forgot to include the sign-off, you can amend your last commit to add the sign-off with the following command, which can then be force pushed. ```console git commit --amend -s ``` We use a to enforce the DCO on each pull request and branch commits. Fork the repository on GitHub Read the document for build and test instructions Play with the project, submit bugs, submit patches! This is a rough outline of what a contributor's workflow looks like: Create a branch from where you want to base your work (usually master). Make your changes and arrange them in readable commits. Make sure your commit messages are in the proper format (see below). Push your changes to the branch in your fork of the repository. Make sure all tests pass, and add any new tests as appropriate. Submit a pull request to the original repository. For detailed contribution instructions, refer to the . Rook projects are written in golang and follows the style guidelines dictated by the go fmt as well as go vet tools. Comments should be added to all new methods and structures as is appropriate for the coding language. Additionally, if an existing method or structure is modified sufficiently, comments should be created if they do not yet exist and updated if they do. The goal of comments is to make the code more readable and grokkable by future"
},
{
"data": "Once you have made your code as understandable as possible, add comments to make sure future developers can understand (A) what this piece of code's responsibility is within Rook's architecture and (B) why it was written as it was. The below blog entry explains more the why's and how's of this guideline. https://blog.codinghorror.com/code-tells-you-how-comments-tell-you-why/ For Go, Rook follows standard godoc guidelines. A concise godoc guideline can be found here: https://blog.golang.org/godoc-documenting-go-code We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. ```console ceph: update MON to use rocksdb this enables us to remove leveldb from the codebase. ``` The format can be described more formally as follows: ``` <subsystem>: <what changed> <BLANK LINE> <why this change was made> <BLANK LINE> <footer> ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools. The Rook project aims to empower contributors to approve and merge code changes autonomously. The maintainer team does not have sufficient resources to fully review and approve all proposed code changes, so trusted members of the community are given these abilities according to the process described in this section. The goal of this process is to increase the code velocity of all storage providers and streamline their day to day operations such as pull request approval and merging. The model for approving changes is largely based on the , where a set of roles are defined for different portions of the code base and have different responsibilities: Reviewers* are able to review code for quality and correctness on some part of the project, but cannot merge changes. Maintainers* are able to both review and approve code contributions. While code review is focused on code quality and correctness, approval is focused on holistic acceptance of a contribution. Maintainers can merge changes. (A Rook maintainer is similar in scope to a K8s approver in the link above.) Both of these roles will require a time commitment to the project in order to keep the change approval process moving forward at a reasonable pace. When automation is implemented to auto assign members to review pull requests, it will be done in a round-robin fashion, so all members must be able to dedicate the time"
},
{
"data": "Note that neither of these roles have voting powers in conflict resolution, these roles are for the code base change approval process only. The general flow for a pull request approval process is as follows: Author submits the pull request Reviewers and maintainers for the applicable code areas review the pull request and provide feedback that the author integrates Reviewers and/or maintainers signify their LGTM on the pull request A maintainer approves the pull request based on at least one LGTM from the previous step Note that the maintainer can heavily lean on the reviewer for examining the pull request at a finely grained detailed level. The reviewers are trusted members and maintainers can leverage their efforts to reduce their own review burden. A maintainer merges the pull request into the target branch (master, release, etc.) All roles will be assigned by the usage of files committed to the code base. These assignments will be initially be defined in a single file at the root of the repo and it will describe all assigned roles for the entire code base. As we incorporate automation (i.e. bots) into this change acceptance process in the future, we can reorganize this initial single owners file into separate files amongst the codebase as the automation necessitates. The format of the file can start with simply listing the reviewers and maintainers for areas of the code base using a YAML format: ```yaml areas: feature-foo: maintainers: alice bob reviewers: carol ``` The process for adding or removing reviewers/maintainers is described in the . Role assignees will be made part of the following Rook organization teams with the given permissions: Reviewers: added to a new Reviewers team so they have write permissions to the repo to assign issues, add labels to issues, add issues to milestones and projects, etc. but cannot merge to protected branches such as `master` and `release-`. Maintainers:* added to a Maintainers team that has access to merge to protected branches. This process can be further improved by automation and bots to automatically assign the PR to reviewers/maintainers, add labels to the PR, and merge the PR. We should explore this further with some experimentation and potentially leveraging what Kubernetes has done, but automation isnt strictly required to adopt and implement this model. The built in support in GitHub for files was considered. However, this only supports the automated assignment of reviewers to pull requests. It has no tiering or differentiation between roles like the proposed maintainers/reviewers model has and is therefore not a good fit."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document proposes a solution that allows user to specify a backup order for resources of specific resource type. During backup process, user may need to back up resources of specific type in some specific order to ensure the resources were backup properly because these resources are related and ordering might be required to preserve the consistency for the apps to recover itself from the backup image (Ex: primary-secondary database pods in a cluster). Enable user to specify an order of backup resources belong to specific resource type Use a plugin to backup an resources and all the sub resources. For example use a plugin for StatefulSet and backup pods belong to the StatefulSet in specific order. This plugin solution is not generic and requires plugin for each resource type. User will specify a map of resource type to list resource names (separate by semicolons). Each name will be in the format \"namespaceName/resourceName\" to enable ordering across namespaces. Based on this map, the resources of each resource type will be sorted by the order specified in the list of resources. If a resource instance belong to that specific type but its name is not in the order list, then it will be put behind other resources that are in the list. Add new field to BackupSpec type BackupSpec struct { ... // OrderedResources contains a list of key-value pairs that represent the order // of backup of resources that belong to specific resource type // +optional // +nullable OrderedResources map[string]string } Function getResourceItems collects all items belong to a specific resource type. This function will be enhanced to check with the map to see whether the OrderedResources has specified the order for this resource type. If such order exists, then sort the items by such order being process before return. Add new flag \"--ordered-resources\" to Velero backup create command which takes a string of key-values pairs which represents the map between resource type and the order of the items of such resource type. Key-value pairs are separated by semicolon, items in the value are separated by commas. Example: velero backup create mybackup --ordered-resources \"pod=ns1/pod1,ns1/pod2;persistentvolumeclaim=n2/slavepod,ns2/primarypod\" In the CLI, the design proposes to use commas to separate items of a resource type and semicolon to separate key-value pairs. This follows the convention of using commas to separate items in a list (For example: --include-namespaces ns1,ns2). However, the syntax for map in labels and annotations use commas to separate key-value pairs. So it introduces some inconsistency. For pods that managed by Deployment or DaemonSet, this design may not work because the pods' name is randomly generated and if pods are restarted, they would have different names so the Backup operation may not consider the restarted pods in the sorting algorithm. This problem will be addressed when we enhance the design to use regular expression to specify the OrderResources instead of exact match."
}
] |
{
"category": "Runtime",
"file_name": "backup-resources-order.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "``MinIO`` community welcomes your contribution. To make the process as seamless as possible, we recommend you read this contribution guide. Start by forking the MinIO GitHub repository, make changes in a branch and then send a pull request. We encourage pull requests to discuss code changes. Here are the steps in details: Fork source repository to your own personal repository. Copy the URL of your MinIO fork (you will need it for the `git clone` command below). ```sh git clone https://github.com/minio/minio go install -v ls /go/bin/minio ``` ```sh $ cd minio $ git remote add upstream https://github.com/minio/minio $ git fetch upstream $ git merge upstream/master ... ``` Before making code changes, make sure you create a separate branch for these changes ``` git checkout -b my-new-feature ``` After your code changes, make sure To add test cases for the new code. If you have questions about how to do it, please ask on our channel. To run `make verifiers` To squash your commits into a single commit. `git rebase -i`. It's okay to force update your pull request. To run `make test` and `make build` completes. After verification, commit your changes. This is a on how to write useful commit messages ``` git commit -am 'Add some feature' ``` Push your locally committed changes to the remote origin (your fork) ``` git push origin my-new-feature ``` Pull requests can be created via GitHub. Refer to for detailed steps on how to create a pull request. After a Pull Request gets peer reviewed and approved, it will be merged. ``MinIO`` uses `go mod` to manage its dependencies. Run `go get foo/bar` in the source folder to add the dependency to `go.mod` file. To remove a dependency Edit your code and remove the import reference. Run `go mod tidy` in the source folder to remove dependency from `go.mod` file. ``MinIO`` is fully conformant with Golang style. Refer: article from Golang project. If you observe offending code, please feel free to send a pull request or ping us on ."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "gVisor helps users secure their infrastructure by running containers in a dedicated kernel that is isolated from the host. But wouldn't it be nice if you could tell when someone attempts to break out? Or get an early warning that your web server might have been compromised? Now you can do it with gVisor! We are pleased to announce support for runtime monitoring. Runtime monitoring provides the ability for an external process to observe application behavior and detect threats at runtime. Using this mechanism, gVisor users can watch actions performed by the container and generate alerts when something unexpected occurs. A monitoring process can connect to the gVisor sandbox and receive a stream of actions that the application is performing. The monitoring process decides what actions are allowed and what steps to take based on policies for the given application. gVisor communicates with the monitoring process via a simple protocol based on , which is the basis for and is well supported in several languages. The monitoring process runs isolated from the application inside the sandbox for security reasons, and can be shared among all sandboxes running on the same machine to save resources. Trace points can be individually configured when creating a tracing session to capture only what's needed. Let's go over a simple example of a web server that gets compromised while being monitored. The web server can execute files from `/bin`, read files from `/etc` and `/html` directories, create files under `/tmp`, etc. All these actions are reported to a monitoring process which analyzes them and deems them normal application behavior. Now suppose that an attacker takes control over the web server and starts executing code inside the container. The attacker writes a script under `/tmp` and, in an attempt to make it executable, runs `chmod u+x /tmp/exploit.sh`. The monitoring process determines that making a file executable is not expected in the normal web server execution and raises an alert to the security team for investigation. Additionally, it can also decide to kill the container and stop the attacker from making more progress. is an Open Source Cloud Native Security monitor that detects threats at runtime by observing the behavior of your applications and containers. Falco . All the Falco rules and tooling work seamlessly with gVisor. You can use to learn how to configure Falco and gVisor together. More information can be found on the . We're looking for more projects to take advantage of the runtime monitoring system and the visibility that it provides into the sandbox. There are a few unique capabilities provided by the system that makes it easy to monitor applications inside gVisor, like resolving file descriptors to full paths, providing container ID with traces, separating processes that were exec'ed into the container, internal procfs state access, and many more. If you would like to explore it further, there is a and with more details about the configuration and communication protocol. In addition, the is a great way to see it in action. We would like to thank , , and the Falco team for their support while building this feature."
}
] |
{
"category": "Runtime",
"file_name": "2022-08-31-threat-detection.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "disklayout [-c] [-f file] [-e excluded,...] [-s spares] [-w width] [layout] disklayout generates a JSON description of a ZFS pool configuration suitable for use by the mkzpool(8) utility. The utility may be run in two modes; when the -f option is given, the specified file is taken to be the output of the diskinfo(8) command and used as the source of information about the available disks. Otherwise, the disks currently present on the system will be enumerated and the utility will attempt to generate a pool layout that uses as many of the available disks as possible. The generated layout will not contain any removable disks. The utility does not create any pools, nor does it alter the contents of the disks. The generated JSON is written to standard output. Its format is described in detail in OUTPUT FIELDS below. Unless the -f option is given, diskinfo must be used in the global zone only so that it can enumerate the host's available disks. Devices will be assigned to one of three basic roles: primary storage, dedicated intent log (\"log\"), or second-level ARC (or \"cache\"). Devices that are not known to be solid-state will not be assigned the log or cache roles. The assignment of solid-state devices to the log role will be made before any are assigned the cache role. Broadly, the intent is to define a pool configuration that provides a good balance of performance, durability, and usable capacity given the available device inventory. If there are inadequate devices to do this, disklayout will attempt to define a functional pool with redundancy. If there is a single device, disklayout will define a pool with that single device. If the number of spares has not been explicitly specified with the -s option, then when at least 5 primary storage devices are available, disklayout tries to allocate at least one spare. Additional spares may be allocated as the total number of primary storage devices increases and/or if the number of available primary storage devices does not divide evenly by the number of devices per vdev with enough left over to provide the minimum number of spares. When constructing RAIDZ-type layouts, disklayout will consider a range of stripe widths (i.e., number of leaf devices per RAIDZ-n vdev). The number of leaf devices per vdev will be at least 3 for RAIDZ, at least 5 for RAIDZ-2, and at least 7 for RAIDZ-3. Some versions of this utility may consider only stripes wider than the limits documented"
},
{
"data": "Other than as described here, the heuristics used to select layouts and to optimise allocation of devices are not an interface and are subject to change at any time without notice. -c Prevent disklayout from allocating any disks as cache devices. -e disk,disks... Exclude any disks specified in the given comma-separated list. -f file Use file as the source of information about available disks. The running system will not be interrogated; in this mode, the utility may be used in a zone if desired. -s spares Specify spares as the number of disks to be be allocated as spares. -w width Specify width as the number of disks in the mirror or raidz vdevs. layout Specify the class of pool layout to generate. By default, disklayout selects a pool layout class based on the type, number, and size of available storage devices. If you specify a layout class, it will generate a configuration of that class instead. If it is not possible to do so given the available devices, an error will occur; see ERRORS below. The set of supported layouts includes \"single\", \"mirror\", \"raidz1\", \"raidz2\" and \"raidz3\", and will be listed if you specify an unsupported layout. \"spares\" An array of device specifications that are allocated as hot spares. \"vdevs\" An array of vdev specifications allocated to the active pool. \"capacity\" The number of bytes of usable storage in the pool. This is the amount of user data that the pool can store, taking into account devices reserved for spares and mirrored/parity devices. \"logs\" An array of device specifications that are allocated as dedicated intent log devices. There is no internal structure; all log devices are striped. \"cache\" An array of device specifications that are allocated as dedicated second-level ARC devices. There is no internal structure. A vdev specification contains the following properties: \"type\" The vdev type, as defined by ZFS. See zpool(8). \"devices\" An array of device specifications allocated to the vdev. Each device is specified by the following properties: \"name\" The base name of the device's nodes under /dev/dsk. \"vid\" The vendor identification string of the device. See diskinfo(8). \"pid\" The product identification string of the device. See diskinfo(8). \"size\" The storage capacity in bytes of the device. If the requested layout class cannot be satisfied by the available devices, or if the set of available devices does not include any usable primary storage devices, an error will occur. The resulting JSON output will contain the original device roster (in JSON format) and a text description of the error. This message is not localised. diskinfo(8), mkzpool(8), sd(7D), zpool(8)"
}
] |
{
"category": "Runtime",
"file_name": "disklayout.8.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The target of this document is to outline corner cases and common pitfalls in conjunction with the Container Runtime Interface (CRI). This document outlines CRI-O's interpretation of certain aspects of the interface, which may not be completely formalized. The main documentation of the CRI can be found [in the corresponding protobuf definition][0], whereas this document follows it on the `service`/`rpc` level. `ListImages` lists existing images. Its response consists of an array of `Image` types. Besides other information, an `Image` contains `repo_tags` and `repo_digests`, which are defined as: ```proto // Other names by which this image is known. repeated string repo_tags = 2; // Digests by which this image is known. repeated string repo_digests = 3; ``` Both tags and digests will be used by: The kubelet, which displays them in the node status as a flat list, for example: ```shell kubectl get node 127.0.0.1 -o json | jq .status.images ``` ```json [ { \"names\": [ \"registry.k8s.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108\", \"registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\", \"registry.k8s.io/pause:3.2\" ], \"sizeBytes\": 688049 } ] ``` Right now, the amount of images shown is limited by the kubelet flag `--node-status-max-images` (). The scheduler uses this list to score nodes based on the information if a container image already exists. crictl, which is able to output the image list in a human readable way: ```shell sudo crictl images --digests IMAGE TAG DIGEST IMAGE ID SIZE registry.k8s.io/pause 3.2 4a1c4b21597c1 80d28bedfe5de 688kB ``` CRI-O implements the function to follow the following self-defined rules: always return at least one `repo_digests` value return zero or more `repo_tags` values There are multiple use-cases where this behavior is relevant. Those will be covered separately by using real world examples. This is the standard behavior and already shown in the `pause` image example above. `crictl` is able to display all information, like the image name, tag and digest. There are multiple digests available for this image, which gets a correct representation within the `kubelet`'s node status. Let's assume we pulled the image `quay.io/saschagrunert/hello-world` and afterwards its `latest` tag got updated. Now we pull the image again, which results in untagging the local image in favor of the new remote one. CRI-O would now have no available `RepoTags` nor `RepoDigests` within the `storage.ImageResult`. In this case, CRI-O uses an assembled `repoDigests` value from the `PreviousName` and the image digest: ```go repoDigests = []string{from.PreviousName + \"@\" + string(from.Digest)} ``` This allows tools like `crictl` to output the image name by adding a `<none>` placeholder for the tag: ```shell sudo crictl images --digests ``` ```text IMAGE TAG DIGEST IMAGE ID SIZE quay.io/saschagrunert/hello-world <none> 2403474085c1e 14c28051b743c 5.88MB quay.io/saschagrunert/hello-world latest ca810c5740f66 d1165f2212346 17.7kB ``` The `kubelet` is still able to list the image by its digest, which could be referenced by a Kubernetes container: ```shell kubectl get node 127.0.0.1 -o json | jq .status.images ``` ```json { \"names\": [ \"quay.io/saschagrunert/hello-world@sha256:2403474085c1e68c0aa171eb1b2b824a841a4aa636a4f2500c8d2e2f6d3cb422\" ], \"sizeBytes\": 5884835 } ``` We assume that we consecutively build a container image locally like this: ```shell sudo podman build --no-cache -t test . 
``` The previous image tag gets removed by Podman and applied to the current build. In that case CRI-O will use the `PreviousName` in the same way as described in the use-case above. If we pull a container image by its digest like this: ```shell sudo crictl pull docker.io/alpine@sha256:2a8831c57b2e2cb2cda0f3a7c260d3b6c51ad04daea0b3bfc5b55f489ebafd71 ``` Then CRI-O will not be able to provide a `RepoTags` result, but a single entry in `RepoDigests`. The output for tools like `crictl` will be the same as described in the examples above. In the same way the node status receives the single digest entry: ```json { \"names\": [ \"docker.io/library/alpine@sha256:2a8831c57b2e2cb2cda0f3a7c260d3b6c51ad04daea0b3bfc5b55f489ebafd71\" ], \"sizeBytes\": 5850080 } ```"
}
] |
{
"category": "Runtime",
"file_name": "cri.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This document aims to provide the guidelines for ORAS contributors to improve existing error messages and error handling method as well as the new error output format. It will also provide recommendations and examples for ORAS CLI contributors for how to write friendly and standard error messages, avoid generating inconsistent and ambiguous error messages. A clear and actionable error message is very important when raising an error, so make sure your error message describes clearly what the error is and tells users what they need to do if possible. First and foremost, make the error messages descriptive and informative. Error messages are expected to be helpful to troubleshoot where the user has done something wrong and the program is guiding them in the right direction. A great error message is recommended to contain the following elements: HTTP status code: optional, when the logs are generated from the server side, it's recommended to print the HTTP status code in the error description Error description: describe what the error is Suggestion: for those errors that have potential solution, print out the recommended solution. Versioned troubleshooting document link is nice to have Second, when necessary, it is highly suggested for ORAS CLI contributors to provide recommendations for users how to resolve the problems based on the error messages they encountered. Showing descriptive words and straightforward prompt with executable commands as a potential solution is a good practice for error messages. Third, for unhandled errors you didn't expect the user to run into. For that, have a way to view full traceback information as well as full debug or verbose logs output, and instructions on how to submit a bug. Fourth, signal-to-noise ratio is crucial. The more irrelevant output you produce, the longer it's going to take the user to figure out what they did wrong. If your program produces multiple errors of the same type, consider grouping them under a single explanatory header instead of printing many similar-looking lines. Fifth, CLI program termination should follow the standard to report execution status information about success or failure. ORAS returns `EXIT_FAILURE` if and only if ORAS reports one or more errors. Last, error logs can also be useful for post-mortem debugging and can also be written to a file, truncate them occasionally and make sure they don't contain ansi color codes. Provide full description if the user input does not match what ORAS CLI expected. A full description should include the actual input received from the user and expected input Use the capital letter ahead of each line of any error message Print human readable error message. If the error message is mainly from the server and varies by different servers, tell users that the error response is from server. This implies that users may need to contact server side for troubleshooting Provide specific and actionable prompt message with argument suggestion or show the example usage for reference."
},
{
"data": "Instead of showing flag or argument options is missing, please provide available argument options and guide users to \"--help\" to view more examples) If the actionable prompt message is too long to show in the CLI output, consider guide users to ORAS user manual or troubleshooting guide with the versioned permanent link If the error message is not enough for troubleshooting, guide users to use \"--verbose\" to print much more detailed logs If server returns an error without any , such as the example 13 below, consider providing customized and trimmed error logs to make it clearer. The original server logs can be displayed in debug mode Do not use a formula-like or a programming expression in the error message. (e.g, `json: cannot unmarshal string into Go value of type map[string]map[string]string.`, or `Parameter 'xyz' must conform to the following pattern: '^[-\\\\w\\\\._\\\\(\\\\)]+$'`) Do not use ambiguous expressions which mean nothing to users. (e.g, `Something unexpected happens`, or `Error: accepts 2 arg(s), received 0`) Do not print irrelevant error message to make the output noisy. The more irrelevant output you produce, the longer it's going to take the user to figure out what they did wrong. Here is a sample structure of an error message: ```text {Error|Error response from registry}: {Error description (HTTP status code can be printed out if any)} [Usage: {Command usage}] [{Recommended solution}] ``` HTTP status code is an optional information. Printed out the HTTP status code if the error message is generated from the server side. Command usage is also an optional information but it's recommended to be printed out when user input doesn't follow the standard usage or examples. Recommended solution (if any) should follow the general guiding principles described above. Here are some examples of writing error message with helpful prompt actionable information: Current behavior and output: ```console $ oras cp Error: accepts 2 arg(s), received 0 ``` Suggested error message: ```console $ oras cp Error: \"oras copy\" requires exactly 2 arguments but received 0. Usage: oras copy ] Please specify 2 arguments as source and destination respectively. Run \"oras copy -h\" for more options and examples ``` Current behavior and output: ```console $ oras tag list ghcr.io/oras-project/oras Error: unable to add tag for 'list': invalid reference: missing repository ``` Suggested error message: ```console $ oras tag list ghcr.io/oras-project/oras Error: There is no \"list\" sub-command for \"oras tag\" command. Usage: oras tag [flags] <name>{:<tag>|@<digest>} <new_tag> [...] If you want to list available tags in a repository, use \"oras repo tags\" ``` Current behavior and output: ```console $ oras manifest fetch --oci-layout /tmp/ginkgo1163328512 Error: \"/tmp/ginkgo1163328512\": no tag or digest when expecting <name:tag|name@digest> ``` Suggested error message: ```console $ oras manifest fetch --oci-layout /tmp/ginkgo1163328512 Error: \"/tmp/ginkgo1163328512\": no tag or digest specified Usage: oras manifest fetch [flags] <name>{:<tag>|@<digest>} You need to specify an artifact reference in the form of \"<name>:<tag>\" or \"<name>@<digest>\". Run \"oras manifest fetch -h\" for more options and examples ``` Current behavior and output: ```console $ oras manifest push --oci-layout /sample/images:foobar:mediatype manifest.json Error: media type is not recognized. 
``` Suggested error message: ```console $ oras manifest push --oci-layout /sample/images:foobar:mediatype manifest.json Error: media type is not specified via the flag \"--media-type\" nor in the manifest.json Usage: oras manifest push |@<digest>] <file> You need to specify a valid media type in the manifest JSON or via the \"--media-type\" flag ``` Current behavior and output: ```console $ oras attach --artifact-type oras/test localhost:5000/command/images:foobar --distribution-spec"
},
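As a rough illustration of the recommended structure (error description, optional usage line, optional recommendation), the following Go sketch shows one way a CLI could assemble such a message. It is not ORAS code, and the usage string in the example is illustrative rather than authoritative.
```go
// Sketch only: assemble a CLI error that follows the structure suggested in
// this guideline — description first, then an optional usage line, then an
// optional actionable recommendation.
package main

import (
	"fmt"
	"strings"
)

func formatCLIError(desc, usage, recommendation string) error {
	parts := []string{fmt.Sprintf("Error: %s", desc)}
	if usage != "" {
		parts = append(parts, fmt.Sprintf("Usage: %s", usage))
	}
	if recommendation != "" {
		parts = append(parts, recommendation)
	}
	return fmt.Errorf("%s", strings.Join(parts, "\n"))
}

func main() {
	err := formatCLIError(
		`"oras copy" requires exactly 2 arguments but received 0`,
		"oras copy [flags] <from>{:<tag>|@<digest>} <to>[:<tag>]", // illustrative usage string
		`Run "oras copy -h" for more options and examples`,
	)
	fmt.Println(err)
}
```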
{
"data": "Error: unknown distribution specification flag: v1.0 ``` Suggested error message: ```console $ oras attach --artifact-type oras/test localhost:5000/sample/images:foobar --distribution-spec ??? Error: unknown distribution specification flag: \"v1.0\". Available options: v1.1-referrers-api, v1.1-referrers-tag ``` Current behavior and output: ```console $ oras attach --artifact-type sbom/example localhost:5000/sample/images:foobar Error: no blob is provided ``` Suggested error message: ```console $ oras attach --artifact-type sbom/example localhost:5000/sample/images:foobar Error: neither file nor annotation provided in the command Usage: oras attach [flags] --artifact-type=<type> <name>{:<tag>|@<digest>} <file>[:<type>] [...] To attach to an existing artifact, please provide files via argument or annotations via flag \"--annotation\". Run \"oras attach -h\" for more options and examples ``` Current behavior and output: ```console $ oras push --annotation-file sbom.json ghcr.io/library/alpine:3.9 Error: failed to load annotations from sbom.json: json: cannot unmarshal string into Go value of type map[string]map[string]string. Please refer to the document at https://oras.land/docs/howtoguides/manifest_annotations ``` Suggested error message: ```console $ oras push --annotation-file annotation.json ghcr.io/library/alpine:3.9 Error: invalid annotation json file: failed to load annotations from annotation.json. Annotation file doesn't match the required format. Please refer to the document at https://oras.land/docs/howtoguides/manifest_annotations ``` Current behavior and output: ```console $ oras push --annotation \"key:value\" ghcr.io/library/alpine:3.9 Error: missing key in `--annotation` flag: key:value ``` Suggested error message: ```console $ oras push --annotation \"key:value\" ghcr.io/library/alpine:3.9 Error: annotation value doesn't match the required format. Please use the correct format in the flag: --annotation \"key=value\" ``` ```console $ oras pull docker.io/nginx:latest Error: failed to resolve latest: GET \"https://registry-1.docker.io/v2/nginx/manifests/latest\": response status code 401: unauthorized: authentication required: [map[Action:pull Class: Name:nginx Type:repository]] ``` Suggested error message: ```console $ oras pull docker.io/nginx:latest Error response from registry: pull access denied for docker.io/nginx:latest : unauthorized: requested access to the resource is denied Namespace is missing, do you mean `oras pull docker.io/library/nginx:latest`? ``` Current behavior and output: ```console $ oras push /oras --format json Error: Head \"https:///v2/oras/manifests/sha256:ffa50b27cd0096150c0338779c5090db41ba50d01179d851d68afa50b393c3a3\": http: no Host in request URL ``` Suggested error message: ```console $ oras push /oras --format json Error: \"/oras\" is an invalid reference Usage: oras push ] <file>[:<type>] [...] 
Please specify a valid reference in the form of <registry>/<repo>[:tag|@digest] ``` Current behavior and output: ```console $ oras push localhost:5000/oras:v1 hello.txt Error: failed to stat /home/user/hello.txt: stat /home/user/hello.txt: no such file or directory ``` Suggested error message: ```console $ oras push localhost:5000/oras:v1 hello.txt Error: /home/user/hello.txt: no such file or directory ``` Current behavior and output: ```console $ oras pull localhost:7000/repo:tag --registry-config auth.config Error: failed to resolve tag: GET \"http://localhost:7000/v2/repo/manifests/tag\": credential required for basic auth ``` Suggested error message: ```console $ oras pull localhost:7000/repo:tag --registry-config auth.config Error: failed to authenticate when attempting to pull: no valid credential found in auth.config Please check whether the registry credential stored in the authentication file is correct ``` Current behavior and output: ```console oras resolve localhost:7000/command/artifacts:foobar -u t -p 2 WARNING! Using --password via the CLI is insecure. Use --password-stdin. Error response from registry: <nil> ``` Suggested error message: ```console oras resolve localhost:7000/command/artifacts:foobar -u t -p 2 WARNING! Using --password via the CLI is insecure. Use --password-stdin. Error response from registry: recognizable error message not found: failed to resolve digest: HEAD \"http://localhost:7000/v2/test/manifests/bar\": response status code 401: Unauthorized Authentication failed. Please verify your login credentials and try again. ``` Parts of the content are borrowed from these guidelines."
}
] |
{
"category": "Runtime",
"file_name": "error-handling-guideline.md",
"project_name": "ORAS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Backup Storage Locations and Volume Snapshot Locations\" layout: docs Velero has two custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`, that are used to configure where Velero backups and their associated persistent volume snapshots are stored. A `BackupStorageLocation` is defined as a bucket or a prefix within a bucket under which all Velero data is stored and a set of additional provider-specific fields (AWS region, Azure storage account, etc.). Velero assumes it has control over the location you provide so you should use a dedicated bucket or prefix. If you provide a prefix, then the rest of the bucket is safe to use for multiple purposes. The captures the configurable parameters for each in-tree provider. A `VolumeSnapshotLocation` is defined entirely by provider-specific fields (AWS region, Azure resource group, Portworx snapshot type, etc.) The captures the configurable parameters for each in-tree provider. The user can pre-configure one or more possible `BackupStorageLocations` and one or more `VolumeSnapshotLocations`, and can select at backup creation time the location in which the backup and associated snapshots should be stored. This configuration design enables a number of different use cases, including: Take snapshots of more than one kind of persistent volume in a single Velero backup. For example, in a cluster with both EBS volumes and Portworx volumes Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region, or to a different storage provider For volume providers that support it, like Portworx, you can have some snapshots stored locally on the cluster and have others stored in the cloud Velero supports multiple credentials for `BackupStorageLocations`, allowing you to specify the credentials to use with any `BackupStorageLocation`. However, use of this feature requires support within the plugin for the object storage provider you wish to use. All support this feature. If you are using a plugin from another provider, please check their documentation to determine if this feature is supported. Velero only supports a single set of credentials for `VolumeSnapshotLocations`. Velero will always use the credentials provided at install time (stored in the `cloud-credentials` secret) for volume snapshots. Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail. Each Velero backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important. Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume, like EBS and Portworx, but you only have a `VolumeSnapshotLocation` configured for EBS, then Velero will only snapshot the EBS volumes. 
Restic data is stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation."
},
{
"data": "Velero's backups are split into 2 pieces - the metadata stored in object storage, and snapshots/backups of the persistent volume data. Right now, Velero itself does not encrypt either of them, instead it relies on the native mechanisms in the object and snapshot systems. A special case is restic, which backs up the persistent volume data at the filesystem level and send it to Velero's object storage. Velero's compression for object metadata is limited, using Golang's tar implementation. In most instances, Kubernetes objects are limited to 1.5MB in size, but many don't approach that, meaning that compression may not be necessary. Note that restic has not yet implemented compression, but does have de-deduplication capabilities. If you have `VolumeSnapshotLocations` configured for a provider, you must always specify a valid `VolumeSnapshotLocation` when creating a backup, even if you are using for volume backups. You can optionally decide to set the flag using the `velero server`, which lists the default `VolumeSnapshotLocation` Velero should use if a `VolumeSnapshotLocation` is not specified when creating a backup. If you only have one `VolumeSnapshotLocation` for a provider, Velero will automatically use that location as the default. Let's look at some examples of how you can use this configuration mechanism to address some common use cases: During server configuration: ```shell velero snapshot-location create ebs-us-east-1 \\ --provider aws \\ --config region=us-east-1 velero snapshot-location create portworx-cloud \\ --provider portworx \\ --config type=cloud ``` During backup creation: ```shell velero backup create full-cluster-backup \\ --volume-snapshot-locations ebs-us-east-1,portworx-cloud ``` Alternately, since in this example there's only one possible volume snapshot location configured for each of our two providers (`ebs-us-east-1` for `aws`, and `portworx-cloud` for `portworx`), Velero doesn't require them to be explicitly specified when creating the backup: ```shell velero backup create full-cluster-backup ``` In this example, two `BackupStorageLocations` will be created within the same account but in different regions. They will both use the credentials provided at install time and stored in the `cloud-credentials` secret. If you need to configure unique credentials for each `BackupStorageLocation`, please refer to the . During server configuration: ```shell velero backup-location create backups-primary \\ --provider aws \\ --bucket velero-backups \\ --config region=us-east-1 \\ --default velero backup-location create backups-secondary \\ --provider aws \\ --bucket velero-backups \\ --config region=us-west-1 ``` A \"default\" backup storage location (BSL) is where backups get saved to when no BSL is specified at backup creation time. You can change the default backup storage location at any time by setting the `--default` flag using the `velero backup-location set` command and configure a different location to be the default. 
Examples: ```shell velero backup-location set backups-secondary --default ``` During backup creation: ```shell velero backup create full-cluster-backup ``` Or: ```shell velero backup create full-cluster-alternate-location-backup \\ --storage-location backups-secondary ``` During server configuration: ```shell velero snapshot-location create portworx-local \\ --provider portworx \\ --config type=local velero snapshot-location create portworx-cloud \\ --provider portworx \\ --config type=cloud ``` During backup creation: ```shell velero backup create local-snapshot-backup \\ --volume-snapshot-locations portworx-local ``` Or: ```shell velero backup create cloud-snapshot-backup \\ --volume-snapshot-locations portworx-cloud ``` If you don't have a use case for more than one location, it's still easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region: During server configuration: ```shell velero backup-location create backups-primary \\ --provider aws \\ --bucket velero-backups \\ --config region=us-west-1 \\ --default velero snapshot-location create ebs-us-west-1 \\ --provider aws \\ --config region=us-west-1 ``` During backup creation: ```shell velero backup create full-cluster-backup ``` It is possible to create additional `BackupStorageLocations` that use their own"
},
{
"data": "This enables you to save backups to another storage provider or to another account with the storage provider you are already using. If you create additional `BackupStorageLocations` without specifying the credentials to use, Velero will use the credentials provided at install time and stored in the `cloud-credentials` secret. Please see the for details on how to create multiple `BackupStorageLocations` that use the same credentials. This feature requires support from the you wish to use. All plugins maintained by the Velero team support this feature. If you are using a plugin from another provider, please check their documentation to determine if this is supported. The you wish to use must be . You must create a file with the object storage credentials. Follow the instructions provided by your object storage provider plugin to create this file. Once you have installed the necessary plugin and created the credentials file, create a in the Velero namespace that contains these credentials: ```shell kubectl create secret generic -n velero credentials --from-file=bsl=</path/to/credentialsfile> ``` This will create a secret named `credentials` with a single key (`bsl`) which contains the contents of your credentials file. Next, create a `BackupStorageLocation` that uses this Secret by passing the Secret name and key in the `--credential` flag. When interacting with this `BackupStroageLocation` in the future, Velero will fetch the data from the key within the Secret you provide. For example, a new `BackupStorageLocation` with a Secret would be configured as follows: ```bash velero backup-location create <bsl-name> \\ --provider <provider> \\ --bucket <bucket> \\ --config region=<region> \\ --credential=<secret-name>=<key-within-secret> ``` The `BackupStorageLocation` is ready to use when it has the phase `Available`. You can check the status with the following command: ```bash velero backup-location get ``` To use this new `BackupStorageLocation` when performing a backup, use the flag `--storage-location <bsl-name>` when running `velero backup create`. You may also set this new `BackupStorageLocation` as the default with the command `velero backup-location set --default <bsl-name>`. By default, `BackupStorageLocations` will use the credentials provided at install time and stored in the `cloud-credentials` secret in the Velero namespace. You can modify these existing credentials by , however, these changes will apply to all locations using this secret. This may be the desired outcome, for example, in the case where you wish to rotate the credentials used for a particular account. You can also opt to modify an existing `BackupStorageLocation` such that it uses its own credentials by using the `backup-location set` command. If you have a credentials file that you wish to use for a `BackupStorageLocation`, follow the instructions above to create the Secret with that file in the Velero namespace. Once you have created the Secret, or have an existing Secret which contains the credentials you wish to use for your `BackupStorageLocation`, set the credential to use as follows: ```bash velero backup-location set <bsl-name> \\ --credential=<secret-name>=<key-within-secret> ``` If you're using Azure's AKS, you may want to store your volume snapshots outside of the \"infrastructure\" resource group that is automatically created when you create your AKS cluster. This is possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. 
See the for details. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups/subscriptions. This is possible using a `BackupStorageLocation`, by specifying a `storageAccount`, `resourceGroup` and/or `subscriptionId`, respectively, under the `config` section of the backup location. See the for details."
}
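For reference, the resulting `BackupStorageLocation` object looks roughly like the sketch below once a credential Secret is attached. Field values are placeholders, and the exact schema should be verified against the Velero version in use.
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: backups-secondary
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-backups
  config:
    region: us-west-1
  credential:
    name: credentials   # the Secret created above
    key: bsl            # the key within that Secret
```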
] |
{
"category": "Runtime",
"file_name": "locations.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List tunnel endpoint entries ``` cilium-dbg bpf tunnel list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Tunnel endpoint map"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_tunnel_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The MIT License (MIT) Copyright (c) 2014 Brian Goff Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "oep-number: draft Upgrade 20190710 title: Upgrade via Kubernetes Job authors: \"@kmova\" owners: \"@amitkumardas\" \"@vishnuitta\" editor: \"@kmova\" creation-date: 2019-07-10 last-updated: 2019-07-31 status: implementable see-also: NA replaces: current upgrade steps with scripts superseded-by: NA - - - - - - - - - - This design is aimed at providing a design for upgrading the data plane components via `kubectl` and also allow administrators to automate the upgrade process with higher level operators. This proposed design will be rolled out in phases while maintaining backward compatibility with the upgrade process defined in prior releases starting with OpenEBS 1.1. At a high level the design is implemented in the following phases: Phase 1: Ability to perform storage pool and volume upgrades using a Kubernetes Job Phase 2: Allow for saving the history of upgrades on a given pool or volume on a Kubernetes custom resource called `UpgradeTask`. Manage the cleanup of Upgrade Jobs and UpgradeTask CRs along with the resource on which they operate. Phase 3: An upgrade operator that: Automatically trigger the upgrade of pools and volumes when the control plane is upgraded. The upgrade operator will create a UpgradeTask and pass that to a Upgrade Job. Ability to set which pools or volumes should be automatically upgraded or not. OpenEBS is Kubernetes native implementation comprising of a set of Deployments and Custom resources - that can be divided into two broad categories: control plane and data plane. The control plane components install custom resources and help with provisioning and managing volumes that are backed by different storage engines (aka data plane). The control plane components are what an administrator installs into the cluster and creates storage configuration (or uses default configuration) that includes storage classes. The developers only need to know about the storage classes at their disposal. OpenEBS can be installed using several ways like any other Kubernetes application like using a `kubectl apply` or `helm` or via catalogs maintained by Managed Kubernetes services like Rancher, AWS market place, OpenShift Operator Hub and so forth. Hence upgrading of OpenEBS involves: Upgrading the OpenEBS Control plane using one of many options in which Kubernetes apps are installed. Upgrading of the Data Plane components or deployments and custom resources that power the storage pools and volumes. Up until 1.0, the process of upgrading involved a multi-step process of upgrading the control plane followed by: upgrading the storage pools upgrading the persistent volumes one at a time. The upgrade itself involved downloading scripts out of band, customizing them and executing them. While this may work for smaller clusters and when upgrades are infrequent, the dependency on a manual intervention for upgrades becomes a bottleneck on Ops team when running thousands of clusters across different geographic locations. In addition to the scalability limitation, in some environments, additional steps may be required like running the upgrades from an external shell - that has access to the cluster and has live connection to the cluster in the process of"
},
{
"data": "Cluster administrators would like to automate the process of upgrade, with push-button access for upgrades and/or flexibility on which pools and volumes are upgraded. Ease the process of upgrading OpenEBS control plane components. Ease the process of upgrading OpenEBS data plane components. Automate the process of upgrading OpenEBS data plane components. Automating the OpenEBS control plane upgrades As an cluster administrator - I want to automate the upgrades of OpenEBS on thousands of Edge Clusters that are running in my organization. As an OpenEBS user - I want the upgrade steps to be standardized, so that I don't need to learn how to upgrade, every time there is a new release. As an developer of managed Kubernetes platform - I want to provide an option for my end-user (cluster administrator) an user-interface to easily select the volumes and pools and schedule an upgrade. Upgrading of the OpenEBS storage components should be possible via `kubectl`. Upgrade of a resource should be a single command and should be idempotent. Administrators should be able to use the GitOps to manage the OpenEBS configuration. This design proposes the following key changes: Control Plane components can handle the older versions of data plane components. This design enforces that the control plane component that manage a resource has to take care of either: Support the reading of resource with old schema. This is the case where the schema (data) is user generated. Auto upgrading of the schema to new format. This is the case where resource or the attributes are created and owned by the control plane component. For example, a cstor-operator reads a user generated resource called SPC, as well as creates a resource called CSP. It is possible that both of these user-facing and internal resources go through a change to support some new functionality. With this design, when the cstor operator is upgraded to latest version, the following will happen: The non-user facing changes in SPC are handled by the cstor operator, which could be like adding a finalizer or some annotation that will help with internal bookkeeping. It will also continue to read and operate on the existing schema. If there has been a user api change, the steps will be provided for the administrator to make the appropriate changes to their SPC (which is probably saved in their private Git) and re-apply the YAML. The changes to the CSP will be managed completely by the upgraded cstor operator. User/Administrator need not interact with CSP. Note: This eliminates the need for pre-upgrade scripts that were used till 1.0. Status: Available in 1.1 Data plane components have an interdependence on each other and upgrading a volume typically involves upgrading multiple related components at once. For example upgrading a jiva volume involves upgrading the target and replica deployments. The functionality to upgrade these various components are provided and delivered in a container hosted at"
},
{
"data": "A Kubernetes Job with this upgrade container can be launched to carry out the upgrade process. The upgrade job will have to run in the same namespace as openebs control plane and with same service account. Upgrade of each resource will be launched as a separate Job, that will allow for parallel upgrading of resources as well as granular control on which resource is being upgraded. Upgrades are almost seamless, but still have to be planned to be run at different intervals for different applications to minimize the impact of unforeseen events. As the upgrades are done via Kubernetes job, a pod is scheduled to perform the task within the cluster, eliminating any network dependencies that might there between machine that triggers the upgrade to cluster. Also the logs of the upgrade are saved on the pod. UpgradeJob is idempotent, in the sense it will start execution from the point where it left off and finish the job. If the UpgradeJob is executed on an already upgraded resource, it will return success. Note: This replaces the script based upgrades from OpenEBS 1.0. Status: Available in 1.1 and supports upgrading of Jiva Volumes, cStor Volumes and cStor pools from 1.0 to 1.1 A custom resource called `UpgradeTask` will be defined with the details of the object being upgraded, and will in turn be updated with the status of the upgrade by the upgrade-container. This will allow for tools to be written to capture the status of the upgrades and take any correction actions. One of the consumer for this UpgradeTask is openebs-operator itself that will automate the upgrade of all resources. The `UpgradeTask` resource will be created by the Upgrade Job. The UpgradeJob will also be enhanced to receive `UpgradeTask` as input will all the details of the resources included. In this case, the Upgrade Job will append the results of the operation to the provided UpgradeTask. This is also implemented to allow for higher level operators to eliminate steps like determining what is the name of the `UpgradeTask` created by the UpgradeJob. Status: Under Development, planned for 1.2 Improvements to the backward compatibility checks added to the OpenEBS Control Plane in (1). The backward compatibility checks will involve checking for multiple resources and this process is triggered on a restart. This process will be optimized to use a flag to check if the backward compatibility checks are indeed required. On the resource being managed, the following internal spec will be maintained that indicates if the backward compatibility checks need to be maintained. ```yaml versionDetails: currentVersion: desiredVersion: dependentsUpgraded: status: phase: #STARTED, SUCCESS or ERROR can be the supported phases message: Unable to set desired replication factor to CV reason: invalid config for the volume lastUpdateTime: \"2019-10-03T14:37:03Z\" ``` The above spec will be used as a sub-resource under all the custom resource managed by OpenEBS. Status: Under Development, planned for 1.3 Downgrading a version. There are scenarios where the volumes will have to be downgraded to earlier"
},
{
"data": "Some of the challenges around this are after upgrading a resource with a breaking change, falling back to older version might make the resource un-readable. To avoid this, the earlier version of the resource will be saved under a versioned name. When downgrading from higher (currentVersion) to lower (desiredVersion), the backup copy of the resource will be applied. Note: This section will have to be revisited for detailed design after scoping this item into a release. Status: Under Development, planned for TBD Automated upgrades of data plane components. OpenEBS operator will check for all the volumes and resources that need to be upgraded and will launch a Kubernetes Job for any resource that has the autoUpgrade set to true. In case there is a failure to upgrade, the volume will be downgraded back to its current version. Administrators can make use of this auto-upgrade facility as a choice. The auto-upgrade true can be set on either SPC, StorageClass or the PVC and the flag will be trickled down to the corresponding resources during provisioning. Note: This section will have to be revisited for detailed design after scoping this item into a release. Status: Under Development, planned for TBD This design overrides the earlier upgrade methodology. This section captures some of the alternative design considered and the reasoning behind selecting a certain approach. A generic UpgradeTask CR vs task specific ones like JivaUpgradeTask, CStorVolumeUpgradeTask, etc. Having specific tasks has the following advantages: Each tasks can have its own spec. the fields may vary depending on the resource being upgraded. Writing specific operators for each upgrade that operates on a given type of task will make the upgrades more modular. However, this also means that every time a new resource type is added, another CR needs to be introduced, managed and learned by the user. This may still be OK. But a similar pattern where different specs are required is already addressed by the PVC. To keep the resources management at a minimum, the PVC type, sub-resource spec pattern will be used to specify different resources under a generic UpgradeTask CR. Note that, selecting the Generic Task - doesn't preclude from writing specific upgrade task operators and upgrades task CRs as long as the management of these specific upgrade tasks are completely management by the operators and user need not interact with them. One possible future implementation could be: The upgrade-operator can look at the upgrade task (consider it as a claim) and create a specific upgrade task and bind it. Then the specific upgrade operator will operate on the their own resources. Should the UpgradeTask have a provision to perform upgrades on multiple resources of the same kind. For example, can a list of jiva volumes be specified in a single UpgradeTask. Adding multiple jobs will result in adding additional status objects, which in turn will make the co-relation of the object to its result harder to get. UpgradeTask CR provides a basic building block for constructing higher order"
},
{
"data": "In the future, a BulkUpgrade CR can be launched that can take multiple objects - probably even based a namespace group or pv labels etc. The controller watching the BulkUpgrade can then launch individual UpgradeTasks. How does this design compared to the `kubectl` based upgrade The current design proposed in this document builds on top of the 0.8.2 to 0.9 design, by improving on usability and agility of the upgrade code development. The implementation made use of the CAS Template feature and comprised of the following: Each type of upgrade - like jiva volume upgrade, cstor pool upgrade, were converted from shell to an CAS Template, with a series of upgrade tasks (aka runtasks) to be executed. A custom resource called upgraderesults was defined to capture the results while executing the run tasks. An upgrade agent, that can read the upgrade CAS Template and execute the run tasks on a given volume or pool. Administrator will have to create a Kubernetes Job out of the upgrade agent - by passing the upgrade cas template and the object details via a config map. While the above approach was good enough to be automated via `kubectl`, the steps that can be executed as part of upgrade were limited by the features or constructs available within the CAST/RunTasks. Another short-coming of the CAST/RunTasks is the lack of developer friendly constructs for rapidly writing new CAST/RunTasks. This was severely impacting the time taken to push out new releases. Here is an example of Kubernetes Job spec for upgrading the jiva volume. ``` apiVersion: batch/v1 kind: Job metadata: name: jiva-vol-100110-pvc-713e3bb6-afd2-11e9-8e79-42010a800065 namespace: openebs spec: backoffLimit: 4 template: spec: serviceAccountName: openebs-maya-operator containers: name: upgrade args: \"jiva-volume\" \"--from-version=1.0.0\" \"--to-version=1.1.0\" \"--pv-name=pvc-713e3bb6-afd2-11e9-8e79-42010a800065\" \"--v=4\" env: name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace tty: true image: quay.io/openebs/m-upgrade:1.1.0 restartPolicy: OnFailure ``` Execute the Upgrade Job Spec ``` $ kubectl apply -f jiva-vol-100110-pvc713.yaml ``` You can check the status of the Job using commands like: ``` $ kubectl get job -n openebs $ kubectl get pods -n openebs #to check on the name for the job pod $ kubectl logs -n openebs jiva-upg-100111-pvc-713e3bb6-afd2-11e9-8e79-42010a800065-bgrhx ``` Here is an example of UpgradeTask for upgrading Jiva Volume. 
The conditions are added by the Upgrade Job ``` apiVersion: openebs.io/v1alpha1 kind: UpgradeTask metadata: name: upgrade-jiva-pv-001 namespace: openebs spec: fromVersion: 0.9.0 toVersion: 1.0.0 flags: timeout: 0 #wait forever resourceType: jivaVolume: pvName: pvc-3d290e5f-7ada-11e9-b8a5-54e1ad4a9dd4 flags: status: phase: #STARTED, SUCCESS or ERROR can be the supported phases startTime: 2019-07-11T17:39:01Z completedTime: 2019-07-11T17:40:01Z upgradeDetailedStatuses: step: # Upgrade Stage - PREUPGRADE, TARGETUPGRADE, ...,VERIFY startTime: lastUpdatedAt: state: waiting: message: Initiated rollout of \"deployment/pvc-dep-name\" errored: message: Unable to patch \"deployment/pvc-dep-name\" reason: ErrorRollout completed: message: patched \"deployment/pvc-dep-name\" ``` Owner acceptance of `Summary` and `Motivation` sections - 2090731 Agreement on `Proposal` section - 2090731 Date implementation started - 2090731 First OpenEBS release where an initial version of this OEP was available - 2090731 OpenEBS 1.1.0 Version of OpenEBS where this OEP graduated to general availability - YYYYMMDD If this OEP was retired or superseded - YYYYMMDD NA NA"
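Assuming the UpgradeTask CRD is registered with the plural name `upgradetasks` (an assumption about the CRD naming, not confirmed by this document), the status and conditions written by the upgrade Job can be inspected with standard `kubectl` commands, for example:

```bash
# List upgrade tasks in the openebs namespace
kubectl get upgradetasks -n openebs
# Inspect the detailed status and conditions of the example task above
kubectl get upgradetask upgrade-jiva-pv-001 -n openebs -o yaml
```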
}
] |
{
"category": "Runtime",
"file_name": "volume-pools-upgrade.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "As a centralized distributed storage system, Curve consist of Client which is used for processing user's requests, Chunkserver for underlying data storage, metadata server MDS and snapshot system. As a library, Curve Client can be linked by other processes like QEMU and Curve-NBD for utilizing services provided by Curve (Curve client will no longer provide services directly to QEMU and Curve-NBD after the hot upgrade, see for more details). Thus, Curve client can be regarded as the entrance of user's operations. Curve provides block storage service based on its underlying distributed filesystem, and Curve client provides block device interfaces. From the user's perspective, what Curve have provided is a block device capable for reading and writing randomly, which corresponds to a file in CurveFS that stored on different nodes of the cluster. To make it clear, let's use an example to illustrate this: When the user creates a 1T block device, this block device corresponds to a 1T size file on CurveFS. It should be noticed that this 1T file is nothing but only a logical concept, and only Curve client and MDS will notice its existence. This file is impercetible for Chunkserver. <p align=\"center\"> <img src=\"../images/curve-file.png\" alt=\"curve-file\" width=\"700\" /><br> <font size=3>Figure 1: Example of the mapping between user's block device and its storage</font> </p> For the user, the 1T block device is a continuous address for reading and writing. But from Curve's side, this block device is a space consists of many chunks with unique ID, and every chunk corresponds to a piece of physical space on Chunkserver. Figure 1 above illustrates this relationship. Curve client communicate with storage cluster by RPC, and from the description above we can try to make a conclusion on what Curve client should do: User operates block devices using interface provided by Curve client, including create, open, read and write. For achieving the best performance, asynchronous read/write interfaces are provided. As we can see from Figure 1, user's block devices are scattered on multiple storage nodes in the underlying physical storage in chunks, this is for higher concurrency and data reliability. When writing to Curve client, the client will receive the offset, length and actual data of the I/O. The corresponding space of offset + length may cover more than one chunk, and chunks may be placed in different Chunkserver, so the Curve client should transfer user's block device I/O request to the chunk request toward different Chunkserver and dispatch. I/O are split according to the offset and length. For example, if a user send an I/O request with offset = 0 and length = 2 * chunk size, this request will be split into at least two when it arrives at the client since the I/O target space crosses two chunks on different Chunkserver. In the client, both the I/O interface and writing chunk are asynchronous, and as a result tracking every split I/Os and record their returns are necessary. The request will return to the user only when every splitted I/Os have returned. Curve client is a stateless component without saving any metadata of any file. But the I/O splitting we discussed above require the mapping data of chunk to physical chunk on Chunkserver, which are fetched from"
},
{
"data": "MDS will persist routing information of files, when user write a block device, client will fetch the metadata of this file from MDS and save them in RAM for following using. The metadata that a user I/O requires including: File chunk -> chunk id: The mapping between logical chunks user's file and the physical chunks on Chunkserver. Chunk id -> copyset id(raft group): The Raft group that those physical chunks belong to. Copyset id -> serverlist: Chunk server list that Raft groups belong to. MDS provides metadata services, and in order to improve availability, mds serves as a cluster. Only one MDS provides services at the same time, and others will monitor through Etcd, getting ready to replace the leader at any time. Curve client will communicate with MDS through RPC when I/O is dispatched and the control request is called, during which the address of working MDS is required. Therefore, the clients also needs to change the address if the service node in MDS cluster is switched. Similar with the high availability design of MDS, Chunkserver utilized Raft algorithm for multi-node data copying and high availability guarantee. When dispatching I/O request on client, the fetching of current leader of the Raft group is required, and the request will then sent to the leader node. Curve client will control the data flow when overload of the cluster is detected. I/O flow control from Curve client is essentially a method to relief the workload, but not a solution. In the sections above we made an conclusion on what the client can do. In this section we'll focus on the architecture of the client from its modules and thread model. Figure 2 shows the modules of the client. It should be mentioned that client QoS module has not implemented yet currently. <p align=\"center\"> <img src=\"../images/curve-client-arch.png\" alt=\"curve-client-arch.png\" width=\"600\" /><br> <font size=3>Figure 2: Curve client modules</font> </p> <p align=\"center\"> <img src=\"../images/curve-client-thread-model.png\" alt=\"curve-client-thread-model.png\" width=\"600\" /><br> <font size=3>Figure 3: Thread model of Curve client</font> </p> Figure 3 shows the thread model of Curve client. Use asynchronous I/O request as an example, for the client there are 4 kinds of threads involved: User thread: When a user initiates an asynchronous I/O request, AioWrite interface of Curve client will be called. AioWrite will encapsulate user's request and push it into a task queue, and this user call will return at this stage. Task queue thread: This thread will monitor the task queue, and once an I/O task enter, it will fetch from the queue and split the I/O into sub-I/Os and push them into the I/O dispatching queue. I/O dispatching thread: According to the info of sub-I/Os from above, this thread will sends RPC requests to corresponding Chunkserver nodes. BRPC thread: This thread deals with RPC sending and receives RPC response. The call back function Closure will be called after the RPC request returned. If every sub-I/Os return successfully, the call back function of their corresponding I/O will also be called (by the BRPC thread). During the I/O splitting, metadata of the file is required. These data will not be persisted in the client, and will only be fetched from the MDS when"
},
{
"data": "In order to avoid frequent communication from client to MDS, the client will cache the metadata fetched from MDS, and we have mentioned about these data in section 2.4. Once a file has been allocated its chunk ID and copyset ID, these information will remain unchanged. In the three types of metadata we've mentioned above, the only one that will change is the mapping between copyset ID and server list. This is because of the configuration change caused by situations like nodes outage, unbalance load. In these cases, the Chunkserver list of the corresponding copyset will change. Curve client trigger metadata update also by RPC. The metadata being updated including: Leader information of the Raft group The request of the client will be sent to the leader of the Raft group, and the client will fetch the leader info of current Raft group (cached in meta cache). If a request was sent to the Chunkserver that is no longer the leader, the client will initiate a GetLeader request to other Chunkserver in current Raft group. This request can be sent to any of them since every node will know the leader if a new one has been elected. After knowing the new leader, the client will update the meta cache, then resend the request to the new leader. Peer info in Raft group In such an extreme case that all peers in a Raft group are change due to unexpected issues like node failure, GetLeader request will not able to get the real leader. Client will fetch the new Raft group info from MDS in this case after retried for some times. I/O dispatching thread will send asynchronous RPC requests after getting I/O tasks, and will call the call back function after the RPC request returned. In the call back function, the return value of the RPC request will be examined to determine whether the request has succeeded. If failed and retry is needed, the RPC request will be resent in the call back function. For read/write request, the retry time will be set to a relatively high value in order to let the I/O request return successfully as much as possible. Because for block device, the error returned will makes the user believes that the block device has damaged and the disk has error. There will be pre-processing for the RPC retries in two cases: Chunk server overload In this case, the error code of the RPC response will be OVERLOAD, which means the Chunkserver is suffering a high workload. If the client retry directly in this time, it's very likely that another OVERLOAD will return. In this scenario, the client should sleep for a while before next retry. On client side, we introduced Exponential Backoff for the retry interval and random jitter for the sleeping time to avoid retry right after many of the request returned OVERLOAD. RPC Timeout There are many explanations for this result, but timeout caused by the overload of Chunkserver due to too many requests is usually the most common case. In this scenario, if the threshold for RPC timeout remain unchanged, the same thing would probably happen again, and the user's I/O request will not return even after a long time. Thus, the retry will first increase the threshold in this case."
}
] |
{
"category": "Runtime",
"file_name": "client_en.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This is a very simple quick-start guide to getting a Firecracker guest connected to the network. If you're using Firecracker in production, or even want to run multiple guests, you'll need to adapt this setup. Note Currently firecracker supports only TUN/TAP network backend with no multi queue support. The simple steps in this guide assume that your internet-facing interface is `eth0`, you have nothing else using `tap0` and no other `iptables` rules. Check out the Advanced: sections if that doesn't work for you. The first step on the host is to create a `tap` device: ```bash sudo ip tuntap add tap0 mode tap ``` Then you have a few options for routing traffic out of the tap device, through your host's network interface. One option is NAT, set up like this: ```bash sudo ip addr add 172.16.0.1/24 dev tap0 sudo ip link set tap0 up sudo sh -c \"echo 1 > /proc/sys/net/ipv4/ip_forward\" sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE sudo iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT sudo iptables -A FORWARD -i tap0 -o eth0 -j ACCEPT ``` Note: The IP of the TAP device should be chosen such that it's not in the same subnet as the IP address of the host. Advanced: If you are running multiple Firecracker MicroVMs in parallel, or have something else on your system using `tap0` then you need to create a `tap` for each one, with a unique name. Advanced: You also need to do the `iptables` set up for each new `tap`. If you have `iptables` rules you care about on your host, you may want to save those rules before starting. ```bash sudo iptables-save > iptables.rules.old ``` Before starting the guest, configure the network interface using Firecracker's API: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/network-interfaces/eth0' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"iface_id\": \"eth0\", \"guest_mac\": \"AA:FC:00:00:00:01\", \"hostdevname\": \"tap0\" }' ``` If you are using a configuration file instead of the API, add a section to your configuration file like this: ```json \"network-interfaces\": [ { \"iface_id\": \"eth0\", \"guest_mac\": \"AA:FC:00:00:00:01\", \"hostdevname\": \"tap0\" } ], ``` Alternatively, if you are using firectl, add --tap-device=tap0/AA:FC:00:00:00:01\\` to your command line. Once you have booted the guest, bring up networking within the guest: ```bash ip addr add 172.16.0.2/24 dev eth0 ip link set eth0 up ip route add default via"
},
{
"data": "dev eth0 ``` Now your guest should be able to route traffic to the internet (assuming that your host can get to the internet). To do anything useful, you probably want to resolve DNS names. In production, you'd want to use the right DNS server for your environment. For testing, you can add a public DNS server to `/etc/resolv.conf` by adding a line like this: ```console nameserver 8.8.8.8 ``` Create a bridge interface ```bash sudo ip link add name br0 type bridge ``` Add tap interface to the bridge ```bash sudo ip link set dev tap0 master br0 ``` Define an IP address in your network for the bridge. For example, if your gateway were on `192.168.1.1` and you wanted to use this for getting dynamic IPs, you would want to give the bridge an unused IP address in the `192.168.1.0/24` subnet. ```bash sudo ip address add 192.168.1.7/24 dev br0 ``` Add firewall rules to allow traffic to be routed to the guest ```bash sudo iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE ``` Define an unused IP address in the bridge's subnet e.g., `192.168.1.169/24`. _Note: Alternatively, you could rely on DHCP for getting a dynamic IP address from your gateway._ ```bash ip addr add 192.168.1.169/24 dev eth0 ``` Set the interface up. ```bash ip link set eth0 up ``` Create a route to the bridge device ```bash ip r add 192.168.1.1 via 192.168.1.7 dev eth0 ``` Create a route to the internet via the bridge ```bash ip r add default via 192.168.1.7 dev eth0 ``` When done, your route table should look similar to the following: ```bash ip r default via 192.168.1.7 dev eth0 192.168.1.0/24 dev eth0 scope link 192.168.1.1 via 192.168.1.7 dev eth0 ``` Add your nameserver to `resolve.conf` ```bash nameserver 192.168.1.1 ``` The first step to cleaning up is deleting the tap device: ```bash sudo ip link del tap0 ``` If you don't have anything else using `iptables` on your machine, clean up those rules: ```bash sudo iptables -F sudo sh -c \"echo 0 > /proc/sys/net/ipv4/ip_forward\" # usually the default ``` If you have an existing iptables setup, you'll want to be more careful about cleaning up. Advanced: If you saved your iptables rules in the first step, then you can restore them like this: ```bash if [ -f iptables.rules.old ]; then sudo iptables-restore < iptables.rules.old fi ``` Advanced: If you created a bridge interface, delete it using the following: ```bash sudo ip link del br0 ```"
}
] |
{
"category": "Runtime",
"file_name": "network-setup.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Copyright (c) 2018-2023, Sylabs Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Advanced configuration All CephNFS daemons are configured using shared RADOS objects stored in a Ceph pool named `.nfs`. Users can modify the configuration object for each CephNFS cluster if they wish to customize the configuration. By default, Rook creates the `.nfs` pool with Ceph's default configuration. If you wish to change the configuration of this pool (for example to change its failure domain or replication factor), you can create a CephBlockPool with the `spec.name` field set to `.nfs`. This pool must be replicated and cannot be erasure coded. contains a sample for reference. Ceph uses servers. The config file format for these objects is documented in the . Use Ceph's `rados` tool from the toolbox to interact with the configuration object. The below command will get you started by dumping the contents of the config object to stdout. The output will look something like the example shown if you have already created two exports as documented above. It is best not to modify any of the export objects created by Ceph so as not to cause errors with Ceph's export management. ```console $ rados --pool <pool> --namespace <namespace> get conf-nfs.<cephnfs-name> - %url \"rados://<pool>/<namespace>/export-1\" %url \"rados://<pool>/<namespace>/export-2\" ``` `rados ls` and `rados put` are other commands you will want to work with the other shared configuration objects. Of note, it is possible to pre-populate the NFS configuration and export objects prior to creating CephNFS server clusters. !!! warning RGW NFS export is experimental for the moment. It is not recommended for scenario of modifying existing content. For creating an NFS export over RGW(CephObjectStore) storage backend, the below command can be used. This creates an export for the `/testrgw` pseudo path on an existing bucket bkt4exp as an example. You could use `/testrgw` pseudo for nfs mount operation afterwards. ```console ceph nfs export create rgw my-nfs /testrgw bkt4exp ```"
}
] |
{
"category": "Runtime",
"file_name": "nfs-advanced.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "```bash $ cfs-cli volume expand {volume name} {capacity / GB} ``` This interface is used to increase the volume capacity space. ::: tip Note DpReadOnlyWhenVolFull can strictly limit volume capacity. How to configure the configuration to stop writing when the volume is full: (1)master When creating the volume, set the DpReadOnlyWhenVolFull parameter to true; If the value of the created volume is false, use the update interface to change it to true. After this value is set to true, when the volume is full, the master will change the status of all DPs of the volume to readonly. (2)client Upgrade the client and set the \"minWriteAbleDataPartitionCnt\" parameter in the client's configuration to 0. ::: The more readable and writable data partitions (DPs), the more dispersed the data, and the better the read and write performance of the volume. CubeFS adopts a dynamic space allocation mechanism. After creating a volume, a certain number of data partition DPs will be pre-allocated for the volume. When the number of readable and writable DPs is less than 10, the number of DPs will be automatically increased. If you want to manually increase the number of readable and writable DPs, you can use the following command: ```bash $ cfs-cli volume create-dp {volume name} {number} ``` ::: tip Note The default size of a DP is 120GB. Please create DPs based on the actual usage of the volume to avoid overdraw of all DPs. ::: ```bash $ cfs-cli volume shrink {volume name} {capacity in GB} ``` This interface is used to reduce the volume capacity space. It will be calculated based on the actual usage. If the set value is less than 120% of the used capacity, the operation will fail. Prepare new data nodes (DNs) and metadata nodes (MNs), and configure the existing master address in the configuration file to automatically add the new nodes to the cluster."
}
] |
{
"category": "Runtime",
"file_name": "capacity.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Previous change logs can be found at Hardware: 3 nodes (3mds, 9metaserver), each with: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz 256G RAM disk cache: INTEL SSDSC2BB80 800G(iops is about 30000+,bw is about 300MB) performance is as follows: s3 backend is minio and the cto is disable(). and as you know, the performance of read may associated with read cache hit. | minio + diskcache(free)| iops/bandwidth | avg-latency(ms) | clat 99.00th (ms) | clat 99.99th (ms) | | :-: | :-: | :-: | :-: | :-: | | (numjobs 1) (50G filesize) 4k randwrite | 3539 | 0.281 | 1.5 | 16 | | (numjobs 1) (50G filesize) 4k randread | 2785 | 0.357 | 0.9 | 5.8| | (numjobs 1) (50G filesize) 512k write | 290 MB/s | 1 | 600 ms | 248| | (numjobs 1) (50G filesize) 512k read | 216 MB/s | 4.3 | 275 ms | 7.6 | | minio + diskcache(near full)| iops/bandwidth | avg-latency(ms) | clat 99.00th (ms) | clat 99.99th (ms) | | :-: | :-: | :-: | :-: | :-: | | (numjobs 1) (50G filesize) 4k randwrite | 2988 | 0.3 | 1.2 | 18 | | (numjobs 1) (50G filesize) 4k randread | 1559 | 0.6 | 1.9 | 346| | (numjobs 1) (50G filesize) 512k write | 266 MB/s | 0.9| 600 ms | 396| | (numjobs 1) (50G filesize) 512k read | 82 MB/s | 86 | 275 ms | 901| | minio + diskcache(full)| iops/bandwidth | avg-latency(ms) | clat 99.00th (ms) | clat 99.99th (ms) | | :-: | :-: | :-: | :-: | :-: | | (numjobs 1) (20G filesize * 5) 4k randwrite | 2860 |"
},
{
"data": "14 | 41 | | (numjobs 1) (20G filesize * 5) 4k randread | 76 | 65 | 278 | 725| | (numjobs 1) (20G filesize * 5) 512k write | 240 MB/s | 10| 278 | 513| | (numjobs 1) (20G filesize * 5) 512k read | 192 MB/s | 12 | 40 | 1955 | Cluster Topology: 3 nodes (3mds, 6metaserver), each metaserver is deployed on a separate SATA SSD Configuration: use default configuration Test tool: mdtest v3.4.0 Test cases: case 1: the directory structure is relatively flat and total 130,000 dirs and files. mdtest -z 2 -b 3 -I 10000 -d /mountpoint case 2: the directory structure is relatively deep and total 204,700 dirs and files. mdtest -z 10 -b 2 -I 100 -d /mountpoint | Case | Dir creation | Dir stat | Dir rename | Dir removal | File creation | File stat | File read | File removal | Tree creation | Tree removal | | | | | | | | | | | | | | case 1 | 1320 | 5853 | 149 | 670 | 1103 | 5858 | 1669 | 1419 | 851 | 64 | | case 2 | 1283 | 5205 | 147 | 924 | 1081 | 5316 | 1634 | 1260 | 1302 | 887 | You can set configuration item fuseClient.enableMultiMountPointRename to false in client.conf if you don't need concurrent renames on multiple mountpoints on the same filesystem. It will improve the performance of metadata. fuseClient.enableMultiMountPointRename=false | Case | Dir creation | Dir stat | Dir rename | Dir removal | File creation | File stat | File read | File removal | Tree creation | Tree removal | | | | | | | | | | | | | | case 1 | 1537 | 7885 | 530 | 611 | 1256 | 7998 | 1861 | 1614 | 1050 | 72 | | case 2 | 1471 | 6328 | 509 | 1055 | 1237 | 6874 | 1818 | 1454 | 1489 | 1034 |"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-2.3.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "id: \"intro\" sidebar_position: 1 sidebar_label: \"What is HwameiStor\" HwameiStor is an HA local storage system for cloud-native stateful workloads. HwameiStor creates a local storage resource pool for centrally managing all disks such as HDD, SSD, and NVMe. It uses the CSI architecture to provide distributed services with local volumes, and provides data persistence capabilities for stateful cloud-native workloads or components. HwameiStor is an open source, lightweight, and cost-efficient local storage system that can replace expensive traditional SAN storage. The system architecture of HwameiStor is as follows. By using the CAS pattern, users can achieve the benefits of higher performance, better cost-efficiency, and easier management of their container storage. It can be deployed by helm charts or directly use the independent installation. You can easily enable high-performance local storage across the entire cluster with one click and automatically identify disks. HwameiStor is easy to deploy and ready to go. Automated Maintenance Disks can be automatically discovered, identified, managed, and allocated. Smart scheduling of applications and data based on affinity. Automatically monitor disk status and give early warning. High Availability Use cross-node replicas to synchronize data for high availability. When a problem occurs, the application will be automatically scheduled to the high-availability data node to ensure the continuity of the application. Full-Range support of Storage Medium Aggregate HDD, SSD, and NVMe disks to provide low-latency, high-throughput data services. Agile Linear Scalability Dynamically expand the cluster according to flexibly meet the data persistence requirements of the application."
}
] |
{
"category": "Runtime",
"file_name": "what.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Once a pod is prepared with rkt , it can be run by executing `rkt run-prepared UUID`. ``` UUID APP ACI STATE NETWORKS c9fad0e6 etcd coreos.com/etcd prepared 2015/10/01 16:44:08 Setting up stage1 2015/10/01 16:44:08 Wrote filesystem to /var/lib/rkt/pods/run/c9fad0e6-8236-4fc2-ad17-55d0a4c7d742 2015/10/01 16:44:08 Pivoting to filesystem /var/lib/rkt/pods/run/c9fad0e6-8236-4fc2-ad17-55d0a4c7d742 2015/10/01 16:44:08 Execing /init ``` | Flag | Default | Options | Description | | | | | | | `--dns` | `` | IP Address | Name server to write in `/etc/resolv.conf`. It can be specified several times | | `--dns-opt` | `` | Option as described in the options section in resolv.conf(5) | DNS option to write in `/etc/resolv.conf`. It can be specified several times | | `--dns-search` | `` | Domain name | DNS search domain to write in `/etc/resolv.conf`. It can be specified several times | | `--hostname` | \"rkt-$PODUUID\" | A host name | Set pod's host name. | | `--interactive` | `false` | `true` or `false` | Run pod interactively. If true, only one image may be supplied | | `--ipc` | `auto` | `auto`, `private` or `parent` | Whether to stay in the host IPC namespace. | | `--mds-register` | `false` | `true` or `false` | Register pod with metadata service. It needs network connectivity to the host (`--net=(default|default-restricted|host)` | | `--net` | `default` | A comma-separated list of networks. Syntax: `--net[=n[:args], ...]` | Configure the pod's networking. Optionally, pass a list of user-configured networks to load and set arguments to pass to each network, respectively | See the table with ."
}
] |
{
"category": "Runtime",
"file_name": "run-prepared.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC"
}
] |
{
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Ceph Cluster Helm Chart <! Document is generated by `make helm-docs`. DO NOT EDIT. Edit the corresponding *.gotmpl.md file instead --> Creates Rook resources to configure a cluster using the package manager. This chart is a simple packaging of templates that will optionally create Rook resources such as: CephCluster, CephFilesystem, and CephObjectStore CRs Storage classes to expose Ceph RBD volumes, CephFS volumes, and RGW buckets Ingress for external access to the dashboard Toolbox Kubernetes 1.22+ Helm 3.x Install the The `helm install` command deploys rook on the Kubernetes cluster in the default configuration. The section lists the parameters that can be configured during installation. It is recommended that the rook operator be installed into the `rook-ceph` namespace. The clusters can be installed into the same namespace as the operator or a separate namespace. Rook currently publishes builds of this chart to the `release` and `master` channels. Before installing, review the values.yaml to confirm if the default settings need to be updated. If the operator was installed in a namespace other than `rook-ceph`, the namespace must be set in the `operatorNamespace` variable. Set the desired settings in the `cephClusterSpec`. The are only an example and not likely to apply to your cluster. The `monitoring` section should be removed from the `cephClusterSpec`, as it is specified separately in the helm settings. The default values for `cephBlockPools`, `cephFileSystems`, and `CephObjectStores` will create one of each, and their corresponding storage classes. All Ceph components now have default values for the pod resources. The resources may need to be adjusted in production clusters depending on the load. The resources can also be disabled if Ceph should not be limited (e.g. test clusters). The release channel is the most recent release of Rook that is considered stable for the community. The example install assumes you have first installed the and created your customized values.yaml. ```console helm repo add rook-release https://charts.rook.io/release helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \\ --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml ``` !!! Note --namespace specifies the cephcluster namespace, which may be different from the rook operator namespace. The following table lists the configurable parameters of the rook-operator chart and their default values. | Parameter | Description | Default | |--|-|| | `cephBlockPools` | A list of CephBlockPool configurations to deploy | See | | `cephBlockPoolsVolumeSnapshotClass` | Settings for the block pool snapshot class | See | | `cephClusterSpec` | Cluster configuration. | See | | `cephFileSystemVolumeSnapshotClass` | Settings for the filesystem snapshot class | See | | `cephFileSystems` | A list of CephFileSystem configurations to deploy | See | | `cephObjectStores` | A list of CephObjectStore configurations to deploy | See | | `clusterName` | The metadata.name of the CephCluster CR | The same as the namespace | | `configOverride` | Cluster ceph.conf override | `nil` | | `csiDriverNamePrefix` | CSI driver name prefix for cephfs, rbd and nfs. | `namespace name where rook-ceph operator is deployed` | |"
},
{
"data": "| Enable an ingress for the ceph-dashboard | `{}` | | `kubeVersion` | Optional override of the target kubernetes version | `nil` | | `monitoring.createPrometheusRules` | Whether to create the Prometheus rules for Ceph alerts | `false` | | `monitoring.enabled` | Enable Prometheus integration, will also create necessary RBAC rules to allow Operator to create ServiceMonitors. Monitoring requires Prometheus to be pre-installed | `false` | | `monitoring.prometheusRule.annotations` | Annotations applied to PrometheusRule | `{}` | | `monitoring.prometheusRule.labels` | Labels applied to PrometheusRule | `{}` | | `monitoring.rulesNamespaceOverride` | The namespace in which to create the prometheus rules, if different from the rook cluster namespace. If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, namespace with prometheus deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions. | `nil` | | `operatorNamespace` | Namespace of the main rook operator | `\"rook-ceph\"` | | `pspEnable` | Create & use PSP resources. Set this to the same value as the rook-ceph chart. | `false` | | `toolbox.affinity` | Toolbox affinity | `{}` | | `toolbox.containerSecurityContext` | Toolbox container security context | `{\"capabilities\":{\"drop\":[\"ALL\"]},\"runAsGroup\":2016,\"runAsNonRoot\":true,\"runAsUser\":2016}` | | `toolbox.enabled` | Enable Ceph debugging pod deployment. See | `false` | | `toolbox.image` | Toolbox image, defaults to the image used by the Ceph cluster | `nil` | | `toolbox.priorityClassName` | Set the priority class for the toolbox if desired | `nil` | | `toolbox.resources` | Toolbox resources | `{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"128Mi\"}}` | | `toolbox.tolerations` | Toolbox tolerations | `[]` | The `CephCluster` CRD takes its spec from `cephClusterSpec.*`. This is not an exhaustive list of parameters. For the full list, see the topic. The cluster spec example is for a converged cluster where all the Ceph daemons are running locally, as in the host-based example (cluster.yaml). For a different configuration such as a PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml), or stretch cluster (cluster-stretched.yaml), replace this entire `cephClusterSpec` with the specs from those examples. The `cephBlockPools` array in the values file will define a list of CephBlockPool as described in the table below. | Parameter | Description | Default | | | -- | - | | `name` | The name of the CephBlockPool | `ceph-blockpool` | | `spec` | The CephBlockPool spec, see the documentation. | `{}` | | `storageClass.enabled` | Whether a storage class is deployed alongside the CephBlockPool | `true` | | `storageClass.isDefault` | Whether the storage class will be the default storage class for PVCs. See for details. | `true` | | `storageClass.name` | The name of the storage class | `ceph-block` | | `storageClass.parameters` | See documentation or the helm values.yaml for suitable values | see values.yaml | | `storageClass.reclaimPolicy` | The default to apply to PVCs created with this storage class. | `Delete` | | `storageClass.allowVolumeExpansion` | Whether is allowed by default. | `true` | | `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` | |"
},
{
"data": "| Specifies the for storageClass | `[]` | The `cephFileSystems` array in the values file will define a list of CephFileSystem as described in the table below. | Parameter | Description | Default | | | -- | - | | `name` | The name of the CephFileSystem | `ceph-filesystem` | | `spec` | The CephFileSystem spec, see the documentation. | see values.yaml | | `storageClass.enabled` | Whether a storage class is deployed alongside the CephFileSystem | `true` | | `storageClass.name` | The name of the storage class | `ceph-filesystem` | | `storageClass.pool` | The name of , without the filesystem name prefix | `data0` | | `storageClass.parameters` | See documentation or the helm values.yaml for suitable values | see values.yaml | | `storageClass.reclaimPolicy` | The default to apply to PVCs created with this storage class. | `Delete` | | `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` | The `cephObjectStores` array in the values file will define a list of CephObjectStore as described in the table below. | Parameter | Description | Default | | | -- | - | | `name` | The name of the CephObjectStore | `ceph-objectstore` | | `spec` | The CephObjectStore spec, see the documentation. | see values.yaml | | `storageClass.enabled` | Whether a storage class is deployed alongside the CephObjectStore | `true` | | `storageClass.name` | The name of the storage class | `ceph-bucket` | | `storageClass.parameters` | See documentation or the helm values.yaml for suitable values | see values.yaml | | `storageClass.reclaimPolicy` | The default to apply to PVCs created with this storage class. | `Delete` | | `ingress.enabled` | Enable an ingress for the object store | `false` | | `ingress.annotations` | Ingress annotations | `{}` | | `ingress.host.name` | Ingress hostname | `\"\"` | | `ingress.host.path` | Ingress path prefix | `/` | | `ingress.tls` | Ingress tls | `/` | | `ingress.ingressClassName` | Ingress tls | `\"\"` | If you have an existing CephCluster CR that was created without the helm chart and you want the helm chart to start managing the cluster: Extract the `spec` section of your existing CephCluster CR and copy to the `cephClusterSpec` section in `values.yaml`. Add the following annotations and label to your existing CephCluster CR: ```yaml annotations: meta.helm.sh/release-name: rook-ceph-cluster meta.helm.sh/release-namespace: rook-ceph labels: app.kubernetes.io/managed-by: Helm ``` Run the `helm install` command in the to create the chart. In the future when updates to the cluster are needed, ensure the values.yaml always contains the desired CephCluster spec. To deploy from a local build from your development environment: ```console cd deploy/charts/rook-ceph-cluster helm install --create-namespace --namespace rook-ceph rook-ceph-cluster -f values.yaml . ``` To see the currently installed Rook chart: ```console helm ls --namespace rook-ceph ``` To uninstall/delete the `rook-ceph-cluster` chart: ```console helm delete --namespace rook-ceph rook-ceph-cluster ``` The command removes all the Kubernetes components associated with the chart and deletes the release. Removing the cluster chart does not remove the Rook operator. In addition, all data on hosts in the Rook data directory (`/var/lib/rook` by default) and on OSD raw devices is kept. To reuse disks, you will have to wipe them before recreating the cluster. See the for more information."
}
] |
{
"category": "Runtime",
"file_name": "ceph-cluster-chart.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Need help? We have several ways to reach us, depending on your preferences and needs. If you haven't already, read our . These docs have common use cases, and might be helpful to browse before submitting an issue. You can contribute to our documentation by creating PRs against the and repositories. For issues with code (and especially if you need to share debug output) we recommend Github issues boards. : is recommended for most issues with the Singularity software. questions, feedback, and suggestions should go here. Feel free to create an issue on a board and additionally request updated content here. questions, feedback, and suggestions should go here. Feel free to create an issue on a board and additionally request updated content here. Note that usage questions, or problems related to running a specific piece of software in a container, are best asked on the Google Group, or Slack channel. Questions in these venues will be seen by a greater number of users, who may already know the answer! After you identify a bug, you should search the respective issue board for similar problems reported by other users. Another user may be facing the same issue, and you can add a +1 (in message or icon) to indicate to the maintainers that the issue is pressing for you as well. The squeaky wheel gets the grease! While we wish we could address every issue, there are only so many hours in the day. We rank issues based on the following questions: How many users are affected? Is there a proposed work-around? In how many instances does the proposed work-around fail? With these simple questions, we can ensure that work is directed and has the maximum impact! However, if your issue doesn't seem to be getting attention you can still move it along using some of the strategies discussed below. Issues can go stale for a number of reasons. In the bullets below, we will review some of these reasons, along with strategies for managing them: The issue needs a gentle"
},
{
"data": "Try targeting a few people with a \"`ping @username any thoughts about this?`\" in the case that it was forgotten. Was your issue properly explained? You are much more likely to get help when you give clear instructions for reproducing the issue, and show effort on your part to think about what the problem might be. If possible, try to come up with a way to reproduce the issue that does not involve a special environment or exotic hardware. Is there broad need? It could be that your issue isn't having a big enough impact for other users to warrant the time for the small development team. In this case, you might try implementing a suggested fix, and then asking for help with the details. Is your issue scattered? When many issues pile up on boards, it sometimes is the case that issues are duplicated. It's important to find these duplicates and merge them into one, because in finding the duplicate you find another user to talk to about the issue. Does your issue need to have scope? The idea of scoping an issue means framing it with respect to other components of the software. For example, if you have a feature request to see metadata about an object, you might frame that in the context of container introspection, and suggest an addition to the software that fits with the \"inspect\" command. A very powerful thing to do would be to open up an issue that (not only discusses your specific addition) but also opens up discussion to the general community for \"How we can do introspection\" better. Then create a set of issues and add them to a [Github milestone](https://help.github.com/articles/about-milestones/). This kind of contribution is much more powerful than simply asking for something. You can reach the community quickly by way of joining our [Google Group](https://groups.google.com/a/lbl.gov/forum/#!forum/singularity). For real time support from the community, you can join our community on slack at . Ping the Google Group or one of the admins here to request to be added. Is there something missing here you'd like to see? Please [let us know](https://github.com/hpcng/singularity/issues)."
}
] |
{
"category": "Runtime",
"file_name": "SUPPORT.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Rust Runtime(runtime-rs) is responsible for: Gather metrics about `shim`. Gather metrics from `hypervisor` (through `channel`). Get metrics from `agent` (through `ttrpc`). Here are listed all the metrics gathered by `runtime-rs`. Current status of each entry is marked as: DONE TODO | STATUS | Metric name | Type | Units | Labels | | | | -- | -- | | | | `katashimagentrpcdurationshistogrammilliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (RPC actions of Kata agent)<ul><li>`grpc.CheckRequest`</li><li>`grpc.CloseStdinRequest`</li><li>`grpc.CopyFileRequest`</li><li>`grpc.CreateContainerRequest`</li><li>`grpc.CreateSandboxRequest`</li><li>`grpc.DestroySandboxRequest`</li><li>`grpc.ExecProcessRequest`</li><li>`grpc.GetMetricsRequest`</li><li>`grpc.GuestDetailsRequest`</li><li>`grpc.ListInterfacesRequest`</li><li>`grpc.ListProcessesRequest`</li><li>`grpc.ListRoutesRequest`</li><li>`grpc.MemHotplugByProbeRequest`</li><li>`grpc.OnlineCPUMemRequest`</li><li>`grpc.PauseContainerRequest`</li><li>`grpc.RemoveContainerRequest`</li><li>`grpc.ReseedRandomDevRequest`</li><li>`grpc.ResumeContainerRequest`</li><li>`grpc.SetGuestDateTimeRequest`</li><li>`grpc.SignalProcessRequest`</li><li>`grpc.StartContainerRequest`</li><li>`grpc.StatsContainerRequest`</li><li>`grpc.TtyWinResizeRequest`</li><li>`grpc.UpdateContainerRequest`</li><li>`grpc.UpdateInterfaceRequest`</li><li>`grpc.UpdateRoutesRequest`</li><li>`grpc.WaitProcessRequest`</li><li>`grpc.WriteStreamRequest`</li></ul></li><li>`sandbox_id`</li></ul> | | | `katashimfds`: <br> Kata containerd shim v2 open FDs. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | | | `katashimiostat`: <br> Kata containerd shim v2 process IO statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/io`)<ul><li>`cancelledwritebytes`</li><li>`rchar`</li><li>`readbytes`</li><li>`syscr`</li><li>`syscw`</li><li>`wchar`</li><li>`writebytes`</li></ul></li><li>`sandboxid`</li></ul> | | | `katashimnetdev`: <br> Kata containerd shim v2 network devices statistics. | `GAUGE` | | <ul><li>`interface` (network device name)</li><li>`item` (see `/proc/net/dev`)<ul><li>`recvbytes`</li><li>`recvcompressed`</li><li>`recvdrop`</li><li>`recverrs`</li><li>`recvfifo`</li><li>`recvframe`</li><li>`recvmulticast`</li><li>`recvpackets`</li><li>`sentbytes`</li><li>`sentcarrier`</li><li>`sentcolls`</li><li>`sentcompressed`</li><li>`sentdrop`</li><li>`senterrs`</li><li>`sentfifo`</li><li>`sentpackets`</li></ul></li><li>`sandbox_id`</li></ul> | | | `katashimpodoverheadcpu`: <br> Kata Pod overhead for CPU resources(percent). | `GAUGE` | percent | <ul><li>`sandbox_id`</li></ul> | | | `katashimpodoverheadmemoryinbytes`: <br> Kata Pod overhead for memory resources(bytes). | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | | | `katashimprocstat`: <br> Kata containerd shim v2 process statistics. | `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/stat`)<ul><li>`cstime`</li><li>`cutime`</li><li>`stime`</li><li>`utime`</li></ul></li><li>`sandboxid`</li></ul> | | | `katashimprocstatus`: <br> Kata containerd shim v2 process status. 
| `GAUGE` | | <ul><li>`item` (see `/proc/<pid>/status`)<ul><li>`hugetlbpages`</li><li>`nonvoluntaryctxtswitches`</li><li>`rssanon`</li><li>`rssfile`</li><li>`rssshmem`</li><li>`vmdata`</li><li>`vmexe`</li><li>`vmhwm`</li><li>`vmlck`</li><li>`vmlib`</li><li>`vmpeak`</li><li>`vmpin`</li><li>`vmpmd`</li><li>`vmpte`</li><li>`vmrss`</li><li>`vmsize`</li><li>`vmstk`</li><li>`vmswap`</li><li>`voluntaryctxtswitches`</li></ul></li><li>`sandboxid`</li></ul> | | | `katashimprocesscpusecondstotal`: <br> Total user and system CPU time spent in seconds. | `COUNTER` | `seconds` | <ul><li>`sandboxid`</li></ul> | | | `katashimprocessmaxfds`: <br> Maximum number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | | | `katashimprocessopenfds`: <br> Number of open file descriptors. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | | | `katashimprocessresidentmemorybytes`: <br> Resident memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | | | `katashimprocessstarttimeseconds`: <br> Start time of the process since `unix` epoch in seconds. | `GAUGE` | `seconds` | <ul><li>`sandboxid`</li></ul> | | | `katashimprocessvirtualmemorybytes`: <br> Virtual memory size in bytes. | `GAUGE` | `bytes` | <ul><li>`sandboxid`</li></ul> | | | `katashimprocessvirtualmemorymaxbytes`: <br> Maximum amount of virtual memory available in bytes. | `GAUGE` | `bytes` | <ul><li>`sandbox_id`</li></ul> | | | `katashimrpcdurationshistogrammilliseconds`: <br> RPC latency distributions. | `HISTOGRAM` | `milliseconds` | <ul><li>`action` (Kata shim v2 actions)<ul><li>`checkpoint`</li><li>`closeio`</li><li>`connect`</li><li>`create`</li><li>`delete`</li><li>`exec`</li><li>`kill`</li><li>`pause`</li><li>`pids`</li><li>`resizepty`</li><li>`resume`</li><li>`shutdown`</li><li>`start`</li><li>`state`</li><li>`stats`</li><li>`update`</li><li>`wait`</li></ul></li><li>`sandboxid`</li></ul> | | | `katashimthreads`: <br> Kata containerd shim v2 process threads. | `GAUGE` | | <ul><li>`sandbox_id`</li></ul> | Different from golang runtime, hypervisor and shim in runtime-rs belong to the same process, so all previous metrics for hypervisor and shim only need to be gathered once. Thus, we currently only collect previous metrics in kata shim. At the same time, we added the interface(`VmmAction::GetHypervisorMetrics`) to gather hypervisor metrics, in case we design tailor-made metrics for hypervisor in the future. Here're metrics exposed from . | Metric name | Type | Units | Labels | | | - | -- | | | `katahypervisorscrapecount`: <br> Metrics scrape count | `COUNTER` | | <ul><li>`sandboxid`</li></ul> | | `katahypervisorvcpu`: <br>Hypervisor metrics specific to VCPUs' mode of functioning. | `IntGauge` | | <ul><li>`item`<ul><li>`exitioin`</li><li>`exitioout`</li><li>`exitmmioread`</li><li>`exitmmiowrite`</li><li>`failures`</li><li>`filtercpuid`</li></ul></li><li>`sandboxid`</li></ul> | | `katahypervisorseccomp`: <br> Hypervisor metrics for the seccomp filtering. | `IntGauge` | | <ul><li>`item`<ul><li>`numfaults`</li></ul></li><li>`sandboxid`</li></ul> | | `katahypervisorseccomp`: <br> Hypervisor metrics for the seccomp filtering. | `IntGauge` | | <ul><li>`item`<ul><li>`sigbus`</li><li>`sigsegv`</li></ul></li><li>`sandbox_id`</li></ul> |"
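The metric names above follow the Prometheus snake_case convention (for example `kata_shim_threads` and `kata_shim_rpc_durations_histogram_milliseconds`). Assuming `kata-monitor` is running and listening on its default address of `127.0.0.1:8090` (an assumption; check your deployment's configuration), the shim metrics can be pulled with a plain HTTP scrape:

```bash
# Assumes kata-monitor's default listen address of 127.0.0.1:8090.
curl -s http://127.0.0.1:8090/metrics | grep '^kata_shim_' | head
```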
}
] |
{
"category": "Runtime",
"file_name": "kata-metrics-in-runtime-rs.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "sidebar_position: 2 sidebar_label: \"TiDB\" TiDB is a distributed database product that supports OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP (Hybrid Transactional and Analytical Processing) services, compatible with key features such as MySQL 5.7 protocol and MySQL ecosystem. The goal of TiDB is to provide users with one-stop OLTP, OLAP, and HTAP solutions, which are suitable for various application scenarios such as high availability, strict requirements for strong consistency, and large data scale. The TiDB distributed database splits the overall architecture into multiple modules that can communicate with each other. The architecture diagram is as follows: TiDB Server The SQL layer exposes the connection endpoints of the MySQL protocol to the outside world, and is responsible for accepting connections from clients, performing SQL parsing and optimization and finally generating a distributed execution plan. The TiDB layer itself is stateless. In practice, you can start several TiDB instances. A unified access address is provided externally through load-balance components (such as LVS, HAProxy, or F5), and client connections can be evenly distributed on to these TiDB instances. The TiDB server itself does not store data, but only parses SQL and forwards the actual data read request to the underlying storage node, TiKV (or TiFlash). PD (Placement Driver) Server The metadata management module across a TiDB cluster is responsible for storing the real-time data distribution of each TiKV node and the overall topology of the cluster, providing the TiDB Dashboard management and control interface, and assigning transaction IDs to distributed transactions. Placement Driver (PD) not only stores metadata, but also issues data scheduling commands to specific TiKV nodes based on the real-time data distribution status reported by TiKV nodes, which can be said to be the \"brain\" of the entire cluster. In addition, the PD itself is also composed of at least 3 nodes and has high availability capabilities. It is recommended to deploy an odd number of PD nodes. Storage nodes TiKV Server: In charge of storing data. From the outside, TiKV is a distributed Key-Value storage engine that provides transactions. The basic unit for storing data is Region. Each Region is responsible for storing the data of a Key Range (the block between left-closed and right-open from StartKey to EndKey). Each TiKV node is responsible for multiple Regions. TiKV API provides native support for distributed transactions at the KV key-value pair level, and provides the levels of Snapshot Isolation (SI) by default, which is also the core of TiDB's support for distributed transactions at the SQL level. After the SQL layer of TiDB completes the SQL parsing, it will convert the SQL execution plan into the actual call to the TiKV API. Therefore, the data is stored in TiKV. In addition, the TiKV data will be automatically maintained in multiple replicas (the default is three replicas), which naturally supports high availability and automatic failover. TiFlash is a special storage node. Unlike ordinary TiKV nodes, data is stored in columns in TiFlash, and the main function is to accelerate analysis-based"
},
{
"data": "Key-Value Pair The choice of TiKV is the Key-Value model that provides an ordered traversal method. Two key points of TiKV data storage are: A huge Map (comparable to std::map in C++) that stores Key-Value Pairs. The Key-Value pairs in this Map are sorted by the binary order of the Key, that is, you can seek to the position of a certain Key, and then continuously call the Next method to obtain the Key-Value larger than this Key in an ascending order. Local storage (Rocks DB) In any persistent storage engine, data must be saved on disk after all, and TiKV is not different. However, TiKV does not choose to write data directly to the disk, but stores the data in RocksDB, and RocksDB is responsible for the specific data storage. The reason is that developing a stand-alone storage engine requires a lot of work, especially to make a high-performance stand-alone engine, which may require various meticulous optimizations. RocksDB is a very good stand-alone KV storage engine open sourced by Facebook. It can meet various requirements of TiKV for single engine. Here we can simply consider that RocksDB is a persistent Key-Value Map on a host. Raft protocol TiKV uses the Raft algorithm to ensure that data is not lost and error-free when a single host fails. In short, it is to replicate data to multiple hosts, so that if one host cannot provide services, replicas on other hosts can still provide services. This data replication scheme is reliable and efficient, and can deal with replica failures. Region TiKV divides the Range by Key. A certain segment of consecutive Keys are stored on a storage node. Divide the entire Key-Value space into many segments, each segment is a series of consecutive Keys, called a Region. Try to keep the data saved in each Region within a reasonable size. Currently, the default in TiKV is no more than 96 MB. Each Region can be described by a left-closed and right-open block such as [StartKey, EndKey]. MVCC TiKV implements Multi-Version Concurrency Control (MVCC). Distributed ACID transactions TiKV uses the transaction model used by Google in BigTable: Percolator. In this test, we use three VM nodes to deploy the Kubernetes cluster, including one master node and two worker nodes. Kubelete version is 1.22.0. Deploy the HwameiStor local storage in the Kubernetes cluster Configure a 100G local disk, sdb, for HwameiStor on two worker nodes respectively Create StorageClass TiDB can be deployed on Kubernetes using TiDB Operator. TiDB Operator is an automatic operation and maintenance system for TiDB clusters on Kubernetes. It provides full lifecycle management of TiDB including deployment, upgrade, scaling, backup and recovery, and configuration changes. With TiDB Operator, TiDB can run seamlessly on public cloud or privately deployed Kubernetes clusters. The compatibility between TiDB and TiDB Operator versions is as follows: | TiDB version | Applicable versions of TiDB Operator | | | - | | dev | dev | | TiDB >= 5.4 | 1.3 | | 5.1 <= TiDB < 5.4 | 1.3 (recommended), 1.2 | | 3.0 <= TiDB < 5.1 | 1.3 (recommended), 1.2,"
},
{
"data": "| | 2.1 <= TiDB < 3.0 | 1.0 (maintenance stopped) | Install TiDB CRDs ```bash kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml ``` Install TiDB Operator ```bash helm repo add pingcap https://charts.pingcap.org/ kubectl create namespace tidb-admin helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.3.2 \\ --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.3.2 \\ --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.3.2 \\ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler ``` Check TiDB Operator components ```bash kubectl create namespace tidb-cluster && \\ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com /pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml ``` ```bash yum -y install mysql-client ``` ```bash kubectl port-forward -n tidb-cluster svc/basic-tidb 4000 > pf4000.out & ``` Create the Hello_world table ```sql create table helloworld (id int unsigned not null autoincrement primary key, v varchar(32)); ``` Check the TiDB version ```sql select tidb_version()\\G; ``` Check the Tikv storage status ```sql select * from informationschema.tikvstore_status\\G; ``` Create a PVC for tidb-tikv and tidb-pd from `storageClass local-storage-hdd-lvm`: ```bash kubectl get po basic-tikv-0 -oyaml ``` ```bash kubectl get po basic-pd-0 -oyaml ``` After the database cluster is deployed, we performed the following tests about basic capabilities. All are successfully passed. Test purpose: In the case of multiple isolation levels, check if the completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID) Test steps: Create the database: testdb Create the table `ttest ( id int AUTOINCREMENT, name varchar(32), PRIMARY KEY (id) )` Run a test script Test result: The completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID), in the case of multiple isolation levels Test purpose: Check if the object isolation can be implemented by using different schemas Test script: ```sql create database if not exists testdb; use testdb create table if not exists t_test ( id bigint, name varchar(200), saletime datetime default currenttimestamp, constraint pkttest primary key (id) ); insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'); create user 'readonly'@'%' identified by \"readonly\"; grant select on testdb.* to readonly@'%'; select * from testdb.t_test; update"
},
{
"data": "set name='aaa'; create user 'otheruser'@'%' identified by \"otheruser\"; ``` Test result: Supported to create different schemas to implement the object isolation Test purpose: Check if you can create, delete, and modifiy table data, DML, columns, partition table Test steps: Run the test scripts step by step after connecting the database Test script: ```sql drop table if exists t_test; create table if not exists t_test ( id bigint default '0', name varchar(200) default '' , saletime datetime default currenttimestamp, constraint pkttest primary key (id) ); insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'); update t_test set name='aaa' where id=1; update t_test set name='bbb' where id=2; delete from t_dml where id=5; alter table t_test modify column name varchar(250); alter table t_test add column col varchar(255); insert into ttest(id,name,col) values(10,'test','newcol'); alter table t_test add column colwithdefault varchar(255) default 'aaaa'; insert into t_test(id,name) values(20,'testdefault'); insert into t_test(id,name,colwithdefault ) values(10,'test','non-default '); alter table t_test drop column colwithdefault; CREATE TABLE employees ( id INT NOT NULL, fname VARCHAR(30), lname VARCHAR(30), hired DATE NOT NULL DEFAULT '1970-01-01', separated DATE NOT NULL DEFAULT '9999-12-31', job_code INT NOT NULL, store_id INT NOT NULL ) ``` Test result: Supported to create, delete, and modifiy table data, DML, columns, partition table Test purpose: Verify different indexes (unique, clustered, partitioned, Bidirectional indexes, Expression-based indexes, hash indexes, etc.) and index rebuild operations. Test script: ```bash alter table ttest add unique index udxt_test (name); ADMIN CHECK TABLE t_test; create index timeidx on ttest(sale_time); alter table ttest drop index timeidx; admin show ddl jobs; admin show ddl job queries 156; create index timeidx on ttest(sale_time); ``` Test result: Supported to create, delete, combine, and list indexes and supported for unique index Test purpose: Check if the statements in distributed databases are supported such as `if`, `case when`, `for loop`, `while loop`, `loop exit when` (up to 5 kinds) Test script: ```sql SELECT CASE id WHEN 1 THEN 'first' WHEN 2 THEN 'second' ELSE 'OTHERS' END AS idnew FROM ttest; SELECT IF(id>2,'int2+','int2-') from t_test; ``` Test result: supported for statements such as `if`, `case when`, `for loop`, `while loop`, and `loop exit when` (up to 5 kinds) Test purpose: Check if execution plan parsing is supported for distributed databases Test script: ```sql explain analyze select * from t_test where id NOT IN (1,2,4); explain analyze select from t_test a where EXISTS (select from t_test b where a.id=b.id and b.id<3); explain analyze SELECT IF(id>2,'int2+','int2-') from t_test; ``` Test result: the execution plan is supported to parse Test purpose: Verify the feature of binding execution plan for distributed databases Test steps: View the current execution plan of sql statements Use the binding feature View the execution plan after the sql statement is binded Delete the binding Test script: ```sql explain select * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; explain select /+ hash_join(a,b) / * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; ``` Test result: It may not be hashjoin when hint is not used, and it must be hashjoin after hint is used. 
Test purpose: Verify standard functions of distributed databases Test result: Standard database functions are supported Test purpose: Verify the transaction support of distributed databases Test result: Explict and implicit transactions are supported Test purpose: Verify the data types supported by distributed database Test result: Only the UTF-8 mb4 character set is supported now Test purpose: Verify the lock implementation of distributed databases Test result: Described how the lock is implemented, what are blockage conditions in the case of R-R/R-W/W-W, and how the deadlock is handled Test purpose: Verify the transactional isolation levels of distributed databases Test result: Supported for si and rc isolation levels (4.0 GA version) Test purpose: Verify the complex query capabilities of distributed databases Test result: Supported for the distributed complex queries and operations such as inter-node joins, and supported for window functions and hierarchical queries This section describes system security tests. After the database cluster is deployed, all the following tests are passed. Test purpose: Verify the accout permisson management of distributed databases Test script: ```sql select host,user,authentication_string from mysql.user; create user tidb IDENTIFIED by 'tidb'; select host,user,authentication_string from mysql.user; set password for tidb =password('tidbnew'); select host,user,authenticationstring,Selectpriv from"
},
{
"data": "grant select on . to tidb; flush privileges ; select host,user,authenticationstring,Selectpriv from mysql.user; grant all privileges on . to tidb; flush privileges ; select * from mysql.user where user='tidb'; revoke select on . from tidb; flush privileges ; revoke all privileges on . from tidb; flush privileges ; grant select(id) on test.TEST_HOTSPOT to tidb; drop user tidb; ``` Test results: Supported for creating, modifying, and deleting accounts, and configuring passwords, and supported for the separation of security, audit, and data management Based on different accounts, various permission control for database includes: instance, library, table, and column Test purpose: Verify the permission access control of distributed databases, and control the database data by granting basic CRUD (create, read, update, and delete) permissions Test script: ```sql mysql -u root -h 172.17.49.222 -P 4000 drop user tidb; drop user tidb1; create user tidb IDENTIFIED by 'tidb'; grant select on tidb.* to tidb; grant insert on tidb.* to tidb; grant update on tidb.* to tidb; grant delete on tidb.* to tidb; flush privileges; show grants for tidb; exit; mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'select * from aa;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'insert into aa values(2);' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'update aa set id=3;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'delete from aa where id=3;' ``` Test result: Database data is controlled by granting the basic CRUD permissions Test purpose: Verify the whitelist feature of distributed databases Test script: ```sql mysql -u root -h 172.17.49.102 -P 4000 drop user tidb; create user tidb@'127.0.0.1' IDENTIFIED by 'tidb'; flush privileges; select * from mysql.user where user='tidb'; mysql -u tidb -h 127.0.0.1 -P 4000 -ptidb mysql -u tidb -h 172.17.49.102 -P 4000 -ptidb ``` Test result: Supported for the IP whitelist feature and supportred for matching actions with IP segments Test purpose: Verify the monitor capability to distributed databases Test script: `kubectl -ntidb-cluster logs tidb-test-pd-2 --tail 22` Test result: Record key actions or misoperations performed by users through the operation and maintenance management console or API This section describes the operation and maintenance test. After the database cluster is deployed, the following operation and maintenance tests are all passed. Test purpose: Verify the tools support for importing and exporting data of distributed databases Test script: ```sql select * from sbtest1 into outfile '/sbtest1.csv'; load data local infile '/sbtest1.csv' into table test100; ``` Test result: Supported for importing and exporting table, schema, and database Test purpose: Get the SQL info by slow query Prerequisite: The SQL execution time shall be longer than the configured threshold for slow query, and the SQL execution is completed Test steps: Adjust the slow query threshold to 100 ms Run SQL View the slow query info from log, system table, or dashboard Test script: ```sql show variables like 'tidbslowlog_threshold'; set tidbslowlog_threshold=100; select querytime, query from informationschema.slowquery where isinternal = false order by query_time desc limit 3; ``` Test result: Can get the slow query info. For details about test data, see ."
}
] |
{
"category": "Runtime",
"file_name": "tidb.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "MinIO provides a custom STS API that allows integration with LDAP based corporate environments including Microsoft Active Directory. The MinIO server uses a separate LDAP service account to lookup user information. The login flow for a user is as follows: User provides their AD/LDAP username and password to the STS API. MinIO looks up the user's information (specifically the user's Distinguished Name) in the LDAP server. On finding the user's info, MinIO verifies the login credentials with the AD/LDAP server. MinIO optionally queries the AD/LDAP server for a list of groups that the user is a member of. MinIO then checks if there are any policies with the user or their groups. On finding at least one associated policy, MinIO generates temporary credentials for the user storing the list of groups in a cryptographically secure session token. The temporary access key, secret key and session token are returned to the user. The user can now use these credentials to make requests to the MinIO server. The administrator will associate IAM access policies with each group and if required with the user too. The MinIO server then evaluates applicable policies on a user (these are the policies associated with the groups along with the policy on the user if any) to check if the request should be allowed or denied. To ensure that changes in the LDAP directory are reflected in object storage access changes, MinIO performs an Automatic LDAP sync. MinIO periodically queries the LDAP service to: find accounts (user DNs) that have been removed; any active STS credentials or MinIO service accounts belonging to these users are purged. find accounts whose group memberships have changed; access policies available to a credential are updated to reflect the change, i.e. they will lose any privileges associated with a group they are removed from, and gain any privileges associated with a group they are added to. Please note that when AD/LDAP is configured, MinIO will not support long term users defined internally. Only AD/LDAP users (and the root user) are allowed. In addition to this, the server will not support operations on users or groups using `mc admin user` or `mc admin group` commands except `mc admin user info` and `mc admin group info` to list set policies for users and groups. This is because users and groups are defined externally in AD/LDAP. LDAP STS configuration can be performed via MinIO's standard configuration API (i.e. using `mc admin config set/get` commands) or equivalently via environment variables. For brevity we refer to environment variables here. LDAP is configured via the following environment variables: ``` $ mc admin config set myminio identity_ldap --env KEY: identity_ldap enable LDAP SSO support ARGS: MINIOIDENTITYLDAPSERVERADDR* (address) AD/LDAP server address e.g. \"myldap.com\" or"
},
{
"data": "MINIOIDENTITYLDAPSRVRECORD_NAME (string) DNS SRV record name for LDAP service, if given, must be one of ldap, ldaps or on MINIOIDENTITYLDAPLOOKUPBIND_DN* (string) DN for LDAP read-only service account used to perform DN and group lookups MINIOIDENTITYLDAPLOOKUPBIND_PASSWORD (string) Password for LDAP read-only service account used to perform DN and group lookups MINIOIDENTITYLDAPUSERDNSEARCHBASE_DN* (list) \";\" separated list of user search base DNs e.g. \"dc=myldapserver,dc=com\" MINIOIDENTITYLDAPUSERDNSEARCHFILTER* (string) Search filter to lookup user DN MINIOIDENTITYLDAPGROUPSEARCH_FILTER (string) search filter for groups e.g. \"(&(objectclass=groupOfNames)(memberUid=%s))\" MINIOIDENTITYLDAPGROUPSEARCHBASEDN (list) \";\" separated list of group search base DNs e.g. \"dc=myldapserver,dc=com\" MINIOIDENTITYLDAPTLSSKIP_VERIFY (on|off) trust server TLS without verification, defaults to \"off\" (verify) MINIOIDENTITYLDAPSERVERINSECURE (on|off) allow plain text connection to AD/LDAP server, defaults to \"off\" MINIOIDENTITYLDAPSERVERSTARTTLS (on|off) use StartTLS connection to AD/LDAP server, defaults to \"off\" MINIOIDENTITYLDAP_COMMENT (sentence) optionally add a comment to this setting ``` The variables relevant to configuring connectivity to the LDAP service are: ``` MINIOIDENTITYLDAPSERVERADDR* (address) AD/LDAP server address e.g. \"myldap.com\" or \"myldapserver.com:1686\" MINIOIDENTITYLDAPSRVRECORD_NAME (string) DNS SRV record name for LDAP service, if given, must be one of ldap, ldaps or on MINIOIDENTITYLDAPTLSSKIP_VERIFY (on|off) trust server TLS without verification, defaults to \"off\" (verify) MINIOIDENTITYLDAPSERVERINSECURE (on|off) allow plain text connection to AD/LDAP server, defaults to \"off\" MINIOIDENTITYLDAPSERVERSTARTTLS (on|off) use StartTLS connection to AD/LDAP server, defaults to \"off\" ``` The server address variable is required. TLS is assumed to be on by default. The port in the server address is optional and defaults to 636 if not provided. MinIO sends LDAP credentials to the LDAP server for validation. So we _strongly recommend_ to use MinIO with AD/LDAP server over TLS or StartTLS _only_. Using plain-text connection between MinIO and LDAP server means _credentials can be compromised_ by anyone listening to network traffic. If a self-signed certificate is being used, the certificate can be added to MinIO's certificates directory, so it can be trusted by the server. Many Active Directory and other LDAP services are setup with for high-availability of the directory service. To use this to find LDAP servers to connect to, an LDAP client makes a DNS SRV record request to the DNS service on a domain that looks like `service.proto.example.com`. For LDAP the `proto` value is always `tcp`, and `service` is usually `ldap` or `ldaps`. To enable MinIO to use the SRV records, specify the `srvrecordname` config parameter (or equivalently the `MINIOIDENTITYLDAPSRVRECORDNAME` environment variable). This parameter can be set to `ldap` or `ldaps` and MinIO will substitute it into the `service` value. For example, when `serveraddr=myldapserver.com` and `srvrecordname=ldap`, MinIO will lookup the SRV record for `ldap.tcp.myldapserver.com` and pick an appropriate target for LDAP requests. If the DNS SRV record is at an entirely different place, say `ldapsrv.tcpish.myldapserver.com`, then set `srvrecordname` to the special value `on` and set `serveraddr=ldapsrv._tcpish.myldapserver.com`. 
When using this feature, do not specify a port in the `server_addr` as the port is picked up automatically from the SRV record. With the default (empty) value for `srvrecordname`, MinIO will not perform any SRV record request. The value of `srvrecordname` does not affect any TLS settings - they must be configured with their own parameters. A low-privilege read-only LDAP service account is configured in the MinIO server by providing the account's Distinguished Name (DN) and"
},
{
"data": "This service account is used to perform directory lookups as needed. ``` MINIOIDENTITYLDAPLOOKUPBIND_DN* (string) DN for LDAP read-only service account used to perform DN and group lookups MINIOIDENTITYLDAPLOOKUPBIND_PASSWORD (string) Password for LDAP read-only service account used to perform DN and group lookups ``` If you set an empty lookup bind password, the lookup bind will use the unauthenticated authentication mechanism, as described in . When a user provides their LDAP credentials, MinIO runs a lookup query to find the user's Distinguished Name (DN). The search filter and base DN used in this lookup query are configured via the following variables: ``` MINIOIDENTITYLDAPUSERDNSEARCHBASE_DN* (list) \";\" separated list of user search base DNs e.g. \"dc=myldapserver,dc=com\" MINIOIDENTITYLDAPUSERDNSEARCHFILTER* (string) Search filter to lookup user DN ``` The search filter must use the LDAP username to find the user DN. This is done via . The returned user's DN and their password are then verified with the LDAP server. The user DN may also be associated with an . MinIO can be optionally configured to find the groups of a user from AD/LDAP by specifying the following variables: ``` MINIOIDENTITYLDAPGROUPSEARCH_FILTER (string) search filter for groups e.g. \"(&(objectclass=groupOfNames)(memberUid=%s))\" MINIOIDENTITYLDAPGROUPSEARCHBASEDN (list) \";\" separated list of group search base DNs e.g. \"dc=myldapserver,dc=com\" ``` The search filter must use the username or the DN to find the user's groups. This is done via . A group's DN may be associated with an . If you are using Active directory with nested groups you have to add LDAPMATCHINGRULEINCHAIN: :1.2.840.113556.1.4.1941: to your query. For example: ```shell groupsearchfilter: (&(objectClass=group)(member:1.2.840.113556.1.4.1941:=%d)) userdnsearch_filter: (&(memberOf:1.2.840.113556.1.4.1941:=CN=group,DC=dc,DC=net)(sAMAccountName=%s)) ``` Here are some (minimal) sample settings for development or experimentation: ```shell export MINIOIDENTITYLDAPSERVERADDR=myldapserver.com:636 export MINIOIDENTITYLDAPLOOKUPBIND_DN='cn=admin,dc=min,dc=io' export MINIOIDENTITYLDAPLOOKUPBIND_PASSWORD=admin export MINIOIDENTITYLDAPUSERDNSEARCHBASE_DN='ou=hwengg,dc=min,dc=io' export MINIOIDENTITYLDAPUSERDNSEARCHFILTER='(uid=%s)' export MINIOIDENTITYLDAPTLSSKIP_VERIFY=on ``` In the configuration variables, `%s` is substituted with the username from the STS request and `%d` is substituted with the distinguished username (user DN) of the LDAP user. Please see the following table for which configuration variables support these substitution variables: | Variable | Supported substitutions | ||-| | `MINIOIDENTITYLDAPUSERDNSEARCHFILTER` | `%s` | | `MINIOIDENTITYLDAPGROUPSEARCH_FILTER` | `%s` and `%d` | Access policies may be associated by their name with a group or user directly. Access policies are first defined on the MinIO server using IAM policy JSON syntax. To define a new policy, you can use the . 
Copy the policy into a text file `mypolicy.json` and issue the command like so: ```sh mc admin policy create myminio mypolicy mypolicy.json ``` To associate the policy with an LDAP user or group, use the full DN of the user or group: ```sh mc idp ldap policy attach myminio mypolicy --user='uid=james,cn=accounts,dc=myldapserver,dc=com' ``` ```sh mc idp ldap policy attach myminio mypolicy -group='cn=projectx,ou=groups,ou=hwengg,dc=min,dc=io' ``` To remove a policy association, use the similar `detach` command: ```sh mc idp ldap policy detach myminio mypolicy --user='uid=james,cn=accounts,dc=myldapserver,dc=com' ``` ```sh mc idp ldap policy detach myminio mypolicy -group='cn=projectx,ou=groups,ou=hwengg,dc=min,dc=io' ``` Note that the commands above attempt to validate if the given entity (user or group) exist in the LDAP directory and return an error if they are not"
},
{
"data": "<details><summary> View DEPRECATED older policy association commands</summary> Please do not use these as they may be removed or their behavior may change. ```sh mc admin policy attach myminio mypolicy --user='uid=james,cn=accounts,dc=myldapserver,dc=com' ``` ```sh mc admin policy attach myminio mypolicy --group='cn=projectx,ou=groups,ou=hwengg,dc=min,dc=io' ``` </details> Note that by default no policy is set on a user. Thus even if they successfully authenticate with AD/LDAP credentials, they have no access to object storage as the default access policy is to deny all access. Is AD/LDAP username to login. Application must ask user for this value to successfully obtain rotating access credentials from AssumeRoleWithLDAPIdentity. | Params | Value | | :-- | :-- | | Type | String | | Length Constraints | Minimum length of 2. Maximum length of 2048. | | Required | Yes | Is AD/LDAP username password to login. Application must ask user for this value to successfully obtain rotating access credentials from AssumeRoleWithLDAPIdentity. | Params | Value | | :-- | :-- | | Type | String | | Length Constraints | Minimum length of 4. Maximum length of 2048. | | Required | Yes | Indicates STS API version information, the only supported value is '2011-06-15'. This value is borrowed from AWS STS API documentation for compatibility reasons. | Params | Value | | :-- | :-- | | Type | String | | Required | Yes | The duration, in seconds. The value can range from 900 seconds (15 minutes) up to 365 days. If value is higher than this setting, then operation fails. By default, the value is set to 3600 seconds. | Params | Value | | :-- | :-- | | Type | Integer | | Valid Range | Minimum value of 900. Maximum value of 31536000. | | Required | No | An IAM policy in JSON format that you want to use as an inline session policy. This parameter is optional. Passing policies to this operation returns new temporary credentials. The resulting session's permissions are the intersection of the canned policy name and the policy set here. You cannot use this policy to grant more permissions than those allowed by the canned policy name being assumed. | Params | Value | | :-- | :-- | | Type | String | | Valid Range | Minimum length of 1. Maximum length of 2048. 
| | Required | No | XML response for this API is similar to XML error response for this API is similar to ``` http://minio.cluster:9000?Action=AssumeRoleWithLDAPIdentity&LDAPUsername=foouser&LDAPPassword=foouserpassword&Version=2011-06-15&DurationSeconds=7200 ``` ``` <?xml version=\"1.0\" encoding=\"UTF-8\"?> <AssumeRoleWithLDAPIdentityResponse xmlns=\"https://sts.amazonaws.com/doc/2011-06-15/\"> <AssumeRoleWithLDAPIdentityResult> <AssumedRoleUser> <Arn/> <AssumeRoleId/> </AssumedRoleUser> <Credentials> <AccessKeyId>Y4RJU1RNFGK48LGO9I2S</AccessKeyId> <SecretAccessKey>sYLRKS1Z7hSjluf6gEbb9066hnx315wHTiACPAjg</SecretAccessKey> <Expiration>2019-08-08T20:26:12Z</Expiration> <SessionToken>eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJZNFJKVTFSTkZHSzQ4TEdPOUkyUyIsImF1ZCI6IlBvRWdYUDZ1Vk80NUlzRU5SbmdEWGo1QXU1WWEiLCJhenAiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiZXhwIjoxNTQxODExMDcxLCJpYXQiOjE1NDE4MDc0NzEsImlzcyI6Imh0dHBzOi8vbG9jYWxob3N0Ojk0NDMvb2F1dGgyL3Rva2VuIiwianRpIjoiYTBiMjc2MjktZWUxYS00M2JmLTg3MzktZjMzNzRhNGNkYmMwIn0.ewHqKVFTaP-jkgZrcOEKroNUjk10GEp8bqQjxBbYVovV0nHO985VnRESFbcT6XMDDKHZiWqN2viETX_u3Q-w</SessionToken> </Credentials> </AssumeRoleWithLDAPIdentity> <ResponseMetadata/> </AssumeRoleWithLDAPIdentityResponse> ``` With multiple OU hierarchies for users, and multiple group search base DN's. ``` export MINIOROOTUSER=minio export MINIOROOTPASSWORD=minio123 export MINIOIDENTITYLDAPSERVERADDR='my.ldap-active-dir-server.com:636' export MINIOIDENTITYLDAPLOOKUPBIND_DN='cn=admin,dc=min,dc=io' export MINIOIDENTITYLDAPLOOKUPBIND_PASSWORD=admin export MINIOIDENTITYLDAPGROUPSEARCHBASEDN='dc=minioad,dc=local;dc=somedomain,dc=com' export MINIOIDENTITYLDAPGROUPSEARCH_FILTER='(&(objectclass=groupOfNames)(member=%d))' minio server ~/test ``` You can make sure it works appropriately using our : ``` $ go run ldap.go -u foouser -p foopassword { \"accessKey\": \"NUIBORZYTV2HG2BMRSXR\", \"secretKey\": \"qQlP5O7CFPc5m5IXf1vYhuVTFj7BRVJqh0FqZ86S\", \"expiration\": \"2018-08-21T17:10:29-07:00\", \"sessionToken\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJOVUlCT1JaWVRWMkhHMkJNUlNYUiIsImF1ZCI6IlBvRWdYUDZ1Vk80NUlzRU5SbmdEWGo1QXU1WWEiLCJhenAiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiZXhwIjoxNTM0ODk2NjI5LCJpYXQiOjE1MzQ4OTMwMjksImlzcyI6Imh0dHBzOi8vbG9jYWxob3N0Ojk0NDMvb2F1dGgyL3Rva2VuIiwianRpIjoiNjY2OTZjZTctN2U1Ny00ZjU5LWI0MWQtM2E1YTMzZGZiNjA4In0.eJONnVaSVHypiXKEARSMnSKgr-2mlC2Sr4fEGJitLcJFat3LeNdTHv0oHsv6ZZA3zueVGgFlVXMlREgr9LXA\" } ```"
}
] |
{
"category": "Runtime",
"file_name": "ldap.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Object Storage Layout Changes in v0.10\" layout: docs Ark v0.10 includes breaking changes to where data is stored in your object storage bucket. You'll need to run a if you're upgrading from prior versions of Ark. Prior to v0.10, Ark stored data in an object storage bucket using the following structure: ``` <your-bucket>/ backup-1/ ark-backup.json backup-1.tar.gz backup-1-logs.gz restore-of-backup-1-logs.gz restore-of-backup-1-results.gz backup-2/ ark-backup.json backup-2.tar.gz backup-2-logs.gz restore-of-backup-2-logs.gz restore-of-backup-2-results.gz ... ``` Ark also stored restic data, if applicable, in a separate object storage bucket, structured as: ``` <your-ark-restic-bucket>/[<your-optional-prefix>/] namespace-1/ data/ index/ keys/ snapshots/ config namespace-2/ data/ index/ keys/ snapshots/ config ... ``` As of v0.10, we've reorganized this layout to provide a cleaner and more extensible directory structure. The new layout looks like: ``` <your-bucket>[/<your-prefix>]/ backups/ backup-1/ ark-backup.json backup-1.tar.gz backup-1-logs.gz backup-2/ ark-backup.json backup-2.tar.gz backup-2-logs.gz ... restores/ restore-of-backup-1/ restore-of-backup-1-logs.gz restore-of-backup-1-results.gz restore-of-backup-2/ restore-of-backup-2-logs.gz restore-of-backup-2-results.gz ... restic/ namespace-1/ data/ index/ keys/ snapshots/ config namespace-2/ data/ index/ keys/ snapshots/ config ... ... ``` Before upgrading to v0.10, you'll need to run a one-time upgrade script to rearrange the contents of your existing Ark bucket(s) to be compatible with the new layout. Please note that the following scripts will not migrate existing restore logs/results into the new `restores/` subdirectory. This means that they will not be accessible using `ark restore describe` or `ark restore logs`. They will remain in the relevant backup's subdirectory so they are manually accessible, and will eventually be garbage-collected along with the backup. We've taken this approach in order to keep the migration scripts simple and less error-prone. This script uses , which you can download and install following the instructions . Please read through the script carefully before starting and execute it step-by-step. ```bash ARK_BUCKET=<your-ark-bucket> ARKTEMPMIGRATION_BUCKET=<a-temp-bucket-for-migration> rclone config RCLONEREMOTENAME=<your-remote-name> rclone mkdir ${RCLONEREMOTENAME}:${ARKTEMPMIGRATION_BUCKET} rclone copy ${RCLONEREMOTENAME}:${ARKBUCKET} ${RCLONEREMOTENAME}:${ARKTEMPMIGRATIONBUCKET} rclone check ${RCLONEREMOTENAME}:${ARKBUCKET} ${RCLONEREMOTENAME}:${ARKTEMPMIGRATIONBUCKET} rclone delete ${RCLONEREMOTENAME}:${ARK_BUCKET} rclone copy ${RCLONEREMOTENAME}:${ARKTEMPMIGRATIONBUCKET} ${RCLONEREMOTENAME}:${ARKBUCKET}/backups rclone check ${RCLONEREMOTENAME}:${ARKBUCKET}/backups ${RCLONEREMOTENAME}:${ARKTEMPMIGRATIONBUCKET} ARKRESTICLOCATION=<your-ark-restic-bucket[/optional-prefix]> rclone copy ${RCLONEREMOTENAME}:${ARKRESTICLOCATION} ${RCLONEREMOTENAME}:${ARK_BUCKET}/restic rclone check ${RCLONEREMOTENAME}:${ARKBUCKET}/restic ${RCLONEREMOTENAME}:${ARKRESTIC_LOCATION} kubectl -n heptio-ark delete resticrepositories --all ```"
}
] |
{
"category": "Runtime",
"file_name": "storage-layout-reorg-v0.10.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This resource controls the state of the LINSTOR node connections. Node connections control the DRBD options set on a particular path, such as which protocol and network interface to use. Configures the desired state of the node connections. Selects which connections the resource should apply to. By default, a `LinstorNodeConnection` resource applies to every possible connection in the cluster. A resource applies to a connection if one of the provided selectors match. A selector itself can contain multiple expressions. If multiple expressions are specified in a selector, the connection must match all of them. Every expression requires a label name (`key`) on which it operates and an operator `op`. Depending on the operator, you can also specify specific values for the node label. The following operators are available: `Exists`, the label specified in `key` is present on both nodes of the connection, with any value. This is the default. `DoesNotExist`, the label specified in `key` is not present on the nodes in the connections. `In`, the label specified in `key` matches any of the provided `values` for the nodes in the connection. `NotIn` the label specified in `key` does not match any of the provided `values` for the nodes in the connection. `Same` the label specified in `key` has the same value for the nodes in the connection. `NotSame` the label specified in `key` has different values for the nodes in the connection. This example restricts the resource to connections between nodes matching `example.com/storage: \"yes\"`: ```yaml apiVersion: piraeus.io/v1 kind: LinstorNodeConnection metadata: name: selector spec: selector: matchLabels: key: example.com/storage op: In values: yes ``` This example restricts the resource to connections between nodes in the same region, but different zones. ```yaml apiVersion: piraeus.io/v1 kind: LinstorNodeConnection metadata: name: selector spec: selector: matchLabels: key: topology.kubernetes.io/region op: Same key: topology.kubernetes.io/zone op: NotSame ``` Paths configure one or more network connections between nodes. If a path is configured, LINSTOR will use the given network interface to configure DRBD replication. The network interfaces have to be registered with LINSTOR first, using `linstor node interface create ...`. This example configures all nodes to use the \"data-nic\" network interface instead of the default interface. ```yaml apiVersion: piraeus.io/v1 kind: LinstorNodeConnection metadata: name: network-paths spec: paths: name: path1 interface: data-nic ``` Sets the given properties on the LINSTOR Node Connection level. This example sets the DRBD protocol to `C`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorNodeConnection metadata: name: drbd-options spec: properties: name: DrbdOptions/Net/protocol value: C ``` Reports the actual state of the connections. The Operator reports the current state of the LINSTOR Node Connection through a set of conditions. Conditions are identified by their `type`. | `type` | Explanation | |--|| | `Configured` | The LINSTOR Node Connection is applied to all matching pairs of nodes. |"
}
] |
{
"category": "Runtime",
"file_name": "linstornodeconnection.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for the specified shell Generate the autocompletion script for cilium-health for the specified shell. See each sub-command's help for details on how to use the generated script. ``` -h, --help help for completion ``` ``` -D, --debug Enable debug messages -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Cilium Health Client - Generate the autocompletion script for bash - Generate the autocompletion script for fish - Generate the autocompletion script for powershell - Generate the autocompletion script for zsh"
}
] |
{
"category": "Runtime",
"file_name": "cilium-health_completion.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(network-bridge)= As one of the possible network configuration types under Incus, Incus supports creating and managing network bridges. <!-- Include start bridge intro --> A network bridge creates a virtual L2 Ethernet switch that instance NICs can connect to, making it possible for them to communicate with each other and the host. Incus bridges can leverage underlying native Linux bridges and Open vSwitch. <!-- Include end bridge intro --> The `bridge` network type allows to create an L2 bridge that connects the instances that use it together into a single network L2 segment. Bridges created by Incus are managed, which means that in addition to creating the bridge interface itself, Incus also sets up a local `dnsmasq` process to provide DHCP, IPv6 route announcements and DNS services to the network. By default, it also performs NAT for the bridge. See {ref}`network-bridge-firewall` for instructions on how to configure your firewall to work with Incus bridge networks. <!-- Include start MAC identifier note --> ```{note} Static DHCP assignments depend on the client using its MAC address as the DHCP identifier. This method prevents conflicting leases when copying an instance, and thus makes statically assigned leases work properly. ``` <!-- Include end MAC identifier note --> If you're using IPv6 for your bridge network, you should use a prefix size of 64. Larger subnets (i.e., using a prefix smaller than 64) should work properly too, but they aren't typically that useful for {abbr}`SLAAC (Stateless Address Auto-configuration)`. Smaller subnets are in theory possible (when using stateful DHCPv6 for IPv6 allocation), but they aren't properly supported by `dnsmasq` and might cause problems. If you must create a smaller subnet, use static allocation or another standalone router advertisement daemon. (network-bridge-options)= The following configuration key namespaces are currently supported for the `bridge` network type: `bgp` (BGP peer configuration) `bridge` (L2 interface configuration) `dns` (DNS server and resolution configuration) `ipv4` (L3 IPv4 configuration) `ipv6` (L3 IPv6 configuration) `security` (network ACL configuration) `raw` (raw configuration file content) `tunnel` (cross-host tunneling configuration) `user` (free-form key/value for user metadata) ```{note} {{noteipaddresses_CIDR}} ``` The following configuration options are available for the `bridge` network type: Key | Type | Condition | Default | Description :-- | :-- | :-- | :-- | :-- `bgp.peers.NAME.address` | string | BGP server | - | Peer address (IPv4 or IPv6) `bgp.peers.NAME.asn` | integer | BGP server | - | Peer AS number `bgp.peers.NAME.password` | string | BGP server | - (no password) | Peer session password (optional) `bgp.peers.NAME.holdtime` | integer | BGP server | `180` | Peer session hold time (in seconds; optional) `bgp.ipv4.nexthop` | string | BGP server | local address | Override the next-hop for advertised prefixes `bgp.ipv6.nexthop` | string | BGP server | local address | Override the next-hop for advertised prefixes `bridge.driver` | string | - | `native` | Bridge driver: `native` or `openvswitch` `bridge.external_interfaces` | string | - | - | Comma-separated list of unconfigured network interfaces to include in the bridge `bridge.hwaddr` | string | - | - | MAC address for the bridge"
},
{
"data": "| integer | - | `1500` | Bridge MTU (default varies if tunnel in use) `dns.domain` | string | - | `incus` | Domain to advertise to DHCP clients and use for DNS resolution `dns.mode` | string | - | `managed` | DNS registration mode: `none` for no DNS record, `managed` for Incus-generated static records or `dynamic` for client-generated records `dns.search` | string | - | - | Full comma-separated domain search list, defaulting to `dns.domain` value `dns.zone.forward` | string | - | `managed` | Comma-separated list of DNS zone names for forward DNS records `dns.zone.reverse.ipv4` | string | - | `managed` | DNS zone name for IPv4 reverse DNS records `dns.zone.reverse.ipv6` | string | - | `managed` | DNS zone name for IPv6 reverse DNS records `ipv4.address` | string | standard mode | - (initial value on creation: `auto`) | IPv4 address for the bridge (use `none` to turn off IPv4 or `auto` to generate a new random unused subnet) (CIDR) `ipv4.dhcp` | bool | IPv4 address | `true` | Whether to allocate addresses using DHCP `ipv4.dhcp.expiry` | string | IPv4 DHCP | `1h` | When to expire DHCP leases `ipv4.dhcp.gateway` | string | IPv4 DHCP | IPv4 address | Address of the gateway for the subnet `ipv4.dhcp.ranges` | string | IPv4 DHCP | all addresses | Comma-separated list of IP ranges to use for DHCP (FIRST-LAST format) `ipv4.firewall` | bool | IPv4 address | `true` | Whether to generate filtering firewall rules for this network `ipv4.nat` | bool | IPv4 address | `false` (initial value on creation if `ipv4.address` is set to `auto`: `true`) | Whether to NAT `ipv4.nat.address` | string | IPv4 address | - | The source address used for outbound traffic from the bridge `ipv4.nat.order` | string | IPv4 address | `before` | Whether to add the required NAT rules before or after any pre-existing rules `ipv4.ovn.ranges` | string | - | - | Comma-separated list of IPv4 ranges to use for child OVN network routers (FIRST-LAST format) `ipv4.routes` | string | IPv4 address | - | Comma-separated list of additional IPv4 CIDR subnets to route to the bridge `ipv4.routing` | bool | IPv4 address | `true` | Whether to route traffic in and out of the bridge `ipv6.address` | string | standard mode | - (initial value on creation: `auto`) | IPv6 address for the bridge (use `none` to turn off IPv6 or `auto` to generate a new random unused subnet) (CIDR) `ipv6.dhcp` | bool | IPv6 address | `true` | Whether to provide additional network configuration over DHCP `ipv6.dhcp.expiry` | string | IPv6 DHCP | `1h` | When to expire DHCP leases `ipv6.dhcp.ranges` | string | IPv6 stateful DHCP | all addresses | Comma-separated list of IPv6 ranges to use for DHCP (FIRST-LAST format) `ipv6.dhcp.stateful` | bool | IPv6 DHCP | `false` | Whether to allocate addresses using DHCP `ipv6.firewall` | bool | IPv6 address | `true` | Whether to generate filtering firewall rules for this network"
},
{
"data": "| bool | IPv6 address | `false` (initial value on creation if `ipv6.address` is set to `auto`: `true`) | Whether to NAT `ipv6.nat.address` | string | IPv6 address | - | The source address used for outbound traffic from the bridge `ipv6.nat.order` | string | IPv6 address | `before` | Whether to add the required NAT rules before or after any pre-existing rules `ipv6.ovn.ranges` | string | - | - | Comma-separated list of IPv6 ranges to use for child OVN network routers (FIRST-LAST format) `ipv6.routes` | string | IPv6 address | - | Comma-separated list of additional IPv6 CIDR subnets to route to the bridge `ipv6.routing` | bool | IPv6 address | `true` | Whether to route traffic in and out of the bridge `raw.dnsmasq` | string | - | - | Additional `dnsmasq` configuration to append to the configuration file `security.acls` | string | - | - | Comma-separated list of Network ACLs to apply to NICs connected to this network (see {ref}`network-acls-bridge-limitations`) `security.acls.default.egress.action`| string | `security.acls` | `reject` | Action to use for egress traffic that doesn't match any ACL rule `security.acls.default.egress.logged`| bool | `security.acls` | `false` | Whether to log egress traffic that doesn't match any ACL rule `security.acls.default.ingress.action`| string | `security.acls` | `reject` | Action to use for ingress traffic that doesn't match any ACL rule `security.acls.default.ingress.logged`| bool | `security.acls` | `false` | Whether to log ingress traffic that doesn't match any ACL rule `tunnel.NAME.group` | string | `vxlan` | `239.0.0.1` | Multicast address for `vxlan` (used if local and remote aren't set) `tunnel.NAME.id` | integer | `vxlan` | `0` | Specific tunnel ID to use for the `vxlan` tunnel `tunnel.NAME.interface` | string | `vxlan` | - | Specific host interface to use for the tunnel `tunnel.NAME.local` | string | `gre` or `vxlan` | - | Local address for the tunnel (not necessary for multicast `vxlan`) `tunnel.NAME.port` | integer | `vxlan` | `0` | Specific port to use for the `vxlan` tunnel `tunnel.NAME.protocol` | string | standard mode | - | Tunneling protocol: `vxlan` or `gre` `tunnel.NAME.remote` | string | `gre` or `vxlan` | - | Remote address for the tunnel (not necessary for multicast `vxlan`) `tunnel.NAME.ttl` | integer | `vxlan` | `1` | Specific TTL to use for multicast routing topologies `user.*` | string | - | - | User-provided free-form key/value pairs ```{note} The `bridge.external_interfaces` option supports an extended format allowing the creation of missing VLAN interfaces. The extended format is `<interfaceName>/<parentInterfaceName>/<vlanId>`. When the external interface is added to the list with the extended format, the system will automatically create the interface upon the network's creation and subsequently delete it when the network is terminated. The system verifies that the <interfaceName> does not already exist. If the interface name is in use with a different parent or VLAN ID, or if the creation of the interface is unsuccessful, the system will revert with an error message. ``` (network-bridge-features)= The following features are supported for the `bridge` network type: {ref}`network-acls` {ref}`network-forwards` {ref}`network-zones` {ref}`network-bgp` ```{toctree} :maxdepth: 1 :hidden: Integrate with resolved </howto/networkbridgeresolved> Configure your firewall </howto/networkbridgefirewalld> ```"
}
] |
{
"category": "Runtime",
"file_name": "network_bridge.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Argo Workflows enables us to schedule operations. In the Kanister project, Argo Cron Workflows will be used to automate the creation of ActionSets to execute Blueprint actions at regular intervals. To summarize, ActionSets are CRs that are used to execute actions from Blueprint CRs. The Kanister controller watches for the creation of ActionSets and executes the specified action. In this tutorial, you will schedule the creation of a backup ActionSet using Argo Cron Workflows. Kubernetes `1.20` or higher. A running Kanister controller in the `Kanister` namespace. See `kanctl` CLI installed. See . Download the Argo CLI from their page. Create a separate namespace for the Workflows. ``` bash kubectl create ns argo ``` In this tutorial, the Argo Workflows CRDs and other resources will be deployed on the Kubernetes cluster using the minimal manifest file. ``` bash kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-minimal.yaml -n argo ``` You can install Argo in either cluster scoped or namespace scope configurations. To deploy Argo with custom configuration, download the minimal manifest file and apply the necessary changes. For more information, see . Use `port-forward` to forward a local port to the argo-server pod\\'s port to view the Argo UI: ``` bash kubectl -n argo port-forward deployment/argo-server 2746:2746 ``` Open a web browser and navigate to `localhost:2746` Here, you will reference the example from Kanister. Install the chart and set up MySQL in the `mysql-test` namespace. Integrate it with Kanister by creating a Profile CR in the `mysql-test` namespace and a Blueprint in the `kanister` namespace. Copy and save the names of the MySQL StatefulSet, secrets, Kanister Blueprint, and the Profile CR for the next step. Now, create a Cron Workflow to automate the creation of an ActionSet to backup the MySQL application. The workflow will use `kanctl` to achieve this. Modify the `kanctl` command in the YAML below to specify the names of the Blueprint, Profile, MySQL StatefulSet, and secrets created in the previous step. ``` bash kanctl create actionset --action backup --namespace kanister --blueprint <BLUEPRINTNAME> --statefulset <NAMESPACE/STATEFULSET> --profile <NAMESPACE/PROFILENAME> --secrets <NAMESPACE/SECRETS_NAME> ``` Then execute: ``` yaml cat <<EOF >> mysql-cron-wf.yaml apiVersion: argoproj.io/v1alpha1 kind: CronWorkflow metadata: name: mysql-cron-wf spec: schedule: \"/5 *\" concurrencyPolicy: \"Replace\" workflowSpec: entrypoint: automate-actionset templates: name: automate-actionset container: image: ghcr.io/kanisterio/kanister-tools:0.81.0 command: /bin/bash -c | microdnf install tar curl -LO https://github.com/kanisterio/kanister/releases/download/0.81.0/kanister0.81.0linux_amd64.tar.gz tar -C /usr/local/bin -xvf kanister0.81.0linux_amd64.tar.gz kanctl create actionset --action backup --namespace kanister --blueprint mysql-blueprint --statefulset mysql-test/mysql-release --profile mysql-test/s3-profile-gd4kx --secrets mysql=mysql-test/mysql-release EOF ``` ::: tip NOTE Here, the cron job is scheduled to run every 5 minutes. This means that an ActionSet is created every 5 minutes to perform a backup operation. You may schedule it to run as per your requirements. ::: Next, you will grant the required permissions to the Service Account in the `argo` namespace to access resources in the `kanister` and `mysql-test` namespaces. 
This is required to create CRs based on the Secrets and StatefulSet that you provided in the previous step. You may read more about RBAC authorization here - . Create a RoleBinding named `cron-wf-manager` in the `kanister` and `mysql-test`"
},
{
"data": "Grant the permissions in ClusterRole `cluster-admin` to the default ServiceAccount named `default` in the `argo` namespace. Execute the following command: ``` bash kubectl create rolebinding cron-wf-manager --clusterrole=cluster-admin --serviceaccount=argo:default -n kanister ``` ``` bash kubectl create rolebinding cron-wf-manager --clusterrole=cluster-admin --serviceaccount=argo:default -n mysql-test ``` ::: tip NOTE It is not recommended to grant the `cluster-admin` privileges to the `default` ServiceAccount in production. You must create a separate Role or a ClusterRole to grant specific access for allowing the creation of Custom Resources (ActionSets) in the `kanister` namespace. ::: Launch the workflow in the `argo` namespace by running the following command: ``` bash argo cron create mysql-cron-wf.yaml -n argo ``` Check if the workflow was created by running: ``` bash argo cron list -n argo ``` When the workflow runs, check if the ActionSet was created in the `kanister` namespace: ``` bash kubectl get actionsets.cr.kanister.io -n kanister ``` The output should be similar to the sample output below. ``` bash $ argo cron create mysql-cron-wf.yaml -n argo Name: mysql-cron-wf Namespace: argo Created: Fri Jul 22 10:23:09 -0400 (now) Schedule: /5 * Suspended: false ConcurrencyPolicy: Replace NextScheduledTime: Fri Jul 22 10:25:00 -0400 (1 minute from now) (assumes workflow-controller is in UTC) $ argo cron list -n argo NAME AGE LAST RUN NEXT RUN SCHEDULE TIMEZONE SUSPENDED mysql-cron-wf 12s N/A 1m /5 * false $ argo cron list -n argo NAME AGE LAST RUN NEXT RUN SCHEDULE TIMEZONE SUSPENDED mysql-cron-wf 4m 2m 2m /5 * false $ kubectl get actionsets.cr.kanister.io -n kanister NAME AGE backup-478lk 2m28s ``` In the above example, the workflow was created and scheduled to run in 1 minute. This scheduled time can be anywhere between 1 to 5 minutes for you. Once the workflow runs successfully, the `LAST RUN` field is updated with the timestamp of the last run. Along with this, a backup ActionSet must be created. The creation time of the ActionSet is indicated by the `AGE` field as seen above. You should see the workflow on the Argo UI under the Cron Workflows tab. On clicking on the workflow name, you will see its status. If the Cron Workflow does not run, check if the pod to run the workflow was created in the `argo` namespace. Examine the logs of this pod. ``` bash kubectl logs <NAMEOFMYSQLCRONWORKFLOW_POD> -n argo ``` If this pod was not created, examine the logs of the Argo Workflow Controller in the `argo` namespace. ``` bash kubectl logs <NAMEOFWORKFLOW_CONTROLLER> -n argo ``` If the logs mention that you have not granted the right permissions to the ServiceAccounts, circle back to Step 4 and verify your RBAC configuration. Your ServiceAccount should have access to the requested resources. ``` bash kubectl get serviceaccounts -n argo ``` Delete the cron workflow by running the following. Verify the name of your workflow before deleting it. Verify workflow name: ``` bash argo cron list -n argo ``` Delete workflow: ``` bash argo cron delete mysql-cron-wf -n argo ``` Deleting the Argo CRDs and other resources: ``` bash kubectl delete -f quick-start-minimal.yaml ``` Deleting the Argo namespace: ``` bash kubectl delete namespace argo ```"
}
] |
{
"category": "Runtime",
"file_name": "argo.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Output file format\" layout: docs A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `velero backup create <NAME>`). In cloud object storage, each backup file is stored in its own subdirectory in the bucket specified in the Velero server configuration. This subdirectory includes an additional file called `velero-backup.json`. The JSON file lists all information about your associated Backup resource, including any default values. This gives you a complete historical record of the backup configuration. The JSON file also specifies `status.version`, which corresponds to the output file format. The directory structure in your cloud storage looks something like: ``` rootBucket/ backup1234/ velero-backup.json backup1234.tar.gz ``` ```json { \"kind\": \"Backup\", \"apiVersion\": \"velero.io/v1\", \"metadata\": { \"name\": \"test-backup\", \"namespace\": \"velero\", \"selfLink\": \"/apis/velero.io/v1/namespaces/velero/backups/test-backup\", \"uid\": \"a12345cb-75f5-11e7-b4c2-abcdef123456\", \"resourceVersion\": \"337075\", \"creationTimestamp\": \"2017-07-31T13:39:15Z\" }, \"spec\": { \"includedNamespaces\": [ \"*\" ], \"excludedNamespaces\": null, \"includedResources\": [ \"*\" ], \"excludedResources\": null, \"labelSelector\": null, \"snapshotVolumes\": true, \"ttl\": \"24h0m0s\" }, \"status\": { \"version\": 1, \"formatVersion\": \"1.1.0\", \"expiration\": \"2017-08-01T13:39:15Z\", \"phase\": \"Completed\", \"volumeBackups\": { \"pvc-e1e2d345-7583-11e7-b4c2-abcdef123456\": { \"snapshotID\": \"snap-04b1a8e11dfb33ab0\", \"type\": \"gp2\", \"iops\": 100 } }, \"validationErrors\": null } } ``` Note that this file includes detailed info about your volume snapshots in the `status.volumeBackups` field, which can be helpful if you want to manually check them in your cloud provider GUI. The Velero output file format is intended to be relatively stable, but may change over time to support new features. To accommodate this, Velero follows for the file format version. Minor and patch versions will indicate backwards-compatible changes that previous versions of Velero can restore, including new directories or files. A major version would indicate that a version of Velero older than the version that created the backup could not restore it, usually because of moved or renamed directories or files. Major versions of the file format will be incremented with major version releases of"
},
{
"data": "However, a major version release of Velero does not necessarily mean that the backup format version changed - Velero 3.0 could still use backup file format 2.0, as an example. Version 1.1 added support of API groups versions as part of the backup. Previously, only the preferred version of each API groups was backed up. Each resource has one or more sub-directories: one sub-directory for each supported version of the API group. The preferred version API Group of each resource has the suffix \"-preferredversion\" as part of the sub-directory name. For backward compatibility, we kept the classic directory structure without the API group version, which sits on the same level as the API group sub-directory versions. By default, only the preferred API group of each resource is backed up. To take a backup of all API group versions, you need to run the Velero server with the `--features=EnableAPIGroupVersions` feature flag. This is an experimental flag and the restore logic to handle multiple API group versions is documented at . When unzipped, a typical backup directory (`backup1234.tar.gz`) taken with this file format version looks like the following (with the feature flag): ``` resources/ persistentvolumes/ cluster/ pv01.json ... v1-preferredversion/ cluster/ pv01.json ... configmaps/ namespaces/ namespace1/ myconfigmap.json ... namespace2/ ... v1-preferredversion/ namespaces/ namespace1/ myconfigmap.json ... namespace2/ ... pods/ namespaces/ namespace1/ mypod.json ... namespace2/ ... v1-preferredversion/ namespaces/ namespace1/ mypod.json ... namespace2/ ... jobs.batch/ namespaces/ namespace1/ awesome-job.json ... namespace2/ ... v1-preferredversion/ namespaces/ namespace1/ awesome-job.json ... namespace2/ ... deployments/ namespaces/ namespace1/ cool-deployment.json ... namespace2/ ... v1-preferredversion/ namespaces/ namespace1/ cool-deployment.json ... namespace2/ ... horizontalpodautoscalers.autoscaling/ namespaces/ namespace1/ hpa-to-the-rescue.json ... namespace2/ ... v1-preferredversion/ namespaces/ namespace1/ hpa-to-the-rescue.json ... namespace2/ ... v2beta1/ namespaces/ namespace1/ hpa-to-the-rescue.json ... namespace2/ ... v2beta2/ namespaces/ namespace1/ hpa-to-the-rescue.json ... namespace2/ ... ... ``` When unzipped, a typical backup directory (`backup1234.tar.gz`) looks like the following: ``` resources/ persistentvolumes/ cluster/ pv01.json ... configmaps/ namespaces/ namespace1/ myconfigmap.json ... namespace2/ ... pods/ namespaces/ namespace1/ mypod.json ... namespace2/ ... jobs/ namespaces/ namespace1/ awesome-job.json ... namespace2/ ... deployments/ namespaces/ namespace1/ cool-deployment.json ... namespace2/ ... ... ```"
}
] |
{
"category": "Runtime",
"file_name": "output-file-format.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(storage-cephfs)= % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph intro --> :end-before: <!-- Include end Ceph intro --> ``` {abbr}`CephFS (Ceph File System)` is Ceph's file system component that provides a robust, fully-featured POSIX-compliant distributed file system. Internally, it maps files to Ceph objects and stores file metadata (for example, file ownership, directory paths, access permissions) in a separate data pool. % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph terminology --> :end-before: <!-- Include end Ceph terminology --> ``` A CephFS file system consists of two OSD storage pools, one for the actual data and one for the file metadata. ```{note} The `cephfs` driver can only be used for custom storage volumes with content type `filesystem`. For other storage volumes, use the {ref}`Ceph <storage-ceph>` driver. That driver can also be used for custom storage volumes with content type `filesystem`, but it implements them through Ceph RBD images. ``` % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph driver cluster --> :end-before: <!-- Include end Ceph driver cluster --> ``` You can either create the CephFS file system that you want to use beforehand and specify it through the option, or specify the option to automatically create the file system and the data and metadata OSD pools (with the names given in and ). % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph driver remote --> :end-before: <!-- Include end Ceph driver remote --> ``` % Include content from ```{include} storage_ceph.md :start-after: <!-- Include start Ceph driver control --> :end-before: <!-- Include end Ceph driver control --> ``` The `cephfs` driver in Incus supports snapshots if snapshots are enabled on the server side. The following configuration options are available for storage pools that use the `cephfs` driver and for storage volumes in these pools. 
(storage-cephfs-pool-config)= Key | Type | Default | Description :-- | : | : | :- `cephfs.cluster_name` | string | `ceph` | Name of the Ceph cluster that contains the CephFS file system `cephfs.create_missing` | bool | `false` | Create the file system and the missing data and metadata OSD pools `cephfs.data_pool` | string | - | Data OSD pool name to create for the file system `cephfs.fscache` | bool | `false` | Enable use of kernel `fscache` and `cachefilesd` `cephfs.meta_pool` | string | - | Metadata OSD pool name to create for the file system `cephfs.osdpgnum` | string | - | OSD pool `pg_num` to use when creating missing OSD pools `cephfs.path` | string | `/` | The base path for the CephFS mount `cephfs.user.name` | string | `admin` | The Ceph user to use `source` | string | - | Existing CephFS file system or file system path to use `volatile.pool.pristine` | string | `true` | Whether the CephFS file system was empty on creation time {{volume_configuration}} Key | Type | Condition | Default | Description :-- | : | :-- | : | :- `security.shared` | bool | custom block volume | same as `volume.security.shared` or `false` | Enable sharing the volume across multiple instances `security.shifted` | bool | custom volume | same as `volume.security.shifted` or `false` | {{enableIDshifting}} `security.unmapped` | bool | custom volume | same as `volume.security.unmapped` or `false` | Disable ID mapping for the volume `size` | string | appropriate driver | same as `volume.size` | Size/quota of the storage volume `snapshots.expiry` | string | custom volume | same as `volume.snapshots.expiry` | {{snapshotexpiryformat}} `snapshots.pattern` | string | custom volume | same as `volume.snapshots.pattern` or `snap%d` | {{snapshotpatternformat}} [^*] `snapshots.schedule` | string | custom volume | same as `volume.snapshots.schedule` | {{snapshotscheduleformat}}"
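As a hedged example of how these options fit together, a `cephfs` pool and a custom volume could be created as shown below; the pool name, Ceph user and file system name are placeholders for your own cluster.

```bash
# Create a storage pool backed by an existing CephFS file system
incus storage create remote-fs cephfs source=my-filesystem cephfs.cluster_name=ceph cephfs.user.name=admin

# Create a custom filesystem volume in that pool
incus storage volume create remote-fs my-volume
```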
}
] |
{
"category": "Runtime",
"file_name": "storage_cephfs.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
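To make the CephFS driver entry above more concrete, the following is a minimal, hypothetical sketch of creating a `cephfs` storage pool and a custom volume with Incus. The pool name (`remote-fs`), file system name (`my-filesystem`), OSD pool names, and volume name are placeholders rather than values from the original documentation; adjust them and the cluster/user settings to your environment.

```bash
# Use an existing CephFS file system (created beforehand on the Ceph side).
incus storage create remote-fs cephfs \
    source=my-filesystem \
    cephfs.cluster_name=ceph \
    cephfs.user.name=admin

# Alternatively, let the driver create the file system and its OSD pools.
incus storage create remote-fs cephfs \
    cephfs.create_missing=true \
    cephfs.data_pool=cephfs_data \
    cephfs.meta_pool=cephfs_meta

# The driver only supports custom storage volumes with content type "filesystem".
incus storage volume create remote-fs my-volume
```

Both `incus storage create` invocations target the same pool name for illustration; in practice you would run only one of them.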
[
{
"data": "Vineyard follows the . In cases of abusive, harassing, or any unacceptable behaviors, please don't hesitate to contact the project team at ."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "Vineyard",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Upgrading to Velero 1.14\" layout: docs Velero installed. If you're not yet running at least Velero v1.8, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Caution: Starting in Velero v1.10, kopia has replaced restic as the default uploader. It is now possible to upgrade from a version >= v1.10 directly. However, the procedure for upgrading to v1.13 from a Velero release lower than v1.10 is different. Install the Velero v1.13 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.14.0 Git commit: <git SHA> ``` Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: Since velero v1.10.0 only v1 CRD will be supported during installation, therefore, the v1.10.0 will only work on Kubernetes version >= v1.16 Delete the CSI plugin. Because the Velero CSI plugin is already merged into the Velero, need to remove the existing CSI plugin InitContainer. Otherwise, the Velero server plugin would fail to start due to same plugin registered twice. Please find more information of CSI plugin merging in this page [csi]. If the plugin move CLI fails due to `not found`, that is caused by the Velero CSI plugin not installed before upgrade. It's safe to ignore the error. ``` bash velero plugin remove velero-velero-plugin-for-csi; echo 0 ``` Update the container image used by the Velero deployment, plugin and (optionally) the node agent daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.14.0 \\ velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.10.0 \\ --namespace velero kubectl set image daemonset/node-agent \\ node-agent=velero/velero:v1.14.0 \\ --namespace velero ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.14.0 Git commit: <git SHA> Server: Version: v1.14.0 ``` The procedure for upgrading from a version lower than v1.10.0 is identical to the procedure above, except for step 4 as shown below. 
Update the container image and object fields used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl get deploy -n velero -ojson \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.14.0\\\"#g\" \\ | sed \"s#\\\"server\\\",#\\\"server\\\",\\\"--uploader-type=$uploader_type\\\",#g\" \\ | sed \"s#default-volumes-to-restic#default-volumes-to-fs-backup#g\" \\ | sed \"s#default-restic-prune-frequency#default-repo-maintain-frequency#g\" \\ | sed \"s#restic-timeout#fs-backup-timeout#g\" \\ | kubectl apply -f - echo $(kubectl get ds -n velero restic -ojson) \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.14.0\\\"#g\" \\ | sed \"s#\\\"name\\\"\\: \\\"restic\\\"#\\\"name\\\"\\: \\\"node-agent\\\"#g\" \\ | sed \"s#\\[ \\\"restic\\\",#\\[ \\\"node-agent\\\",#g\" \\ | kubectl apply -f - kubectl delete ds -n velero restic --force --grace-period 0 ``` If upgrading from Velero v1.9.x or lower, some unused resources will likely be left over in the cluster. These can be deleted manually (e.g. using kubectl) at your own discretion: the resticrepository CRD and related CRs, and the velero-restic-credentials secret in the Velero install namespace"
}
] |
{
"category": "Runtime",
"file_name": "upgrade-to-1.14.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
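As a follow-up to the upgrade steps in the entry above, a quick verification pass can confirm that the new image is actually in place. This is a suggested check rather than part of the official procedure, and it assumes Velero runs in the `velero` namespace with the resource names used above.

```bash
# Show the image currently configured on the Velero deployment and node agent.
kubectl get deployment/velero -n velero \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl get daemonset/node-agent -n velero \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# Client and server should both report v1.14.0 once the rollout completes.
velero version
```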
[
{
"data": "title: \"Examples\" layout: docs After you set up the Velero server, you can clone the examples used in the following sections by running the following: ``` git clone https://github.com/vmware-tanzu/velero.git cd velero ``` Start the sample nginx app: ```bash kubectl apply -f examples/nginx-app/base.yaml ``` Create a backup: ```bash velero backup create nginx-backup --include-namespaces nginx-example ``` Simulate a disaster: ```bash kubectl delete namespaces nginx-example ``` Wait for the namespace to be deleted. Restore your lost resources: ```bash velero restore create --from-backup nginx-backup ``` NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks. Start the sample nginx app: ```bash kubectl apply -f examples/nginx-app/with-pv.yaml ``` Create a backup with PV snapshotting: ```bash velero backup create nginx-backup --include-namespaces nginx-example ``` Simulate a disaster: ```bash kubectl delete namespaces nginx-example ``` Because the default for dynamically-provisioned PVs is \"Delete\", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists. Restore your lost resources: ```bash velero restore create --from-backup nginx-backup ```"
}
] |
{
"category": "Runtime",
"file_name": "examples.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
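Building on the disaster-recovery walkthrough in the entry above, it can be useful to inspect the state of the backup and restore at each step. The commands below are a suggested addition, not part of the original example; the resource names (`nginx-backup`, `nginx-example`) match the walkthrough.

```bash
# Confirm the backup completed and list the resources it captured.
velero backup describe nginx-backup --details

# Check the status of restores created from the backup.
velero restore get

# After the restore, verify the application resources exist again.
kubectl get all --namespace nginx-example
```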
[
{
"data": "A utilizes a Virtual Machine (VM) to enhance security and isolation of container workloads. As a result, the system has a number of differences and limitations when compared with the default runtime, . Some of these limitations have potential solutions, whereas others exist due to fundamental architectural differences generally related to the use of VMs. The launches each container within its own hardware isolated VM, and each VM has its own kernel. Due to this higher degree of isolation, certain container capabilities cannot be supported or are implicitly enabled through the VM. The (\"OCI spec\") defines the minimum specifications a runtime must support to interoperate with container managers such as Docker. If a runtime does not support some aspect of the OCI spec, it is by definition a limitation. However, the OCI runtime reference implementation (`runc`) does not perfectly align with the OCI spec itself. Further, since the default OCI runtime used by Docker is `runc`, Docker expects runtimes to behave as `runc` does. This implies that another form of limitation arises if the behavior of a runtime implementation does not align with that of `runc`. Having two standards complicates the challenge of supporting a Docker environment since a runtime must support the official OCI spec and the non-standard extensions provided by `runc`. Each known limitation is captured in a separate GitHub issue that contains detailed information about the issue. These issues are tagged with the `limitation` label. This document is a curated summary of important known limitations and provides links to the relevant GitHub issues. The following link shows the latest list of limitations: https://github.com/pulls?utf8=%E2%9C%93&q=is%3Aopen+label%3Alimitation+org%3Akata-containers If you would like to work on resolving a limitation, please refer to the . If you wish to raise an issue for a new limitation, either or see the for advice on which repository to raise the issue against. This section lists items that might be possible to fix. Currently Kata Containers does not support Podman. See issue https://github.com/kata-containers/kata-containers/issues/722 for more information. Docker supports Kata Containers since 22.06: ```bash $ sudo docker run --runtime io.containerd.kata.v2 ``` Kata Containers works perfectly with containerd, we recommend to use containerd's Docker-style command line tool . The runtime does not provide `checkpoint` and `restore` commands. There are discussions about using VM save and restore to give us a -like functionality, which might provide a solution. Note that the OCI standard does not specify `checkpoint` and `restore` commands. See issue https://github.com/kata-containers/runtime/issues/184 for more information. The runtime does not fully implement the `events` command. `OOM` notifications and `Intel RDT` stats are not fully supported. Note that the OCI standard does not specify an `events` command. See issue https://github.com/kata-containers/runtime/issues/308 and https://github.com/kata-containers/runtime/issues/309 for more information. Currently, only block I/O weight is not supported. All other configurations are supported and are working properly. Host network (`nerdctl/docker run --net=host`or ) is not supported. It is not possible to directly access the host networking configuration from within the VM. The `--net=host` option can still be used with `runc` containers and inter-mixed with running Kata Containers, thus enabling use of `--net=host` when"
},
{
"data": "It should be noted, currently passing the `--net=host` option into a Kata Container may result in the Kata Container networking setup modifying, re-configuring and therefore possibly breaking the host networking setup. Do not use `--net=host` with Kata Containers. Docker supports the ability for containers to join another containers namespace with the `docker run --net=containers` syntax. This allows multiple containers to share a common network namespace and the network interfaces placed in the network namespace. Kata Containers does not support network namespace sharing. If a Kata Container is setup to share the network namespace of a `runc` container, the runtime effectively takes over all the network interfaces assigned to the namespace and binds them to the VM. Consequently, the `runc` container loses its network connectivity. The runtime does not support the `docker run --link` command. This command is now deprecated by docker and we have no intention of adding support. Equivalent functionality can be achieved with the newer docker networking commands. See more documentation at . Due to the way VMs differ in their CPU and memory allocation, and sharing across the host system, the implementation of an equivalent method for these commands is potentially challenging. See issue https://github.com/clearcontainers/runtime/issues/341 and for more information. For CPUs resource management see . . This section lists items that might not be fixed due to fundamental architectural differences between \"soft containers\" (i.e. traditional Linux* containers) and those based on VMs. Kubernetes `volumeMount.subPath` is not supported by Kata Containers at the moment. See for more details. focuses on the case of `emptyDir`. Privileged support in Kata is essentially different from `runc` containers. The container runs with elevated capabilities within the guest and is granted access to guest devices instead of the host devices. This is also true with using `securityContext privileged=true` with Kubernetes. The container may also be granted full access to a subset of host devices (https://github.com/kata-containers/runtime/issues/1568). See for how to configure some of this behavior. Applying resource constraints such as cgroup, CPU, memory, and storage to a workload is not always straightforward with a VM based system. A Kata Container runs in an isolated environment inside a virtual machine. This, coupled with the architecture of Kata Containers, offers many more possibilities than are available to traditional Linux containers due to the various layers and contexts. In some cases it might be necessary to apply the constraints to multiple levels. In other cases, the hardware isolated VM provides equivalent functionality to the the requested constraint. The following examples outline some of the various areas constraints can be applied: Inside the VM Constrain the guest kernel. This can be achieved by passing particular values through the kernel command line used to boot the guest kernel. Alternatively, sysctl values can be applied at early boot. Inside the container Constrain the container created inside the VM. Outside the VM: Constrain the hypervisor process by applying host-level constraints. Constrain all processes running inside the hypervisor. This can be achieved by specifying particular hypervisor configuration options. 
Note that in some circumstances it might be necessary to apply particular constraints to more than one of the previous areas to achieve the desired level of isolation and resource control."
}
] |
{
"category": "Runtime",
"file_name": "Limitations.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
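To illustrate how the resource constraints discussed in the entry above are applied in practice, here is a hypothetical example using containerd's `nerdctl` CLI with the Kata runtime. The image and limit values are placeholders; how the limits map onto the VM (guest sizing versus host-level cgroups) depends on the Kata configuration.

```bash
# Run a workload under the Kata runtime with explicit CPU and memory limits.
sudo nerdctl run --rm \
    --runtime io.containerd.kata.v2 \
    --cpus 2 \
    --memory 512m \
    alpine sh -c 'nproc; free -m'
```

Inside the guest, `nproc` and `free -m` report the resources visible to the VM, which is a convenient way to see where a given constraint actually lands.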