content (list, lengths 1–171) | tag (dict)
---|---|
[
{
"data": "- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"
}
] |
{
"category": "Runtime",
"file_name": "SUMMARY.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "go-md2man 1 \"January 2015\" go-md2man \"User Manual\" ================================================== go-md2man - Convert mardown files into manpages go-md2man -in=[/path/to/md/file] -out=[/path/to/output] go-md2man converts standard markdown formatted documents into manpages. It is written purely in Go so as to reduce dependencies on 3rd party libs. By default, the input is stdin and the output is stdout. Convert the markdown file \"go-md2man.1.md\" into a manpage. go-md2man -in=README.md -out=go-md2man.1.out January 2015, Originally compiled by Brian Goff( [email protected] )"
}
] |
{
"category": "Runtime",
"file_name": "go-md2man.1.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
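The go-md2man record above documents the `-in`/`-out` flags and the stdin/stdout defaults. The short sketch below simply puts those two invocation styles side by side; the output file name and the `man ./...` preview step are illustrative additions, not taken from the record.

```bash
# Explicit input/output paths, using the flags described in the record
go-md2man -in=go-md2man.1.md -out=go-md2man.1

# Same conversion relying on the documented stdin/stdout defaults,
# then previewing the generated manpage locally (illustrative)
go-md2man < go-md2man.1.md > go-md2man.1 && man ./go-md2man.1
```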
[
{
"data": "This document holds information about modifying or replacing builtin defaults and is only recommended to advanced users. Please make sure to have read the before treading into these things. If a network has a name \"default\", it will override the default network added by rkt. It is strongly recommended that such network also has type \"ptp\" as it protects from the pod spoofing its IP address and defeating identity management provided by the metadata service. The network backend CNI allows the passing of , specifically `CNI_ARGS`, at runtime. These arguments can be used to reconfigure a network without changing the configuration file. rkt supports the `CNI_ARGS` variable through the command line argument `--net`. The syntax for passing arguments to a network looks like `--net=\"$networkname1:$arg1=$val1;$arg2=val2\"`. When executed from a shell, you can use double quotes to avoid `;` being interpreted as a command separator by the shell. To allow the passing of arguments to different networks simply append the arguments to the network name with a colon (`:`), and separate the arguments by semicolon (`;`). All arguments can either be given in a single instance of the `--net`, or can be spread across multiple uses of `--net`. Reminder: the separator for the networks (and their arguments) within one `--net` instance is the comma `,`. A network name must not be passed more than once, not within the same nor throughout multiple instances of `--net`. This example will override the IP in the networks net1 and net2. ```bash rkt run --net=\"net1:IP=1.2.3.4\" --net=\"net2:IP=1.2.4.5\" pod.aci ``` This example will load all configured networks and override the IP addresses for net1 and net2. ```bash rkt run --net=\"all,net1:IP=1.2.3.4\" --net=\"net2:IP=1.2.4.5\" pod.aci ``` This is not documented yet. Please follow CNI issue to track the progress of the documentation."
}
] |
{
"category": "Runtime",
"file_name": "overriding-defaults.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
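As a companion to the rkt record above, here is a minimal sketch of the two equivalent ways it describes for passing `CNI_ARGS`: networks separated by commas inside one `--net` instance, or spread across multiple `--net` flags. Network names and IPs reuse the record's own example values.

```bash
# Single --net instance: comma separates networks, colon introduces per-network args
rkt run --net="net1:IP=1.2.3.4,net2:IP=1.2.4.5" pod.aci

# Equivalent form with the same arguments spread across multiple --net flags
rkt run --net="net1:IP=1.2.3.4" --net="net2:IP=1.2.4.5" pod.aci
```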
[
{
"data": "This is a documentation for the legacy JetStream API. A README for the current API can be found ```go import \"github.com/nats-io/nats.go\" // Connect to NATS nc, _ := nats.Connect(nats.DefaultURL) // Create JetStream Context js, _ := nc.JetStream(nats.PublishAsyncMaxPending(256)) // Simple Stream Publisher js.Publish(\"ORDERS.scratch\", []byte(\"hello\")) // Simple Async Stream Publisher for i := 0; i < 500; i++ { js.PublishAsync(\"ORDERS.scratch\", []byte(\"hello\")) } select { case <-js.PublishAsyncComplete(): case <-time.After(5 * time.Second): fmt.Println(\"Did not resolve in time\") } // Simple Async Ephemeral Consumer js.Subscribe(\"ORDERS.\", func(m nats.Msg) { fmt.Printf(\"Received a JetStream message: %s\\n\", string(m.Data)) }) // Simple Sync Durable Consumer (optional SubOpts at the end) sub, err := js.SubscribeSync(\"ORDERS.*\", nats.Durable(\"MONITOR\"), nats.MaxDeliver(3)) m, err := sub.NextMsg(timeout) // Simple Pull Consumer sub, err := js.PullSubscribe(\"ORDERS.*\", \"MONITOR\") msgs, err := sub.Fetch(10) // Unsubscribe sub.Unsubscribe() // Drain sub.Drain() ``` ```go import \"github.com/nats-io/nats.go\" // Connect to NATS nc, _ := nats.Connect(nats.DefaultURL) // Create JetStream Context js, _ := nc.JetStream() // Create a Stream js.AddStream(&nats.StreamConfig{ Name: \"ORDERS\", Subjects: []string{\"ORDERS.*\"}, }) // Update a Stream js.UpdateStream(&nats.StreamConfig{ Name: \"ORDERS\", MaxBytes: 8, }) // Create a Consumer js.AddConsumer(\"ORDERS\", &nats.ConsumerConfig{ Durable: \"MONITOR\", }) // Delete Consumer js.DeleteConsumer(\"ORDERS\", \"MONITOR\") // Delete Stream js.DeleteStream(\"ORDERS\") ```"
}
] |
{
"category": "Runtime",
"file_name": "legacy_jetstream.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Enable API Group Versions Feature\" layout: docs Velero serves to both restore and migrate Kubernetes applications. Typically, backup and restore does not involve upgrading Kubernetes API group versions. However, when migrating from a source cluster to a destination cluster, it is not unusual to see the API group versions differing between clusters. NOTE: Kubernetes applications are made up of various resources. Common resources are pods, jobs, and deployments. Custom resources are created via custom resource definitions (CRDs). Every resource, whether custom or not, is part of a group, and each group has a version called the API group version. Kubernetes by default allows changing API group versions between clusters as long as the upgrade is a single version, for example, v1 -> v2beta1. Jumping multiple versions, for example, v1 -> v3, is not supported out of the box. This is where the Velero Enable API Group Version feature can help you during an upgrade. Currently, the Enable API Group Version feature is in beta and can be enabled by installing Velero with a , `--features=EnableAPIGroupVersions`. For the most up-to-date information on Kubernetes API version compatibility, you should always review the for the source and destination cluster version to before starting an upgrade, migration, or restore. If there is a difference between Kubernetes API versions, use the Enable API Group Version feature to help mitigate compatibility issues. When the Enable API Group Versions feature is enabled on the source cluster, Velero will not only back up Kubernetes preferred API group versions, but it will also back up all supported versions on the cluster. As an example, consider the resource `horizontalpodautoscalers` which falls under the `autoscaling` group. Without the feature flag enabled, only the preferred API group version for autoscaling, `v1` will be backed up. With the feature enabled, the remaining supported versions, `v2beta1` and `v2beta2` will also be backed up. Once the versions are stored in the backup tarball file, they will be available to be restored on the destination cluster. When the Enable API Group Versions feature is enabled on the destination cluster, Velero restore will choose the version to restore based on an API group version priority order. The version priorities are listed from highest to lowest priority below: Priority 1: destination cluster preferred version Priority 2: source cluster preferred version Priority 3: non-preferred common supported version with the highest The highest priority (Priority 1) will be the destination cluster's preferred API group version. If the destination preferred version is found in the backup tarball, it will be the API group version chosen for restoration for that resource. However, if the destination preferred version is not found in the backup tarball, the next version in the list will be selected: the source cluster preferred version (Priority 2). If the source cluster preferred version is found to be supported by the destination cluster, it will be chosen as the API group version to restore. However, if the source preferred version is not supported by the destination cluster, then the next version in the list will be considered: a non-preferred common supported version (Priority"
},
{
"data": "In the case that there are more than one non-preferred common supported version, which version will be chosen? The answer requires understanding the . Kubernetes prioritizes group versions by making the latest, most stable version the highest priority. The highest priority version is the Kubernetes preferred version. Here is a sorted version list example from the Kubernetes.io documentation: v10 v2 v1 v11beta2 v10beta3 v3beta1 v12alpha1 v11alpha2 foo1 foo10 Of the non-preferred common versions, the version that has the highest Kubernetes version priority will be chosen. See the example for Priority 3 below. To better understand which API group version will be chosen, the following provides some concrete examples. The examples use the term \"target cluster\" which is synonymous to \"destination cluster\". on source cluster with the . The flag is `--features=EnableAPIGroupVersions`. For the enable API group versions feature to work, the feature flag needs to be used for Velero installations on both the source and destination clusters. Back up and restore following the . Note that \"Cluster 1\" in the instructions refers to the source cluster, and \"Cluster 2\" refers to the destination cluster. Optionally, users can create a config map to override the default API group prioritization for some or all of the resources being migrated. For each resource that is specified by the user, Velero will search for the version in both the backup tarball and the destination cluster. If there is a match, the user-specified API group version will be restored. If the backup tarball and the destination cluster does not have or support any of the user-specified versions, then the default version prioritization will be used. Here are the steps for creating a config map that allows users to override the default version prioritization. These steps must happen on the destination cluster before a Velero restore is initiated. Create a file called `restoreResourcesVersionPriority`. The file name will become a key in the `data` field of the config map. In the file, write a line for each resource group you'd like to override. Make sure each line follows the format `<resource>.<group>=<highest user priority version>,<next highest>` Note that the resource group and versions are separated by a single equal (=) sign. Each version is listed in order of user's priority separated by commas. Here is an example of the contents of a config map file: ```cm rockbands.music.example.io=v2beta1,v2beta2 orchestras.music.example.io=v2,v3alpha1 subscriptions.operators.coreos.com=v2,v1 ``` Apply config map with ```bash kubectl create configmap enableapigroupversions --from-file=<absolute path>/restoreResourcesVersionPriority -n velero ``` See the config map with ```bash kubectl describe configmap enableapigroupversions -n velero ``` The config map should look something like ```bash Name: enableapigroupversions Namespace: velero Labels: <none> Annotations: <none> Data ==== restoreResourcesVersionPriority: rockbands.music.example.io=v2beta1,v2beta2 orchestras.music.example.io=v2,v3alpha1 subscriptions.operators.coreos.com=v2,v1 Events: <none> ``` Refer to the of the docs as the techniques generally apply here as well. The will contain information on which version was chosen to restore. 
If no API group version could be found that both exists in the backup tarball file and is supported by the destination cluster, then the following error will be recorded (no need to activate debug level logging): `\"error restoring rockbands.music.example.io/rockstars/beatles: the server could not find the requested resource\"`."
}
] |
{
"category": "Runtime",
"file_name": "enable-api-group-versions-feature.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
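Tying the Velero record above together: the feature flag has to be enabled on both clusters, and the optional version-priority override is a config map created on the destination cluster before the restore. A sketch, assuming `velero install` accepts the `--features` flag named in the record (all other install arguments are omitted):

```bash
# Run against BOTH the source and the destination cluster (assumed install command;
# the record only names the flag itself)
velero install --features=EnableAPIGroupVersions

# On the destination cluster, before the restore: optional priority override
kubectl create configmap enableapigroupversions \
  --from-file=./restoreResourcesVersionPriority -n velero
```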
[
{
"data": "title: Bucket Claim Rook supports the creation of new buckets and access to existing buckets via two custom resources: an `Object Bucket Claim (OBC)` is custom resource which requests a bucket (new or existing) and is described by a Custom Resource Definition (CRD) shown below. an `Object Bucket (OB)` is a custom resource automatically generated when a bucket is provisioned. It is a global resource, typically not visible to non-admin users, and contains information specific to the bucket. It is described by an OB CRD, also shown below. An OBC references a storage class which is created by an administrator. The storage class defines whether the bucket requested is a new bucket or an existing bucket. It also defines the bucket retention policy. Users request a new or existing bucket by creating an OBC which is shown below. The ceph provisioner detects the OBC and creates a new bucket or grants access to an existing bucket, depending the storage class referenced in the OBC. It also generates a Secret which provides credentials to access the bucket, and a ConfigMap which contains the bucket's endpoint. Application pods consume the information in the Secret and ConfigMap to access the bucket. Please note that to make provisioner watch the cluster namespace only you need to set `ROOKOBCWATCHOPERATORNAMESPACE` to `true` in the operator manifest, otherwise it watches all namespaces. The OBC provisioner name found in the storage class by default includes the operator namespace as a prefix. A custom prefix can be applied by the operator setting in the `rook-ceph-operator-config` configmap: `ROOKOBCPROVISIONERNAMEPREFIX`. !!! Note Changing the prefix is not supported on existing clusters. This may impact the function of existing OBCs. ```yaml apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: ceph-bucket [1] namespace: rook-ceph [2] spec: bucketName: [3] generateBucketName: photo-booth [4] storageClassName: rook-ceph-bucket [5] additionalConfig: [6] maxObjects: \"1000\" maxSize: \"2G\" ``` `name` of the `ObjectBucketClaim`. This name becomes the name of the Secret and ConfigMap. `namespace`(optional) of the `ObjectBucketClaim`, which is also the namespace of the ConfigMap and Secret. `bucketName` name of the `bucket`. Not recommended for new buckets since names must be unique within an entire object store. `generateBucketName` value becomes the prefix for a randomly generated name, if supplied then `bucketName` must be empty. If both `bucketName` and `generateBucketName` are supplied then `BucketName` has precedence and `GenerateBucketName` is"
},
{
"data": "If both `bucketName` and `generateBucketName` are blank or omitted then the storage class is expected to contain the name of an existing bucket. It's an error if all three bucket related names are blank or omitted. `storageClassName` which defines the StorageClass which contains the names of the bucket provisioner, the object-store and specifies the bucket retention policy. `additionalConfig` is an optional list of key-value pairs used to define attributes specific to the bucket being provisioned by this OBC. This information is typically tuned to a particular bucket provisioner and may limit application portability. Options supported: `maxObjects`: The maximum number of objects in the bucket `maxSize`: The maximum size of the bucket, please note minimum recommended value is 4K. ```yaml apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-18T09:54:01Z\" generation: 2 name: ceph-bucket namespace: default [1] resourceVersion: \"559491\" spec: ObjectBucketName: obc-default-ceph-bucket [2] additionalConfig: null bucketName: photo-booth-c1178d61-1517-431f-8408-ec4c9fa50bee [3] storageClassName: rook-ceph-bucket [4] status: phase: Bound [5] ``` `namespace` where OBC got created. `ObjectBucketName` generated OB name created using name space and OBC name. the generated (in this case), unique `bucket name` for the new bucket. name of the storage class from OBC got created. phases of bucket creation: Pending: the operator is processing the request. Bound: the operator finished processing the request and linked the OBC and OB Released: the OB has been deleted, leaving the OBC unclaimed but unavailable. Failed: not currently set. ```yaml apiVersion: v1 kind: Pod metadata: name: app-pod namespace: dev-user spec: containers: name: mycontainer image: redis envFrom: [1] configMapRef: name: ceph-bucket [2] secretRef: name: ceph-bucket [3] ``` use `env:` if mapping of the defined key names to the env var names used by the app is needed. makes available to the pod as env variables: `BUCKETHOST`, `BUCKETPORT`, `BUCKET_NAME` makes available to the pod as env variables: `AWSACCESSKEYID`, `AWSSECRETACCESSKEY` ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-ceph-bucket labels: aws-s3/object [1] provisioner: rook-ceph.ceph.rook.io/bucket [2] parameters: [3] objectStoreName: my-store objectStoreNamespace: rook-ceph bucketName: ceph-bucket [4] reclaimPolicy: Delete [5] ``` `label`(optional) here associates this `StorageClass` to a specific provisioner. `provisioner` responsible for handling `OBCs` referencing this `StorageClass`. all `parameter` required. `bucketName` is required for access to existing buckets but is omitted when provisioning new buckets. Unlike greenfield provisioning, the brownfield bucket name appears in the `StorageClass`, not the `OBC`. rook-ceph provisioner decides how to treat the `reclaimPolicy` when an `OBC` is deleted for the bucket. See explanation as Delete = physically delete the bucket. Retain = do not physically delete the bucket."
}
] |
{
"category": "Runtime",
"file_name": "ceph-object-bucket-claim.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
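The Rook record above notes that the provisioner writes the bucket endpoint into a ConfigMap and the credentials into a Secret, both named after the OBC. Below is a small sketch of reading them back once the claim is `Bound`; the key names follow the env variables listed in the record and should be treated as assumptions.

```bash
# Endpoint details from the generated ConfigMap (same name/namespace as the OBC)
kubectl -n rook-ceph get configmap ceph-bucket -o jsonpath='{.data.BUCKET_HOST}'

# S3 credentials from the generated Secret (values are base64-encoded)
kubectl -n rook-ceph get secret ceph-bucket \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
```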
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for bash Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. To load completions in your current shell session: source <(cilium-operator-alibabacloud completion bash) To load completions for every new session, execute once: cilium-operator-alibabacloud completion bash > /etc/bash_completion.d/cilium-operator-alibabacloud cilium-operator-alibabacloud completion bash > $(brew --prefix)/etc/bash_completion.d/cilium-operator-alibabacloud You will need to start a new shell for this setup to take effect. ``` cilium-operator-alibabacloud completion bash ``` ``` -h, --help help for bash --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-alibabacloud_completion_bash.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- Thanks for sending a pull request! Here are some tips for you: If this is your first time, read our contributor guidelines https://github.com/nokia/danm/blob/master/CONTRIBUTING.md --> What type of PR is this? Uncomment only one, leave it on its own line: bug cleanup design documentation failing-test feature What does this PR give to us: Which issue(s) this PR fixes (in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes # Special notes for your reviewer: Does this PR introduce a user-facing change?: <!-- If no, just write \"NONE\". If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string \"action required\". --> ```release-note ```"
}
] |
{
"category": "Runtime",
"file_name": "PULL_REQUEST_TEMPLATE.md",
"project_name": "DANM",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The MIT License (MIT) Copyright (c) 2013-19 by Carlos Cobo and contributors. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This document describes Sysbox configuration. Note that usually you don't need to modify Sysbox's default configuration. The Sysbox installer starts the automatically as Systemd services. Sysbox is made up of 3 systemd services: `sysbox.service`: top level service to start, stop, and restart Sysbox. `sysbox-mgr.service`: sub-service that starts the sysbox-mgr daemon `sysbox-fs.service`: sub-service that starts the sysbox-fs daemon These are usually located in `/lib/systemd/system/`. Users normally interact with the top-level `sysbox.service` to start, stop, and restart Sysbox. For example, to stop Sysbox you can type: ```console $ sudo systemctl stop sysbox ``` NOTE: Make sure all Sysbox containers are stopped and removed before stopping Sysbox, as otherwise they will stop operating properly. To start Sysbox again you can type: ```console $ sudo systemctl start sysbox ``` Users don't normally interact with the sub-services (sysbox-mgr and sysbox-fs), except when you wish to modify command line parameters associated with these (see next section). Normally users need not worry about reconfiguring Sysbox as the default configuration suffices in most cases. However, there are scenarios where the sysbox-mgr or sysbox-fs daemons may need to be reconfigured. You can type `sudo sysbox-fs --help` or `sudo sysbox-mgr --help` to see the command line configuration options that each take. For example, when troubleshooting, it's useful to increase the log level by passing the `--log-level debug` to the sysbox-fs or sysbox-mgr daemons. The reconfiguration is done by modifying the systemd sub-service associated with sysbox-fs and/or sysbox-mgr (per your needs). Once the systemd sub-service is reconfigured, you restart Sysbox using the . NOTE: Always use the top level Sysbox service to start, stop, and restart Sysbox. Do not do this on the sub-services directly. For example, to reconfigure Sysbox, do the following: Stop all system containers (there is a sample script for this ). Modify the `ExecStart` command in the appropriate systemd service (`sysbox-fs.service` or `sysbox-mgr.service`): For example, if you wish to change the log-level of the sysbox-fs service, do the following: ```console $ sudo sed -i --follow-symlinks '/^ExecStart/ s/$/ --log-level debug/' /lib/systemd/system/sysbox-fs.service $ $ egrep \"ExecStart\" /lib/systemd/system/sysbox-fs.service ExecStart=/usr/bin/sysbox-fs --log /var/log/sysbox-fs.log --log-level debug ``` Reload Systemd to digest the previous change: ```console $ sudo systemctl daemon-reload ``` Restart the sysbox (top-level) service: ```console $ sudo systemctl restart sysbox ``` Verify the sysbox service is running: ```console $ sudo systemctl status sysbox.service sysbox.service - Sysbox General Service Loaded: loaded (/lib/systemd/system/sysbox.service; enabled; vendor preset: enabled) Active: active (exited) since Sun 2019-10-27 05:18:59 UTC; 14s ago Process: 26065 ExecStart=/bin/true (code=exited, status=0/SUCCESS) Main PID: 26065 (code=exited, status=0/SUCCESS) Oct 27 05:18:59 disco1 systemd[1]: sysbox.service: Succeeded. Oct 27 05:18:59 disco1 systemd[1]: Stopped Sysbox General Service. Oct 27 05:18:59 disco1 systemd[1]: Stopping Sysbox General Service... Oct 27 05:18:59 disco1 systemd[1]: Starting Sysbox General Service... Oct 27 05:18:59 disco1 systemd[1]: Started Sysbox General Service. ``` That's it. You can now start using Sysbox with the updated configuration. 
As root, type `sysbox-mgr --help` to get a list of command line options supported by the sysbox-mgr component. Same for sysbox-fs: `sysbox-fs --help`. The Sysbox logs are located at `/var/log/sysbox-*.log`. You can change the location of the log file via the `--log` option in both the sysbox-fs and sysbox-mgr daemons. In addition, the format of the logs can be controlled. By default they are in text format, but you can change them to json format via the `--log-format` config option in both the sysbox-mgr and sysbox-fs"
},
{
"data": "Finally, the log-level (info, debug, etc) can be changed via the `--log-level` option. This is useful for debugging. As part of its operation, Sysbox uses a host directory as a data store. By default, Sysbox uses `/var/lib/sysbox`, but this can be changed if needed (see below). Depending on the workloads running on the system containers created by Sysbox, the amount of data stored in the Sysbox data store can be significant (hundreds of MBs to several GBs). We recommend that the Sysbox data store be no smaller than 10GB, but the capacity really depends on how many system container instances you will be running, whether inside of those containers you will be deploying inner containers, and the size of those inner container images. For example, when running Docker inside a system container, the inner Docker images are stored in this Sysbox's data store. The size of those can add up quickly. Similarly, when running Kubernetes inside a system container, the Kubelet's data is also stored in the Sysbox's data store. It's important to understand that this data resides in the Sysbox data store only while the container is running. When the container stops, Sysbox deletes the associated data (containers are stateless by default). You can change the location of the Sysbox data store by passing the `--data-root` option to the Sysbox Manager daemon via its associated systemd service (`/lib/systemd/system/sysbox-mgr.service`): ExecStart=/usr/bin/sysbox-mgr --data-root /some/other/dir Once reconfigured, restart the Sysbox systemd service as described in above. Finally: if you create a system container and mount a Docker volume (or a host directory) into the container's `/var/lib/docker` directory, then the inner Docker images are stored in that Docker volume rather than in the Sysbox data store. This also means that the data can persist across the container's life-cycle (i.e., it won't be deleted when the container is removed). See for an example of how to do this. By default, Sysbox assigns process capabilities in the container as follows: Enables all process capabilities for the system container's init process when owned by the root user. Disables all process capabilities for the system container's init process when owned by a non-root user. This mimics the way capabilities are assigned to processes on a physical host (or a VM) and relieves users of the burden on having to understand what capabilities are needed by the processes running inside the container. Note that with Sysbox, container capabilities are always isolated from the host via the Linux user-namespace, which Sysbox enables on all containers. This means you can run a root process with full capabilities inside the container in a more secure way than with regular containers. See the for more on this. While Sysbox's default behavior for assigning capabilities is beneficial in most cases, it has the drawback that it does not allow fine-grained control of the capabilities used by the Sysbox container init process (e.g., Docker's `--cap-add` and `--cap-drop` don't have any effect). In some situations, having such control is beneficial. To overcome this, starting with Sysbox v0.5.0 it's possible to configure Sysbox to honor the capabilities passed to it by the higher level container manager (e.g., Docker or Kubernetes) via the container's OCI spec. The configuration can be done on a per-container basis, or globally for all containers. 
To do this on a per-container basis, pass the `SYSBOX_HONOR_CAPS=TRUE` environment variable to the container, as follows: ``` $ docker run --runtime=sysbox-runc -e SYSBOX_HONOR_CAPS=TRUE --rm -it alpine"
},
{
"data": "# cat /proc/self/status | grep -i cap CapInh: 00000000a80425fb CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 ``` To do this globally (i.e., for all Sysbox containers), configure the sysbox-mgr with the `--honor-caps` command line option (see . If you set `--honor-caps` globally, you can always deploy a container with the default behavior by passing environment variable `SYSBOXHONORCAPS=FALSE`. Note that while configuring Sysbox to honor capabilities gives you full control over the container's process capabilities (which is good for extra security), the drawback is that you must understand all the capabilities required by the processes inside the container (e.g., if you run Docker inside the Sysbox container, you must understand what capabilities Docker requires in order to operate properly; same if you run systemd inside the container). This can be tricky since such software is complex and may require different capabilities depending on what functions it's performing inside the container. The filesystem is commonly used by container managers (e.g., Docker or Kubernetes) to set up the container's root filesystem. To support file removal, overlayfs uses a concept called which relies on setting the `trusted.overlay.opaque` extended attribute (xattr) on the removed file. Setting `trusted.*` xattrs requires a process to have elevated privileges (i.e, `CAPSYSADMIN`) since they are expected to be set by admin processes, not by ordinary ones. In addition, Linux does not allow `trusted.*` xattrs to be set from within a user-namespace (even if a process has `CAPSYSADMIN` within the user-namespace), given that otherwise a regular user could create a user-namespace and set the `trusted.*` xattr on the file. This limitation for setting `trusted.*` xattr from within a user-namespace creates a problem for containers that are secured with the user-namespace, such as Sysbox containers. The reason is that it prevents software such as Docker, which commonly uses overlayfs and sets the `trusted.overlay.opaque` xattr on files, from working properly inside the Sysbox container. To overcome this, Sysbox has a mechanism that allows the `trusted.overlay.opaque` xattr to be set from within a container. It does this by trapping the `*xattr()` syscalls inside the container (e.g., `setxattr`, `getxattr`, etc.) and performing the required operation at host level. This is pretty safe, given that the container processes can only set the `trusted.overlay.opaque` xattr on files within the container's chroot jail. The drawback however is that this can impact performance (sometimes severely), particularly in workloads that perform lots of `*xattr()` syscalls from within the container. Fortunately, starting with Linux kernel 5.11, overlayfs supports whiteouts using an alternative `user.overlay.opaque` xattr which can be configured from within a user-namespace (see ) In addition, Docker versions >= 20.10.9 include code that takes advantage of this feature. Therefore, if your host has kernel >= 5.11 and you run Docker >= 20.10.9 inside a Sysbox container, you don't need Sysbox to trap the `*xattr()` syscalls, which in turn improves performance (in some cases significantly). It's possible to configure Sysbox to not trap the `*xattr()` syscalls. 
This can be done on a per-container basis by passing the `SYSBOXALLOWTRUSTED_XATTR=FALSE` environment variable to the container: ``` $ docker run --runtime=sysbox-runc -e SYSBOXALLOWTRUSTED_XATTR=FALSE --rm -it alpine ``` In essence, this tells Sysbox to not allow the container to set `trusted.overlay.opaque` inside the container, which in turn means it won't trap the `*xattr()` syscalls. You can also configure this globally (i.e., for all Sysbox containers), by starting the sysbox-mgr with the `--allow-trusted-xattr=false` command line option (see"
},
{
"data": "If you set `--allow-trusted-xattr=false` globally, you can always deploy a Sysbox container with the default behavior by passing environment variable `SYSBOXALLOWTRUSTED_XATTR=TRUE`. NOTE: Starting with Sysbox v0.6.3, Sysbox starts with `--allow-trusted-xattr=false` by default. This improves performance (sometimes significantly) because Sysbox need not trap `*xattr()` syscalls.However, it also means that applications that want to use `trusted.*` extended file attributes won't work (i.e., they'll receive an EPERM when trying to access such attributes), unless the user explicitly starts the container with `SYSBOXALLOWTRUSTED_XATTR=TRUE`. This change was done because applications such as Docker Engine (version 20.10.9+) no longer use `trusted.*` extended file attributes when they run inside a Linux user-namespace (e.g., as in Sysbox containers), so having Sysbox intercept all `*xattr()` syscalls was unnecessarily adding a performance penalty. Nonetheless, other applications may still rely on `trusted.*` attributes, and those will not work properly inside Sysbox containers unless the container is started with `SYSBOXALLOWTRUSTED_XATTR=TRUE` as described above (or unless the sysbox-mgr is configured with `--allow-trusted-xattr=true` in the respective Sysbox systemd unit file). Sysbox has a feature that allows preloading of container images into Sysbox container images. That is, when you launch a Sysbox container, it will come preloaded with inner Docker images for example. This feature is enabled by default in Sysbox containers, and it's useful when running Docker inside Sysbox containers, as the Docker instance inside the Sysbox container will find preloaded container images so it does not have to pull them from the web, thereby saving time and network bandwidth. Image preloading is done when building the Sysbox container image (e.g., via `docker build`) or by snapshotting a running Sysbox container (e.g., via `docker commit`). More info . However, this feature increases the container startup and stop latency, and the delay can be significant even if you don't have a need for preloading inner container images into Sysbox containers. For example, if you launch a Sysbox container with Docker inside, and that inner Docker engine pulls lots of container images, then when the Sysbox container stops there may be a delay of several seconds. This delay occurs because Sysbox is moving data from the `/var/lib/sysbox` directory to the container's root filesystem on the assumption that it may be needed as part of a `docker build` or `docker commit` of the Sysbox container (unless the container was launched with the Docker `--rm` option, in which case Sysbox won't move any data since it knows the container will be deleted immediately after stopping). To speed up Sysbox, if you have no need to create or run Sysbox container images that come preloaded with inner containers, you can disable the feature by passing the `--disable-inner-image-preload` to sysbox-mgr. See for more info how to do this. Inside a Sysbox container, the `/sys` directory (i.e., the sysfs mountpoint) shows up as owned by `nobody:nogroup` (rather than `root:root`). Moreover, changing the ownership of `/sys/` to `root:root` fails with `Operation not permitted`. This is due to a technical limitation in Sysbox and the Linux kernel. 
``` $ docker run --runtime=sysbox-runc -it --rm alpine / # ls -l / | grep sys dr-xr-xr-x 13 nobody nobody 0 Mar 11 23:14 sys / # chown root:root /sys chown: /sys: Operation not permitted ``` Though not common, some application that users run inside Sysbox containers (notably the `rpm` package manager) may try to change the ownership of `/sys` inside the container. Since this operation fails, the application reports an error and"
},
{
"data": "To overcome this, Sysbox can be configured to ignore chowns to `/sys` inside the container by passing the `SYSBOXIGNORESYSFS_CHOWN=TRUE` environment variable to the container, as shown below: ``` $ docker run --runtime=sysbox-runc -e SYSBOXIGNORESYSFS_CHOWN=TRUE --rm -it alpine / # chown root:root /sys / # echo $? 0 / # ls -l / | grep sys dr-xr-xr-x 13 nobody nobody 0 Mar 11 23:17 sys ``` You can also configure this globally (i.e., for all Sysbox containers), by starting the sysbox-mgr with the `--ignore-sysfs-chown` command line option (see . Note that configuring Sysbox to ignore chown on sysfs requires that Sysbox trap the `chown` syscall. This can slow down the container, in some cases significantly (i.e., if the processes inside the container perform lots of chown syscalls). As mentioned in the , Sysbox uses a Linux kernel feature called ID-mapped mounts (available in kernel >= 5.12) to expose host files inside the (rootless) container with proper permissions. While not usually required (except for testing or debugging), it's possible to disable Sysbox's usage of ID-mapped mounts by passing the `--disable-idmapped-mount` option to the sysbox-mgr's command line. See the section on above for further info on how to do this. Shiftfs serves a purpose similar to ID-mapped mounts, as described in the . While not usually required (except for testing or debugging), it's possible to disable Sysbox's usage of shiftfs by passing the `--disable-shiftfs` option to the sysbox-mgr's command line. See the section on above for further info on how to do this. Although uncommon, in some scenarios it's desirable to tell Sysbox to not use ID-mapped-mounts or shiftfs on files or directories bind-mounted into the container. This is useful when neither of these mechanisms works well with the filesystem where the file to be bind-mounted resides. To tell Sysbox to not use ID-mapped-mounts or shiftfs on a bind-mounted directory or file, set the `SYSBOXSKIPUID_SHIFT=<bind-mount-path>` environment variable when creating the container. For example, normally when you create a bind-mount into a container, Sysbox will use ID-mapped-mounts or shiftfs to ensure the bind-mounted file shows up with proper ownership inside the rootless Sysbox container: ``` $ docker run --runtime=sysbox-runc -it --rm -v /path/to/somefile:/mnt/somefile alpine / # cat /proc/self/uid_map 0 165536 65536 / # mount | grep somefile /dev/nvme0n1p5 on /mnt/somefile type ext4 (rw,relatime,idmapped,errors=remount-ro) / # ls -l /mnt total 4 -rw-rw-r-- 1 1000 1000 6 Jan 9 22:04 somefile ``` In contrast, if we pass `-e SYSBOXSKIPUID_SHIFT=/mnt/somefile`, this happens: ``` $ docker run --runtime=sysbox-runc -it --rm -v /path/to/somefile:/mnt/somefile -e SYSBOXSKIPUID_SHIFT=/mnt/somefile alpine / # cat /proc/self/uid_map 0 165536 65536 / # mount | grep somefile /dev/nvme0n1p5 on /mnt/somefile type ext4 (rw,relatime,errors=remount-ro) / # ls -l /mnt total 4 -rw-rw-r-- 1 nobody nobody 6 Jan 9 22:04 somefile ``` Though this feature is usually not needed, it is sometimes useful if the file being bind-mounted resides on a file-system where id-mapped-mounts or shiftfs does not work properly. Finally, the `SYSBOXSKIPUID_SHIFT=<path>` flag only applies to bind-mounts. That is, `<path>` has to be a path inside the container that is a bind-mount target. Sysbox requires some kernel parameters to be modified from their default values, in order to ensure the kernel allocates sufficient resources needed to run system containers. 
The Sysbox installer performs these changes automatically. Below is the list of kernel parameters configured by Sysbox (via `sysctl`): ```console fs.inotify.maxqueuedevents = 1048576 fs.inotify.maxuserwatches = 1048576 fs.inotify.maxuserinstances = 1048576 kernel.keys.maxkeys = 20000 kernel.keys.maxbytes = 400000 ```"
}
] |
{
"category": "Runtime",
"file_name": "configuration.md",
"project_name": "Sysbox",
"subcategory": "Container Runtime"
}
|
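The Sysbox record above reconfigures the daemons by editing the `ExecStart` line of their systemd sub-services and then restarting the top-level service. The sketch below applies that same flow to the `--data-root` option the record describes; `/mnt/big-disk/sysbox` is a made-up path.

```bash
# Stop all Sysbox containers first, as the record advises.
# Append --data-root to the sysbox-mgr sub-service (same sed pattern the record
# uses for --log-level), then reload systemd and restart the top-level service.
sudo sed -i --follow-symlinks \
  '/^ExecStart/ s|$| --data-root /mnt/big-disk/sysbox|' \
  /lib/systemd/system/sysbox-mgr.service
sudo systemctl daemon-reload
sudo systemctl restart sysbox
```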
[
{
"data": "English | `Weave`, an open-source network solution, provides network connectivity and policies for containers by creating a virtual network, automatically discovering and connecting containers. Also known as a Kubernetes Container Network Interface (CNI) solution, `Weave` utilizes the built-in `IPAM` to allocate IP addresses for Pods by default, with limited visibility and IPAM capabilities for Pods. This page demonstrates how `Weave` and `Spiderpool` can be integrated to extend `Weave`'s IPAM capabilities while preserving its original functions. A ready Kubernetes cluster without any CNI installed Helm, Kubectl and Jq (optional) Install Weave: ```shell kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml ``` Wait for Pod Running: ```shell [root@node1 ~]# kubectl get po -n kube-system | grep weave weave-net-ck849 2/2 Running 4 0 1m weave-net-vhmqx 2/2 Running 4 0 1m ``` Install Spiderpool ```shell helm repo add spiderpool https://spidernet-io.github.io/spiderpool helm repo update spiderpool helm install spiderpool spiderpool/spiderpool --namespace kube-system --set multus.multusCNI.install=false ``` > If you are a mainland user who is not available to access ghcr.io, you can specify the parameter `-set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pulling failures for Spiderpool. > > Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus. Wait for Pod Running and create the IPPool used by Pod: ```shell cat << EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: weave-ippool-v4 labels: ipam.spidernet.io/subnet-cidr: 10-32-0-0-12 spec: ips: 10.32.0.100-10.32.50.200 subnet: 10.32.0.0/12 EOF ``` > `Weave` uses `10.32.0.0/12` as the cluster's default subnet, and thus a SpiderIPPool with /the same subnet needs to be created in this case. Verify installation ```shell [root@node1 ~]# kubectl get po -n kube-system | grep spiderpool spiderpool-agent-7hhkz 1/1 Running 0 13m spiderpool-agent-kxf27 1/1 Running 0 13m spiderpool-controller-76798dbb68-xnktr 1/1 Running 0 13m spiderpool-init 0/1 Completed 0 13m [root@node1 ~]# kubectl get sp NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DISABLE weave-ippool-v4 4"
},
{
"data": "0 12901 false ``` Change the `ipam` field of `/etc/cni/net.d/10-weave.conflist` on each node: Change the following: ```shell [root@node1 ~]# cat /etc/cni/net.d/10-weave.conflist { \"cniVersion\": \"0.3.0\", \"name\": \"weave\", \"plugins\": [ { \"name\": \"weave\", \"type\": \"weave-net\", \"hairpinMode\": true }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true}, \"snat\": true } ] } ``` To: ```json { \"cniVersion\": \"0.3.0\", \"name\": \"weave\", \"plugins\": [ { \"name\": \"weave\", \"type\": \"weave-net\", \"ipam\": { \"type\": \"spiderpool\" }, \"hairpinMode\": true }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true}, \"snat\": true } ] } ``` Alternatively, it can be changed with `jq` in one step. If `jq` is not installed, you can use the following command to install it: ```shell yum -y install jq ``` Change the CNI configuration file: ```shell cat <<< $(jq '.plugins[0].ipam.type = \"spiderpool\" ' /etc/cni/net.d/10-weave.conflist) > /etc/cni/net.d/10-weave.conflist ``` Make sure to run this command at each node Specify that the Pods will be allocated IPs from that SpiderSubnet via the annotation `ipam.spidernet.io/ippool`: ```shell [root@node1 ~]# cat << EOF | kubectl apply -f - apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: annotations: ipam.spidernet.io/ippool: '{\"ipv4\":[\"weave-ippool-v4\"]}' labels: app: nginx spec: containers: image: nginx imagePullPolicy: IfNotPresent lifecycle: {} name: container-1 EOF ``` spec.template.metadata.annotations.ipam.spidernet.io/subnet: specifies that the Pods will be assigned IPs from SpiderSubnet: `weave-ippool-v4`. The Pods have been created and allocated IP addresses from Spiderpool Subnets: ```shell [root@node1 ~]# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-5745d9b5d7-2rvn7 1/1 Running 0 8s 10.32.22.190 node1 <none> <none> nginx-5745d9b5d7-5ssck 1/1 Running 0 8s 10.32.35.87 node2 <none> <none> [root@node1 ~]# kubectl get sp NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DISABLE weave-ippool-v4 4 10.32.0.0/12 2 2 false ``` To test connectivity, let's use inter-node communication between Pods as an example: ```shell [root@node1 ~]# kubectl exec nginx-5745d9b5d7-2rvn7 -- ping 10.32.35.87 -c 2 PING 10.32.35.87 (10.32.35.87): 56 data bytes 64 bytes from 10.32.35.87: seq=0 ttl=64 time=4.561 ms 64 bytes from 10.32.35.87: seq=1 ttl=64 time=0.632 ms 10.32.35.87 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.632/2.596/4.561 ms ``` The test results indicate that IP allocation and network connectivity are normal. `Spiderpool` has extended the capabilities of Weave's IPAM. Next, you can go to to explore other features of `Spiderpool`."
}
] |
{
"category": "Runtime",
"file_name": "get-started-weave.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
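A quick check to go with the Spiderpool record above: after rewriting `/etc/cni/net.d/10-weave.conflist` with `jq` on a node, the `ipam` block of the weave plugin should point at spiderpool. The expected output in the comment is inferred from the record's target configuration.

```bash
# Run on each node after the jq rewrite shown in the record
jq '.plugins[0].ipam' /etc/cni/net.d/10-weave.conflist
# Expected: { "type": "spiderpool" }
```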
[
{
"data": "title: Adding and Removing Hosts Dynamically menu_order: 50 search_type: Documentation To add a host to an existing Weave network, launch Weave Net on the host, and supply the address of at least one host. Weave Net automatically discovers any other hosts in the network and establishes connections with them if it can (in order to avoid unnecessary multi-hop routing). In some situations existing Weave Net hosts may be unreachable from the new host due to firewalls, etc. However, it is still possible to add the new host, provided that inverse connections, for example, from existing hosts to the new hosts, are available. To accomplish this, launch Weave Net onto the new host without supplying any additional addresses and then, from one of the existing hosts run: host# weave connect $NEW_HOST Any other existing hosts on the Weave network will attempt to establish connections to the new host as well. To instruct a peer to forget a particular host specified to it via `weave launch` or `weave connect` run: host# weave forget $DECOMMISSIONED_HOST This prevents the peer from reconnecting to that host once connectivity to it is lost, and can be used to administratively remove any decommissioned peers from the network. Hosts can also be bulk-replaced. All existing hosts will be forgotten, and the new hosts added: host# weave connect --replace $NEWHOST1 $NEWHOST2 If Weave Net is restarted by Docker it automatically remembers any previous connect and forget operations, however if you stop it manually and launch it again, it will not remember any prior connects. If you want to launch again and retain the results of those operations use `--resume`: host# weave launch --resume Note: In this case, you cannot specify a list of addresses, since the previous peer list is used exclusively. For complete control over the peer topology, disable automatic discovery using the `--no-discovery` option with `weave launch`. If discovery if disabled, Weave Net only connects to the addresses specified at launch time and with `weave connect`. To return a list of all hosts and their peer connections established with `weave launch` and `weave connect` run: host# weave status targets See Also *"
}
] |
{
"category": "Runtime",
"file_name": "finding-adding-hosts-dynamically.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
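A short sketch of the workflows described in the Weave Net record above; `192.168.0.10` and the `$NEW_HOST` placeholder are illustrative values.

```bash
# On the new host: join the network by naming at least one existing peer
weave launch 192.168.0.10

# If the new host cannot reach the existing peers, launch it with no targets
# and add it from one of the existing hosts instead
weave connect $NEW_HOST

# List the peer targets currently registered via launch/connect
weave status targets
```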
[
{
"data": "name: Enhancement Request about: Suggest an enhancement to the JuiceFS project labels: kind/feature <!-- Please only use this template for submitting enhancement requests --> What would you like to be added: Why is this needed:"
}
] |
{
"category": "Runtime",
"file_name": "enhancement.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "is a system for managing containerized applications across a cluster of machines. Kubernetes runs all applications in containers. In the default setup, this is performed using the Docker engine, but Kubernetes also features support for using rkt as its container runtime backend. This allows a Kubernetes cluster to leverage some of rkt's security features and native pod support. The container runtime is configured at the kubelet level. The kubelet is the agent that runs on each machine to manage containers. The kubelet provides several flags to use rkt as the container runtime: `--container-runtime=rkt` Sets the node's container runtime to rkt. `--rkt-api-endpoint=HOST:PORT` Sets the endpoint of the rkt API service. Default to `localhost:15441`. `--rkt-path=PATHTORKT_BINARY` Sets the path of the rkt binary. If empty, it will search for rkt in `$PATH`. `--rkt-stage1-image=STAGE1_NAME` Sets the name of the stage1 image, e.g. `coreos.com/rkt/stage1-coreos`. If not set, the default stage1 image (`coreos.com/rkt/stage1-coreos`) is used. Check the for information about setting up and using a rktnetes cluster. The and repos both support configuring rkt as the Kubernetes runtime out of the box. Check out the coreos-kubernetes repo if you want to spin up a cluster on or . The common configuration option here is setting `CONTAINER_RUNTIME` environment variable to rkt. For baremetal, check out the Kubernetes guides . Minikube is a tool that makes it easy to run Kubernetes locally. It launches a single-node cluster inside a VM aimed at users looking to try out Kubernetes. Follow the instructions in the Minikube section on how to get started with rktnetes. Integration of rkt as a container runtime was officially . Known issues and tips for using rkt with Kubernetes can be found in the ."
}
] |
{
"category": "Runtime",
"file_name": "using-rkt-with-kubernetes.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
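Putting the kubelet flags from the rkt record above into one illustrative invocation; the rkt binary path is an assumption, and all non-rkt kubelet flags are omitted.

```bash
kubelet \
  --container-runtime=rkt \
  --rkt-api-endpoint=localhost:15441 \
  --rkt-path=/usr/bin/rkt \
  --rkt-stage1-image=coreos.com/rkt/stage1-coreos
```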
[
{
"data": "The quality control of Curve is the key to the maintenance and development of the project. It includes two parts: quality control theories and process control. Different levels of tests are included in Curve for testing the correctness of newly introduced and current functions more efficiently. These tests includes: Unit test Unit test can help us find out problems in the code at the least cost when testing the correctness of a module, a function or a class. Integration test Based on unit testing, integration test is for testing whether all the software units achieve the technical specifications when assembled into modules, subsystems or systems according to the design. The goal of integration test is to construct the program using components that already passed unit test according to the design requirements, since a single high quality module is not enough to guarantee the quality of the entire system. Many hidden failures are caused by unexpected interactions between high quality modules. Thus, the test will try to ensure that the units are combined to operate in accordance with the intended functionality, and the incremental behavior is correct, by testing the interface between the units and the integrated functions. System test System test is a \"black box\" test based on the software requirements specification. It's a thorough test of the integrated system to verify whether the correctness and performance of the system meets the requirements specified in the specification, and check whether the behavior and output Correct. Not a simple task, it's called the \"prophet question\" of testing. Therefore, system test should be carried out in accordance with the test plan, and its input, output, and other dynamic operating behaviors should be compared with the software specification. There are many methods for this test, including functional test, performance test, random testing, etc. <p align=\"center\"> <img src=\"../images/quality-theory.png\" alt=\"quality-theory.png\" width=\"650\" /> </p> The quality of the code merged into the repository is controlled through strict procedures to ensure that each line of code merged will not affect the function of the main process before submitted to QA, and conform to the code style guide (Curve is developed using C++, and uses ``cpplint`` and ``cppcheck`` for code style checking). We have already transfer the development of Curve to GitHub, and this procedure has been be transplanted. Two important steps should be mentioned: manual reviewing and continuous integration (CI) <img src=\"../images/quality-control-1.png\" alt=\"quality-control-1.png\" width=\"850\" /> <img src=\"../images/quality-control-2.png\" alt=\"quality-control-2.png\" width=\"850\" /> Unit test for testing the correctness of a module, a function, or a class, it will guarantee the correctness of module's behavior in a great extend, and serves as the most basic test in the procedure of software"
},
{
"data": "Code coverage is used to measure the completeness of the unit test. Figure 4 below shows a statistical chart of Curve code coverage, calculated by the tool \"lcov (for c++)\", and rendered by \"genhtml\". Other languages such as go also have their own Coverage statistics tool. The unit tests of curve modules are under folder \"curve/test\". <p align=\"center\"> <img src=\"../images/quality-cov.png\" alt=\"quality-cov.png\" width=\"950\" /><br> <font size=3>Figure 4 Curve code coverage result</font> </p> The issues to be considered in the integration test are mainly the problems after the modules are connected: Whether the data passing through the module interface will be lost; Whether the combination of sub-functions can meet the expectation; Whether the sub-modules will affect each other; Whether the error accumulation of a single module will be amplified to an unacceptable level; Therefore, it is necessary to conduct a integration test to find and eliminate the above problems, which may occur in the module connection after the unit test, and finally construct the required software subsystem or system. Function test Function test is to conduct a comprehensive test on the functions provided by the module from user's prospective. Before the test, it is necessary to design sufficient test cases, considering various system states and parameter input to decide whether the module can still work normally, and return the expected results. In this process, a scientific test cases design method is required to consider various input scenarios, and complete the test process with least cases and execution steps. Exception test Different from function test and performance test, exception test is another test method for discovering performance bottlenecks caused by exceptions such as system exception, dependencies exception and the exception of the application itself in order to improve the stability of the system. Common exceptions include disk errors, network errors, data errors or program restarts, etc. We use software to simulate various exceptions in the development stage, such as \"libfiu\" in C++ and \"gofail'' in go, which can simulate various exceptions. Scale test This is to test whether the module can work normally under a certain scale, and whether it will crash or work unexpectedly by observing the utilization of system resources. For example, test whether errors will occurred when opening a large number of files and whether the memory usage will be too large. Concurrent/pressure test Functional test is more of a single-threaded test. It is also necessary to test the module in a concurrent scenario to observe whether it can still work normally, and whether there will be errors in logic or data. When considering the concurrency degree, we consider a larger pressure, for example, 2 times, 10 times or more than our normal"
},
{
"data": "Here we provide an . As mentioned at the beginning, the system test is a black box test. The system test of Curve usually conducted by QA. Including: normal test, performance test, stability test, exception test and chaos test. Normal test is basically manual test for new features; Performance test is for testing the performance; Stability test requires the system to run for a certain amount of time under normal pressure Anomaly test injects software and hardware exception into the normal process Chaos test is to randomly combine software and hardware exceptions into normal process Unit tests and integration tests are usually coded by developers in development stage and can be accumulated continuously. In the process of system testing, we also hope to automate test cases as much as possible to reduce manual regression costs, so that test cases can continue to accumulate. There is also a very important reason: many hidden problems are occasional, and cannot be triggered frequently by manual test. Currently, exception test and chaos test are automated in Curve. <img src=\"../images/quality-auto-abnormal.png\" alt=\" quality-auto-abnormal.png\" width=\"950\" /> Supports keywords, tests are flexible Comprehensive test report Compatible with Jenkins CI Sufficient third party libraries supported (ssh, paramiko, request, multiprocess) Diversified parameters, supports batch running modes including log level definition, tag filtering and random scramble. <img src=\"../images/quality-auto-robotframework.png\" alt=\"quality-auto-robotframework.png\" width=\"550\" /> No need to bind to a specific environment, \"pull up at will\" Self configurable (able to configure environment definition, workspace path and test load) Case independence Case universality (curve ceph nbs lvm) Tag standardized (priority, version, running time) Consider improving coverage (rich scenes, disordered sequence, some random sleep) Polling instead of sleep Accuracy (checkpoint) Stability (avoid environmental factors, other module interference) Control the test cases runtime (Consider some compromises) High pressure multi-level malfunctions is the chaos test of Curve, based on robotframework but injects malfunctions randomly selected. As introduced in the beginning, Curve's procedure control is mainly strict code integration and regular CI triggering. Strict code depends on two parts: Code review. We use gerrit platform, two code review+1 and CI +1 will be required before the code can be merged. Code review is a procedure replied on manual during the code submission, but it is also very important, and will have relatively high requirements for code readability. CI. We configured to use Jekins. Benefits from the automation of our tests, unit test, integration test and exception test will be required to pass before the code can be merged, which greatly reduces the appearance of low-level bugs. Regular CI triggering: After the automation, the test becomes very convenient. Frequent triggering can help us find many corner cases. <img src=\"../images/quality-process.png\" alt=\" quality-process.png\" width=\"850\" />"
}
] |
{
"category": "Runtime",
"file_name": "quality_en.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Use iptables-wrapper in Antrea container. Now antrea-agent can work with distros that lack the iptables kernel module of \"legacy\" mode (ip_tables). (, [@antoninbas]) Reduce permissions of Antrea ServiceAccount for updating annotations. (, [@tnqn]) , [@wenyingd]) Fix DNS resolution error of antrea-agent on AKS by using `ClusterFirst` dnsPolicy. (, [@tnqn]) Fix status report of Antrea-native policies with multiple rules that have different AppliedTo. (, [@tnqn]) Upgrade Go version to 1.17 to pick up security fix for CVE-2021-44716. (, [@antoninbas]) Fix NetworkPolicy resources dump for Agent's supportbundle. (, [@antoninbas]) Fix gateway interface MTU configuration error on Windows. (, [@lzhecheng]) [Windows] Fix initialization error of antrea-agent on Windows by specifying hostname explicitly in VMSwitch commands. (, [@XinShuYang]) [Windows] Ensure that the Windows Node name obtained from the environment or from hostname is converted to lower-case. (, [@shettyg]) [Windows] Fix typos in the example YAML in antrea-network-policy doc. ( , [@antoninbas] [@Jexf]) Fix ipBlock referenced in nested ClusterGroup not processed correctly. (, [@Dyanngg]) Fix NetworkPolicy may not be enforced correctly after restarting a Node. (, [@tnqn]) Fix antrea-agent crash caused by interface detection in AKS/EKS with NetworkPolicyOnly mode. (, [@wenyingd]) Fix locally generated packets from Node net namespace might be SNATed mistakenly when Egress is enabled. (, [@tnqn]) Support returning partial supportbundle results when some Nodes fail to respond. (, [@hangyan]) Remove restriction that only GRE tunnels can be used when enabling IPsec: VXLAN can also be used, and so can Geneve (if the Linux kernel version for the Nodes is recent enough). (, [@luolanzone]) Reduce memory usage of antctl when collecting supportbundle. (, [@tnqn]) Fix nil pointer error when collecting a supportbundle on a Node for which the antrea-agent container image does not include \"iproute2\"; this does not affect the standard antrea/antrea-ubuntu container image. (, [@liu4480]) When creating an IPsec OVS tunnel port to a remote Node, handle the case where the port already exists but with a stale config graciously: delete the existing port first, then recreate it. (, [@luolanzone]) Fix panic in the Antrea Controller when it processes ClusterGroups that are used by multiple ClusterNetworkPolicies. (, [@tnqn]) Fix nil pointer error when antrea-agent updates OpenFlow priorities of Antrea-native policies without Service ports. (, [@wenyingd]) Fix Pod-to-Service access on Windows when the Endpoints are not non-hostNetwork Pods (e.g. the `kubernetes` Service). (, [@wenyingd]) [Windows] Fix container network interface MTU configuration error when using containerd as the runtime on Windows. (, [@wenyingd]) [Windows] Update , [@srikartati] [@zyiou]) Handle transient iptables-restore failures (caused by xtables lock contention) in the NodePortLocal initialization logic. (, [@antoninbas]) Fix handling of the \"reject\" packets generated by the Antrea Agent in the OVS pipeline, to avoid infinite looping when traffic between two endpoints is rejected by network policies in both directions. (, [@GraysonWu]) Fix interface naming for IPsec tunnels: based on Node names, the first char could sometimes be a dash, which is not valid. (, [@luolanzone]) Install all Endpoint flows belonging to a Service via a single OpenFlow bundle, to reduce flow installation time when the Agent starts. 
(, [@tnqn]) Improve the batch installation of NetworkPolicy rules when the Agent starts: only generate flow operations based on final desired state instead of"
},
{
"data": "(, [@tnqn]) Use GroupMemberSet.Merge instead of GroupMemberSet.Union to reduce CPU usage and memory footprint in the Agent's policy controller. (, [@tnqn]) When checking for the existence of an iptables chain, stop listing all the chains and searching through them; this change reduces the Agent's memory footprint. (, [@tnqn]) Tolerate more failures for the Agent's readiness probe, as the Agent may stay disconnected from the Controller for a long time in some scenarios. (, [@tnqn]) When listing NetworkPolicyStats through the Controller API, return an empty list if the `NetworkPolicyStats` Feature Gate is disabled, instead of returning an error. (, [@PeterEltgroth]) Fix panic in Agent when calculating the stats for a rule newly added to an existing NetworkPolicy. (, [@tnqn]) Fix bug in iptables rule installation for dual-stack clusters: if a rule was already present for one protocol but not the other, its installation may have been skipped. (, [@lzhecheng]) Fix deadlock in the Agent's FlowExporter, between the export goroutine and the conntrack polling goroutine. (, [@srikartati]) Upgrade OVS version to 2.14.2 to pick up security fixes for CVE-2015-8011, CVE-2020-27827 and CVE-2020-35498. (, [@antoninbas]) Upgrade OVS version to 2.14.2-antrea.1 for Windows Nodes; this version of OVS is built on top of the upstream 2.14.2 release and also includes a patch to fix TCP checksum computation when the DNAT action is used. (, [@lzhecheng]) [Windows] Periodically delete stale connections in the Flow Exporter if they cannot be exported (e.g. because the collector is not available), to avoid running out-of-memory. (, [@srikartati]) Clean up log files for the Flow Aggregator periodically: prior to this fix, the \"--logfilemaxsize\" and \"--logfilemaxnum\" command-line flags were ignore for the flow-aggregator Pod. (, [@srikartati]) Fix missing template ID when sending the first IPFIX flow record from the FlowAggregator. (, [@zyiou]) Fix reference Logstash configuration to avoid division by zero in throughput calculation. (, [@zyiou]) The NetworkPolicyStats feature is graduated from Alpha to Beta and is therefore enabled by default. Add new ExternalIPPool API to define ranges of IP addresses which can be used as Egress SNAT IPs; these IPs are allocated to Nodes according to a nodeSelector, with support for failover if a Node goes down. ( , [@tnqn] [@wenqiq]) Refer to the for more information Use OpenFlow meters on Linux to rate-limit PacketIn messages sent by the OVS datapath to the Antrea Agent. (, [@GraysonWu] [@antoninbas]) Add K8s labels for the source and destination Pods (when applicable) as IPFIX Information Elements when exporting flow records from the FlowAggregator. (, [@dreamtalen]) Add ability to print Antrea Agent and / or Antrea Controller FeatureGates using antctl, with the \"antctl get featuregates\" command. (, [@luolanzone]) Add support for running the same Traceflow request again (with the same parameters) from the Antrea Octant plugin. (, [@Dhruv-J]) Add ability for the Antrea Agent to configure SR-IOV secondary network interfaces for Pods (these interfaces are not attached to the OVS bridge); however, there is currently no available API for users to request secondary Pod network interfaces. (, [@ramay1]) When enabling NodePortLocal on a Service, use the Service's target ports instead of the (optional) container ports for the selected Pods to determine how to configure port forwarding for the"
},
{
"data": "(, [@monotosh-avi]) Update version of the , [@zyiou]) Remove deprecated API version networking.antrea.tanzu.vmware.com/v1beta1 as per our API deprecation policy. (, [@hangyan]) Show translated source IP address in Traceflow observations when Antrea performs SNAT in OVS. (, [@luolanzone]) Remove unnecessary IPFIX Information Elements from the flow records exported by the FlowAggregator: \"originalExporterIPv4Address\", \"originalExporterIPv6Address\" and \"originalObservationDomainId\". (, [@zyiou]) Ignore non-TCP Service ports in the NodePortLocal implementation and document the restriction that only TCP is supported. (, [@antoninbas]) Drop packets received by the uplink in PREROUTING (using iptables) when using the OVS userspace datapath (Kind clusters), to prevent these packets from being processed by the Node's TCP/IP stack. (, [@antoninbas]) Improve documentation for Antrea-native policies to include information about the \"namespaces\" field introduced in Antrea v1.1 for the ClusterNetworkPolicy API. (, [@abhiraut]) Fix inter-Node ClusterIP Service access when AntreaProxy is disabled. (, [@tnqn]) Fix duplicate group ID allocation in AntreaProxy when using a combination of IPv4 and IPv6 Services in dual-stack clusters; this was causing Service connectivity issues. (, [@hongliangl]) Fix intra-Node ClusterIP Service access when both the AntreaProxy and Egress features are enabled. (, [@tnqn]) Fix deadlock when initializing the GroupEntityIndex (in the Antrea Controller) with many groups; this was preventing correct distribution and enforcement of NetworkPolicies. (, [@tnqn]) Fix implementation of ClusterNetworkPolicy rules with an empty \"From\" field (for ingress rules) or an empty \"To\" field (for egress rules). (, [@Dyanngg]) Use \"os/exec\" package instead of third-party modules to run PowerShell commands and configure host networking on Windows; this change prevents Agent goroutines from getting stuck when configuring routes. (, [@lzhecheng]) [Windows] Fix invalid clean-up of the HNS Endpoint during Pod deletion, when Docker is used as the container runtime. (, [@wenyingd]) [Windows] Fix race condition on Windows when retrieving the local HNS Network created by Antrea for containers. (, [@tnqn]) [Windows] Fix checksum computation error when sending PacketOut messages to OVS. (, [@Dyanngg]) Fix invalid conversion function between internal and versioned types for controlplane API, which was causing JSON marshalling errors. (, [@tnqn]) Fix implementation of the v1beta1 version of the legacy \"controlplane.antrea.tanzu.vmware.com\" API: the API was incorrectly using some v1beta2 types and it was missing some field selectors. (, [@tnqn]) Verify that the discovered uplink is not virtual when creating the HNSNetwork; if it is, log a better error message. (, [@tnqn]) [Windows] When allocating a host port for NodePortLocal, make sure that the port is available first and reserve it by binding to it. (, [@antoninbas]) Change default port range for NodePortLocal to 61000-62000, in order to avoid conflict with the default iplocalport_range on Linux. (, [@antoninbas]) Add NamespaceIndex to PodInformer of the NodePortLocal Controller to avoid error logs and slow searches. (, [@tnqn]) When mutating an Antrea-native policy, only set the \"PatchType\" field in the mutating webhook's response if the \"Patch\" field is not empty, or the response may not be valid. (, [@Dyanngg]) Populate the \"egressNetworkPolicyRuleAction\" IPFIX Information Element correctly in the FlowAggregator. 
(, [@zyiou]) Protect Traceflow state from concurrent access in Antrea Octant plugin (in case of multiple browser sessions). (, [@antoninbas]) Remove assumption that there is a single ovs-vswitchd .ctl file when invoking ovs-appctl from the Antrea Agent. (, [@antoninbas]) Fix file permissions for the , [@antoninbas])"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.2.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: \"Build from source\" layout: docs Access to a Kubernetes cluster, version 1.7 or later. Version 1.7.5 or later is required to run `ark backup delete`. A DNS server on the cluster `kubectl` installed installed (minimum version 1.8) Install with go: ``` go get github.com/heptio/ark ``` The files are installed in `$GOPATH/src/github.com/heptio/ark`. You can build your Ark image locally on the machine where you run your cluster, or you can push it to a private registry. This section covers both workflows. Set the `$REGISTRY` environment variable (used in the `Makefile`) to push the Heptio Ark images to your own registry. This allows any node in your cluster to pull your locally built image. In the Ark root directory, to build your container with the tag `$REGISTRY/ark:$VERSION`, run: ``` make container ``` To push your image to a registry, use `make push`. The following files are automatically generated from the source code: The clientset Listers Shared informers Documentation Protobuf/gRPC types If you make any of the following changes, you must run `make update` to regenerate the files: Add/edit/remove command line flags and/or their help text Add/edit/remove commands or subcommands Add new API types If you make the following change, you must run to regenerate files: Add/edit/remove protobuf message or service definitions. These changes require the . By default, `make` builds an `ark` binary that runs on your host operating system and architecture. To build for another platform, run `make build-<GOOS>-<GOARCH`. For example, to build for the Mac, run `make build-darwin-amd64`. All binaries are placed in `output/bin/<GOOS>/<GOARCH>`-- for example, `output/bin/darwin/amd64/ark`. Ark's `Makefile` has a convenience target, `all-build`, that builds the following platforms: linux-amd64 linux-arm linux-arm64 darwin-amd64 windows-amd64 To run unit tests, use `make test`. You can also run `make verify` to ensure that all generated files (clientset, listers, shared informers, docs) are up to date. When running Heptio Ark, you will need to account for the following (all of which are handled in the manifests): Appropriate RBAC permissions in the cluster Read access for all data from the source cluster and namespaces Write access to the target cluster and namespaces Cloud provider credentials Read/write access to volumes Read/write access to object storage for backup data A definition for the Ark server See for more details. When your Ark deployment is up and running, you must replace the Heptio-provided Ark image with the image that you built. Run: ``` kubectl set image deployment/ark ark=$REGISTRY/ark:$VERSION ``` where `$REGISTRY` and `$VERSION` are the values that you built with. If you need to add or update the vendored dependencies, see ."
}
] |
{
"category": "Runtime",
"file_name": "build-from-scratch.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Portworx link: https://github.com/portworx/velero-plugin objectStorage: false volumesnapshotter: true localStorage: true To take snapshots of Portworx volumes through Velero you need to install and configure the Portworx plugin."
}
] |
{
"category": "Runtime",
"file_name": "05-portworx.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document describes how to set up a single-machine Kubernetes (k8s) cluster. The Kubernetes cluster will use the and to launch workloads. Kubernetes, Kubelet, `kubeadm` containerd Kata Containers Note: For information about the supported versions of these components, see the Kata Containers file. First, follow the to install and configure containerd. Then, make sure the containerd works with the . Follow the instructions for . Check `kubeadm` is now available ```bash $ command -v kubeadm ``` In order to allow Kubelet to use containerd (using the CRI interface), configure the service to point to the `containerd` socket. Configure Kubernetes to use `containerd` ```bash $ sudo mkdir -p /etc/systemd/system/kubelet.service.d/ $ cat << EOF | sudo tee /etc/systemd/system/kubelet.service.d/0-containerd.conf [Service] Environment=\"KUBELETEXTRAARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock\" EOF ``` Inform systemd about the new configuration ```bash $ sudo systemctl daemon-reload ``` If you are behind a proxy, use the following script to configure your proxy for docker, Kubelet, and containerd: ```bash $ services=\" kubelet containerd docker \" $ for service in ${services}; do service_dir=\"/etc/systemd/system/${service}.service.d/\" sudo mkdir -p ${service_dir} cat << EOF | sudo tee \"${service_dir}/proxy.conf\" [Service] Environment=\"HTTPPROXY=${httpproxy}\" Environment=\"HTTPSPROXY=${httpsproxy}\" Environment=\"NOPROXY=${noproxy}\" EOF done $ sudo systemctl daemon-reload ``` Make sure `containerd` is up and running ```bash $ sudo systemctl restart containerd $ sudo systemctl status containerd ``` Prevent conflicts between `docker` iptables (packet filtering) rules and k8s pod communication If Docker is installed on the node, it is necessary to modify the rule below. See https://github.com/kubernetes/kubernetes/issues/40182 for further details. ```bash $ sudo iptables -P FORWARD ACCEPT ``` Start cluster using `kubeadm` ```bash $ sudo kubeadm init --cri-socket /run/containerd/containerd.sock --pod-network-cidr=10.244.0.0/16 $ export KUBECONFIG=/etc/kubernetes/admin.conf $ sudo -E kubectl get nodes $ sudo -E kubectl get pods ``` A pod network plugin is needed to allow pods to communicate with each other. You can find more about CNI plugins from the guide. By default the CNI plugin binaries is installed under `/opt/cni/bin` (in package `kubernetes-cni`), you only need to create a configuration file for CNI plugin. ```bash $ sudo -E mkdir -p /etc/cni/net.d $ sudo -E cat > /etc/cni/net.d/10-mynet.conf <<EOF { \"cniVersion\": \"0.2.0\", \"name\": \"mynet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"isGateway\": true, \"ipMasq\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"172.19.0.0/24\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ] } } EOF ``` By default, the cluster will not schedule pods in the control-plane node. To enable control-plane node scheduling: ```bash $ sudo -E kubectl taint nodes --all node-role.kubernetes.io/control-plane- ``` By default, all pods are created with the default runtime configured in containerd. From Kubernetes v1.12, users can use to specify a different runtime for Pods. ```bash $ cat > runtime.yaml <<EOF apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: kata handler: kata EOF $ sudo -E kubectl apply -f runtime.yaml ``` If a pod has the `runtimeClassName` set to `kata`, the CRI runs the pod with the . 
Create a pod configuration that uses the Kata Containers runtime ```bash $ cat << EOF | tee nginx-kata.yaml apiVersion: v1 kind: Pod metadata: name: nginx-kata spec: runtimeClassName: kata containers: name: nginx image: nginx EOF ``` Create the pod ```bash $ sudo -E kubectl apply -f nginx-kata.yaml ``` Check that the pod is running ```bash $ sudo -E kubectl get pods ``` Check that the hypervisor is running ```bash $ ps aux | grep qemu ``` ```bash $ sudo -E kubectl delete -f nginx-kata.yaml ```"
}
] |
{
"category": "Runtime",
"file_name": "how-to-use-k8s-with-containerd-and-kata.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "slug: 2 title: Reliable Helper System for HwameiStor Is Online authors: [JieLi, Michael] tags: [hello, Hwameistor] News today: The HwameiStor Reliable Helper System, an automatic and reliable cloud-native local storage maintenance system dedicated to system operation and maintenance, has been launched. DaoCloud officially opens source of HwameiStor Reliable Helper System, a cloud native, automatic, reliable local storage maintenance system. This system is still in the alpha stage. HwameiStor creates a local storage pool with HDD, SSD, and NVMe disks for a central management. As the underlying data base used by applications, disks often face risks such as natural and unintentional damage. In this case, the Reliable Helper System comes out for disk operation and maintenance. All developers and enthusiasts are welcome to try it out. In the cloud native era, application developers can focus on the business logic itself, while the agility, scalability, and reliability required by the application runtime attribute to the infrastructure platform and O\\&M team. The HwameiStor Reliable Helper System is a reliable operation and maintenance system that meets the requirements of the cloud-native era. Currently, it supports the feature of one-click disk replacement. Reliable data migration and backfill Automatically recognize RAID disks and determine if the data migration and backfill is required to guarantee data reliability. One-click disk replacement This feature is implemented by using the disk uuid. Intuitive alert reminder If any exceptions occur in the process of one-click disk replacement, the system will raise an alert to remind you. If the coming future is an era of intelligent Internet, developers will be the pioneers to that milestone, and the open source community will become the \"metaverse\" of developers. If you have any questions about the HwameiStor cloud-native local storage system, welcome to join the community to explore this metaverse world dedicated for developers and grow together."
}
] |
{
"category": "Runtime",
"file_name": "2022-05-19_helper-system-post.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "% crio.conf(5) Kubernetes Container Runtime Daemon for Open Container Initiative Containers % Aleksa Sarai % OCTOBER 2016 crio.conf - configuration file of the CRI-O OCI Kubernetes Container Runtime daemon The CRI-O configuration file specifies all of the available configuration options and command-line flags for the , but in a TOML format that can be more easily modified and versioned. CRI-O supports partial configuration reload during runtime, which can be done by sending SIGHUP to the running process. Currently supported options in `crio.conf` are explicitly marked with 'This option supports live configuration reload'. The containers-registries.conf(5) file can be reloaded as well by sending SIGHUP to the `crio` process. The default crio.conf is located at /etc/crio/crio.conf. The is used as the encoding of the configuration file. Every option and subtable listed here is nested under a global \"crio\" table. No bare options are used. The format of TOML can be simplified to: [table] option = value [table.subtable1] option = value [table.subtable2] option = value CRI-O reads its storage defaults from the containers-storage.conf(5) file located at /etc/containers/storage.conf. Modify this storage configuration if you want to change the system's defaults. If you want to modify storage just for CRI-O, you can change the storage configuration options here. root=\"/var/lib/containers/storage\" Path to the \"root directory\". CRI-O stores all of its data, including containers images, in this directory. runroot=\"/var/run/containers/storage\" Path to the \"run directory\". CRI-O stores all of its state in this directory. storage_driver=\"overlay\" Storage driver used to manage the storage of images and containers. Please refer to containers-storage.conf(5) to see all available storage drivers. storage_option=[] List to pass options to the storage driver. Please refer to containers-storage.conf(5) to see all available storage options. log_dir=\"/var/log/crio/pods\" The default log directory where all logs will go unless directly specified by the kubelet. The log directory specified must be an absolute directory. version_file=\"/var/run/crio/version\" Location for CRI-O to lay down the temporary version file. It is used to check if crio wipe should wipe containers, which should always happen on a node reboot version_file_persist=\"\" Location for CRI-O to lay down the persistent version file. It is used to check if crio wipe should wipe images, which should only happen when CRI-O has been upgraded imagestore=\"\" Store newly pulled images in the specified path, rather than the path provided by --root. internal_wipe=true This option is currently DEPRECATED, and will be removed in the future. Whether CRI-O should wipe containers after a reboot and images after an upgrade when the server starts. If set to false, one must run `crio wipe` to wipe the containers and images in these situations. internal_repair=false InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart. If it was, CRI-O also attempts to repair the storage. clean_shutdown_file=\"/var/lib/crio/clean.shutdown\" Location for CRI-O to lay down the clean shutdown file. It is used to check whether crio had time to sync before shutting down. If not found, crio wipe will clear the storage directory. The `crio.api` table contains settings for the kubelet/gRPC interface. listen=\"/var/run/crio/crio.sock\" Path to AF_LOCAL socket on which CRI-O will listen. 
stream_address=\"127.0.0.1\" IP address on which the stream server will listen. stream_port=\"0\" The port on which the stream server will listen. If the port is set to \"0\", then CRI-O will allocate a random free port number. stream_enable_tls=false Enable encrypted TLS transport of the stream server. stream_idle_timeout=\"\" Length of time until open streams terminate due to lack of"
},
{
"data": "stream_tls_cert=\"\" Path to the x509 certificate file used to serve the encrypted stream. This file can change and CRI-O will automatically pick up the changes within 5 minutes. stream_tls_key=\"\" Path to the key file used to serve the encrypted stream. This file can change and CRI-O will automatically pick up the changes within 5 minutes. stream_tls_ca=\"\" Path to the x509 CA(s) file used to verify and authenticate client communication with the encrypted stream. This file can change and CRI-O will automatically pick up the changes within 5 minutes. grpc_max_send_msg_size=83886080 Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 1024 1024. grpc_max_recv_msg_size=83886080 Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 1024 1024. The `crio.runtime` table contains settings pertaining to the OCI runtime used and options for how to set up and manage the OCI runtime. default_runtime=\"runc\" The name of the OCI runtime to be used as the default. This option supports live configuration reload. default_ulimits=[] A list of ulimits to be set in containers by default, specified as \"<ulimit name>=<soft limit>:<hard limit>\", for example:\"nofile=1024:2048\". If nothing is set here, settings will be inherited from the CRI-O daemon. no_pivot=false If true, the runtime will not use `pivotroot`, but instead use `MSMOVE`. decryption_keys_path=\"/etc/crio/keys/\" Path where the keys required for image decryption are located conmon=\"\" Path to the conmon binary, used for monitoring the OCI runtime. Will be searched for using $PATH if empty. This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorPath. conmon_cgroup=\"\" Cgroup setting for conmon This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup. conmon_env=[] Environment variable list for the conmon process, used for passing necessary environment variables to conmon or the runtime. This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv. default_env=[] Additional environment variables to set for all the containers. These are overridden if set in the container image spec or in the container runtime configuration. selinux=false If true, SELinux will be used for pod separation on the host. This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future. seccomp_profile=\"\" Path to the seccomp.json profile which is used as the default seccomp profile for the runtime. If not specified, then the internal default seccomp profile will be used. This option is currently deprecated, and will be replaced by the SeccompDefault FeatureGate in Kubernetes. apparmor_profile=\"\" Used to change the name of the default AppArmor profile of CRI-O. The default profile name is \"crio-default\". blockio_config_file=\"\" Path to the blockio class configuration file for configuring the cgroup blockio controller. blockio_reload=false If true, the runtime reloads blockioconfigfile and rescans block devices in the system before applying blockio parameters. cdi_spec_dirs=[] Directories to scan for Container Device Interface Specifications to enable CDI device injection. For more details about CDI and the syntax of CDI Spec files please refer to https://github.com/container-orchestrated-devices/container-device-interface. Directories later in the list have precedence over earlier ones. 
The default directory list is: ``` cdispecdirs = [ \"/etc/cdi\", \"/var/run/cdi\", ] ``` irqbalance_config_file=\"/etc/sysconfig/irqbalance\" Used to change irqbalance service config file which is used by CRI-O. For CentOS/SUSE, this file is located at /etc/sysconfig/irqbalance. For Ubuntu, this file is located at /etc/default/irqbalance. irqbalance_config_restore_file=\"/etc/sysconfig/origirqbanned_cpus\" Used to set the irqbalance banned cpu mask to restore at CRI-O startup. If set to 'disable', no restoration attempt will be done. rdt_config_file=\"\" Path to the RDT configuration file for configuring the resctrl pseudo-filesystem. cgroup_manager=\"systemd\" Cgroup management implementation used for the runtime. default_capabilities=[] List of default capabilities for"
},
{
"data": "If it is empty or commented out, only the capabilities defined in the container json file by the user/kube will be added. The default list is: ``` default_capabilities = [ \"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NETBINDSERVICE\", \"KILL\", ] ``` add_inheritable_capabilities=false Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective. If capabilities are expected to work for non-root users, this option should be set. default_sysctls=[] List of default sysctls. If it is empty or commented out, only the sysctls defined in the container json file by the user/kube will be added. One example would be allowing ping inside of containers. On systems that support `/proc/sys/net/ipv4/pinggrouprange`, the default list could be: ``` default_sysctls = [ \"net.ipv4.pinggrouprange = 0 2147483647\", ] ``` allowed_devices=[] List of devices on the host that a user can specify with the \"io.kubernetes.cri-o.Devices\" allowed annotation. additional_devices=[] List of additional devices. Specified as \"<device-on-host>:<device-on-container>:<permissions>\", for example: \"--additional-devices=/dev/sdc:/dev/xvdc:rwm\". If it is empty or commented out, only the devices defined in the container json file by the user/kube will be added. hooks_dir=[\"path\", ...] Each `*.json` file in the path configures a hook for CRI-O containers. For more details on the syntax of the JSON files and the semantics of hook injection, see `oci-hooks(5)`. CRI-O currently support both the 1.0.0 and 0.1.0 hook schemas, although the 0.1.0 schema is deprecated. Paths listed later in the array have higher precedence (`oci-hooks(5)` discusses directory precedence). For the annotation conditions, CRI-O uses the Kubernetes annotations, which are a subset of the annotations passed to the OCI runtime. For example, `io.kubernetes.cri-o.Volumes` is part of the OCI runtime configuration annotations, but it is not part of the Kubernetes annotations being matched for hooks. For the bind-mount conditions, only mounts explicitly requested by Kubernetes configuration are considered. Bind mounts that CRI-O inserts by default (e.g. `/dev/shm`) are not considered. default_mounts=[] List of default mounts for each container. Deprecated: this option will be removed in future versions in favor of `defaultmountsfile`. default_mounts_file=\"\" Path to the file specifying the defaults mounts for each container. The format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads its default mounts from the following two files: 1) `/etc/containers/mounts.conf` (i.e., defaultmountsfile): This is the override file, where users can either add in their own default mounts, or override the default mounts shipped with the package. 2) `/usr/share/containers/mounts.conf`: This is the default file read for mounts. If you want CRI-O to read from a different, specific mounts file, you can change the defaultmountsfile. Note, if this is done, CRI-O will only add mounts it finds in this file. pids_limit=-1 Maximum number of processes allowed in a container. This option is deprecated. The Kubelet flag `--pod-pids-limit` should be used instead. log_filter=\"\" Filter the log messages by the provided regular expression. This option supports live configuration reload. For example 'request:.*' filters all gRPC requests. log_level=\"info\" Changes the verbosity of the logs based on the level it is set to. 
Options are fatal, panic, error, warn, info, debug, and trace. This option supports live configuration reload. log_size_max=-1 Maximum size allowed for the container log file. Negative numbers indicate that no size limit is imposed. If it is positive, it must be >= 8192 to match/exceed conmon's read buffer. The file is truncated and re-opened so the limit is never exceeded. This option is deprecated. The Kubelet flag `--container-log-max-size` should be used"
},
{
"data": "log_to_journald=false Whether container output should be logged to journald in addition to the kubernetes log file. container_exits_dir=\"/var/run/crio/exits\" Path to directory in which container exit files are written to by conmon. container_attach_socket_dir=\"/var/run/crio\" Path to directory for container attach sockets. bind_mount_prefix=\"\" A prefix to use for the source of the bind mounts. This option would be useful when running CRI-O in a container and the / directory on the host is mounted as /host in the container. Then if CRI-O runs with the --bind-mount-prefix=/host option, CRI-O would add the /host directory to any bind mounts it hands over to CRI. If Kubernetes asked to have /var/lib/foobar bind mounted into the container, then CRI-O would bind mount /host/var/lib/foobar. Since CRI-O itself is running in a container with / or the host mounted on /host, the container would end up with /var/lib/foobar from the host mounted in the container rather than /var/lib/foobar from the CRI-O container. read_only=false If set to true, all containers will run in read-only mode. uid_mappings=\"\" The UID mappings for the user namespace of each container. A range is specified in the form containerUID:HostUID:Size. Multiple ranges must be separated by comma. This option is deprecated, and will be replaced with native Kubernetes user namespace support in the future. minimum_mappable_uid=-1 The lowest host UID which can be specified in mappings supplied, either as part of a uid_mappings or as part of a request received over CRI, for a pod that will be run as a UID other than 0. This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future. gid_mappings=\"\" The GID mappings for the user namespace of each container. A range is specified in the form containerGID:HostGID:Size. Multiple ranges must be separated by comma. This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future. minimum_mappable_gid=-1 The lowest host GID which can be specified in mappings supplied, either as part of a gid_mappings or as part of a request received over CRI, for a pod that will be run as a UID other than 0. This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future. ctr_stop_timeout=30 The minimal amount of time in seconds to wait before issuing a timeout regarding the proper termination of the container. drop_infra_ctr=true Determines whether we drop the infra container when a pod does not have a private PID namespace, and does not use a kernel separating runtime (like kata). Requires manage_ns_lifecycle to be true. infra_ctr_cpuset=\"\" Determines the CPU set to run infra containers. If not specified, the CRI-O will use all online CPUs to run infra containers. You can specify CPUs in the Linux CPU list format. To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus. shared_cpuset=\"\" Determines the CPU set which is allowed to be shared between guaranteed containers, regardless of, and in addition to, the exclusiveness of their CPUs. This field is optional and would not be used if not specified. You can specify CPUs in the Linux CPU list format. namespaces_dir=\"/var/run\" The directory where the state of the managed namespaces gets tracked. 
Only used when managenslifecycle is true pinns_path=\"\" The path to find the pinns binary, which is needed to manage namespace lifecycle absent_mount_sources_to_reject=[] A list of paths that, when absent from the host, will cause a container creation to fail (as opposed to the current behavior of creating a"
},
{
"data": "device_ownership_from_security_context=false Changes the default behavior of setting container devices uid/gid from CRI's SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid. enable_criu_support=true Enable CRIU integration, requires that the criu binary is available in $PATH. (default: true) enable_pod_events=false Enable CRI-O to generate the container pod-level events in order to optimize the performance of the Pod Lifecycle Event Generator (PLEG) module in Kubelet. hostnetwork_disable_selinux=true Determines whether SELinux should be disabled within a pod when it is running in the host network namespace. disable_hostport_mapping=false Enable/Disable the container hostport mapping in CRI-O. Default value is set to 'false'. timezone=\"\" To set the timezone for a container in CRI-O. If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine. The \"crio.runtime.runtimes\" table defines a list of OCI compatible runtimes. The runtime to use is picked based on the runtime handler provided by the CRI. If no runtime handler is provided, the runtime will be picked based on the level of trust of the workload. This option supports live configuration reload. This option supports live configuration reload. runtime_path=\"\" Path to the OCI compatible runtime used for this runtime handler. runtime_root=\"\" Root directory used to store runtime data runtime_type=\"oci\" Type of the runtime used for this runtime handler. \"oci\", \"vm\" runtime_config_path=\"\" Path to the runtime configuration file, should only be used with VM runtime types privileged_without_host_devices=false Whether this runtime handler prevents host devices from being passed to privileged containers. allowed_annotations=[] This field is currently DEPRECATED. If you'd like to use allowed_annotations, please use a workload. A list of experimental annotations this runtime handler is allowed to process. The currently recognized values are: \"io.kubernetes.cri-o.userns-mode\" for configuring a user namespace for the pod. \"io.kubernetes.cri-o.Devices\" for configuring devices for the pod. \"io.kubernetes.cri-o.ShmSize\" for configuring the size of /dev/shm. \"io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME\" for configuring the cgroup v2 unified block for a container. \"io.containers.trace-syscall\" for tracing syscalls via the OCI seccomp BPF hook. \"seccomp-profile.kubernetes.cri-o.io\" for setting the seccomp profile for: a specific container by using: \"seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>\" a whole pod by using: \"seccomp-profile.kubernetes.cri-o.io/POD\" Note that the annotation works on containers as well as on images. For images, the plain annotation `seccomp-profile.kubernetes.cri-o.io` can be used without the required `/POD` suffix or a container name. container_min_memory=\"\" The minimum memory that must be set for a container. This value can be used to override the currently set global value for a specific runtime. If not set, a global default value of \"12 MiB\" will be used. platform_runtime_paths={} A mapping of platforms to the corresponding runtime executable paths for the runtime handler. The \"crio.runtime.workloads\" table defines a list of workloads - a way to customize the behavior of a pod and container. A workload is chosen for a pod based on whether the workload's activation_annotation is an annotation on the pod. 
activation_annotation=\"\" activation_annotation is the pod annotation that activates these workload settings. annotation_prefix=\"\" annotation_prefix is the way a pod can override a specific resource for a container. The full annotation must be of the form `$annotation_prefix.$resource/$ctrname = $value`. allowed_annotations=[] allowed_annotations is a slice of experimental annotations that this workload is allowed to process. The currently recognized values are: \"io.kubernetes.cri-o.userns-mode\" for configuring a user namespace for the pod. \"io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw\" for mounting cgroups writably when set to \"true\". \"io.kubernetes.cri-o.Devices\" for configuring devices for the pod. \"io.kubernetes.cri-o.ShmSize\" for configuring the size of /dev/shm. \"io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME\" for configuring the cgroup v2 unified block for a container. \"io.containers.trace-syscall\" for tracing syscalls via the OCI seccomp BPF hook. \"io.kubernetes.cri-o.seccompNotifierAction\" for enabling the seccomp notifier feature. \"io.kubernetes.cri-o.umask\" for setting the umask for container init process."
},
{
"data": "for setting the RDT class of a container \"seccomp-profile.kubernetes.cri-o.io\" for setting the seccomp profile for: a specific container by using: \"seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>\" a whole pod by using: \"seccomp-profile.kubernetes.cri-o.io/POD\" Note that the annotation works on containers as well as on images. \"io.kubernetes.cri-o.DisableFIPS\" for disabling FIPS mode for a pod within a FIPS-enabled Kubernetes cluster. This feature can help you to debug seccomp related issues, for example if blocked syscalls (permission denied errors) have negative impact on the workload. To be able to use this feature, configure a runtime which has the annotation \"io.kubernetes.cri-o.seccompNotifierAction\" in the `allowed_annotations` array. It also requires at least runc 1.1.0 or crun 0.19 which support the notifier feature. If everything is setup, CRI-O will modify chosen seccomp profiles for containers if the annotation \"io.kubernetes.cri-o.seccompNotifierAction\" is set on the Pod sandbox. CRI-O will then get notified if a container is using a blocked syscall and then terminate the workload after a timeout of 5 seconds if the value of \"io.kubernetes.cri-o.seccompNotifierAction=stop\". This also means that multiple syscalls can be captured during that period, while the timeout will get reset once a new syscall has been discovered. This also means that the Pods \"restartPolicy\" has to be set to \"Never\", otherwise the kubelet will restart the container immediately. Please be aware that CRI-O is not able to get notified if a syscall gets blocked based on the seccomp defaultAction, which is a general runtime limitation. The resources table is a structure for overriding certain resources for pods using this workload. This structure provides a default value, and can be overridden by using the AnnotationPrefix. cpushares=\"\" Specifies the number of CPU shares this pod has access to. cpuset=\"\" Specifies the cpuset this pod has access to. The `crio.image` table contains settings pertaining to the management of OCI images. CRI-O reads its configured registries defaults from the system wide containers-registries.conf(5) located in /etc/containers/registries.conf. If you want to modify just CRI-O, you can change the registries configuration in this file. Otherwise, leave `insecure_registries` and `registries` commented out to use the system's defaults from /etc/containers/registries.conf. default_transport=\"docker://\" Default transport for pulling images from a remote container storage. global_auth_file=\"\" The path to a file like /var/lib/kubelet/config.json holding credentials necessary for pulling images from secure registries. pause_image=\"registry.k8s.io/pause:3.9\" The on-registry image used to instantiate infra containers. The value should start with a registry host name. This option supports live configuration reload. pause_image_auth_file=\"\" The path to a file like /var/lib/kubelet/config.json holding credentials specific to pulling the pause_image from above. This option supports live configuration reload. pause_command=\"/pause\" The command to run to have a container stay in the paused state. This option supports live configuration reload. pinned_images=[] A list of images to be excluded from the kubelet's garbage collection. It allows specifying image names using either exact, glob, or keyword patterns. Exact matches must match the entire name, glob matches can have a wildcard * at the end, and keyword matches can have wildcards on both ends. 
By default, this list includes the `pause` image if configured by the user, which is used as a placeholder in Kubernetes pods. signature_policy=\"\" Path to the file which decides what sort of policy we use when deciding whether or not to trust an image that we've pulled. It is not recommended that this option be used, as the default behavior of using the system-wide default policy (i.e., /etc/containers/policy.json) is most often preferred. Please refer to containers-policy.json(5) for more details. signature_policy_dir=\"/etc/crio/policies\" Root path for pod namespace-separated signature policies. The final policy to be used on image pull will be"
},
{
"data": "If no pod namespace is being provided on image pull (via the sandbox config), or the concatenated path is non existent, then the signature_policy or system wide policy will be used as fallback. Must be an absolute path. image_volumes=\"mkdir\" Controls how image volumes are handled. The valid values are mkdir, bind and ignore; the latter will ignore volumes entirely. insecure_registries=[] List of registries to skip TLS verification for pulling images. registries=[\"docker.io\"] List of registries to be used when pulling an unqualified image. Note support for this option has been dropped and it has no effect. Please refer to `containers-registries.conf(5)` for configuring unqualified-search registries. big_files_temporary_dir=\"\" Path to the temporary directory to use for storing big files, used to store image blobs and data streams related to containers image management. separate_pull_cgroup=\"\" [EXPERIMENTAL] If its value is set, then images are pulled into the specified cgroup. If its value is set to \"pod\", then the pod's cgroup is used. It is currently supported only with the systemd cgroup manager. auto_reload_registries=false If true, CRI-O will automatically reload the mirror registry when there is an update to the 'registries.conf.d' directory. Default value is set to 'false'. The `crio.network` table containers settings pertaining to the management of CNI plugins. cni_default_network=\"\" The default CNI network name to be selected. If not set or \"\", then CRI-O will pick-up the first one found in network_dir. network_dir=\"/etc/cni/net.d/\" Path to the directory where CNI configuration files are located. plugin_dirs=[\"/opt/cni/bin/\",] List of paths to directories where CNI plugin binaries are located. The `crio.metrics` table containers settings pertaining to the Prometheus based metrics retrieval. enable_metrics=false Globally enable or disable metrics support. metrics_collectors=[\"imagepullslayersize\", \"containerseventsdroppedtotal\", \"containersoomtotal\", \"processesdefunct\", \"operationstotal\", \"operationslatencyseconds\", \"operationslatencysecondstotal\", \"operationserrorstotal\", \"imagepullsbytestotal\", \"imagepullsskippedbytestotal\", \"imagepullsfailuretotal\", \"imagepullssuccesstotal\", \"imagelayerreusetotal\", \"containersoomcounttotal\", \"containersseccompnotifiercounttotal\", \"resourcesstalledat_stage\"] Specify enabled metrics collectors. Per default all metrics are enabled. metrics_host=\"127.0.0.1\" The IP address or hostname on which the metrics server will listen. metrics_port=9090 The port on which the metrics server will listen. metrics_socket=\"\" The socket on which the metrics server will listen. metrics_cert=\"\" The certificate for the secure metrics server. metrics_key=\"\" The certificate key for the secure metrics server. [EXPERIMENTAL] The `crio.tracing` table containers settings pertaining to the export of OpenTelemetry trace data. enable_tracing=false Globally enable or disable OpenTelemetry trace data exporting. tracing_endpoint=\"0.0.0.0:4317\" Address on which the gRPC trace collector will listen. tracing_sampling_rate_per_million=\"\" Number of samples to collect per million OpenTelemetry spans. Set to 1000000 to always sample. The `crio.stats` table specifies all necessary configuration for reporting container and pod stats. stats_collection_period=0 The number of seconds between collecting pod and container stats. If set to 0, the stats are collected on-demand instead. 
DEPRECATED: This option will be removed in the future. Please use `collection_period` instead. collection_period=0 The number of seconds between collecting pod/container stats and pod sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead. included_pod_metrics=[] A list of pod metrics to include. Specify the names of the metrics to include in this list. The `crio.nri` table contains settings for controlling NRI (Node Resource Interface) support in CRI-O. enable_nri=true Enable CRI-O NRI support. nri_plugin_dir=\"/opt/nri/plugins\" Directory to scan for pre-installed plugins to automatically start. nri_plugin_config_dir=\"/etc/nri/conf.d\" Directory to scan for configuration of pre-installed plugins. nri_listen=\"/var/run/nri/nri.sock\" Socket to listen on for externally started NRI plugins to connect to. nri_disable_connections=false Disable connections from externally started NRI plugins. nri_plugin_registration_timeout=\"5s\" Timeout for a plugin to register itself with NRI. nri_plugin_request_timeout=\"2s\" Timeout for a plugin to handle an NRI request. crio.conf.d(5), containers-storage.conf(5), containers-policy.json(5), containers-registries.conf(5), crio(8) Aug 2018, Update to the latest state by Valentin Rothberg <[email protected]> Oct 2016, Originally compiled by Aleksa Sarai <[email protected]>"
}
] |
{
"category": "Runtime",
"file_name": "crio.conf.5.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Velero should provide a way to delete items created during a backup, with a model and interface similar to that of BackupItemAction and RestoreItemAction plugins. These plugins would be invoked when a backup is deleted, and would receive items from within the backup tarball. As part of Container Storage Interface (CSI) snapshot support, Velero added a new pattern for backing up and restoring snapshots via BackupItemAction and RestoreItemAction plugins. When others have tried to use this pattern, however, they encountered issues with deleting the resources made in their own ItemAction plugins, as Velero does not expose any sort of extension at backup deletion time. These plugins largely seek to delete resources that exist outside of Kubernetes. This design seeks to provide the missing extension point. Provide a DeleteItemAction API for plugins to implement Update Velero backup deletion logic to invoke registered DeleteItemAction plugins. Specific implementations of the DeleteItemAction API beyond test cases. Rollback of DeleteItemAction execution. The DeleteItemAction plugin API will closely resemble the RestoreItemAction plugin design, in that plugins will receive the Velero `Backup` Go struct that is being deleted and a matching Kubernetes resource extracted from the backup tarball. The Velero backup deletion process will be modified so that if there are any DeleteItemAction plugins registered, the backup tarball will be downloaded and extracted, similar to how restore logic works now. Then, each item in the backup tarball will be iterated over to see if a DeleteItemAction plugin matches for it. If a DeleteItemAction plugin matches, the `Backup` and relevant item will be passed to the DeleteItemAction. The DeleteItemAction plugins will be run first in the backup deletion process, before deleting snapshots from storage or `Restore`s from the Kubernetes API server. DeleteItemAction plugins cannot rollback their actions. This is because there is currently no way to recover other deleted components of a backup, such as volume/restic snapshots or other DeleteItemAction resources. DeleteItemAction plugins will be run in alphanumeric order based on their registered names. The `DeleteItemAction` interface is as follows: ```go // DeleteItemAction is an actor that performs an action based on an item in a backup that is being deleted. type DeleteItemAction interface { // AppliesTo returns information about which resources this action should be invoked for. // A DeleteItemAction's Execute function will only be invoked on items that match the returned // selector. A zero-valued ResourceSelector matches all resources. AppliesTo() (ResourceSelector, error) // Execute allows the ItemAction to perform arbitrary logic with the item being deleted. Execute(DeleteItemActionInput) error } ``` The `DeleteItemActionInput` type is defined as follows: ```go type DeleteItemActionInput struct { // Item is the item taken from the pristine backed up version of resource. Item runtime.Unstructured // Backup is the representation of the backup resource processed by Velero. Backup *api.Backup } ``` Both `DeleteItemAction` and `DeleteItemActionInput` will be defined in `pkg/plugin/velero/deleteitemaction.go`. In `pkg/plugin/proto`, add `DeleteItemAction.proto`. Protobuf definitions will be necessary for: ```protobuf message DeleteItemActionExecuteRequest { ... } message DeleteItemActionExecuteResponse { ... } message DeleteItemActionAppliesToRequest { ... } message DeleteItemActionAppliesToResponse {"
},
{
"data": "} service DeleteItemAction { rpc AppliesTo(DeleteItemActionAppliesToRequest) returns (DeleteItemActionAppliesToResponse) rpc Execute(DeleteItemActionExecuteRequest) returns (DeleteItemActionExecuteResponse) } ``` Once these are written, then a client and server implementation can be written in `pkg/plugin/framework/deleteitemactionclient.go` and `pkg/plugin/framework/deleteitemactionserver.go`, respectively. These should be largely the same as the client and server implementations for `RestoreItemAction` and `BackupItemAction` plugins. Similar to `RestoreItemAction` and `BackupItemAction` plugins, restartable processes will need to be implemented. In `pkg/plugin/clientmgmt`, add `restartabledeleteitem_action.go`, creating the following unexported type: ```go type restartableDeleteItemAction struct { key kindAndName sharedPluginProcess RestartableProcess config map[string]string } // newRestartableDeleteItemAction returns a new restartableDeleteItemAction. func newRestartableDeleteItemAction(name string, sharedPluginProcess RestartableProcess) *restartableDeleteItemAction { // ... } // getDeleteItemAction returns the delete item action for this restartableDeleteItemAction. It does not restart the // plugin process. func (r *restartableDeleteItemAction) getDeleteItemAction() (velero.DeleteItemAction, error) { // ... } // getDelegate restarts the plugin process (if needed) and returns the delete item action for this restartableDeleteItemAction. func (r *restartableDeleteItemAction) getDelegate() (velero.DeleteItemAction, error) { // ... } // AppliesTo restarts the plugin's process if needed, then delegates the call. func (r *restartableDeleteItemAction) AppliesTo() (velero.ResourceSelector, error) { // ... } // Execute restarts the plugin's process if needed, then delegates the call. func (r restartableDeleteItemAction) Execute(input velero.DeleteItemActionInput) (error) { // ... } ``` This file will be very similar in structure to Add the following methods to `pkg/plugin/clientmgmt/manager.go`'s `Manager` interface: ```go type Manager interface { ... // Get DeleteItemAction returns a DeleteItemAction plugin for name. GetDeleteItemAction(name string) (DeleteItemAction, error) // GetDeteteItemActions returns the all DeleteItemAction plugins. GetDeleteItemActions() ([]DeleteItemAction, error) } ``` The unexported `manager` type should implement both the `GetDeleteItemAction` and `GetDeleteItemActions`. Both of these methods should have the same exception for `velero.io/`-prefixed plugins that all other types do. `GetDeleteItemAction` and `GetDeleteItemActions` will invoke the `restartableDeleteItemAction` implementations. `pkg/controller/backupdeletioncontroller.go` will be updated to have plugin management invoked. In `processRequest`, before deleting snapshots, get any registered `DeleteItemAction` plugins. If there are none, proceed as normal. If there are one or more, download the backup tarball from backup storage, untar it to temporary storage, and iterate through the items, matching them to the applicable plugins. Another proposal for higher level `DeleteItemActions` was initially included, which would require implementers to individually download the backup tarball themselves. While this may be useful long term, it is not a good fit for the current goals as each plugin would be re-implementing a lot of boilerplate. See the deletion-plugins.md file for this alternative proposal in more detail. 
The `VolumeSnapshotter` interface is not generic enough to meet the requirements here, as it is specifically for taking snapshots of block devices. By their nature, `DeleteItemAction` plugins will be deleting data, which would normally be a security concern. However, these will only be invoked in two situations: either when a `BackupDeleteRequest` is sent by a user via the `velero` CLI or some other management system, or when a Velero `Backup` expires by exceeding its TTL. Because of this, the data deletion is not a concern. In terms of backwards compatibility, this design should stay compatible with most Velero installations that are upgrading. If no DeleteItemAction plugins are present, then the backup deletion process should proceed the same way it worked prior to their inclusion. The implementation dependencies are, roughly, in the order in which they are described in the section."
}
] |
{
"category": "Runtime",
"file_name": "delete-item-action.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: Metrics System Metrics provide insight into what is going on in the cluster. They are an invaluable resource for monitoring and debugging. Alluxio has a configurable metrics system based on the . In the metrics system, sources generate metrics, and sinks consume these metrics. The metrics system polls sources periodically and passes metric records to sinks. Alluxio's metrics are partitioned into different instances corresponding to Alluxio components. Within each instance, users can configure a set of sinks to which metrics are reported. The following instances are currently supported: Master: The Alluxio master process. Worker: The Alluxio worker process. Client: Any process with the Alluxio client library. When the cluster is running with High Availability, by default the standby masters do not serve metrics. Set `alluxio.standby.master.metrics.sink.enabled=true` to have the standby masters also serve metrics. Each metric falls into one of the following metric types: Gauge: Records a value Meter: Measures the rate of events over time (e.g., \"requests per minute\") Counter: Measures the number of times an event occurs Timer: Measures both the rate that a particular event is called and the distribution of its duration For more details about the metric types, please refer to . A <b>sink</b> specifies where metrics are delivered to. Each instance can report to zero or more sinks. `PrometheusMetricsServlet`: Adds a servlet in Web UI to serve metrics data in Prometheus format. `ConsoleSink`: Outputs metrics values to the console. `CsvSink`: Exports metrics data to CSV files at regular intervals. `JmxSink`: Registers metrics for viewing in a JMX console. `GraphiteSink`: Sends metrics to a Graphite server. `MetricsServlet`: Adds a servlet in Web UI to serve metrics data as JSON data. The metrics system is configured via a configuration file that Alluxio expects to be present at `${ALLUXIO_HOME}/conf/metrics.properties`. A custom file location can be specified via the `alluxio.metrics.conf.file` configuration property. Refer to `${ALLUXIO_HOME}/conf/metrics.properties.template` for all possible sink specific configurations. To configure the metrics system on Kubernetes, refer to The Alluxio leading master emits both its instance metrics and a summary of the cluster-wide aggregated metrics. Alluxio leading master and workers: no prerequisites, enabled by default : setting `alluxio.fuse.web.enabled` to `true` in `${ALLUXIO_HOME}/conf/alluxio-site.properties` before launching the standalone Fuse process. You can send an HTTP request to `/metrics/json/` of the target Alluxio processes to get a snapshot of all metrics in JSON format. ```shell $ curl <LEADINGMASTERHOSTNAME>:<MASTERWEBPORT>/metrics/json/ $ curl <WORKERHOSTNAME>:<WORKERWEB_PORT>/metrics/json/ ``` ```shell $ curl 127.0.0.1:19999/metrics/json/ ``` ```shell $ curl 127.0.0.1:30000/metrics/json/ ``` ```shell $ curl 127.0.0.1:20002/metrics/json/ ``` ```shell $ curl 127.0.0.1:30003/metrics/json/ ``` After setting alluxio.fuse.web.enabled=true and launching the standalone Fuse process, get the metrics with its default web port: ```shell $ curl <FUSEWEBHOSTNAME>:<FUSEWEBPORT>/metrics/json/ $ curl 127.0.0.1:49999/metrics/json/ ``` is a monitoring tool that can help to monitor Alluxio metrics changes. 
To enable the Prometheus sink, add the following properties to the metrics property file (`$ALLUXIO_HOME/conf/metrics.properties` by default): ```properties sink.prometheus.class=alluxio.metrics.sink.PrometheusMetricsServlet ``` If Alluxio is deployed in a cluster, this file needs to be distributed to all the nodes. Restart the Alluxio servers to activate the new configuration"
},
{
"data": "To enable Prometheus Sink Setup in the , setting `alluxio.fuse.web.enabled` to `true` in `${ALLUXIO_HOME}/conf/alluxio-site.properties` before launching the standalone Fuse process. You can send an HTTP request to `/metrics/prometheus/` of the target Alluxio process to get a snapshot of metrics in Prometheus format. Get the metrics in Prometheus format from Alluxio leading master or workers or job service or standalone fuse: ```shell $ curl <LEADINGMASTERHOSTNAME>:<MASTERWEBPORT>/metrics/prometheus/ $ curl <WORKERHOSTNAME>:<WORKERWEB_PORT>/metrics/prometheus/ $ curl <LEADINGJOBMASTERHOSTNAME>:<JOBMASTERWEBPORT>/metrics/prometheus/ $ curl <JOBWORKERHOSTNAME>:<JOBWORKERWEB_PORT>/metrics/prometheus/ $ curl <FUSEWEBHOSTNAME>:<FUSEWEBPORT>/metrics/prometheus/ ``` ```shell $ curl 127.0.0.1:19999/metrics/prometheus/ ``` ```shell $ curl 127.0.0.1:30000/metrics/prometheus/ ``` ```shell $ curl 127.0.0.1:20002/metrics/prometheus/ ``` ```shell $ curl 127.0.0.1:30003/metrics/prometheus/ ``` ```shell $ curl 127.0.0.1:49999/metrics/prometheus/ ``` You can now direct platforms like Grafana or Datadog to these HTTP endpoints and read the metrics in Prometheus format. Alternatively, you can configure the Prometheus client using this sample `prometheus.yml` to read the endpoints. This is recommended for interfacing with Grafana. ```yaml scrape_configs: job_name: \"alluxio master\" metrics_path: '/metrics/prometheus/' static_configs: targets: [ '<LEADINGMASTERHOSTNAME>:<MASTERWEBPORT>' ] job_name: \"alluxio worker\" metrics_path: '/metrics/prometheus/' static_configs: targets: [ '<WORKERHOSTNAME>:<WORKERWEB_PORT>' ] job_name: \"alluxio job master\" metrics_path: '/metrics/prometheus/' static_configs: targets: [ '<LEADINGJOBMASTERHOSTNAME>:<JOBMASTERWEBPORT>' ] job_name: \"alluxio job worker\" metrics_path: '/metrics/prometheus/' static_configs: targets: [ '<JOBWORKERHOSTNAME>:<JOBWORKERWEB_PORT>' ] job_name: \"alluxio standalone fuse\" metrics_path: '/metrics/prometheus/' static_configs: targets: [ '<FUSEWEBHOSTNAME>:<FUSEWEBPORT>' ] ``` <b>Be wary when specifying which metrics you want to poll.</b> Prometheus modifies metrics names in order to process them. It usually replaces `.` with `_`, and sometimes appends text. It is good practice to use the `curl` commands listed above to see how the names are transformed by Prometheus. This section gives an example of writing collected metrics to CSV files. First, create the polling directory for `CsvSink` (if it does not already exist): ```shell $ mkdir /tmp/alluxio-metrics ``` In the metrics property file, `$ALLUXIO_HOME/conf/metrics.properties` by default, add the following properties: ```properties sink.csv.class=alluxio.metrics.sink.CsvSink sink.csv.period=1 sink.csv.unit=seconds sink.csv.directory=/tmp/alluxio-metrics ``` If Alluxio is deployed in a cluster, this file needs to be distributed to all the nodes. Restart the Alluxio servers to activate the new configuration changes. After starting Alluxio, the CSV files containing metrics will be found in the `sink.csv.directory`. The filename will correspond with the metric name. Besides the raw metrics shown via metrics servlet or custom metrics configuration, users can track key cluster performance metrics in a more human-readable way in the web interface of Alluxio leading master (`http://<leadingmasterhost>:19999/metrics`). 
The web page includes the following information: Timeseries for Alluxio space and root UFS space percentage usage information Timeseries for aggregated cluster throughput which is essential for determining the effectiveness of the Alluxio cache Cumulative RPC invocations and operations performed by the Alluxio cluster Cumulative API calls served per mount point that can serve as a strong metric for quantifying the latency and potential cost savings provided by Alluxio's namespace virtualization The nickname and original metric name corresponding are shown: <table class=\"table table-striped\"> <tr> <th>Nick Name</th> <th>Original Metric Name</th> </tr> <tr> <td markdown=\"span\">Local Alluxio (Domain Socket) Read</td> <td markdown=\"span\">`Cluster.BytesReadDomain`</td> </tr> <tr> <td markdown=\"span\">Local Alluxio (Domain Socket) Write</td> <td markdown=\"span\">`Cluster.BytesWrittenDomain`</td> </tr> <tr> <td markdown=\"span\">Local Alluxio (Short-circuit) Read</td> <td markdown=\"span\">`Cluster.BytesReadLocal`</td> </tr> <tr> <td markdown=\"span\">Local Alluxio (Short-circuit) Write</td> <td"
},
{
"data": "</tr> <tr> <td markdown=\"span\">Remote Alluxio Read</td> <td markdown=\"span\">`Cluster.BytesReadRemote`</td> </tr> <tr> <td markdown=\"span\">Remote Alluxio Write</td> <td markdown=\"span\">`Cluster.BytesWrittenRemote`</td> </tr> <tr> <td markdown=\"span\">Under Filesystem Read</td> <td markdown=\"span\">`Cluster.BytesReadUfsAll`</td> </tr> <tr> <td markdown=\"span\">Under Filesystem Write</td> <td markdown=\"span\">`Cluster.BytesWrittenUfsAll`</td> </tr> </table> Detailed descriptions of those metrics are in . `Mounted Under FileSystem Read` shows the `Cluster.BytesReadPerUfs.UFS:<UFS_ADDRESS>` of each Alluxio UFS. `Mounted Under FileSystem Write` shows the `Cluster.BytesWrittenPerUfs.UFS:<UFS_ADDRESS>` of each Alluxio UFS. `Logical Operations` and `RPC Invocations` present parts of the . `Saved Under FileSystem Operations` shows the operations fulfilled by Alluxio's namespace directly without accessing UFSes. Performance improvement can be significant if the target UFS is remote or slow in response. Costs can be saved if the underlying storage charges are based on requests. Grafana is a metrics analytics and visualization software used for visualizing time series data. You can use Grafana to better visualize the various metrics that Alluxio collects. The software allows users to more easily see changes in memory, storage, and completed operations in Alluxio. Grafana supports visualizing data from Prometheus. The following steps can help you to build your Alluxio monitoring based on Grafana and Prometheus easily. Install Grafana using the instructions . the Grafana template JSON file for Alluxio. Import the template JSON file to create a dashboard. See this for importing a dashboard. Add the Prometheus data source to Grafana with a custom name, for example, prometheus-alluxio. Refer to the for help on importing a dashboard. Modify the variables in the dashboard/settings with instructions and save your dashboard. <table class=\"table table-striped\"> <tr> <th>Variable</th> <th>Value</th> </tr> <tr> <td markdown=\"span\">`alluxio_datasource`</td> <td markdown=\"span\">Your prometheus datasource name (eg. prometheus-alluxio used in step 4)</td> </tr> <tr> <td markdown=\"span\">`masters`</td> <td markdown=\"span\">Master 'job_name' configured in `prometheus.yml` (eg. alluxio master)</td> </tr> <tr> <td markdown=\"span\">`workers`</td> <td markdown=\"span\">Worker 'job_name' configured in `prometheus.yml` (eg. alluxio worker)</td> </tr> <tr> <td markdown=\"span\">`alluxio_user`</td> <td markdown=\"span\">The user used to start up Alluxio (eg. alluxio)</td> </tr> </table> If your Grafana dashboard appears like the screenshot below, you have built your monitoring successfully. Of course, you can modify the JSON file or just operate on the dashboard to design your monitoring. Datadog is a metrics analytics and visualization software, much like Grafana above. It supports visualizing data from Prometheus. The following steps can help you to build your Alluxio monitoring based on Datadog and Prometheus easily. Install and run the Datadog agent using the instructions . Modify the `conf.d/openmetrics.d/conf.yaml` file (which you can locate using ). 
Here is a sample `conf.yaml` file: ```yaml init_config: instances: prometheusurl: 'http://<LEADINGMASTERHOSTNAME>:<MASTERWEB_PORT>/metrics/prometheus/' namespace: 'alluxioMaster' metrics: [ \"<Master metric 1>\", \"<Master metric 2>\" ] prometheusurl: 'http://<WORKERHOSTNAME>:<WORKERWEBPORT>/metrics/prometheus/' namespace: 'alluxioWorker' metrics: [ \"<Worker metric 1>\", \"<Worker metric 2>\" ] ``` Restart the Datadog agent (instructions ). The metrics emitted by Alluxio should now display on the Datadog web interface. You can get JVM related metrics via `jvm_exporter` as a Java agent. Download and run: ```shell $ java -javaagent:./jmxprometheusjavaagent-0.16.0.jar=8080:config.yaml -jar yourJar.jar ``` Metrics will now be accessible at http://localhost:8080/metrics. `config.yaml` file provides the configuration for jmx_exporter. Empty file can be used for a quick start. For more information, please refer to ."
}
] |
{
"category": "Runtime",
"file_name": "Metrics-System.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"ark restore get\" layout: docs Get restores Get restores ``` ark restore get [flags] ``` ``` -h, --help help for get --label-columns stringArray a comma-separated list of labels to be displayed as columns -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default \"table\") -l, --selector string only show items matching this label selector --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restores"
}
] |
{
"category": "Runtime",
"file_name": "ark_restore_get.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(devices-disk)= ```{note} The `disk` device type is supported for both containers and VMs. It supports hotplugging for both containers and VMs. ``` Disk devices supply additional storage to instances. For containers, they are essentially mount points inside the instance (either as a bind-mount of an existing file or directory on the host, or, if the source is a block device, a regular mount). Virtual machines share host-side mounts or directories through `9p` or `virtiofs` (if available), or as VirtIO disks for block-based disks. (devices-disk-types)= You can create disk devices from different sources. The value that you specify for the `source` option specifies the type of disk device that is added: Storage volume : The most common type of disk device is a storage volume. To add a storage volume, specify its name as the `source` of the device: incus config device add <instancename> <devicename> disk pool=<poolname> source=<volumename> [path=<pathininstance>] The path is required for file system volumes, but not for block volumes. Alternatively, you can use the command to {ref}`storage-attach-volume`. Both commands use the same mechanism to add a storage volume as a disk device. Path on the host : You can share a path on your host (either a file system or a block device) to your instance by adding it as a disk device with the host path as the `source`: incus config device add <instancename> <devicename> disk source=<pathonhost> [path=<pathininstance>] The path is required for file systems, but not for block devices. Ceph RBD : Incus can use Ceph to manage an internal file system for the instance, but if you have an existing, externally managed Ceph RBD that you would like to use for an instance, you can add it with the following command: incus config device add <instancename> <devicename> disk source=ceph:<poolname>/<volumename> ceph.username=<username> ceph.clustername=<clustername> [path=<pathininstance>] The path is required for file systems, but not for block"
},
{
"data": "CephFS : Incus can use Ceph to manage an internal file system for the instance, but if you have an existing, externally managed Ceph file system that you would like to use for an instance, you can add it with the following command: incus config device add <instancename> <devicename> disk source=cephfs:<fsname>/<path> ceph.username=<username> ceph.clustername=<clustername> path=<pathin_instance> ISO file : You can add an ISO file as a disk device for a virtual machine. It is added as a ROM device inside the VM. This source type is applicable only to VMs. To add an ISO file, specify its file path as the `source`: incus config device add <instancename> <devicename> disk source=<filepathon_host> VM `cloud-init` : You can generate a `cloud-init` configuration ISO from the {config:option}`instance-cloud-init:cloud-init.vendor-data` and {config:option}`instance-cloud-init:cloud-init.user-data` configuration keys and attach it to a virtual machine. The `cloud-init` that is running inside the VM then detects the drive on boot and applies the configuration. This source type is applicable only to VMs. To add such a device, use the following command: incus config device add <instancename> <devicename> disk source=cloud-init:config VM `agent` : You can generate an `agent` configuration ISO which will contain the agent binary, configuration files and installation scripts. This is required for environments where `9p` isn't supported and where an alternative way to load the agent is required. This source type is applicable only to VMs. To add such a device, use the following command: incus config device add <instancename> <devicename> disk source=agent:config (devices-disk-initial-config)= Initial volume configuration allows setting specific configurations for the root disk devices of new instances. These settings are prefixed with `initial.` and are only applied when the instance is created. This method allows creating instances that have unique configurations, independent of the default storage pool settings. For example, you can add an initial volume configuration for `zfs.block_mode` to an existing profile, and this will then take effect for each new instance you create using this profile: incus profile device set <profilename> <devicename> initial.zfs.block_mode=true You can also set an initial configuration directly when creating an instance. For example: incus init <image> <instancename> --device <devicename>,initial.zfs.block_mode=true Note that you cannot use initial volume configurations with custom volume options or to set the volume's size. `disk` devices have the following device options: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group devices-disk start --> :end-before: <!-- config group devices-disk end --> ```"
}
] |
{
"category": "Runtime",
"file_name": "devices_disk.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: OpenShift Common Issues OpenShift Console uses OpenShift Prometheus for monitoring and populating data in Storage Dashboard. Additional configuration is required to monitor the Ceph Cluster from the storage dashboard. Change the monitoring namespace to `openshift-monitoring` Change the namespace of the RoleBinding `rook-ceph-metrics` from `rook-ceph` to `openshift-monitoring` for the `prometheus-k8s` ServiceAccount in . ```yaml subjects: kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring ``` Enable Ceph Cluster monitoring Follow . Set the required label on the namespace ```console oc label namespace rook-ceph \"openshift.io/cluster-monitoring=true\" ``` !!! attention Switch to `rook-ceph` namespace using `oc project rook-ceph`. Ensure ceph-mgr pod is Running ```console $ oc get pods -l app=rook-ceph-mgr NAME READY STATUS RESTARTS AGE rook-ceph-mgr 1/1 Running 0 14h ``` Ensure service monitor is present ```console $ oc get servicemonitor rook-ceph-mgr NAME AGE rook-ceph-mgr 14h ``` Ensure the prometheus rules object has been created ```console $ oc get prometheusrules -l prometheus=rook-prometheus NAME AGE prometheus-ceph-rules 14h ```"
}
] |
{
"category": "Runtime",
"file_name": "openshift-common-issues.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: Alluxio Software Requirements Listed below are the generic requirements to run Alluxio locally or as a cluster. Cluster nodes should be running one of the following supported operating systems: MacOS 10.10 or later CentOS - 6.8 or 7 RHEL - 7.x Ubuntu - 16.04 Java JDK 11 (Oracle or OpenJDK distributions supported) Alluxio works on IPv4 networks only. Allow the following ports and protocols: Inbound TCP 22 - ssh as a user to install Alluxio components across specified nodes. It is recommended to use hardware based on the x86 architecture. It is verified that Alluxio can run on the ARM architecture, but it may be possible that certain features may not work. Alluxio's overall performance is similar between the two architectures, based on benchmark results when running on (ex. ) versus (ex. ). There are Alluxio-specific requirements for cluster nodes running the master process. Note that these are bare minimum requirements to run the software. Running Alluxio at scale and under high load will increase these requirements. Minimum 1 GB disk space Minimum 1 GB memory (6 GB if using embedded journal) Minimum 2 CPU cores Allow the following ports and protocols: Inbound TCP 19998 - The Alluxio master's default RPC port Inbound TCP 19999 - The Alluxio master's default web UI port: `http://<master-hostname>:19999` Inbound TCP 20001 - The Alluxio job master's default RPC port Inbound TCP 20002 - The Alluxio job master's default web UI port Embedded Journal Requirements Only Inbound TCP 19200 - The Alluxio master's default port for internal leader election Inbound TCP 20003 - The Alluxio job master's default port for internal leader election There are Alluxio-specific requirements for cluster nodes running the worker process: Minimum 1 GB disk space Minimum 1 GB memory Minimum 2 CPU cores Allow the following ports and protocols: Inbound TCP 29999 - The Alluxio worker's default RPC port Inbound TCP 30000 - The Alluxio worker's default web UI port: `http://<worker-hostname>:30000` Inbound TCP 30001 - The Alluxio job worker's default RPC port Inbound TCP 30002 - The Alluxio job worker's default data port Inbound TCP 30003 - The Alluxio job worker's default web UI port: `http://<worker-hostname>:30003` There are Alluxio-specific requirements for nodes running the fuse process. Note that these are bare minimum requirements to run the software. Running Alluxio Fuse under high load will increase these requirements. Minimum 1 CPU core Minimum 1 GB memory FUSE installed libfuse 2.9.3 or newer for Linux, recommend to use libfuse >= 3.0.0 osxfuse 3.7.1 or newer for MacOS"
}
] |
{
"category": "Runtime",
"file_name": "Software-Requirements.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Devmapper is a `containerd` snapshotter plugin that stores snapshots in filesystem images in a Device-mapper thin-pool. The devmapper plugin takes advantage of Device-mapper features like . To use the devmapper snapshotter plugin you need to prepare a Device-mapper `thin-pool` in advance and update containerd's configuration file. This file is typically located at `/etc/containerd/config.toml`. Here's a minimal sample entry that can be made in the configuration file: ```toml version = 2 [plugins] ... [plugins.\"io.containerd.snapshotter.v1.devmapper\"] root_path = \"/var/lib/containerd/devmapper\" pool_name = \"containerd-pool\" baseimagesize = \"8192MB\" ... ``` The following configuration flags are supported: `root_path` - a directory where the metadata will be available (if empty default location for `containerd` plugins will be used) `pool_name` - a name to use for the Device-mapper thin-pool. Pool name should be the same as in `/dev/mapper/` directory `baseimagesize` - defines how much space to allocate when creating thin device snapshots from the base (pool) device `async_remove` - flag to async remove device using snapshot GC's cleanup callback (default: `false`) `discard_blocks` - whether to discard blocks when removing a device. This is especially useful for returning disk space to the filesystem when using loopback devices. (default: `false`) `fs_type` - defines the file system to use for snapshot device mount. Valid values are `ext4` and `xfs` (default: `\"ext4\"`) `fs_options` - optionally defines the file system options. This is currently only applicable to `ext4` file system. (default: `\"\"`) `rootpath`, `poolname`, and `baseimagesize` are required snapshotter parameters. Give it a try with the following commands: ```bash ctr images pull --snapshotter devmapper docker.io/library/hello-world:latest ctr run --snapshotter devmapper docker.io/library/hello-world:latest test ``` The devicemapper snapshotter requires `dmsetup` (>= 1.02.110) command line tool to be installed and available on your computer. On Ubuntu, it can be installed with `apt-get install dmsetup` command. There are many ways how to configure a Device-mapper thin-pool depending on your requirements, disk configuration, and environment. Two common configurations are provided below, one for development environments and one for production environments. On local dev environment you can utilize loopback"
},
{
"data": "This type of configuration is simple and suits well for development and testing (please note that this configuration is slow and not recommended for production uses). Run the following script to create a thin-pool device with associated metadata and data device files: ```bash set -ex DATA_DIR=/var/lib/containerd/devmapper POOL_NAME=devpool sudo mkdir -p ${DATA_DIR} sudo touch \"${DATA_DIR}/data\" sudo truncate -s 100G \"${DATA_DIR}/data\" sudo touch \"${DATA_DIR}/meta\" sudo truncate -s 10G \"${DATA_DIR}/meta\" DATADEV=$(sudo losetup --find --show \"${DATADIR}/data\") METADEV=$(sudo losetup --find --show \"${DATADIR}/meta\") SECTOR_SIZE=512 DATASIZE=\"$(sudo blockdev --getsize64 -q ${DATADEV})\" LENGTHINSECTORS=$(bc <<< \"${DATASIZE}/${SECTORSIZE}\") DATABLOCKSIZE=128 LOWWATERMARK=32768 sudo dmsetup create \"${POOL_NAME}\" \\ --table \"0 ${LENGTHINSECTORS} thin-pool ${METADEV} ${DATADEV} ${DATABLOCKSIZE} ${LOWWATERMARK}\" cat << EOF [plugins] [plugins.\"io.containerd.snapshotter.v1.devmapper\"] poolname = \"${POOLNAME}\" rootpath = \"${DATADIR}\" baseimagesize = \"10GB\" discard_blocks = true EOF ``` Use `dmsetup` to verify that the thin-pool was created successfully: ```bash sudo dmsetup ls devpool (253:0) ``` Once `containerd` is configured and restarted, you'll see the following output: ``` INFO[2020-03-17T20:24:45.532604888Z] loading plugin \"io.containerd.snapshotter.v1.devmapper\"... type=io.containerd.snapshotter.v1 INFO[2020-03-17T20:24:45.532672738Z] initializing pool device \"devpool\" ``` Another way to setup a thin-pool is via the tool (formerly known as `docker-storage-setup`). It is a script to configure CoW file systems like devicemapper: ```bash set -ex BLOCK_DEV=/dev/sdf POOL_NAME=devpool VG_NAME=containerd git clone https://github.com/projectatomic/container-storage-setup.git cd container-storage-setup/ sudo make install-core echo \"Using version $(container-storage-setup -v)\" sudo tee /etc/sysconfig/docker-storage-setup <<EOF DEVS=${BLOCK_DEV} VG=${VG_NAME} CONTAINERTHINPOOL=${POOLNAME} EOF sudo container-storage-setup cat << EOF [plugins] [plugins.devmapper] root_path = \"/var/lib/containerd/devmapper\" poolname = \"${VGNAME}-${POOL_NAME}\" baseimagesize = \"10GB\" EOF ``` If successful `container-storage-setup` will output: ``` echo VG=containerd sudo container-storage-setup INFO: Volume group backing root filesystem could not be determined INFO: Writing zeros to first 4MB of device /dev/xvdf 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.0162906 s, 257 MB/s INFO: Device node /dev/xvdf1 exists. Physical volume \"/dev/xvdf1\" successfully created. Volume group \"containerd\" successfully created Rounding up size to full physical extent 12.00 MiB Thin pool volume with chunk size 512.00 KiB can address at most 126.50 TiB of data. Logical volume \"devpool\" created. Logical volume containerd/devpool changed. ... ``` And `dmsetup` will produce the following output: ```bash sudo dmsetup ls containerd-devpool (253:2) containerd-devpool_tdata (253:1) containerd-devpool_tmeta (253:0) ``` See also for additional information about production devmapper setups. 
For more information on Device-mapper, thin provisioning, etc., you can refer to the following resources: https://access.redhat.com/documentation/en-us/redhatenterpriselinux/6/html/logicalvolumemanageradministration/device_mapper https://en.wikipedia.org/wiki/Device_mapper https://docs.docker.com/storage/storagedriver/device-mapper-driver/ https://www.kernel.org/doc/Documentation/device-mapper/thin-provisioning.txt https://www.kernel.org/doc/Documentation/device-mapper/snapshot.txt"
}
] |
{
"category": "Runtime",
"file_name": "devmapper.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- toc --> - - - - <!-- /toc --> `TrafficControl` is a CRD API that manages and manipulates the transmission of Pod traffic. It allows users to mirror or redirect specific traffic originating from specific Pods or destined for specific Pods to a local network device or a remote destination via a tunnel of various types. It provides full visibility into network traffic, including both north-south and east-west traffic. You may be interested in using this capability if any of the following apply: You want to monitor network traffic passing in or out of a set of Pods for purposes such as troubleshooting, intrusion detection, and so on. You want to redirect network traffic passing in or out of a set of Pods to applications that enforce policies, and reject traffic to prevent intrusion. This guide demonstrates how to configure `TrafficControl` to achieve the above goals. TrafficControl was introduced in v1.7 as an alpha feature. A feature gate, `TrafficControl` must be enabled on the antrea-agent in the `antrea-config` ConfigMap for the feature to work, like the following: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: TrafficControl: true ``` A TrafficControl in Kubernetes is a REST object. Like all the REST objects, you can POST a TrafficControl definition to the API server to create a new instance. For example, supposing you have a set of Pods which contain a label `app=web`, the following specification creates a new TrafficControl object named \"mirror-web-app\", which mirrors all traffic from or to any Pod with the `app=web` label and send them to a receiver running on \"10.0.10.2\" encapsulated within a VXLAN tunnel: ```yaml apiVersion: crd.antrea.io/v1alpha2 kind: TrafficControl metadata: name: mirror-web-app spec: appliedTo: podSelector: matchLabels: app: web direction: Both action: Mirror targetPort: vxlan: remoteIP: 10.0.10.2 ``` The `appliedTo` field specifies the grouping criteria of Pods to which the TrafficControl applies to. Pods can be selected cluster-wide using `podSelector`. If set with a `namespaceSelector`, all Pods from Namespaces selected by the `namespaceSelector` will be selected. Specific Pods from specific Namespaces can be selected by providing both a `podSelector` and a `namespaceSelector`. Empty `appliedTo` selects nothing. The field is mandatory. The `direction` field specifies the direction of traffic that should be matched. It can be `Ingress`, `Egress`, or `Both`. The `action` field specifies which action should be taken for the traffic. It can be `Mirror` or `Redirect`. For the `Mirror` action, `targetPort` must be set to the port to which the traffic will be"
},
{
"data": "For the `Redirect` action, both `targetPort` and `returnPort` need to be specified, the latter of which represents the port from which the traffic could be sent back to OVS and be forwarded to its original destination. Once redirected, a packet should be either dropped or sent back to OVS without modification, otherwise it would lead to undefined behavior. The `targetPort` field specifies the port to which the traffic should be redirected or mirrored. There are five kinds of ports that can be used to receive mirrored traffic: ovsInternal: This specifies an OVS internal port on all Nodes. A Pod's traffic will be redirected or mirrored to the OVS internal port on the same Node that hosts the Pod. The port doesn't need to exist in advance, Antrea will create the port if it doesn't exist. To use an OVS internal port, the `name` of the port must be provided: ```yaml ovsInternal: name: tap0 ``` device: This specifies a network device on all Nodes. A Pod's traffic will be redirected or mirrored to the network device on the same Node that hosts the Pod. The network device must exist on all Nodes and Antrea will attach it to the OVS bridge if not already attached. To use a network device, the `name` of the device must be provided: ```yaml device: name: eno2 ``` geneve: This specifies a remote destination for a GENEVE tunnel. All selected Pods' traffic will be redirected or mirrored to the destination via a GENEVE tunnel. The `remoteIP` field must be provided to specify the IP address of the destination. Optionally, the `destinationPort` field could be used to specify the UDP destination port of the tunnel, or 6081 will be used by default. If Virtual Network Identifier (VNI) is desired, the `vni` field can be specified to an integer in the range 0-16,777,215: ```yaml geneve: remoteIP: 10.0.10.2 destinationPort: 6081 vni: 1 ``` vxlan: This specifies a remote destination for a VXLAN tunnel. All selected Pods' traffic will be redirected or mirrored to the destination via a VXLAN tunnel. The `remoteIP` field must be provided to specify the IP address of the destination. Optionally, the `destinationPort` field could be used to specify the UDP destination port of the tunnel, or 4789 will be used by default. If Virtual Network Identifier (VNI) is desired, the `vni` field can be specified to an integer in the range 0-16,777,215: ```yaml vxlan: remoteIP: 10.0.10.2 destinationPort: 4789 vni: 1 ``` gre: This specifies a remote destination for a GRE tunnel. All selected Pods' traffic will be redirected or mirrored to the destination via a GRE tunnel. The `remoteIP` field must be provided to specify the IP address of the"
},
{
"data": "If GRE key is desired, the `key` field can be specified to an integer in the range 0-4,294,967,295: ```yaml gre: remoteIP: 10.0.10.2 key: 1 ``` erspan: This specifies a remote destination for an ERSPAN tunnel. All selected Pods' traffic will be mirrored to the destination via an ERSPAN tunnel. The `remoteIP` field must be provided to specify the IP address of the destination. If ERSPAN session ID is desired, the `sessionID` field can be specified to an integer in the range 0-1,023. The `version` field must be provided to specify the ERSPAN version: 1 for version 1 (type II), or 2 for version 2 (type III). For version 1, the `index` field can be specified to associate with the ERSPAN traffic's source port and direction. An example of version 1 might look like this: ```yaml erspan: remoteIP: 10.0.10.2 sessionID: 1 version: 1 index: 1 ``` For version 2, the `dir` field can be specified to indicate the mirrored traffic's direction: 0 for ingress traffic, 1 for egress traffic. The `hardwareID` field can be specified as an unique identifier of an ERSPAN v2 engine. An example of version 2 might look like this: ```yaml erspan: remoteIP: 10.0.10.2 sessionID: 1 version: 2 dir: 0 hardwareID: 4 ``` The `returnPort` field should only be set when the `action` is `Redirect`. It is similar to the `targetPort` field, but meant for specifying the port from which the traffic will be sent back to OVS and be forwarded to its original destination. In this example, we will mirror all Pods' traffic and send them to a remote destination via a GENEVE tunnel: ```yaml apiVersion: crd.antrea.io/v1alpha2 kind: TrafficControl metadata: name: mirror-all-to-remote spec: appliedTo: podSelector: {} direction: Both action: Mirror targetPort: geneve: remoteIP: 10.0.10.2 ``` In this example, we will redirect traffic of all Pods in the Namespace `prod` to OVS internal ports named `tap0` configured on Nodes that these Pods run on. The `returnPort` configuration means, if the traffic is sent back to OVS from OVS internal ports named `tap1`, it will be forwarded to its original destination. Therefore, if an intrusion prevention system or a network firewall is configured to capture and forward traffic between `tap0` and `tap1`, it can actively scan forwarded network traffic for malicious activities and known attack patterns, and drop the traffic determined to be malicious. ```yaml apiVersion: crd.antrea.io/v1alpha2 kind: TrafficControl metadata: name: redirect-prod-to-local spec: appliedTo: namespaceSelector: matchLabels: kubernetes.io/metadata.name: prod direction: Both action: Redirect targetPort: ovsInternal: name: tap0 returnPort: ovsInternal: name: tap1 ``` With the `TrafficControl` capability, Antrea can be used with threat detection engines to provide network-based IDS/IPS to Pods. We provide a reference cookbook on how to implement IDS using Suricata. For more information, refer to the ."
}
] |
{
"category": "Runtime",
"file_name": "traffic-control.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This page shows you how to deploy a sample site using . Take the following steps to enable the Kubernetes Engine API: Visit the in the Google Cloud Platform Console. Create or select a project. Create a node pool inside your cluster with option `--sandbox type=gvisor` added to the command, like below: ```shell gcloud container node-pools create gvisor --cluster=${CLUSTER_NAME?} --sandbox type=gvisor --machine-type=e2-standard-2 ``` If you prefer to use the console, select your cluster and select the ADD NODE POOL button: Then click on the Security tab on the left and select Enable sandbox with gVisor option. Select other options as you like: The gvisor `RuntimeClass` is instantiated during node creation. You can check for the existence of the gvisor `RuntimeClass` using the following command: ```shell $ kubectl get runtimeclass/gvisor NAME HANDLER AGE gvisor gvisor 1h ``` Now, let's deploy a WordPress site using GKE Sandbox. WordPress site requires two pods: web server in the frontend, MySQL database in the backend. Both applications use `PersistentVolumes` to store the site data. In addition, they use secret store to share MySQL password between them. Note: This example uses gVisor to sandbox the frontend web server, but not the MySQL database backend. In a production setup, due to imposed by gVisor, it is not recommended to run your database in a sandbox. The frontend is the critical component with the largest outside attack surface, where gVisor's security/performance trade-off makes the most sense. See the [Production guide] for more details. First, let's download the deployment configuration files to add the runtime class annotation to them: ```shell curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml ``` Add a spec.template.spec.runtimeClassName set to gvisor to both files, as shown below:"
},
{
"data": "```yaml apiVersion: v1 kind: Service metadata: name: wordpress labels: app: wordpress spec: ports: port: 80 selector: app: wordpress tier: frontend type: LoadBalancer apiVersion: v1 kind: PersistentVolumeClaim metadata: name: wp-pv-claim labels: app: wordpress spec: accessModes: ReadWriteOnce resources: requests: storage: 20Gi apiVersion: apps/v1 kind: Deployment metadata: name: wordpress labels: app: wordpress spec: selector: matchLabels: app: wordpress tier: frontend strategy: type: Recreate template: metadata: labels: app: wordpress tier: frontend spec: runtimeClassName: gvisor # ADD THIS LINE containers: image: wordpress:4.8-apache name: wordpress env: name: WORDPRESSDBHOST value: wordpress-mysql name: WORDPRESSDBPASSWORD valueFrom: secretKeyRef: name: mysql-pass key: password ports: containerPort: 80 name: wordpress volumeMounts: name: wordpress-persistent-storage mountPath: /var/www/html volumes: name: wordpress-persistent-storage persistentVolumeClaim: claimName: wp-pv-claim ``` mysql-deployment.yaml: ```yaml apiVersion: v1 kind: Service metadata: name: wordpress-mysql labels: app: wordpress spec: ports: port: 3306 selector: app: wordpress tier: mysql clusterIP: None apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim labels: app: wordpress spec: accessModes: ReadWriteOnce resources: requests: storage: 20Gi apiVersion: apps/v1 kind: Deployment metadata: name: wordpress-mysql labels: app: wordpress spec: selector: matchLabels: app: wordpress tier: mysql strategy: type: Recreate template: metadata: labels: app: wordpress tier: mysql spec: containers: image: mysql:5.6 name: mysql env: name: MYSQLROOTPASSWORD valueFrom: secretKeyRef: name: mysql-pass key: password ports: containerPort: 3306 name: mysql volumeMounts: name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim ``` Note that apart from `runtimeClassName: gvisor`, nothing else about the Deployment has is changed. You are now ready to deploy the entire application. Just create a secret to store MySQL's password and apply both deployments: ```shell $ kubectl create secret generic mysql-pass --from-literal=password=${YOURSECRETPASSWORD_HERE?} $ kubectl apply -f mysql-deployment.yaml $ kubectl apply -f wordpress-deployment.yaml ``` Wait for the deployments to be ready and an external IP to be assigned to the Wordpress service: ```shell $ watch kubectl get service wordpress NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE wordpress LoadBalancer 10.120.16.63 35.203.179.216 80:31025/TCP 1m ``` Now, copy the service's `EXTERNAL-IP` from above to your favorite browser to view and configure your new WordPress site. Congratulations! You have just deployed a WordPress site using GKE Sandbox. To learn more about GKE Sandbox and how to run your deployment securely, take a look at the . Before taking this deployment to production, review the [Production guide]."
}
] |
{
"category": "Runtime",
"file_name": "kubernetes.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: JuiceFS Article Collection sidebar_position: 2 slug: /articles description: Explore JuiceFS' collection of technical articles and real-world case studies in AI, machine learning, deep learning, big data, data sharing, backup, and recovery scenarios. JuiceFS is widely applicable to various data storage and sharing scenarios. This page compiles its technical articles and real-world case studies. Explore valuable insights and practical examples to deepen your understanding of JuiceFS and related applications. We encourage all community users to contribute and maintain this list. , 2024-04-09, Sui Su , 2024-04-03, Xin Wang @ Zhihu , 2024-02-29, Xipeng Guan @ BentoML , 2024-01-24, Juchao Song @ coScene , 2024-01-17, Nam Kyung-wan @ NAVER , 2023-12-14, Jichuan Sun @ SmartMore , 2023-11-09 , 2023-10-12, Jiapeng Sun @ Xiaomi , 2023-06-09, Chen Hong @ Zhejiang Lab , 2023-05-06, Sui Su , 2022-09-06, Dongdong Lv @ Unisound , 2024-02-07, Experienced JuiceFS user , 2023-11-01 , 2023-09-28, Ming Li @ DMALL , 2023-09-14, Weihong Ke @ NetEase Games , 2023-08-09, Chang Liu & Yangliang Li @ Yimian , 2023-05-10, Fengyu Cao @ Douban , 2021-10-09, Gaoding SRE Team , 2021-10-09, Teng @ Shopee , 2022-09-19, Miaocheng & Xiaofeng @ Trip.com , 2024-05-16, Jian Zhi , 2024-05-08, Jian Zhi , 2024-04-30, Jiefeng Huang , 2024-04-22, Jian Zhi , 2024-04-18, Herald Yu , 2024-03-27, Jet , 2024-03-07, Feihu Mo , 2024-02-22, Sandy , 2023-12-28, Herald Yu , 2023-12-20, Jian Zhi , 2023-12-07, Yifu Liu , 2023-11-23, Herald Yu , 2023-11-20 , 2023-10-26, Sandy , 2023-08-29, Herald Yu , 2023-07-19, Herald Yu , 2023-06-06, Changjian Gao , 2023-04-25, Changjian Gao , Youtube video, by Education Ecosystem , 2022-10-14, Sandy , 2022-07-22, Changjian Gao on Redis Monthly Live with Davies Liu and Mikhail Volkov, YouTube video , George Liu (eva2000) , Dollarkillerx , 2023-11-16, Herald Yu , 2023-10-20, Changjian Gao , 2023-09-31, Yifu Liu , 2023-09-21, Sandy If you want to add JuiceFS application cases to this list, you can do so through the following methods: Feel free to contribute by creating a branch in this repository on GitHub. Add the title and URL of your case page to the appropriate category, and then submit a pull request for review. Our team will review the submission and merge the branch if approved. You can join the official JuiceFS . There, you can get in touch with any staff member to discuss your contribution."
}
] |
{
"category": "Runtime",
"file_name": "articles.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(TBD) Adding a new filter entails the following: Writing a module in oio.event.filters, refer to the already defined filters for a template. Adding the filter to the file. Modify the event agent configuration template in to include an entry for the filter at the bottom (a `[filter:mynewfilter]` entry)and to place the filter in the pipelines of needed handlers. _(cascading filters is possible and working)_"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "OpenIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Intel QuickAssist Technology (QAT) provides hardware acceleration for security (cryptography) and compression. These instructions cover the steps for the latest which already include the QAT host driver. These instructions can be adapted to any Linux distribution. These instructions guide the user on how to download the kernel sources, compile kernel driver modules against those sources, and load them onto the host as well as preparing a specially built Kata Containers kernel and custom Kata Containers rootfs. Download kernel sources Compile Kata kernel Compile kernel driver modules against those sources Download rootfs Add driver modules to rootfs Build rootfs image There are some steps to complete only once, some steps to complete with every reboot, and some steps to complete when the host kernel changes. The following list of variables must be set before running through the scripts. These variables refer to locations to store modules and configuration files on the host and links to the drivers to use. Modify these as needed to point to updated drivers or different install locations. Make sure to check for the latest driver. ```bash $ export QATDRIVERVER=qat1.7.l.4.14.0-00031.tar.gz $ export QATDRIVERURL=https://downloadmirror.intel.com/30178/eng/${QATDRIVERVER} $ export QATCONFLOCATION=~/QAT_conf $ export QAT_DOCKERFILE=https://raw.githubusercontent.com/intel/intel-device-plugins-for-kubernetes/main/demo/openssl-qat-engine/Dockerfile $ export QAT_SRC=~/src/QAT $ export GOPATH=~/src/go $ export KATAKERNELLOCATION=~/kata $ export KATAROOTFSLOCATION=~/kata ``` The host could be a bare metal instance or a virtual machine. If using a virtual machine, make sure that KVM nesting is enabled. The following instructions reference an Intel C62X chipset. Some of the instructions must be modified if using a different Intel QAT device. The Intel QAT chipset can be identified by executing the following. ```bash $ for i in 0434 0435 37c8 1f18 1f19; do lspci -d 8086:$i; done ``` These packages are necessary to compile the Kata kernel, Intel QAT driver, and to prepare the rootfs for Kata. also needs to be installed to be able to build the rootfs. To test that everything works a Kubernetes pod is started requesting Intel QAT resources. For the pass through of the virtual functions the kernel boot parameter needs to have `INTEL_IOMMU=on`. ```bash $ sudo apt update $ sudo apt install -y golang-go build-essential python pkg-config zlib1g-dev libudev-dev bison libelf-dev flex libtool automake autotools-dev autoconf bc libpixman-1-dev coreutils libssl-dev $ sudo sed -i 's/GRUBCMDLINELINUXDEFAULT=\"\"/GRUBCMDLINELINUXDEFAULT=\"intel_iommu=on\"/' /etc/default/grub $ sudo update-grub $ sudo reboot ``` This will download the . Make sure to check the website for the latest version. ```bash $ mkdir -p $QAT_SRC $ cd $QAT_SRC $ curl -L $QATDRIVERURL | tar zx ``` Modify the instructions below as necessary if using a different Intel QAT hardware platform. You can learn more about customizing configuration files at the This section starts from a base config file and changes the `SSL` section to `SHIM` to support the OpenSSL engine. There are more tweaks that you can make depending on the use case and how many Intel QAT engines should be run. You can find more information about how to customize in the Note: This section assumes that a Intel QAT `c6xx` platform is used. 
```bash $ mkdir -p $QATCONFLOCATION $ cp $QATSRC/quickassist/utilities/adfctl/conffiles/c6xxvfdev0.conf.vm $QATCONFLOCATION/c6xxvf_dev0.conf $ sed -i 's/\\[SSL\\]/\\[SHIM\\]/g' $QATCONFLOCATION/c6xxvf_dev0.conf ``` To enable virtual functions, the host OS should have IOMMU groups enabled. In the UEFI Firmware Intel Virtualization Technology for Directed I/O (Intel VT-d) must be enabled. Also, the kernel boot parameter should be `inteliommu=on` or `inteliommu=ifgx_off`. This should have been set from the instructions"
},
{
"data": "Check the output of `/proc/cmdline` to confirm. The following commands assume you installed an Intel QAT card, IOMMU is on, and VT-d is enabled. The vendor and device ID add to the `VFIO-PCI` driver so that each exposed virtual function can be bound to the `VFIO-PCI` driver. Once complete, each virtual function passes into a Kata Containers container using the PCIe device passthrough feature. For Kubernetes, the for Kubernetes handles the binding of the driver, but the VFs still must be enabled. ```bash $ sudo modprobe vfio-pci $ QATPCIBUSPFNUMBERS=$((lspci -d :435 && lspci -d :37c8 && lspci -d :19e2 && lspci -d :6f54) | cut -d ' ' -f 1) $ QATPCIBUSPF1=$(echo $QATPCIBUSPFNUMBERS | cut -d ' ' -f 1) $ echo 16 | sudo tee /sys/bus/pci/devices/0000:$QATPCIBUSPF1/sriov_numvfs $ QATPCIIDVF=$(cat /sys/bus/pci/devices/0000:${QATPCIBUSPF1}/virtfn0/uevent | grep PCIID) $ QATVENDORANDIDVF=$(echo ${QATPCIIDVF/PCIID=} | sed 's/:/ /') $ echo $QATVENDORANDIDVF | sudo tee --append /sys/bus/pci/drivers/vfio-pci/new_id ``` Loop through all the virtual functions and bind to the VFIO driver ```bash $ for f in /sys/bus/pci/devices/0000:$QATPCIBUSPF1/virtfn* do QATPCIBUS_VF=$(basename $(readlink $f)) echo $QATPCIBUS_VF | sudo tee --append /sys/bus/pci/drivers/c6xxvf/unbind echo $QATPCIBUS_VF | sudo tee --append /sys/bus/pci/drivers/vfio-pci/bind done ``` If the following command returns empty, then the virtual functions are not properly enabled. This command checks the enumerated device IDs for just the virtual functions. Using the Intel QAT as an example, the physical device ID is `37c8` and virtual function device ID is `37c9`. The following command checks if VF's are enabled for any of the currently known Intel QAT device ID's. The following `ls` command should show the 16 VF's bound to `VFIO-PCI`. ```bash $ for i in 0442 0443 37c9 19e3; do lspci -d 8086:$i; done ``` Another way to check is to see what PCI devices that `VFIO-PCI` is mapped to. It should match the device ID's of the VF's. ```bash $ ls -la /sys/bus/pci/drivers/vfio-pci ``` This example automatically uses the latest Kata kernel supported by Kata. It follows the instructions from the and uses the latest Kata kernel . There are some patches that must be installed as well, which the `build-kernel.sh` script should automatically apply. If you are using a different kernel version, then you might need to manually apply them. Since the Kata Containers kernel has a minimal set of kernel flags set, you must create a Intel QAT kernel fragment with the necessary `CONFIGCRYPTO*` options set. Update the config to set some of the `CRYPTO` flags to enabled. This might change with different kernel versions. The following instructions were tested with kernel `v5.4.0-64-generic`. 
```bash $ mkdir -p $GOPATH $ cd $GOPATH $ go get -v github.com/kata-containers/kata-containers $ cat << EOF > $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/configs/fragments/common/qat.conf CONFIG_PCIEAER=y CONFIG_UIO=y CONFIGCRYPTOHW=y CONFIGCRYPTODEVQATC62XVF=m CONFIGCRYPTOCBC=y CONFIG_MODULES=y CONFIGMODULESIG=y CONFIGCRYPTOAUTHENC=y CONFIGCRYPTODH=y EOF $ $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/build-kernel.sh setup ``` ```bash $ cd $GOPATH $ export LINUX_VER=$(ls -d kata-linux-*) $ sed -i 's/EXTRAVERSION =/EXTRAVERSION = .qat.container/' $LINUX_VER/Makefile $ $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kernel/build-kernel.sh build ``` ```bash $ export KATAKERNELNAME=vmlinux-${LINUXVER}qat $ mkdir -p $KATAKERNELLOCATION $ cp ${GOPATH}/${LINUXVER}/vmlinux ${KATAKERNELLOCATION}/${KATAKERNEL_NAME} ``` These instructions build upon the OS builder instructions located in the . At this point it is recommended that is installed first, and then is use to install Kata. This will make sure that the correct `agent` version is installed into the rootfs in the steps"
},
{
"data": "The following instructions use Ubuntu as the root filesystem with systemd as the init and will add in the `kmod` binary, which is not a standard binary in a Kata rootfs image. The `kmod` binary is necessary to load the Intel QAT kernel modules when the virtual machine rootfs boots. ```bash $ export OSBUILDER=$GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder $ export ROOTFS_DIR=${OSBUILDER}/rootfs-builder/rootfs $ export EXTRA_PKGS='kmod' ``` Make sure that the `kata-agent` version matches the installed `kata-runtime` version. Also make sure the `kata-runtime` install location is in your `PATH` variable. The following `AGENT_VERSION` can be set manually to match the `kata-runtime` version if the following commands don't work. ```bash $ export PATH=$PATH:/opt/kata/bin $ cd $GOPATH $ export AGENT_VERSION=$(kata-runtime version | head -n 1 | grep -o \"[0-9.]\\+\") $ cd ${OSBUILDER}/rootfs-builder $ sudo rm -rf ${ROOTFS_DIR} $ script -fec 'sudo -E GOPATH=$GOPATH USE_DOCKER=true SECCOMP=no ./rootfs.sh ubuntu' ``` After the Kata Containers kernel builds with the proper configuration flags, you must build the Intel QAT drivers against that Kata Containers kernel version in a similar way they were previously built for the host OS. You must set the `KERNELSOURCEROOT` variable to the Kata Containers kernel source directory and build the Intel QAT drivers again. The `make` command will install the Intel QAT modules into the Kata rootfs. ```bash $ cd $GOPATH $ export LINUX_VER=$(ls -d kata*) $ export KERNELMAJORVERSION=$(awk '/^VERSION =/{print $NF}' $GOPATH/$LINUX_VER/Makefile) $ export KERNELPATHLEVEL=$(awk '/^PATCHLEVEL =/{print $NF}' $GOPATH/$LINUXVER/Makefile) $ export KERNELSUBLEVEL=$(awk '/^SUBLEVEL =/{print $NF}' $GOPATH/$LINUXVER/Makefile) $ export KERNELEXTRAVERSION=$(awk '/^EXTRAVERSION =/{print $NF}' $GOPATH/$LINUXVER/Makefile) $ export KERNELROOTFSDIR=${KERNELMAJORVERSION}.${KERNELPATHLEVEL}.${KERNELSUBLEVEL}${KERNEL_EXTRAVERSION} $ cd $QAT_SRC $ KERNELSOURCEROOT=$GOPATH/$LINUX_VER ./configure --enable-icp-sriov=guest $ sudo -E make all -j $($(nproc ${CI:+--ignore 1})) $ sudo -E make INSTALLMODPATH=$ROOTFS_DIR qat-driver-install -j $($(nproc ${CI:+--ignore 1})) ``` The `usdm_drv` module also needs to be copied into the rootfs modules path and `depmod` should be run. ```bash $ sudo cp $QATSRC/build/usdmdrv.ko $ROOTFSDIR/lib/modules/${KERNELROOTFS_DIR}/updates/drivers $ sudo depmod -a -b ${ROOTFSDIR} ${KERNELROOTFS_DIR} $ cd ${OSBUILDER}/image-builder $ script -fec 'sudo -E USEDOCKER=true ./imagebuilder.sh ${ROOTFS_DIR}' ``` Note: Ignore any errors on modules.builtin and modules.order when running `depmod`. ```bash $ mkdir -p $KATAROOTFSLOCATION $ cp ${OSBUILDER}/image-builder/kata-containers.img $KATAROOTFSLOCATION ``` The following instructions uses a OpenSSL Dockerfile that builds the Intel QAT engine to allow OpenSSL to offload crypto functions. It is a convenient way to test that VFIO device passthrough for the Intel QAT VFs are working properly with the Kata Containers VM. Use the OpenSSL Intel QAT to build a container image with an optimized OpenSSL engine for Intel QAT. Using `docker build` with the Kata Containers runtime can sometimes have issues. Therefore, make sure that `runc` is the default Docker container runtime. ```bash $ cd $QAT_SRC $ curl -O $QAT_DOCKERFILE $ sudo docker build -t openssl-qat-engine . ``` Note: The Intel QAT driver version in this container might not match the Intel QAT driver compiled and loaded on the host when compiling. 
The `ctr` tool can be used to interact with the containerd daemon. It may be more convenient to use this tool to verify the kernel and image instead of setting up a Kubernetes cluster. The correct Kata runtimes need to be added to the containerd `config.toml`. Below is a sample snippet that can be added to allow QEMU and Cloud Hypervisor (CLH) to work with `ctr`. ``` [plugins.cri.containerd.runtimes.kata-qemu] runtime_type = \"io.containerd.kata-qemu.v2\" privilegedwithouthost_devices = true pod_annotations = [\"io.katacontainers.*\"] [plugins.cri.containerd.runtimes.kata-qemu.options] ConfigPath = \"/opt/kata/share/defaults/kata-containers/configuration-qemu.toml\" [plugins.cri.containerd.runtimes.kata-clh] runtime_type = \"io.containerd.kata-clh.v2\" privilegedwithouthost_devices = true pod_annotations = [\"io.katacontainers.*\"] [plugins.cri.containerd.runtimes.kata-clh.options] ConfigPath ="
},
{
"data": "``` In addition, containerd expects the binary to be in `/usr/local/bin` so add this small script so that it redirects to be able to use either QEMU or Cloud Hypervisor with Kata. ```bash $ echo '#!/usr/bin/env bash' | sudo tee /usr/local/bin/containerd-shim-kata-qemu-v2 $ echo 'KATACONFFILE=/opt/kata/share/defaults/kata-containers/configuration-qemu.toml /opt/kata/bin/containerd-shim-kata-v2 $@' | sudo tee -a /usr/local/bin/containerd-shim-kata-qemu-v2 $ sudo chmod +x /usr/local/bin/containerd-shim-kata-qemu-v2 $ echo '#!/usr/bin/env bash' | sudo tee /usr/local/bin/containerd-shim-kata-clh-v2 $ echo 'KATACONFFILE=/opt/kata/share/defaults/kata-containers/configuration-clh.toml /opt/kata/bin/containerd-shim-kata-v2 $@' | sudo tee -a /usr/local/bin/containerd-shim-kata-clh-v2 $ sudo chmod +x /usr/local/bin/containerd-shim-kata-clh-v2 ``` After the OpenSSL image is built and imported into containerd, a Intel QAT virtual function exposed in the step above can be added to the `ctr` command. Make sure to change the `/dev/vfio` number to one that actually exists on the host system. When using the `ctr` tool, the`configuration.toml` for Kata needs to point to the custom Kata kernel and rootfs built above and the Intel QAT modules in the Kata rootfs need to load at boot. The following steps assume that `kata-deploy` was used to install Kata and QEMU is being tested. If using a different hypervisor, different install method for Kata, or a different Intel QAT chipset then the command will need to be modified. Note: The following was tested with . ```bash $ config_file=\"/opt/kata/share/defaults/kata-containers/configuration-qemu.toml\" $ sudo sed -i \"/kernel =/c kernel = \"\\\"${KATAROOTFSLOCATION}/${KATAKERNELNAME}\\\"\"\" $config_file $ sudo sed -i \"/image =/c image = \"\\\"${KATAKERNELLOCATION}/kata-containers.img\\\"\"\" $config_file $ sudo sed -i -e 's/^kernelparams = \"\\(.*\\)\"/kernelparams = \"\\1 modules-load=usdmdrv,qatc62xvf\"/g' $config_file $ sudo docker save -o openssl-qat-engine.tar openssl-qat-engine:latest $ sudo ctr images import openssl-qat-engine.tar $ sudo ctr run --runtime io.containerd.run.kata-qemu.v2 --privileged -t --rm --device=/dev/vfio/180 --mount type=bind,src=/dev,dst=/dev,options=rbind:rw --mount type=bind,src=${QATCONFLOCATION}/c6xxvfdev0.conf,dst=/etc/c6xxvfdev0.conf,options=rbind:rw docker.io/library/openssl-qat-engine:latest bash ``` Below are some commands to run in the container image to verify Intel QAT is working ```sh root@67561dc2757a/ # cat /proc/modules qat_c62xvf 16384 - - Live 0xffffffffc00d9000 (OE) usdm_drv 86016 - - Live 0xffffffffc00e8000 (OE) intel_qat 249856 - - Live 0xffffffffc009b000 (OE) root@67561dc2757a/ # adf_ctl restart Restarting all devices. Processing /etc/c6xxvf_dev0.conf root@67561dc2757a/ # adf_ctl status Checking status of all devices. There is 1 QAT acceleration device(s) in the system: qatdev0 - type: c6xxvf, instid: 0, node_id: 0, bsf: 0000:01:01.0, #accel: 1 #engines: 1 state: up root@67561dc2757a/ # openssl engine -c -t qat-hw (qat-hw) Reference implementation of QAT crypto engine v0.6.1 [RSA, DSA, DH, AES-128-CBC-HMAC-SHA1, AES-128-CBC-HMAC-SHA256, AES-256-CBC-HMAC-SHA1, AES-256-CBC-HMAC-SHA256, TLS1-PRF, HKDF, X25519, X448] [ available ] ``` Start a Kubernetes cluster with containerd as the CRI. The host should already be setup with 16 virtual functions of the Intel QAT card bound to `VFIO-PCI`. Verify this by looking in `/dev/vfio` for a listing of devices. 
You might need to disable Docker before initializing Kubernetes. Be aware that the OpenSSL container image built above will need to be exported from Docker and imported into containerd. If Kata is installed through there will be multiple `configuration.toml` files associated with different hypervisors. Rather than add in the custom Kata kernel, Kata rootfs, and kernel modules to each `configuration.toml` as the default, instead use in the Kubernetes YAML file to tell Kata which kernel and rootfs to use. The easy way to do this is to use `kata-deploy` which will install the Kata binaries to `/opt` and properly configure the `/etc/containerd/config.toml` with annotation support. However, the `configuration.toml` needs to enable support for annotations as well. The following configures both QEMU and Cloud Hypervisor `configuration.toml` files that are currently available with Kata Container versions 2.0 and higher. ```bash $ sudo sed -i 's/enableannotations\\s=\\s\\[\\]/enableannotations = [\".*\"]/' /opt/kata/share/defaults/kata-containers/configuration-qemu.toml $ sudo sed -i 's/enableannotations\\s=\\s\\[\\]/enableannotations = [\".*\"]/' /opt/kata/share/defaults/kata-containers/configuration-clh.toml ``` Export the OpenSSL image from Docker and import into"
},
{
"data": "```bash $ sudo docker save -o openssl-qat-engine.tar openssl-qat-engine:latest $ sudo ctr -n=k8s.io images import openssl-qat-engine.tar ``` The needs to be started so that the virtual functions can be discovered and used by Kubernetes. The following YAML file can be used to start a Kata container with Intel QAT support. If Kata is installed with `kata-deploy`, then the containerd `configuration.toml` should have all of the Kata runtime classes already populated and annotations supported. To use a Intel QAT virtual function, the Intel QAT plugin needs to be started after the VF's are bound to `VFIO-PCI` as described . Edit the following to point to the correct Kata kernel and rootfs location built with Intel QAT support. ```bash $ cat << EOF > kata-openssl-qat.yaml apiVersion: v1 kind: Pod metadata: name: kata-openssl-qat labels: app: kata-openssl-qat annotations: io.katacontainers.config.hypervisor.kernel: \"$KATAKERNELLOCATION/$KATAKERNELNAME\" io.katacontainers.config.hypervisor.image: \"$KATAROOTFSLOCATION/kata-containers.img\" io.katacontainers.config.hypervisor.kernelparams: \"modules-load=usdmdrv,qat_c62xvf\" spec: runtimeClassName: kata-qemu containers: name: kata-openssl-qat image: docker.io/library/openssl-qat-engine:latest imagePullPolicy: IfNotPresent resources: limits: qat.intel.com/generic: 1 cpu: 1 securityContext: capabilities: add: [\"IPCLOCK\", \"SYSADMIN\"] volumeMounts: mountPath: /etc/c6xxvf_dev0.conf name: etc-mount mountPath: /dev name: dev-mount volumes: name: dev-mount hostPath: path: /dev name: etc-mount hostPath: path: $QATCONFLOCATION/c6xxvf_dev0.conf EOF ``` Use `kubectl` to start the pod. Verify that Intel QAT card acceleration is working with the Intel QAT engine. ```bash $ kubectl apply -f kata-openssl-qat.yaml ``` ```sh $ kubectl exec -it kata-openssl-qat -- adf_ctl restart Restarting all devices. Processing /etc/c6xxvf_dev0.conf $ kubectl exec -it kata-openssl-qat -- adf_ctl status Checking status of all devices. There is 1 QAT acceleration device(s) in the system: qatdev0 - type: c6xxvf, instid: 0, node_id: 0, bsf: 0000:01:01.0, #accel: 1 #engines: 1 state: up $ kubectl exec -it kata-openssl-qat -- openssl engine -c -t qat-hw (qat-hw) Reference implementation of QAT crypto engine v0.6.1 [RSA, DSA, DH, AES-128-CBC-HMAC-SHA1, AES-128-CBC-HMAC-SHA256, AES-256-CBC-HMAC-SHA1, AES-256-CBC-HMAC-SHA256, TLS1-PRF, HKDF, X25519, X448] [ available ] ``` Check that `/dev/vfio` has VFs enabled. ```sh $ ls /dev/vfio 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 vfio ``` Check that the modules load when inside the Kata Container. ```sh bash-5.0# egrep \"qat|usdm_drv\" /proc/modules qat_c62xvf 16384 - - Live 0x0000000000000000 (O) usdm_drv 86016 - - Live 0x0000000000000000 (O) intel_qat 184320 - - Live 0x0000000000000000 (O) ``` Verify that at least the first `c6xxvf_dev0.conf` file mounts inside the container image in `/etc`. You will need one configuration file for each VF passed into the container. ```sh bash-5.0# ls /etc c6xxvfdev0.conf c6xxvfdev11.conf c6xxvfdev14.conf c6xxvfdev3.conf c6xxvfdev6.conf c6xxvfdev9.conf resolv.conf c6xxvfdev1.conf c6xxvfdev12.conf c6xxvfdev15.conf c6xxvfdev4.conf c6xxvf_dev7.conf hostname c6xxvfdev10.conf c6xxvfdev13.conf c6xxvfdev2.conf c6xxvfdev5.conf c6xxvf_dev8.conf hosts ``` Check `dmesg` inside the container to see if there are any issues with the Intel QAT driver. If there are issues building the OpenSSL Intel QAT container image, then check to make sure that runc is the default runtime for building container. 
```sh $ cat /etc/systemd/system/docker.service.d/50-runtime.conf [Service] Environment=\"DOCKERDEFAULTRUNTIME=--default-runtime runc\" ``` To check the built-in firmware counters, the Intel QAT driver has to be compiled and installed on the host; it can't rely on the built-in host driver. The counters increase when the accelerator is actively being used. To verify that Intel QAT is actively accelerating the containerized application, use the following instructions to check whether any of the counters increment. Make sure to change the PCI device ID to match what's in the system. ```bash $ for i in 0434 0435 37c8 1f18 1f19; do lspci -d 8086:$i; done $ sudo watch cat /sys/kernel/debug/qatc6xx0000\\:b1\\:00.0/fw_counters $ sudo watch cat /sys/kernel/debug/qatc6xx0000\\:b3\\:00.0/fw_counters $ sudo watch cat /sys/kernel/debug/qatc6xx0000\\:b5\\:00.0/fw_counters ```"
}
] |
{
"category": "Runtime",
"file_name": "using-Intel-QAT-and-kata.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers!"
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "oep-number: draft Auto Snapshot deletion 20190715 title: Jiva auto snapshot deletion authors: \"@utkarshmani1997\" owners: \"@vishnuitta\" \"@payes\" \"@utkarshmani1997\" editor: \"@utkarshmani1997\" creation-date: 2019-07-15 last-updated: 2019-09-03 status: provisional see-also: NA replaces: NA superseded-by: NA - - - - This proposal is aimed for the users who experience very frequent network glitch at there cluster, due to which replica get restarted and autogenerated snapshots files are created. A max of 512 snapshots can be created. Once 512 snapshots are created, the replica won't be able to connect again automatically and requires manual cleanup of snapshot. The motivation behind this approach is to remove/reduce the efforts of the user for managing the storage layer Ability to remove autogenerated snapshots automatically in background without any manual intervention from the user. Though there is a working flow of deletion of snapshot via REST Api/ jivactl cli but this requires manual intervention from user i.e, user needs to list the snap shots and delete it one by one via given approaches but if number of snapshots have been grown significantly it will take too much time to cleanup those snapshots. The proposed design is to have a background job in jiva controller that keeps autogenerated snapshots in check. Jiva controller will start a goroutine for the cleanup of snapshots. Cleanup goroutine will start the cleanup process as follows. Cleanup goroutine will verify the rebuild, check if the replication factor is met and other prerequisites, and then snapshot deletion will be triggered in the background. Verify that snapshot count is not reached at threshold, if the snapshot count is below the threshold, exit the cleanup job. Deletion of snapshot is time taking process since data needs to be merged to its siblings, so larger the data in snapshot, time taken for this will grow significantly and may impact the current IO's. There are some cases where we may have stale snapshot entries if replica is restarted while snapshot deletion was in progress. There are following approaches which can be used to implement this feature: a) Manual deletion: Delete snapshots via cli tool jivactl `jivactl snapshot ls` to list snapshots `jivactl snapshot rm <snap_name>` from above output cons: User needs to delete the snapshots one by one by specifying snapshot name Requires manual cleanup of snapshots if replicas are restarted while snapshot deletion is in progress by validating the chain and restart is required to rebuild the replica to be on the safer side. b) Cleanup in background by picking the snapshots with smallest size: Run a goroutine which will pick up snapshots based on its size and start cleaning up the snapshots which has size less then any given size (for exp < 2-5G). NOTE: This approach will help in reducing the performance imapact and can be implemented in future."
}
] |
{
"category": "Runtime",
"file_name": "2019152019-jiva-autosnap-deletion.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"ark describe\" layout: docs Describe ark resources Describe ark resources ``` -h, --help help for describe ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Describe backups - Describe restores - Describe schedules"
}
] |
{
"category": "Runtime",
"file_name": "ark_describe.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "slug: test title: HwameiStor Capability, Security, Operation, and Maintenance Test authors: [Simon, Michael] tags: [Test] This test environment is Kubernetes 1.22. We deployed TiDB 1.3 after the HwameiStor local storage is attached. Then, we performed the basic SQL capability test, system security test, and operation and maintenance management test. All the tests passed successfully, it is acknowledged that HwameiStor can support distributed database application scenarios such as TiDB with high availability, strong consistency requirements, and large data scale. TiDB is a distributed database product that supports OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP (Hybrid Transactional and Analytical Processing) services, compatible with key features such as MySQL 5.7 protocol and MySQL ecosystem. The goal of TiDB is to provide users with one-stop OLTP, OLAP, and HTAP solutions, which are suitable for various application scenarios such as high availability, strict requirements for strong consistency, and large data scale. The TiDB distributed database splits the overall architecture into multiple modules that can communicate with each other. The architecture diagram is as follows: TiDB Server The SQL layer exposes the connection endpoints of the MySQL protocol to the outside world, and is responsible for accepting connections from clients, performing SQL parsing and optimization and finally generating a distributed execution plan. The TiDB layer itself is stateless. In practice, you can start several TiDB instances. A unified access address is provided externally through load-balance components (such as LVS, HAProxy, or F5), and client connections can be evenly distributed on to these TiDB instances. The TiDB server itself does not store data, but only parses SQL and forwards the actual data read request to the underlying storage node, TiKV (or TiFlash). PD (Placement Driver) Server The metadata management module across a TiDB cluster is responsible for storing the real-time data distribution of each TiKV node and the overall topology of the cluster, providing the TiDB Dashboard management and control interface, and assigning transaction IDs to distributed transactions. Placement Driver (PD) not only stores metadata, but also issues data scheduling commands to specific TiKV nodes based on the real-time data distribution status reported by TiKV nodes, which can be said to be the \"brain\" of the entire cluster. In addition, the PD itself is also composed of at least 3 nodes and has high availability capabilities. It is recommended to deploy an odd number of PD nodes. Storage nodes TiKV Server: In charge of storing data. From the outside, TiKV is a distributed Key-Value storage engine that provides transactions. The basic unit for storing data is Region. Each Region is responsible for storing the data of a Key Range (the block between left-closed and right-open from StartKey to EndKey). Each TiKV node is responsible for multiple Regions. TiKV API provides native support for distributed transactions at the KV key-value pair level, and provides the levels of Snapshot Isolation (SI) by default, which is also the core of TiDB's support for distributed transactions at the SQL level. After the SQL layer of TiDB completes the SQL parsing, it will convert the SQL execution plan into the actual call to the TiKV API. Therefore, the data is stored in"
},
{
"data": "In addition, the TiKV data will be automatically maintained in multiple replicas (the default is three replicas), which naturally supports high availability and automatic failover. TiFlash is a special storage node. Unlike ordinary TiKV nodes, data is stored in columns in TiFlash, and the main function is to accelerate analysis-based scenarios. Key-Value Pair The choice of TiKV is the Key-Value model that provides an ordered traversal method. Two key points of TiKV data storage are: A huge Map (comparable to std::map in C++) that stores Key-Value Pairs. The Key-Value pairs in this Map are sorted by the binary order of the Key, that is, you can seek to the position of a certain Key, and then continuously call the Next method to obtain the Key-Value larger than this Key in an ascending order. Local storage (Rocks DB) In any persistent storage engine, data must be saved on disk after all, and TiKV is not different. However, TiKV does not choose to write data directly to the disk, but stores the data in RocksDB, and RocksDB is responsible for the specific data storage. The reason is that developing a stand-alone storage engine requires a lot of work, especially to make a high-performance stand-alone engine, which may require various meticulous optimizations. RocksDB is a very good stand-alone KV storage engine open sourced by Facebook. It can meet various requirements of TiKV for single engine. Here we can simply consider that RocksDB is a persistent Key-Value Map on a host. Raft protocol TiKV uses the Raft algorithm to ensure that data is not lost and error-free when a single host fails. In short, it is to replicate data to multiple hosts, so that if one host cannot provide services, replicas on other hosts can still provide services. This data replication scheme is reliable and efficient, and can deal with replica failures. Region TiKV divides the Range by Key. A certain segment of consecutive Keys are stored on a storage node. Divide the entire Key-Value space into many segments, each segment is a series of consecutive Keys, called a Region. Try to keep the data saved in each Region within a reasonable size. Currently, the default in TiKV is no more than 96 MB. Each Region can be described by a left-closed and right-open block such as [StartKey, EndKey]. MVCC TiKV implements Multi-Version Concurrency Control (MVCC). Distributed ACID transactions TiKV uses the transaction model used by Google in BigTable: Percolator. In this test, we use three VM nodes to deploy the Kubernetes cluster, including one master node and two worker nodes. Kubelete version is 1.22.0. Deploy the HwameiStor local storage in the Kubernetes cluster Configure a 100G local disk, sdb, for HwameiStor on two worker nodes respectively Create StorageClass TiDB can be deployed on Kubernetes using TiDB Operator. TiDB Operator is an automatic operation and maintenance system for TiDB clusters on Kubernetes. It provides full lifecycle management of TiDB including deployment, upgrade, scaling, backup and recovery, and configuration changes. With TiDB Operator, TiDB can run seamlessly on public cloud or privately deployed Kubernetes"
},
{
"data": "The compatibility between TiDB and TiDB Operator versions is as follows: | TiDB version | Applicable versions of TiDB Operator | | | - | | dev | dev | | TiDB >= 5.4 | 1.3 | | 5.1 <= TiDB < 5.4 | 1.3 (recommended), 1.2 | | 3.0 <= TiDB < 5.1 | 1.3 (recommended), 1.2, 1.1 | | 2.1 <= TiDB < 3.0 | 1.0 (maintenance stopped) | Install TiDB CRDs ```bash kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml ``` Install TiDB Operator ```bash helm repo add pingcap https://charts.pingcap.org/ kubectl create namespace tidb-admin helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.3.2 \\ --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.3.2 \\ --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.3.2 \\ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler ``` Check TiDB Operator components ```bash kubectl create namespace tidb-cluster && \\ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com /pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml ``` ```bash yum -y install mysql-client ``` ```bash kubectl port-forward -n tidb-cluster svc/basic-tidb 4000 > pf4000.out & ``` Create the Hello_world table ```sql create table helloworld (id int unsigned not null autoincrement primary key, v varchar(32)); ``` Check the TiDB version ```sql select tidb_version()\\G; ``` Check the Tikv storage status ```sql select * from informationschema.tikvstore_status\\G; ``` Create a PVC for tidb-tikv and tidb-pd from `storageClass local-storage-hdd-lvm`: ```bash kubectl get po basic-tikv-0 -oyaml ``` ```bash kubectl get po basic-pd-0 -oyaml ``` After the database cluster is deployed, we performed the following tests about basic capabilities. All are successfully passed. Test purpose: In the case of multiple isolation levels, check if the completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID) Test steps: Create the database: testdb Create the table `ttest ( id int AUTOINCREMENT, name varchar(32), PRIMARY KEY (id) )` Run a test script Test result: The completeness constraints of distributed data operations are supported, such as atomicity, consistency, isolation, and durability (ACID), in the case of multiple isolation levels Test purpose: Check if the object isolation can be implemented by using different schemas Test script: ```sql create database if not exists testdb; use testdb create table if not exists t_test ( id bigint, name varchar(200), saletime datetime default currenttimestamp, constraint pkttest primary key (id) ); insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'); create user 'readonly'@'%' identified by \"readonly\"; grant select on testdb.* to readonly@'%'; select * from testdb.t_test; update"
},
{
"data": "set name='aaa'; create user 'otheruser'@'%' identified by \"otheruser\"; ``` Test result: Supported to create different schemas to implement the object isolation Test purpose: Check if you can create, delete, and modifiy table data, DML, columns, partition table Test steps: Run the test scripts step by step after connecting the database Test script: ```sql drop table if exists t_test; create table if not exists t_test ( id bigint default '0', name varchar(200) default '' , saletime datetime default currenttimestamp, constraint pkttest primary key (id) ); insert into t_test(id,name) values (1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'); update t_test set name='aaa' where id=1; update t_test set name='bbb' where id=2; delete from t_dml where id=5; alter table t_test modify column name varchar(250); alter table t_test add column col varchar(255); insert into ttest(id,name,col) values(10,'test','newcol'); alter table t_test add column colwithdefault varchar(255) default 'aaaa'; insert into t_test(id,name) values(20,'testdefault'); insert into t_test(id,name,colwithdefault ) values(10,'test','non-default '); alter table t_test drop column colwithdefault; CREATE TABLE employees ( id INT NOT NULL, fname VARCHAR(30), lname VARCHAR(30), hired DATE NOT NULL DEFAULT '1970-01-01', separated DATE NOT NULL DEFAULT '9999-12-31', job_code INT NOT NULL, store_id INT NOT NULL ) ``` Test result: Supported to create, delete, and modifiy table data, DML, columns, partition table Test purpose: Verify different indexes (unique, clustered, partitioned, Bidirectional indexes, Expression-based indexes, hash indexes, etc.) and index rebuild operations. Test script: ```bash alter table ttest add unique index udxt_test (name); ADMIN CHECK TABLE t_test; create index timeidx on ttest(sale_time); alter table ttest drop index timeidx; admin show ddl jobs; admin show ddl job queries 156; create index timeidx on ttest(sale_time); ``` Test result: Supported to create, delete, combine, and list indexes and supported for unique index Test purpose: Check if the statements in distributed databases are supported such as `if`, `case when`, `for loop`, `while loop`, `loop exit when` (up to 5 kinds) Test script: ```sql SELECT CASE id WHEN 1 THEN 'first' WHEN 2 THEN 'second' ELSE 'OTHERS' END AS idnew FROM ttest; SELECT IF(id>2,'int2+','int2-') from t_test; ``` Test result: supported for statements such as `if`, `case when`, `for loop`, `while loop`, and `loop exit when` (up to 5 kinds) Test purpose: Check if execution plan parsing is supported for distributed databases Test script: ```sql explain analyze select * from t_test where id NOT IN (1,2,4); explain analyze select from t_test a where EXISTS (select from t_test b where a.id=b.id and b.id<3); explain analyze SELECT IF(id>2,'int2+','int2-') from t_test; ``` Test result: the execution plan is supported to parse Test purpose: Verify the feature of binding execution plan for distributed databases Test steps: View the current execution plan of sql statements Use the binding feature View the execution plan after the sql statement is binded Delete the binding Test script: ```sql explain select * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; explain select /+ hash_join(a,b) / * from employees3 a join employees4 b on a.id = b.id where a.lname='Johnson'; ``` Test result: It may not be hashjoin when hint is not used, and it must be hashjoin after hint is used. 
Test purpose: Verify standard functions of distributed databases Test result: Standard database functions are supported Test purpose: Verify the transaction support of distributed databases Test result: Explict and implicit transactions are supported Test purpose: Verify the data types supported by distributed database Test result: Only the UTF-8 mb4 character set is supported now Test purpose: Verify the lock implementation of distributed databases Test result: Described how the lock is implemented, what are blockage conditions in the case of R-R/R-W/W-W, and how the deadlock is handled Test purpose: Verify the transactional isolation levels of distributed databases Test result: Supported for si and rc isolation levels (4.0 GA version) Test purpose: Verify the complex query capabilities of distributed databases Test result: Supported for the distributed complex queries and operations such as inter-node joins, and supported for window functions and hierarchical queries This section describes system security tests. After the database cluster is deployed, all the following tests are passed. Test purpose: Verify the accout permisson management of distributed databases Test script: ```sql select host,user,authentication_string from"
},
{
"data": "create user tidb IDENTIFIED by 'tidb'; select host,user,authentication_string from mysql.user; set password for tidb =password('tidbnew'); select host,user,authenticationstring,Selectpriv from mysql.user; grant select on . to tidb; flush privileges ; select host,user,authenticationstring,Selectpriv from mysql.user; grant all privileges on . to tidb; flush privileges ; select * from mysql.user where user='tidb'; revoke select on . from tidb; flush privileges ; revoke all privileges on . from tidb; flush privileges ; grant select(id) on test.TEST_HOTSPOT to tidb; drop user tidb; ``` Test results: Supported for creating, modifying, and deleting accounts, and configuring passwords, and supported for the separation of security, audit, and data management Based on different accounts, various permission control for database includes: instance, library, table, and column Test purpose: Verify the permission access control of distributed databases, and control the database data by granting basic CRUD (create, read, update, and delete) permissions Test script: ```sql mysql -u root -h 172.17.49.222 -P 4000 drop user tidb; drop user tidb1; create user tidb IDENTIFIED by 'tidb'; grant select on tidb.* to tidb; grant insert on tidb.* to tidb; grant update on tidb.* to tidb; grant delete on tidb.* to tidb; flush privileges; show grants for tidb; exit; mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'select * from aa;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'insert into aa values(2);' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'update aa set id=3;' mysql -u tidb -h 172.17.49.222 -ptidb -P 4000 -D tidb -e 'delete from aa where id=3;' ``` Test result: Database data is controlled by granting the basic CRUD permissions Test purpose: Verify the whitelist feature of distributed databases Test script: ```sql mysql -u root -h 172.17.49.102 -P 4000 drop user tidb; create user tidb@'127.0.0.1' IDENTIFIED by 'tidb'; flush privileges; select * from mysql.user where user='tidb'; mysql -u tidb -h 127.0.0.1 -P 4000 -ptidb mysql -u tidb -h 172.17.49.102 -P 4000 -ptidb ``` Test result: Supported for the IP whitelist feature and supportred for matching actions with IP segments Test purpose: Verify the monitor capability to distributed databases Test script: `kubectl -ntidb-cluster logs tidb-test-pd-2 --tail 22` Test result: Record key actions or misoperations performed by users through the operation and maintenance management console or API This section describes the operation and maintenance test. After the database cluster is deployed, the following operation and maintenance tests are all passed. Test purpose: Verify the tools support for importing and exporting data of distributed databases Test script: ```sql select * from sbtest1 into outfile '/sbtest1.csv'; load data local infile '/sbtest1.csv' into table test100; ``` Test result: Supported for importing and exporting table, schema, and database Test purpose: Get the SQL info by slow query Prerequisite: The SQL execution time shall be longer than the configured threshold for slow query, and the SQL execution is completed Test steps: Adjust the slow query threshold to 100 ms Run SQL View the slow query info from log, system table, or dashboard Test script: ```sql show variables like 'tidbslowlog_threshold'; set tidbslowlog_threshold=100; select querytime, query from informationschema.slowquery where isinternal = false order by query_time desc limit 3; ``` Test result: Can get the slow query info. 
For details about test data, see ."
}
] |
{
"category": "Runtime",
"file_name": "2022-06-06_tidb-test.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The current implementation of the Longhorn system backup lacks integration with the volume backup feature. As a result, users are required to manually ensure that all volume backups are up-to-date before initiating the Longhorn system backup. This document proposed to include the volume backup feature in the Longhorn system backup by introducing volume backup policies. By implementing the volume backup policies, users will gain the ability to define how volume data should be backed up during the Longhorn system backup. https://github.com/longhorn/longhorn/issues/5011 Customization: By offering different volume backup policy options, users can choose the one best fit with their requirements. Reduce Manual Efforts: By integrating volume backup into the Longhorn system backup, users no longer have to ensure that all volume backups are up-to-date before initiating the system backup, Enhanced Data Integrity: By aligning the system backup with a new up-to-date volume backups, the restored volume data will be more accurate. Overall, the proposed volume backup policies aim to improve the Longhorn system backup functionality and providing a more robust and customizable system backup solution. `None` When volume backup policy is specified: `if-not-present`: Longhorn will create a backup for volumes that do not have an existing backup. `always`: Longhorn will create a backup for all volumes, regardless of their existing backups. `disabled`: Longhorn will not create any backups for volumes. If a volume backup policy is not specified, the policy will be automatically set to `if-not-present`. This ensures that volumes without any existing backups will be backed up during the Longhorn system backup. As a user, I want the ability to specify the volume backup policy when creating the Longhorn system backup. This will allow me to define how volumes should be backed up according to my scenario. Scenario 1: if-not-present Policy: When I set the volume backup policy to `if-not-present`, I expect Longhorn to create a backup for volumes that do not already have a backup. Scenario 2: always Policy: When I set the volume backup policy to `always`, I expect Longhorn to create backups for all volumes, regardless of whether they already have a"
},
{
"data": "Scenario 3: disabled Policy: When I set the volume backup policy to `disabled`, I expect Longhorn to not create any backups for the volumes. In cases where I don't explicitly specify the volume backup policy during the system backup configuration, I expect Longhorn to automatically apply the `if-not-present` policy as the default. To set the volume backup policy, users can set the volume backup policy when creating the system backup through the UI. Alternatively, users can specify it in the manifest when creating the SystemBackup custom resource using the kubectl command. In scenarios where no specific volume backup policy is provided, Longhorn will automatically set the policy as `if-not-present`. Add a new `volumeBackupPolicy` field to the HTTP request and response payload. Introduce a new `volumeBackupPolicy` field. This field allows user to specify the volume backup policy. Add a new state (phase) called `CreatingVolumeBackups` to track the progress of volume backup creation during the Longhorn system backup. Iterate through each Longhorn volume. If the policy is `if-not-present`, create a volume snapshot and backup only for volumes that do not already have a backup (lastBackup is empty). If the policy is `always`, create a volume snapshot and backup for all volumes, regardless of their existing backups. If the policy is `disabled`, skip the volume backup creation step for all volumes and proceed to the next phase. Wait for all volume backups created by the SystemBackup to finish (completed or error state) before proceeding to the next phase (Generating or Error). Backup will have timeout limit of 24 hours. Any of the backups failure will lead the SystemBackup to and Error state. When the volume backup policy is not provided in the SystemBackup custom resource, automatically set the policy to `if-not-present`. When the volume backup policy is `if-not-present`, the system backup should only create volume backup when there is no existing backup in Volume. When the volume backup policy is `always`, the system backup should create volume backup regardless of the existing backup. When the volume backup policy is `disabled`, the system backup should not create volume backup. `None` `None`"
}
] |
{
"category": "Runtime",
"file_name": "20230526-volume-backup-policy-for-longhorn-system-backup.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "sidebar_position: 4 sidebar_label: \"CRD and CR\" `CRD` is the abbreviation of `Custom Resource Definition`, and is a resource type natively provided by `Kubernetes`. It is the definition of Custom Resource (CR) to describe what a custom resource is. A CRD can register a new resource with the `Kubernetes` cluster to extend the capabilities of the `Kubernetes` cluster. With `CRD`, you can define the abstraction of the underlying infrastructure, customize resource types based on business needs, and use the existing resources and capabilities of `Kubernetes` to define higher-level abstractions through a Lego-like building blocks. `CR` is the abbreviation of `Custom Resource`. In practice, it is an instance of `CRD`, a resource description that matches with the field format in `CRD`. We all know that `Kubernetes` has powerful scalability, but only `CRD` is not useful. It also needs the support of controller (`Custom Controller`) to reflect the value of `CRD`. `Custom Controller` can listen `CRUD` events of `CR` to implement custom business logic. In `Kubernetes`, `CRDs + Controllers = Everything`. See also the official documentation provided by Kubernetes:"
}
] |
{
"category": "Runtime",
"file_name": "crd_and_cr.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq"
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "When kube-router is ran as a Pod within your Kubernetes cluster, it also ships with a number of tools automatically configured for your cluster. These can be used to troubleshoot issues and learn more about how cluster networking is performed. Here's a quick way to get going on a random node in your cluster: ```sh KR_POD=$(basename $(kubectl -n kube-system get pods -l k8s-app=kube-router --output name|head -n1)) kubectl -n kube-system exec -it ${KR_POD} bash ``` Use `kubectl -n kube-system get pods -l k8s-app=kube-router -o wide` to see what nodes are running which pods. This will help if you want to investigate a particular node. Once logged in you will see some help on using the tools in the container. For example: ```console Welcome to kube-router on \"node1.zbrbdl\"! For debugging, the following tools are available: ipvsadm | Gather info about Virtual Services and Real Servers via IPVS. | Examples: | ## Show all options | ipvsadm --help | ## List Services and Endpoints handled by IPVS | ipvsadm -ln | ## Show traffic rate information | ipvsadm -ln --rate | ## Show cumulative traffic | ipvsadm -ln --stats gobgp | Get BGP related information from your nodes. | | Tab-completion is ready to use, just type \"gobgp <TAB>\" | to see the subcommands available. | | By default gobgp will query the Node this Pod is running | on, i.e. \"node1.zbrbdl\". To query a different node use | \"gobgp --host node02.mydomain\" as an example. | | For more examples see: https://github.com/osrg/gobgp/blob/master/docs/sources/cli-command-syntax.md Here's a quick look at what's happening on this Node BGP Server Configuration AS: 64512 Router-ID: 10.10.3.2 Listening Port: 179, Addresses: 0.0.0.0, :: BGP Neighbors Peer AS Up/Down State |#Received Accepted 64512 2d 01:05:07 Establ | 1 1 BGP Route Info Network Next Hop AS_PATH Age Attrs *> 10.2.0.0/24 10.10.3.3 4000 400000 300000 40001 2d 01:05:20 [{Origin: i} {LocalPref: 100}] *> 10.2.1.0/24 10.10.3.2 4000 400000 300000 40001 00:00:36 [{Origin: i}] IPVS Services IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 10.3.0.1:443 rr persistent 10800 mask 0.0.0.0 -> 10.10.3.2:443 Masq 1 0 0 TCP 10.3.0.10:53 rr -> 10.2.0.2:53 Masq 1 0 0 TCP 10.3.0.15:2379 rr -> 10.10.3.3:2379 Masq 1 45 0 TCP 10.3.0.155:2379 rr -> 10.10.3.3:2379 Masq 1 0 0 UDP 10.3.0.10:53 rr -> 10.2.0.2:53 Masq 1 0 0 ```"
}
] |
{
"category": "Runtime",
"file_name": "pod-toolbox.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Device registration works for carina versions under v0.9.0. When carina manages local disks, it treats them as devices and registed them to kubelet. Whenever there is an new local device or PV, the disk usage will be updated. ```shell $ kubectl get node 10.20.9.154 -o template --template={{.status.capacity}} map[ carina.storage.io/carina-vg-hdd:160 carina.storage.io/carina-vg-ssd:0 cpu:2 ephemeral-storage:208655340Ki hugepages-1Gi:0 hugepages-2Mi:0 memory:3880376Ki pods:110 ] $ kubectl get node 10.20.9.154 -o template --template={{.status.allocatable}} map[ carina.storage.io/carina-vg-hdd:150 carina.storage.io/carina-vg-ssd:0 cpu:2 ephemeral-storage:192296761026 hugepages-1Gi:0 hugepages-2Mi:0 memory:3777976Ki pods:110 ] ``` For each device, carina will records its capacity and allocatable. 10G of disk space is reserved for each device. carina scheduler will do scheduling based on each node's disk usage. Carina also tracks those informaction in an configmap. ```shell $ kubectl get configmap carina-node-storage -n kube-system -o yaml data: node: '[{ \"allocatable.carina.storage.io/carina-vg-hdd\": \"150\", \"allocatable.carina.storage.io/carina-vg-ssd\": \"0\", \"capacity.carina.storage.io/carina-vg-hdd\": \"160\", \"capacity.carina.storage.io/carina-vg-ssd\": \"0\", \"nodeName\": \"10.20.9.154\" }, { \"allocatable.carina.storage.io/carina-vg-hdd\": \"146\", \"allocatable.carina.storage.io/carina-vg-ssd\": \"0\", \"capacity.carina.storage.io/carina-vg-hdd\": \"170\", \"capacity.carina.storage.io/carina-vg-ssd\": \"0\", \"nodeName\": \"10.20.9.153\" }]' ```"
}
] |
{
"category": "Runtime",
"file_name": "device-register.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Uniform Fixed Clusters menu_order: 30 search_type: Documentation This scenario describes a production deployment of a fixed number of N nodes (N=1 in the simplest case). A uniform fixed cluster has the following characteristics: Recovers automatically from reboots and partitions. All peers have identical configuration. There is a controlled process for adding or removing nodes, however the end user is responsible for ensuring that only one instance of the process is in-flight at a time. While it is possible to automate, the potentially-blocking `weave prime` operation and the need for global serialization make it non-trivial. It is however relatively straightforward for a human to provide the necessary guarantees and exception handling manually, and so this scenario is best suited to deployments which change size infrequently as a planned maintenance event. On each initial peer, at boot, via : weave launch --no-restart $PEERS Where, `--no-restart` disables the Docker restart policy, since this will be handled by systemd. `$PEERS` is obtained from `/etc/sysconfig/weave` as described in the linked systemd documentation. For convenience, this may contain the address of the peer which is being launched, so that you don't have to compute separate lists of 'other' peers tailored to each peer - just supply the same complete list of peer addresses to every peer. Then on any peer run the following to force consensus: weave prime Note: You can run this safely on more than one or even all peers, but it's only strictly necessary to run it on one of them. Once this command completes successfully, IP address allocations can proceed under partition and it is safe to add new peers. If this command waits without exiting, it means that there is an issue (such as a network partition or failed peers) that is preventing a quorum from being reached you will need to [address that](/site/troubleshooting.md) before moving on. On the new peer, at boot, via run: weave launch --no-restart $PEERS Where, `$PEERS` is the new peer plus all other peers in the network, initial and subsequently added, which have not been explicitly removed. It should include peers which are temporarily offline or stopped. For maximum robustness, distribute an updated `/etc/sysconfig/weave` file including the new peer to all existing peers. On the peer to be removed: weave reset Then distribute an updated `/etc/sysconfig/weave` to the remaining peers, omitting the removed peer from `$PEERS`. On each remaining peer: weave forget <removed peer> This final step is not mandatory, but it will eliminate log noise and spurious network traffic by stopping any reconnection attempts."
}
] |
{
"category": "Runtime",
"file_name": "uniform-fixed-cluster.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This user guide provides the steps to run inclavare-containers with pouch and Occlum. Please refer to to install pouch and refer to to configure the runtime of pouchd. Please refer to to install `rune`. Please refer to to install `shim-rune` and refer to to configure `shim-rune`. Please refer to to build your Occlum container image. Use the environment variable OCCLUMRELEASEENCLAVE to specify your enclave type OCCLUMRELEASEENCLAVE=0: debug enclave OCCLUMRELEASEENCLAVE=1: product enclave Then run pouch with Occlum container images refer to ```shell pouch run -it --rm --runtime=rune \\ -e ENCLAVE_TYPE=intelSgx \\ -e ENCLAVERUNTIMEPATH=/opt/occlum/build/lib/libocclum-pal.so \\ -e ENCLAVERUNTIMEARGS=occlum_instance \\ -e ENCLAVERUNTIMELOGLEVEL=info \\ -e OCCLUMRELEASEENCLAVE=0 \\ occlum-app ``` In addition, pouch supports to configure `annotation` options to run container image. You can run pouch with annotations instead of environment variables. ```shell pouch run -it --rm --runtime=rune \\ --annotation \"enclave.type=intelSgx\" \\ --annotation \"enclave.runtime.path=/opt/occlum/build/lib/libocclum-pal.so\" \\ --annotation \"enclave.runtime.args=occlum_instance\" \\ --annotation \"enclave.runtime.loglevel=info\" \\ -e OCCLUMRELEASEENCLAVE=0 \\ occlum-app ```"
}
] |
{
"category": "Runtime",
"file_name": "running_inclavare_containers_with_pouch_and_occlum.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The `yaml` Project is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `[email protected]` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`"
}
] |
{
"category": "Runtime",
"file_name": "RELEASE.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
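A hedged sketch of the tagging step an OWNER would perform; `v1.4.0` and the `origin` remote are placeholders/assumptions, and the changelog text goes into the annotated tag message.

```shell
VERSION=v1.4.0   # placeholder: the version agreed on in the release issue

# Create a signed tag whose message carries the changelog, then push it.
git tag -s "${VERSION}" -m "Release ${VERSION}: <paste changelog here>"
git push origin "${VERSION}"   # remote name is an assumption
```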
[
{
"data": "This document defines a high level roadmap for CNI development. The list below is not complete, and we advise to get the current project state from the . Targeted for April 2020 More precise specification language Stable SPEC Complete test coverage Conformance test suite for CNI plugins (both reference and 3rd party) Signed release binaries"
}
] |
{
"category": "Runtime",
"file_name": "ROADMAP.md",
"project_name": "Container Network Interface (CNI)",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "```shell sudo zypper install cri-o ``` ```shell sudo dnf module enable cri-o:$VERSION sudo dnf install cri-o ``` For Fedora, we only support setting minor versions. i.e: `VERSION=1.18`, and do not support pinning patch versions: `VERSION=1.18.3` Note: as of 1.24.0, the `cri-o` package no longer depends on `containernetworking-plugins` package. Removing this dependency allows users to install their own CNI plugins without having to remove files first. If users want to use the previously provided CNI plugins, they should also run: ```shell sudo dnf install containernetworking-plugins ``` To install on the following operating systems, set the environment variable ```$OS``` to the appropriate value from the following table: | Operating system | $OS | | - | -- | | Centos 9 Stream | `CentOS9Stream` | | Centos 8 | `CentOS_8` | | Centos 8 Stream | `CentOS8Stream` | | Centos 7 | `CentOS_7` | And then run the following as root: <!-- markdownlint-disable MD013 --> ```shell curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo or if you are using a subproject release: curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/${SUBVERSION}:/${VERSION}/$OS/devel:kubic:libcontainers:stable:cri-o:${SUBVERSION}:${VERSION}.repo yum install cri-o ``` <!-- markdownlint-enable MD013 --> Note: as of 1.24.0, the `cri-o` package no longer depends on `containernetworking-plugins` package. Removing this dependency allows users to install their own CNI plugins without having to remove files first. If users want to use the previously provided CNI plugins, they should also run: ```shell yum install containernetworking-plugins ``` Note: this tutorial assumes you have curl and gnupg installed To install on the following operating systems, set the environment variable ```$OS``` to the appropriate value from the following table: | Operating system | $OS | | | -- | | Debian 12 | `Debian_12` | | Debian 11 | `Debian_11` | | Debian 10 | `Debian_10` | | Raspberry Pi OS 11 | `Raspbian_11` | | Raspberry Pi OS 10 | `Raspbian_10` | | Ubuntu 22.04 | `xUbuntu_22.04` | | Ubuntu 21.10 | `xUbuntu_21.10` | | Ubuntu 21.04 | `xUbuntu_21.04` | | Ubuntu 20.10 | `xUbuntu_20.10` | | Ubuntu 20.04 | `xUbuntu_20.04` | | Ubuntu 18.04 | `xUbuntu_18.04` | If installing cri-o-runc (recommended), you'll need to install libseccomp >= 2.4.1. 
NOTE: This is not available in distros based on Debian 10(buster) or below, so buster backports will need to be enabled: <!-- markdownlint-disable MD013 --> ```shell echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list apt update apt install -y -t buster-backports libseccomp2 || apt update -y -t buster-backports libseccomp2 ``` And then run the following as root: ```shell echo \"deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /\" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list echo \"deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /\" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list mkdir -p /usr/share/keyrings curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg apt-get update apt-get install cri-o cri-o-runc ``` <!-- markdownlint-enable MD013 --> Note: We include cri-o-runc because Ubuntu and Debian include their own packaged version of runc. While this version should work with CRI-O, keeping the packaged versions of CRI-O and runc in sync ensures they work together. If you'd like to use the distribution's runc, you'll have to add the file: ```toml [crio.runtime.runtimes.runc] runtime_path = \"\" runtime_type = \"oci\" runtime_root = \"/run/runc\" ``` to `/etc/crio/crio.conf.d/` Note: as of 1.24.0, the `cri-o` package no longer depends on `containernetworking-plugins` package. Removing this dependency allows users to install their own CNI plugins without having to remove files first. If users want to use the previously provided CNI plugins, they should also run: ```shell apt-get install containernetworking-plugins ```"
}
] |
{
"category": "Runtime",
"file_name": "install-legacy.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
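After installing the packages above, a commonly needed follow-up is to start the service and confirm the binary responds. This is a hedged sketch rather than part of the packaged instructions, and assumes systemd manages CRI-O on your distribution.

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now crio      # start CRI-O and enable it at boot
sudo systemctl status crio --no-pager
sudo crio version                     # print the installed CRI-O version
```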
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Add a remote subscriber to the multicast group. Add a remote subscriber to the multicast group. Only remote subscribers are added via command line. Local subscribers will be automatically populated in the map based on IGMP messages. Remote subscribers are typically other Cilium nodes, identified by internal IP of the node. To add remote subscriber, following information is required: group: multicast group address to which subscriber is added. subscriber-address: subscriber IP address. ``` cilium-dbg bpf multicast subscriber add <group> <subscriber-address> [flags] ``` ``` To add a remote node 10.100.0.1 to multicast group 229.0.0.1, use the following command: cilium-dbg bpf multicast subscriber add 229.0.0.1 10.100.0.1 ``` ``` -h, --help help for add ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the multicast subscribers."
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_multicast_subscriber_add.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
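Since remote subscribers have to be added per node, a hedged loop over placeholder node IPs using only the documented `add` subcommand might look like this:

```shell
GROUP="229.0.0.1"

# Placeholder internal IPs of the other Cilium nodes in the cluster.
for NODE_IP in 10.100.0.1 10.100.0.2 10.100.0.3; do
  cilium-dbg bpf multicast subscriber add "${GROUP}" "${NODE_IP}"
done
```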
[
{
"data": "English | With a multitude of public cloud providers available, such as Alibaba Cloud, Huawei Cloud, Tencent Cloud, AWS, and more, it can be challenging to use mainstream open-source CNI plugins to operate on these platforms using underlay networks. Instead, one has to rely on proprietary CNI plugins provided by each cloud vendor, leading to a lack of standardized underlay solutions for public clouds. This page introduces , an underlay networking solution designed to work seamlessly in any public cloud environment. A unified CNI solution offers easier management across multiple clouds, particularly in hybrid cloud scenarios. is an underlay and RDMA network solution for the Kubernetes. It enhances the capabilities of Macvlan CNI, IPvlan CNI, SR-IOV CNI fulfills various networking needs, and supports to run on bare metal, virtual machine, and public cloud environments. Spiderpool delivers exceptional network performance. Networking plugin for pod networking in Kubernetes using Elastic Network Interfaces on AWS. aws-vpc-cni is an underlay network solution provided by AWS for public cloud, but it cannot meet complex network requirements. The following is a comparison of some functions between spiderpool and aws-cni. The related functions of Spiderpool will be demonstrated in subsequent chapters | Feature comparison | aws-vpc-cni | Spiderpool + IPvlan | |- | | | | Multiple Underlay NICs | | (Multiple Underlay NICs across subnets) | | Custom routing | | | | Dual CNI collaboration | Supports multiple CNI NIC but does not support routing coordination | support | | network policy | | | | clusterIP | (kube-proxy) | (kube-proxy and ebpf two methods) | | Bandwidth | | | | metrics | | | | Dual stack | IPv4 only, IPv6 only, dual stack is not supported | IPv4 only, IPv6 only, dual stack | | Observability | | (with cilium hubble, kernel>=4.19.57) | | Multi-cluster | | | | Paired with AWS layer 4/7 load balancing | | | | Kernel limit | None | >= 4.2 (IPvlan kernel limit) | | Forwarding principle | underlay pure routing layer 3 forwarding | IPvlan layer 2 | | multicast | | | | Cross vpc access | | | Spiderpool can operate in public cloud environments using the ipvlan underlay CNI and provide features such as node topology and MAC address validity resolution. Here is how it works: When using underlay networks in a public cloud environment, each network interface of a cloud server can only be assigned a limited number of IP addresses. To enable communication when an application runs on a specific cloud server, it needs to obtain the valid IP addresses allocated to different network interfaces within the VPC network. To address this IP allocation requirement, Spiderpool introduces a CRD named `SpiderIPPool`. By configuring the nodeName and multusName fields in `SpiderIPPool`, it enables node topology functionality. Spiderpool leverages the affinity between the IP pool and nodes, as well as the affinity between the IP pool and ipvlan Multus, facilitating the utilization and management of available IP addresses on the"
},
{
"data": "This ensures that applications are assigned valid IP addresses, enabling seamless communication within the VPC network, including communication between Pods and also between Pods and cloud servers. In a public cloud VPC network, network security controls and packet forwarding principles dictate that when network data packets contain MAC and IP addresses unknown to the VPC network, correct forwarding becomes unattainable. This issue arises in scenarios where Macvlan or OVS based underlay CNI plugins generate new MAC addresses for Pod NICs, resulting in communication failures among Pods. To address this challenge, Spiderpool offers a solution in conjunction with . The ipvlan CNI operates at the L3 of the network, eliminating the reliance on L2 broadcasts and avoiding the generation of new MAC addresses. Instead, it maintains consistency with the parent interface. By incorporating ipvlan, the legitimacy of MAC addresses in a public cloud environment can be effectively resolved. The system kernel version must be greater than 4.2 when using ipvlan as the cluster's CNI. is installed. Understand the basics of . In an AWS VPC, a subnet is categorized as a public subnet if it has an outbound route configured with the Internet Gateway as the next hop for destinations 0.0.0.0/0 or ::/0. Otherwise, a subnet is considered a private subnet if it lacks this specific outbound routing configuration. Create a public subnet and multiple private subnets within a VPC, and deploy virtual machines in the private subnets as shown in the following picture: > We will create one public subnet and two private subnets within the same VPC. Each private subnet should be deployed in a different availability zone. A EC2 instance as a jump server will be created in the public subnet for secure access. Additionally, two AWS EC2 instances will be created in the respective different private subnets to set up the Kubernetes cluster. Bind IPv4 and IPv6 addresses to the network interface when creating an instance, as the picture below: Bind to each network interface of the instances in which we can use it to allocate IP address for pod: > IP prefix delegation just like the secondary IP address that could bind a CIDR range for instance. The number of IP prefix delegation can be referenced from . The instance can bind the same number of prefix delegations as the number of secondary IPs that can be bound to the instance's network interface. In this example, we choose to bind 1 network interface and 1 IP prefix delegation to the instance. ```shell | Node | ens5 primary IP | ens5 secondary IPs | ens6 primary IP | ens6 secondary IPs | ||--||--|| | master | 172.31.22.228 | 172.31.16.4-172.31.16.8 | 210.22.16.10 | 210.22.16.11-210.22.16.15 | | worker1 | 180.17.16.17 | 180.17.16.11-180.17.16.15 | 210.22.32.10 | 210.22.32.11-210.22.32.15 | ``` Create an AWS NAT gateway to allow instances in the VPC's private subnets to connect to external services. The NAT gateway serves as an outbound traffic gateway for the"
},
{
"data": "Follow the to create a NAT gateway: > Create a NAT gateway in the public subnet, `public-172-31-0-0`, and configure the route table of the private subnets to set the next-hop of the outbound route 0.0.0.0/0 to NAT gateway. (IPv6 addresses provided by AWS are globally unique and can access the internet directly via the Internet Gateway). Use the configured virtual machines to establish a Kubernetes cluster. The available IP addresses for the nodes and the network topology diagram of the cluster are shown below: Install Spiderpool via helm: ```shell helm repo add spiderpool https://spidernet-io.github.io/spiderpool helm repo update spiderpool helm install spiderpool spiderpool/spiderpool --namespace kube-system --set ipam.enableStatefulSet=false --set multus.multusCNI.defaultCniCRName=\"ipvlan-ens5\" ``` If you are using a cloud server from a Chinese mainland cloud provider, you can enhance image pulling speed by specifying the parameter `--set global.imageRegistryOverride=ghcr.m.daocloud.io`. Spiderpool allows for fixed IP addresses for application replicas with a controller type of `StatefulSet`. However, in the underlay network scenario of public clouds, cloud instances are limited to using specific IP addresses. When StatefulSet replicas migrate to different nodes, the original fixed IP becomes invalid and unavailable on the new node, causing network unavailability for the new Pods. To address this issue, set `ipam.enableStatefulSet` to `false` to disable this feature. Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus. To simplify the creation of JSON-formatted Multus CNI configurations, Spiderpool offers the SpiderMultusConfig CR to automatically manage Multus NetworkAttachmentDefinition CRs. Based on the network interface configuration created during the process of setting up the AWS EC2 instances, here is an example configuration of SpiderMultusConfig for each network interface used to run ipvlan CNI: ```shell IPVLANMASTERINTERFACE=\"ens5\" IPVLANMULTUSNAME=\"ipvlan-$IPVLANMASTERINTERFACE\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${IPVLANMULTUSNAME} namespace: kube-system spec: cniType: ipvlan ipvlan: master: ${IPVLANMASTERINTERFACE} EOF ``` This case uses the given configuration to create one ipvlan SpiderMultusConfig instances. This resource will automatically generate corresponding Multus NetworkAttachmentDefinition CR for the host's `eth5` network interface. ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -A NAMESPACE NAME AGE kube-system ipvlan-ens5 8d ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -A NAMESPACE NAME AGE kube-system ipvlan-ens5 8d ``` The Spiderpool's CRD, `SpiderIPPool`, introduces the following fields: `nodeName`, `multusName`, and `ips`: `nodeName`: when `nodeName` is not empty, Pods are scheduled on a specific node and attempt to acquire an IP address from the corresponding SpiderIPPool. 
If the Pod's node matches the specified `nodeName`, it successfully obtains an IP. Otherwise, it cannot obtain an IP from that SpiderIPPool. When `nodeName` is empty, Spiderpool does not impose any allocation restrictions on the Pod. `multusName`Spiderpool integrates with Multus CNI to cope with cases involving multiple network interface cards. When `multusName` is not empty, SpiderIPPool utilizes the corresponding Multus CR instance to configure the network for the Pod. If the Multus CR specified by `multusName` does not exist, Spiderpool cannot assign a Multus CR to the"
},
{
"data": "When `multusName` is empty, Spiderpool does not impose any restrictions on the Multus CR used by the Pod. `spec.ips`: based on the information provided about the network interfaces and IP prefix delegation addresses of the AWS EC2 instances, the specified range of values must fall within the auxiliary private IP range of the host associated with the specified `nodeName`. Each value should correspond to a unique instance network interface. Taking into account the network interfaces and associated IP prefix delegation information for each instance in the , the following YAML is used to create IPv4 and IPv6 SpiderIPPool resources for network interface `ens5` on each node. These pools will provide IP addresses for Pods on different nodes: ```shell ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: master-v4 spec: subnet: 172.31.16.0/20 ips: 172.31.28.16-172.31.28.31 gateway: 172.31.16.1 default: true nodeName: [\"master\"] multusName: [\"kube-system/ipvlan-ens5\"] apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: master-v6 spec: subnet: 2406:da1e:c4:ed01::/64 ips: 2406:da1e:c4:ed01:c57d::0-2406:da1e:c4:ed01:c57d::f gateway: 2406:da1e:c4:ed01::1 default: true nodeName: [\"master\"] multusName: [\"kube-system/ipvlan-ens5\"] apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: worker1-v4 spec: subnet: 172.31.32.0/24 ips: 172.31.32.176-172.31.32.191 gateway: 172.31.32.1 default: true nodeName: [\"worker1\"] multusName: [\"kube-system/ipvlan-ens5\"] apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: worker1-v6 spec: subnet: 2406:da1e:c4:ed02::/64 ips: 2406:da1e:c4:ed02:7a2e::0-2406:da1e:c4:ed02:7a2e::f gateway: 2406:da1e:c4:ed02::1 default: true nodeName: [\"worker1\"] multusName: [\"kube-system/ipvlan-ens5\"] EOF ``` The following YAML example creates a Deployment application with the following configuration: `v1.multus-cni.io/default-network`: specify the CNI configuration for the application. In this example, the application is configured to use the ipvlan configuration associated with the `ens5` interface of the host machine. The subnet is selected automatically according to the default SpiderIPPool resource. 
```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: nginx-lb spec: selector: matchLabels: run: nginx-lb replicas: 2 template: metadata: annotations: v1.multus-cni.io/default-network: \"kube-system/ipvlan-ens5\" labels: run: nginx-lb spec: containers: name: nginx-lb image: nginx ports: containerPort: 80 EOF ``` By checking the running status of the Pods, you can observe that one Pod is running on each node, and the Pods are assigned the IP prefix delegation addresses of the first network interface of their respective host machines: ```shell ~# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-lb-64fbbb5fd8-q5wjm 1/1 Running 0 10s 172.31.32.184 worker1 <none> <none> nginx-lb-64fbbb5fd8-wkzf6 1/1 Running 0 10s 172.31.28.31 master <none> <none> ``` Test communication between Pods and their hosts: export NODEMASTERIP=172.31.18.11 export NODEWORKER1IP=172.31.32.18 ~# kubectl exec -it nginx-lb-64fbbb5fd8-wkzf6 -- ping -c 1 ${NODEMASTERIP} ~# kubectl exec -it nginx-lb-64fbbb5fd8-q5wjm -- ping -c 1 ${NODEWORKER1IP} Test communication between Pods across different nodes and subnets: ~# kubectl exec -it nginx-lb-64fbbb5fd8-wkzf6 -- ping -c 1 172.31.32.184 ~# kubectl exec -it nginx-lb-64fbbb5fd8-wkzf6 -- ping6 -c 1 2406:da1e:c4:ed02:7a2e::d Test communication between Pods and ClusterIP: ~# kubectl exec -it nginx-lb-64fbbb5fd8-wkzf6 -- curl -I ${CLUSTER_IP} With the created in the previous section, our VPC's private network can now be accessed from the internet. ```bash kubectl exec -it nginx-lb-64fbbb5fd8-wkzf6 -- curl -I www.baidu.com ``` The AWS Load Balancer product offers two modes: NLB (Network Load Balancer) and ALB (Application Load Balancer), corresponding to Layer 4 and Layer 7,"
},
{
"data": "The aws-load-balancer-controller is an AWS-provided component that integrates Kubernetes with AWS Load Balancer, enabling Kubernetes Service LoadBalancer and Ingress functionality. We will use this component to facilitate load balancing ingress access with AWS infrastructure. The installation demo is based on version `v2.6`. You can follow the steps below and refer to the for aws-load-balancer-controller deployment: Configure `providerID` for cluster nodes. It is necessary to set the `providerID` for each Node in Kubernetes. You can achieve this in either of the following ways: Find the Instance ID for each instance directly in the AWS EC2 dashboard. Use the AWS CLI to query the Instance ID: `aws ec2 describe-instances --query 'Reservations[].Instances[].{Instance:InstanceId}'`. Add necessary IAM role policy for AWS EC2 instances > 1. The aws-load-balancer-controller runs on each node and requires access to AWS NLB/ALB APIs. Therefore, it needs authorization to make requests related to NLB/ALB through AWS IAM. As we are deploying a self-managed cluster, we need to leverage the IAM Role of the nodes themselves to grant this authorization. For more details, refer to the . > 2. `curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.6.0/docs/install/iam_policy.json` > 3. Create a new policy in the AWS IAM Dashboard by using the obtained JSON content and associate it with the IAM Role of your virtual machine instance. Create a public subnet for the availability zone where your AWS EC2 instances are located and apply an auto-discoverable tag. For ALB, you need at least two subnets across different availability zones. For NLB, at least one subnet is required. Refer to the for more details. To enable LB with public access, add the `kubernetes.io/role/elb:1` tag to the public subnet in the availability zone where the instances reside. Regarding cross-VPC access for LB, create a private subnet and apply the `kubernetes.io/role/internal-elb:1` tag. Use the to create the necessary subnets: > - To create a public subnet for an internet-exposed load balancer, go to the AWS VPC Dashboard, select \"Create subnet\" in the Subnets section, and choose the same availability zone as the EC2 instance, and associate the subnet with the Main route table (make sure the default 0.0.0.0/0 route in the Main route table has the Internet Gateway as the next hop; if not, create this route rule). > - Create a new route table in the AWS VPC Dashboard and configure the 0.0.0.0/0 route with the NAT Gateway as the next hop, and the ::/0 route with the Internet Gateway as the next hop. > - To create a private subnet for LB with cross-VPC access, go to the AWS VPC Dashboard Subnets section, select \"Create subnet,\" choose the same availability zone as the EC2 instance, and associate it with the route table created in the previous step. Install aws-load-balancer-controller v2.6 using Helm. 
```shell helm repo add eks https://aws.github.io/eks-charts kubectl apply -k \"github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master\" helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name> ``` Check if aws-load-balancer-controller has been installed already ```shell ~# kubectl get po -n kube-system | grep aws-load-balancer-controller NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-5984487f57-q6qcq 1/1 Running 0 30s aws-load-balancer-controller-5984487f57-wdkxl 1/1 Running 0 30s ``` To provide access to the application created in the previous section , we will create a Kubernetes Service of type"
},
{
"data": "If you have a dual-stack requirement, add the annotation `service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack`: ```shell cat <<EOF | kubectl create -f - apiVersion: v1 kind: Service metadata: name: nginx-svc-lb labels: run: nginx-lb annotations: service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserveclientip.enabled=true spec: type: LoadBalancer ports: port: 80 protocol: TCP selector: run: nginx-lb EOF ``` As shown in the AWS EC2 Load Balancing dashboard, an NLB has been created and is accessible. NLB also supports creating LB in instance mode by just modifying the annotation `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type`. However, instance mode does not support node drift when using `service.spec.externalTraffic=Local`, so it is not recommended. Use the annotation `service.beta.kubernetes.io/load-balancer-source-ranges` to restrict the source IP addresses that can access the NLB. This feature is associated with the annotation `service.beta.kubernetes.io/aws-load-balancer-ip-address-type`. If the default mode is IPv4, the value is `0.0.0.0/0`. For dualstack, the default is `0.0.0.0/0, ::/0`. Use the annotation `service.beta.kubernetes.io/aws-load-balancer-scheme` to specify whether the NLB should be exposed for public access or restricted to cross-VPC communication. The default value is `internal` for cross-VPC communication. The annotation `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserveclientip.enabled=true` enables client source IP preservation ability. Next, we will create a Kubernetes Ingress resource. If you have a dual-stack requirement, add the annotation `alb.ingress.kubernetes.io/ip-address-type: dualstack`: ```shell apiVersion: apps/v1 kind: Deployment metadata: name: nginx-ingress spec: selector: matchLabels: run: nginx-ingress replicas: 2 template: metadata: annotations: v1.multus-cni.io/default-network: \"kube-system/ipvlan-ens5\" labels: run: nginx-ingress spec: containers: name: nginx-ingress image: nginx ports: containerPort: 80 apiVersion: v1 kind: Service metadata: name: nginx-svc-ingress labels: run: nginx-ingress spec: type: NodePort ports: port: 80 protocol: TCP selector: run: nginx-ingress apiVersion: apps/v1 kind: Deployment metadata: name: echoserver spec: selector: matchLabels: app: echoserver replicas: 2 template: metadata: annotations: v1.multus-cni.io/default-network: \"kube-system/ipvlan-ens5\" labels: app: echoserver spec: containers: image: k8s.gcr.io/e2e-test-images/echoserver:2.5 imagePullPolicy: Always name: echoserver ports: containerPort: 8080 apiVersion: v1 kind: Service metadata: name: echoserver spec: ports: port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: app: echoserver apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: k8s-app-ingress annotations: alb.ingress.kubernetes.io/target-type: ip alb.ingress.kubernetes.io/scheme: internet-facing spec: ingressClassName: alb rules: http: paths: path: / pathType: Exact backend: service: name: nginx-svc-ingress port: number: 80 http: paths: path: /echo pathType: Exact backend: service: name: echoserver port: number: 80 ``` As shown in the AWS EC2 Load Balancing dashboard, an ALB has been created and is accessible. ALB also supports creating LB in instance mode by just modifying the annotation `alb.ingress.kubernetes.io/target-type`. 
However, instance mode does not support node drift when using `service.spec.externalTraffic=Local`, so it is not recommended. When using ALB in instance mode, specify the service as NodePort mode. Use the annotation `alb.ingress.kubernetes.io/inbound-cidrs` to restrict the source IP addresses that can access the NLB. This feature is associated with the annotation `alb.ingress.kubernetes.io/ip-address-type`. If the default mode is IPv4, the value is `0.0.0.0/0`. For dualstack, the default is `0.0.0.0/0, ::/0`. Use the annotation `alb.ingress.kubernetes.io/scheme` to specify whether the ALB should be exposed for public access or restricted to cross-VPC communication. The default value is `internal` for cross-VPC communication. To integrate multiple Ingress resources and share the same entry point, configure the annotation `alb.ingress.kubernetes.io/group.name` to specify a name. Ingress resources without this annotation are treated as an \"implicit IngressGroup\" composed by the Ingress itself. To specify the host for the Ingress, refer to to enable it."
}
] |
{
"category": "Runtime",
"file_name": "get-started-aws.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
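A hedged set of verification commands for the resources created in this guide; the label selector and resource names come from the manifests above.

```shell
# IP pools should exist and report address usage as Pods are scheduled.
kubectl get spiderippool

# The SpiderMultusConfig / NetworkAttachmentDefinition pair used by the Pods.
kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system
kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system

# Each nginx replica should be running with an IP from its node's pool.
kubectl get pods -o wide -l run=nginx-lb
```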
[
{
"data": "name: Bug report about: Create a report to help us improve title: '' labels: community, triage assignees: '' All GitHub issues are addressed on a best-effort basis at MinIO's sole discretion. There are no Service Level Agreements (SLA) or Objectives (SLO). Remember our when engaging with MinIO Engineers and the larger community. For urgent issues (e.g. production down, etc.), subscribe to for direct to engineering support. <! Provide a general summary of the issue in the Title above --> <! If you're describing a bug, tell us what should happen --> <! If you're suggesting a change/improvement, tell us how it should work --> <! If describing a bug, tell us what happens instead of the expected behavior --> <! If suggesting a change/improvement, explain the difference from current behavior --> <! Not obligatory, but suggest a fix/reason for the bug, --> <! or ideas how to implement the addition or change --> <! Provide a link to a live example, or an unambiguous set of steps to --> <! reproduce this bug. Include code to reproduce, if relevant --> <! and make sure you have followed https://github.com/minio/minio/tree/release/docs/debugging to capture relevant logs --> <! How has this issue affected you? What are you trying to accomplish? --> <! Providing context helps us come up with a solution that is most useful in the real world --> <!-- Is this issue a regression? (Yes / No) --> <!-- If Yes, optionally please include minio version or commit id or PR# that caused this regression, if you have these details. --> <! Include as many relevant details about the environment you experienced the bug in --> Version used (`minio --version`): Server setup and configuration: Operating System and version (`uname -a`):"
}
] |
{
"category": "Runtime",
"file_name": "ISSUE_TEMPLATE.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This repository contains all the scripts, source code and data. The benchmarking is divided into two groups: Kuasar and Kata. In order to control variables consistently, we use: same versions of cloud-hypervisor and guest-kernel same container image: docker.io/library/ubuntu:latest all pods are created using the hostNetwork mode, which means that CNI plugins are not used the container root filesystem storage driver uses overlayfs Performance Metrics: memory overhead & startup time memory overhead: Start 1/5/10/20/30/50 podsmeasure the Pss of Kuasar's sandboxer process and the total Pss of all `containerd-shim-kata-v2` processes. The Pss is obtained from the smaps_rollup file under the `/proc` dictionary. Measure three times and take the average value. startup time: The container startup time (including pod startup time) measured end-to-end through CRI. The testing is divided into two groups, one launching a single pod and the other launching 30 pods in parallel. Every group runs 500 times and obtains CDF data. | | version | ||-| | Contained | v1.7.0 | | Kata | v2.5.2 | | Operation System | centOS 8 | | Cloud-Hypervisor | v28.2 | | Guest-kernel | 6.1.6 | ```bash ENABLECRISANDBOXES=1 ./contained -l=info ``` ```bash ~/kuasar/vmm/target/x86_64-unknown-linux-musl/release/vmm-sandboxer --listen /run/kuasar-sandboxer.sock --dir /var/lib/kuasar ``` There are a bunch of scripts in `./scripts`. They assume you are in the `./scripts` directory when you run them. ```bash cd ./scripts ``` Time tests ```bash sh boot-serial-kuasar-time.sh ``` In `./data/raw` we could get the `.dat` file as the result of the script we run just now. For example `boot-serial-1000-kuasar-time.dat`. Then we use the command below to generate cdf file: ```bash ./utilgencdf.py ../data/raw/boot-serial-1000-kuasar-time.dat ../data/boot-serial-1000-kuasar-time-cdf.dat ``` Memory tests ```bash sh kuasar-mem-test.sh ``` Measure three times, record the result and take the average value. We use gnuplot to generate graph. Make sure that your `.dat` files and `.gpl` scripts are in the same dictionary. Or you can modify the `.gpl` scripts to specify the path and name of the `.dat` files. Run the `.gpl` script, and you will get the PDF file. Figure1: Cumulative distribution of wall-clock times for starting container in serial, for Kata and Kuasar. The boot time of Kata is about 850~950 ms while Kuasar is only 300~360 ms, which less than half of Kata. Figure2: Cumulative distribution of wall-clock times for starting containers in groups of 50 in parallel, for Kata and Kuasar. The boot time of Kata is about 1600~1800 ms while Kuasar is only 930~1050 ms, which nearly half of Kata. Figure3: Memory overhead (Pss) for Kuasar's sandboxer process and all the processes of Kata-shim. With the number of pods increases, the memory overhead of Kuasar's sandboxer grows very slowly.When the number of pods comes to 50, Pss is only about 15 MB. On the other hand, the average Pss of each pods for Kata is about 18 MB, which significantly higher than Kuasar."
}
] |
{
"category": "Runtime",
"file_name": "Benchmark.md",
"project_name": "Kuasar",
"subcategory": "Container Runtime"
}
|
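A hedged sketch of the Pss measurement described above: it sums the `Pss:` line of `/proc/<pid>/smaps_rollup` over every process whose command line matches a pattern. The process patterns are taken from the document; everything else is illustrative.

```shell
#!/usr/bin/env bash
# Sum Pss (in kB) across all processes whose command line matches a pattern.
pss_sum() {
  local pattern="$1" total=0 pid pss
  for pid in $(pgrep -f "${pattern}"); do
    pss=$(awk '/^Pss:/ {print $2}' "/proc/${pid}/smaps_rollup" 2>/dev/null)
    total=$((total + ${pss:-0}))
  done
  echo "${pattern}: ${total} kB"
}

pss_sum "vmm-sandboxer"            # Kuasar sandboxer process
pss_sum "containerd-shim-kata-v2"  # all Kata shim processes
```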
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Cilium upgrade helper CLI to help upgrade cilium ``` -h, --help help for preflight ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Prepare for DNS Polling upgrades to cilium 1.4 - Migrate KVStore-backed identities to kubernetes CRD-backed identities - Validate Cilium Network Policies deployed in the cluster"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_preflight.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Validating the Kubernetes Swagger API ``` goos: linux goarch: amd64 pkg: github.com/go-openapi/validate cpu: AMD Ryzen 7 5800X 8-Core Processor BenchmarkKubernetesSpec/validatingkubernetes_API-16 1 8549863982 ns/op 7067424936 B/op 59583275 allocs/op ``` ``` go test -bench Spec goos: linux goarch: amd64 pkg: github.com/go-openapi/validate cpu: AMD Ryzen 7 5800X 8-Core Processor BenchmarkKubernetesSpec/validatingkubernetes_API-16 1 4064535557 ns/op 3379715592 B/op 25320330 allocs/op ``` ``` goos: linux goarch: amd64 pkg: github.com/go-openapi/validate cpu: AMD Ryzen 7 5800X 8-Core Processor BenchmarkKubernetesSpec/validatingkubernetes_API-16 1 3758414145 ns/op 2593881496 B/op 17111373 allocs/op ```"
}
] |
{
"category": "Runtime",
"file_name": "BENCHMARK.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
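The numbers above can be reproduced with standard Go tooling; a hedged invocation (run from a checkout of the `github.com/go-openapi/validate` package) that skips unit tests and reports allocations:

```shell
# -run '^$' skips regular tests, -bench selects the Spec benchmarks,
# -benchmem adds the B/op and allocs/op columns seen above.
go test -run '^$' -bench 'Spec' -benchmem ./...
```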
[
{
"data": "layout: global title: Metrics System On Kubernetes For general information about the metrics of Alluxio, refer to . To deploy Alluxio on Kubernetes, refer to . This documentation focus on how to configure and get metrics of different metrics sinks from Alluxio deployed on Kubernetes. The metrics are exposed through the web ports of different components. The web ports of Alluxio masters and workers are opened by default. Alluxio standalone Fuse web port is not opened by default. It can be opened by setting `alluxio.fuse.web.enabled` to true. You can send an HTTP request to an Alluxio process to get a snapshot of the metrics in JSON format. ```shell $ kubectl exec <COMPONENTHOSTNAME> -c <CONTAINERNAME> -- curl 127.0.0.1:<COMPONENTWEBPORT>/metrics/json/ ``` ```shell $ kubectl exec <alluxio-master-x> -c alluxio-master -- curl 127.0.0.1:19999/metrics/json/ ``` ```shell $ kubectl exec <alluxio-worker-xxxxx> -c alluxio-worker -- curl 127.0.0.1:30000/metrics/json/ ``` ```shell $ kubectl exec <alluxio-master-x> -c alluxio-job-master -- curl 127.0.0.1:20002/metrics/json/ ``` ```shell $ kubectl exec <alluxio-worker-xxxxx> -c alluxio-job-worker -- curl 127.0.0.1:30003/metrics/json/ ``` ```shell $ kubectl exec <alluxio-fuse-xxxxx> -- curl 127.0.0.1:49999/metrics/json/ ``` is a monitoring tool that can help to monitor Alluxio metrics changes. `PrometheusMetricsServlet` needs to be enabled for Prometheus in Alluxio. Set the following properties in helm chart `value.yaml` to enable Prometheus metrics sink: ```yaml metrics: enabled: true PrometheusMetricsServlet: true podAnnotations: prometheus.io/scrape: \"true\" prometheus.io/masterWebPort: \"<MASTERWEBPORT>\" prometheus.io/jobMasterWebPort: \"<JOBMASTERWEB_PORT>\" prometheus.io/workerWebPort: \"<WORKERWEBPORT>\" prometheus.io/jobWorkerWebPort: \"<JOBWORKERWEB_PORT>\" prometheus.io/fuseWebPort: \"<FUSEWEBPORT>\" prometheus.io/path: \"/metrics/prometheus/\" ``` Note that similar to HTTP JSON Sink, fuse web port needs to be opened for accessing metrics by setting `alluxio.fuse.web.enabled` to true. For a Prometheus client to get the metrics from Alluxio, configure the `prometheus.yml` of the client. For example, to read the master metrics: ```yaml scrape_configs: job_name: 'alluxio master' kubernetessdconfigs: role: pod namespaces: names: alluxio # Only look at pods in namespace named `alluxio` relabel_configs: sourcelabels: [metakubernetespodlabel_role] action: keep regex: alluxio-master sourcelabels: [metakubernetespodannotationprometheusio_scrape] action: keep regex: true sourcelabels: [metakubernetespodannotationprometheusio_path] action: replace targetlabel: metricspath regex: (.+) sourcelabels: [address, metakubernetespodannotationprometheusio_masterWebPort] action: replace regex: ([^:]+)(?::\\d+)?;(\\d+) replacement: $1:$2 targetlabel: address_ action: labelmap regex:"
},
{
"data": "sourcelabels: [metakubernetes_namespace] action: replace target_label: namespace sourcelabels: [metakubernetespodname] action: replace targetlabel: podname sourcelabels: [metakubernetespodnode_name] action: replace target_label: node sourcelabels: [metakubernetespodlabel_release] action: replace targetlabel: clustername ``` To read other components' metrics, use the respective pod role label and web port label. An example configuration reading worker metrics ```yaml scrape_configs: job_name: 'alluxio worker' kubernetessdconfigs: role: pod namespaces: names: alluxio # Only look at pods in namespace named `alluxio` relabel_configs: sourcelabels: [metakubernetespodlabel_role] action: keep regex: alluxio-worker sourcelabels: [metakubernetespodannotationprometheusio_scrape] action: keep regex: true sourcelabels: [metakubernetespodannotationprometheusio_path] action: replace targetlabel: metricspath regex: (.+) sourcelabels: [address, metakubernetespodannotationprometheusio_workerWebPort] action: replace regex: ([^:]+)(?::\\d+)?;(\\d+) replacement: $1:$2 targetlabel: address_ action: labelmap regex: metakubernetespodlabel(.+) sourcelabels: [metakubernetes_namespace] action: replace target_label: namespace sourcelabels: [metakubernetespodname] action: replace targetlabel: podname sourcelabels: [metakubernetespodnode_name] action: replace target_label: node sourcelabels: [metakubernetespodlabel_release] action: replace targetlabel: clustername ``` An example configuration reading job master metrics ```yaml scrape_configs: job_name: 'alluxio job worker' kubernetessdconfigs: role: pod namespaces: names: alluxio # Only look at pods in namespace named `alluxio` relabel_configs: sourcelabels: [metakubernetespodlabel_role] action: keep regex: alluxio-master sourcelabels: [metakubernetespodannotationprometheusio_scrape] action: keep regex: true sourcelabels: [metakubernetespodannotationprometheusio_path] action: replace targetlabel: metricspath regex: (.+) sourcelabels: [address, metakubernetespodannotationprometheusio_jobMasterWebPort] action: replace regex: ([^:]+)(?::\\d+)?;(\\d+) replacement: $1:$2 targetlabel: address_ action: labelmap regex: metakubernetespodlabel(.+) sourcelabels: [metakubernetes_namespace] action: replace target_label: namespace sourcelabels: [metakubernetespodname] action: replace targetlabel: podname sourcelabels: [metakubernetespodnode_name] action: replace target_label: node sourcelabels: [metakubernetespodlabel_release] action: replace targetlabel: clustername ``` An example configuration reading job worker metrics ```yaml scrape_configs: job_name: 'alluxio job worker' kubernetessdconfigs: role: pod namespaces: names: alluxio # Only look at pods in namespace named `alluxio` relabel_configs: sourcelabels: [metakubernetespodlabel_role] action: keep regex: alluxio-worker sourcelabels: [metakubernetespodannotationprometheusio_scrape] action: keep regex: true sourcelabels: [metakubernetespodannotationprometheusio_path] targetlabel: metricspath regex: (.+) sourcelabels: [address, metakubernetespodannotationprometheusio_jobWorkerWebPort] action: replace regex: ([^:]+)(?::\\d+)?;(\\d+) replacement: $1:$2 targetlabel: address_ action: labelmap regex: metakubernetespodlabel(.+) sourcelabels: [metakubernetes_namespace] action: replace target_label: namespace sourcelabels: [metakubernetespodname] action: replace targetlabel: podname sourcelabels: [metakubernetespodnode_name] action: replace target_label: node sourcelabels: [metakubernetespodlabel_release] action: replace 
targetlabel: clustername ``` You can send an HTTP request to the Prometheus endpoint of an Alluxio process to get a snapshot of the metrics in Prometheus format. ```shell $ kubectl exec <COMPONENTHOSTNAME> -c <CONTAINERNAME> -- curl 127.0.0.1:<COMPONEMTWEBPORT>/metrics/prometheus/ ``` ```shell $ kubectl exec <alluxio-master-x> -c alluxio-master -- curl 127.0.0.1:19999/metrics/prometheus/ ``` ```shell $ kubectl exec <alluxio-worker-xxxxx> -c alluxio-worker -- curl 127.0.0.1:30000/metrics/prometheus/ ``` ```shell $ kubectl exec <alluxio-master-x> -c alluxio-job-master -- curl 127.0.0.1:20002/metrics/prometheus/ ``` ```shell $ kubectl exec <alluxio-worker-xxxxx> -c alluxio-job-worker -- curl 127.0.0.1:30003/metrics/prometheus/ ``` ```shell $ kubectl exec <alluxio-fuse-xxxxx> -- curl 127.0.0.1:49999/metrics/prometheus/ ```"
}
] |
{
"category": "Runtime",
"file_name": "Metrics-On-Kubernetes.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
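If you prefer not to `kubectl exec` into the pods, a hedged alternative is to port-forward a component's web port and query it from your workstation; the pod name and namespace below are placeholders.

```shell
# Forward the Alluxio master web port (19999) to localhost, then pull metrics.
kubectl -n alluxio port-forward pod/alluxio-master-0 19999:19999 &
PF_PID=$!
sleep 2
curl -s http://127.0.0.1:19999/metrics/prometheus/ | head -n 20
kill "${PF_PID}"
```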
[
{
"data": "sidebar_position: 2 sidebar_label: \"Use Disk Volume\" HwameiStor provides another type of data volume known as raw disk data volume. This volume is based on the raw disk present on the node and can be directly mounted for container use. As a result, this type of data volume offers more efficient data read and write performance, thereby fully unleashing the performance of the disk. Create a nginx application and use `hwameistor-disk-volume` volume using the following command: ```console $ cat << EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: containers: name: nginx image: docker.io/library/nginx:latest imagePullPolicy: IfNotPresent volumeMounts: name: data mountPath: /data ports: containerPort: 80 volumes: name: data persistentVolumeClaim: claimName: hwameistor-disk-volume EOF ```"
}
] |
{
"category": "Runtime",
"file_name": "disk.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
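The Pod above assumes a PVC named `hwameistor-disk-volume` already exists. A hedged sketch of such a claim follows; the StorageClass name `hwameistor-storage-disk-hdd` is an assumption and should be replaced with the disk StorageClass installed in your cluster.

```shell
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hwameistor-disk-volume
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: hwameistor-storage-disk-hdd   # assumption: use your disk StorageClass
EOF
```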
[
{
"data": "The changes below were introduced to the Incus API after the 1.0 API was finalized. They are all backward compatible and can be detected by client tools by looking at the `api_extensions` field in `GET /1.0`. A `storage.zfsremovesnapshots` daemon configuration key was introduced. It's a Boolean that defaults to `false` and that when set to `true` instructs Incus to remove any needed snapshot when attempting to restore another. This is needed as ZFS will only let you restore the latest snapshot. A `boot.hostshutdowntimeout` container configuration key was introduced. It's an integer which indicates how long Incus should wait for the container to stop before killing it. Its value is only used on clean Incus daemon shutdown. It defaults to 30s. A `boot.stop.priority` container configuration key was introduced. It's an integer which indicates the priority of a container during shutdown. Containers will shutdown starting with the highest priority level. Containers with the same priority will shutdown in parallel. It defaults to 0. A number of new syscalls related container configuration keys were introduced. `security.syscalls.blacklist_default` <!-- wokeignore:rule=blacklist --> `security.syscalls.blacklist_compat` <!-- wokeignore:rule=blacklist --> `security.syscalls.blacklist` <!-- wokeignore:rule=blacklist --> `security.syscalls.whitelist` <!-- wokeignore:rule=whitelist --> See for how to use them. This indicates support for PKI authentication mode. In this mode, the client and server both must use certificates issued by the same PKI. See for details. A `lastusedat` field was added to the `GET /1.0/containers/<name>` endpoint. It is a timestamp of the last time the container was started. If a container has been created but not started yet, `lastusedat` field will be `1970-01-01T00:00:00Z` Add support for the ETag header on all relevant endpoints. This adds the following HTTP header on answers to GET: ETag (SHA-256 of user modifiable content) And adds support for the following HTTP header on PUT requests: If-Match (ETag value retrieved through previous GET) This makes it possible to GET an Incus object, modify it and PUT it without risking to hit a race condition where Incus or another client modified the object in the meantime. Add support for the HTTP PATCH method. PATCH allows for partial update of an object in place of PUT. Add support for USB hotplug. To use Incus API with all Web Browsers (via SPAs) you must send credentials (certificate) with each XHR (in order for this to happen, you should set flag to each XHR Request). Some browsers like Firefox and Safari can't accept server response without `Access-Control-Allow-Credentials: true` header. To ensure that the server will return a response with that header, set `core.httpsallowedcredentials=true`. This adds support for a `compression_algorithm` property when creating an image (`POST /1.0/images`). Setting this property overrides the server default value (`images.compression_algorithm`). This allows for creating and listing directories via the Incus API, and exports the file type via the X-Incus-type header, which can be either `file` or `directory` right now. This adds support for retrieving CPU time for a running container. Introduces a new server property `storage.zfsuserefquota` which instructs Incus to set the `refquota` property instead of `quota` when setting a size limit on a container. Incus will also then use `usedbydataset` in place of `used` when being queried about disk utilization. 
This effectively controls whether disk usage by snapshots should be considered as part of the container's disk space usage. Adds a new `storage.lvmmountoptions` daemon configuration option which defaults to `discard` and allows the user to set addition mount options for the file system used by the LVM LV. Network management API for Incus. This includes: Addition of the `managed` property on"
},
{
"data": "entries All the network configuration options (see for details) `POST /1.0/networks` (see for details) `PUT /1.0/networks/<entry>` (see for details) `PATCH /1.0/networks/<entry>` (see for details) `DELETE /1.0/networks/<entry>` (see for details) `ipv4.address` property on `nic` type devices (when `nictype` is `bridged`) `ipv6.address` property on `nic` type devices (when `nictype` is `bridged`) `security.mac_filtering` property on `nic` type devices (when `nictype` is `bridged`) Adds a new `used_by` field to profile entries listing the containers that are using it. When a container is created in push mode, the client serves as a proxy between the source and target server. This is useful in cases where the target server is behind a NAT or firewall and cannot directly communicate with the source server and operate in pull mode. Introduces a new Boolean `record-output`, parameter to `/1.0/containers/<name>/exec` which when set to `true` and combined with with `wait-for-websocket` set to `false`, will record stdout and stderr to disk and make them available through the logs interface. The URL to the recorded output is included in the operation metadata once the command is done running. That output will expire similarly to other log files, typically after 48 hours. Adds the following to the REST API: ETag header on GET of a certificate PUT of certificate entries PATCH of certificate entries Adds support `/1.0/containers/<name>/exec` for forwarding signals sent to the client to the processes executing in the container. Currently SIGTERM and SIGHUP are forwarded. Further signals that can be forwarded might be added later. Enables adding GPUs to a container. Introduces a new `image` configuration key space. Read-only, includes the properties of the parent image. Transfer progress is now exported as part of the operation, on both sending and receiving ends. This shows up as a `fs_progress` attribute in the operation metadata. Enables setting the `security.idmap.isolated` and `security.idmap.isolated`, `security.idmap.size`, and `raw.id_map` fields. Add two new keys, `ipv4.firewall` and `ipv6.firewall` which if set to `false` will turn off the generation of `iptables` FORWARDING rules. NAT rules will still be added so long as the matching `ipv4.nat` or `ipv6.nat` key is set to `true`. Rules necessary for `dnsmasq` to work (DHCP/DNS) will always be applied if `dnsmasq` is enabled on the bridge. Introduces `ipv4.routes` and `ipv6.routes` which allow routing additional subnets to an Incus bridge. Storage management API for Incus. 
This includes: `GET /1.0/storage-pools` `POST /1.0/storage-pools` (see for details) `GET /1.0/storage-pools/<name>` (see for details) `POST /1.0/storage-pools/<name>` (see for details) `PUT /1.0/storage-pools/<name>` (see for details) `PATCH /1.0/storage-pools/<name>` (see for details) `DELETE /1.0/storage-pools/<name>` (see for details) `GET /1.0/storage-pools/<name>/volumes` (see for details) `GET /1.0/storage-pools/<name>/volumes/<volume_type>` (see for details) `POST /1.0/storage-pools/<name>/volumes/<volume_type>` (see for details) `GET /1.0/storage-pools/<pool>/volumes/<volume_type>/<name>` (see for details) `POST /1.0/storage-pools/<pool>/volumes/<volume_type>/<name>` (see for details) `PUT /1.0/storage-pools/<pool>/volumes/<volume_type>/<name>` (see for details) `PATCH /1.0/storage-pools/<pool>/volumes/<volume_type>/<name>` (see for details) `DELETE /1.0/storage-pools/<pool>/volumes/<volume_type>/<name>` (see for details) All storage configuration options (see for details) Implements `DELETE` in `/1.0/containers/<name>/files` Implements the `X-Incus-write` header which can be one of `overwrite` or `append`. Introduces `ipv4.dhcp.expiry` and `ipv6.dhcp.expiry` allowing to set the DHCP lease expiry time. Introduces the ability to rename a volume group by setting `storage.lvm.vg_name`. Introduces the ability to rename a thin pool name by setting `storage.thinpool_name`. This adds a new `vlan` property to `macvlan` network devices. When set, this will instruct Incus to attach to the specified VLAN. Incus will look for an existing interface for that VLAN on the host. If one can't be found it will create one itself and then use that as the macvlan parent. Adds a new `aliases` field to `POST /1.0/images` allowing for aliases to be set at image creation/import"
},
{
"data": "This introduces a new `live` attribute in `POST /1.0/containers/<name>`. Setting it to `false` tells Incus not to attempt running state transfer. Introduces a new Boolean `container_only` attribute. When set to `true` only the container will be copied or moved. Introduces a new Boolean `storagezfsclone_copy` property for ZFS storage pools. When set to `false` copying a container will be done through `zfs send` and receive. This will make the target container independent of its source container thus avoiding the need to keep dependent snapshots in the ZFS pool around. However, this also entails less efficient storage usage for the affected pool. The default value for this property is `true`, i.e. space-efficient snapshots will be used unless explicitly set to `false`. Introduces the ability to rename the `unix-block`/`unix-char` device inside container by setting `path`, and the `source` attribute is added to specify the device on host. If `source` is set without a `path`, we should assume that `path` will be the same as `source`. If `path` is set without `source` and `major`/`minor` isn't set, we should assume that `source` will be the same as `path`. So at least one of them must be set. When `rsync` has to be invoked to transfer storage entities setting `rsync.bwlimit` places an upper limit on the amount of socket I/O allowed. This introduces a new `tunnel.NAME.interface` option for networks. This key control what host network interface is used for a VXLAN tunnel. This introduces the `btrfs.mount_options` property for Btrfs storage pools. This key controls what mount options will be used for the Btrfs storage pool. This adds descriptions to entities like containers, snapshots, networks, storage pools and volumes. This allows forcing a refresh for an existing image. This introduces the ability to resize logical volumes by setting the `size` property in the containers root disk device. This introduces a new `security.idmap.base` allowing the user to skip the map auto-selection process for isolated containers and specify what host UID/GID to use as the base. This adds support for transferring symlinks through the file API. X-Incus-type can now be `symlink` with the request content being the target path. This adds the `target` field to `POST /1.0/containers/<name>` which can be used to have the source Incus host connect to the target during migration. Allows use of `vlan` property with `physical` network devices. When set, this will instruct Incus to attach to the specified VLAN on the `parent` interface. Incus will look for an existing interface for that `parent` and VLAN on the host. If one can't be found it will create one itself. Then, Incus will directly attach this interface to the container. This enabled the storage API to delete storage volumes for images from a specific storage pool. This adds support for editing a container `metadata.yaml` and related templates via API, by accessing URLs under `/1.0/containers/<name>/metadata`. It can be used to edit a container before publishing an image from it. This enables migrating stateful container snapshots to new containers. This adds a Ceph storage driver. This adds the ability to specify the Ceph user. This adds the `instance_type` field to the container creation request. Its value is expanded to Incus resource limits. This records the actual source passed to Incus during storage pool creation. This introduces the `ceph.osd.force_reuse` property for the Ceph storage driver. 
When set to `true` Incus will reuse an OSD storage pool that is already in use by another Incus instance. This adds support for Btrfs as a storage volume file system, in addition to `ext4` and"
},
{
"data": "This adds support for querying an Incus daemon for the system resources it has available. This adds support for setting process limits such as maximum number of open files for the container via `nofile`. The format is `limits.kernel.[limit name]`. This adds support for renaming custom storage volumes. This adds support for SR-IOV enabled network devices. This adds support to interact with the container console device and console log. A new `security.guestapi` container configuration key was introduced. The key controls whether the `/dev/incus` interface is made available to the container. If set to `false`, this effectively prevents the container from interacting with the Incus daemon. This adds support for optimized memory transfer during live migration. This adds support to use InfiniBand network devices. This adds a WebSocket API to the `/dev/incus` socket. When connecting to `/1.0/events` over the `/dev/incus` socket, you will now be getting a stream of events over WebSocket. This adds a new `proxy` device type to containers, allowing forwarding of connections between the host and container. Introduces a new `ipv4.dhcp.gateway` network configuration key to set an alternate gateway. This makes it possible to retrieve symlinks using the file API. Adds a new `/1.0/networks/NAME/leases` API endpoint to query the lease database on bridges which run an Incus-managed DHCP server. This adds support for the `required` property for Unix devices. This add the ability to copy and move custom storage volumes locally in the same and between storage pools. Adds a `description` field to all operations. Clustering API for Incus. This includes the following new endpoints (see for details): `GET /1.0/cluster` `UPDATE /1.0/cluster` `GET /1.0/cluster/members` `GET /1.0/cluster/members/<name>` `POST /1.0/cluster/members/<name>` `DELETE /1.0/cluster/members/<name>` The following existing endpoints have been modified: `POST /1.0/containers` accepts a new `target` query parameter `POST /1.0/storage-pools` accepts a new `target` query parameter `GET /1.0/storage-pool/<name>` accepts a new `target` query parameter `POST /1.0/storage-pool/<pool>/volumes/<type>` accepts a new `target` query parameter `GET /1.0/storage-pool/<pool>/volumes/<type>/<name>` accepts a new `target` query parameter `POST /1.0/storage-pool/<pool>/volumes/<type>/<name>` accepts a new `target` query parameter `PUT /1.0/storage-pool/<pool>/volumes/<type>/<name>` accepts a new `target` query parameter `PATCH /1.0/storage-pool/<pool>/volumes/<type>/<name>` accepts a new `target` query parameter `DELETE /1.0/storage-pool/<pool>/volumes/<type>/<name>` accepts a new `target` query parameter `POST /1.0/networks` accepts a new `target` query parameter `GET /1.0/networks/<name>` accepts a new `target` query parameter This adds a new `lifecycle` message type to the events API. This adds the ability to copy and move custom storage volumes between remote. Adds a `nvidia_runtime` configuration option for containers, setting this to `true` will have the NVIDIA runtime and CUDA libraries passed to the container. This adds a new `propagation` option to the disk device type, allowing the configuration of kernel mount propagation. Add container backup support. 
This includes the following new endpoints (see for details): `GET /1.0/containers/<name>/backups` `POST /1.0/containers/<name>/backups` `GET /1.0/containers/<name>/backups/<name>` `POST /1.0/containers/<name>/backups/<name>` `DELETE /1.0/containers/<name>/backups/<name>` `GET /1.0/containers/<name>/backups/<name>/export` The following existing endpoint has been modified: `POST /1.0/containers` accepts the new source type `backup` Adds a `security.guestapi.images` configuration option for containers which controls the availability of a `/1.0/images/FINGERPRINT/export` API over `/dev/incus`. This can be used by a container running nested Incus to access raw images from the host. This enables copying or moving containers between storage pools on the same Incus instance. Add support for both Unix sockets and abstract Unix sockets in proxy devices. They can be used by specifying the address as `unix:/path/to/unix.sock` (normal socket) or `unix:@/tmp/unix.sock` (abstract socket). Supported connections are now: `TCP <-> TCP` `UNIX <-> UNIX` `TCP <-> UNIX` `UNIX <-> TCP` Add support for UDP in proxy"
},
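Hedged examples of the kernel limit, proxy device and lease listing described above; the instance, bridge and port numbers are placeholders:

```sh
# Raise the open-file limit applied to the container.
incus config set c1 limits.kernel.nofile 65536

# Forward host port 8080 to port 80 inside the container via a proxy device.
incus config device add c1 web proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80

# Inspect the DHCP leases of an Incus-managed bridge (GET /1.0/networks/NAME/leases).
incus network list-leases incusbr0
```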
{
"data": "Supported connections are now: `TCP <-> TCP` `UNIX <-> UNIX` `TCP <-> UNIX` `UNIX <-> TCP` `UDP <-> UDP` `TCP <-> UDP` `UNIX <-> UDP` This makes `GET /1.0/cluster` return information about which storage pools and networks are required to be created by joining nodes and which node-specific configuration keys they are required to use when creating them. Likewise the `PUT /1.0/cluster` endpoint now accepts the same format to pass information about storage pools and networks to be automatically created before attempting to join a cluster. Adds support for forwarding traffic for multiple ports. Forwarding is allowed between a range of ports if the port range is equal for source and target (for example `1.2.3.4 0-1000 -> 5.6.7.8 1000-2000`) and between a range of source ports and a single target port (for example `1.2.3.4 0-1000 -> 5.6.7.8 1000`). Adds support for retrieving a network's state. This adds the following new endpoint (see for details): `GET /1.0/networks/<name>/state` This adds support for GID, UID, and mode properties for non-abstract Unix sockets. Enables setting the `security.protection.delete` field which prevents containers from being deleted if set to `true`. Snapshots are not affected by this setting. Adds `security.uid` and `security.gid` for the proxy devices, allowing privilege dropping and effectively changing the UID/GID used for connections to Unix sockets too. This adds a new `core.debug_address` configuration option to start a debugging HTTP server. That server currently includes a `pprof` API and replaces the old `cpu-profile`, `memory-profile` and `print-goroutines` debug options. Adds a `proxy_protocol` key to the proxy device which controls the use of the HAProxy PROXY protocol header. Adds a `bridge.hwaddr` key to control the MAC address of the bridge. This adds optimized UDP/TCP proxying. If the configuration allows, proxying will be done via `iptables` instead of proxy devices. This introduces the `ipv4.nat.order` and `ipv6.nat.order` configuration keys for Incus bridges. Those keys control whether to put the Incus rules before or after any pre-existing rules in the chain. This introduces a new `recursion=2` mode for `GET /1.0/containers` which allows for the retrieval of all container structs, including the state, snapshots and backup structs. This effectively allows for to get all it needs in one query. This introduces a new `backups.compression_algorithm` configuration key which allows configuration of backup compression. This introduces a few extra configuration keys when using `nvidia.runtime` and the `libnvidia-container` library. Those keys translate pretty much directly to the matching NVIDIA container environment variables: `nvidia.driver.capabilities` => `NVIDIADRIVERCAPABILITIES` `nvidia.require.cuda` => `NVIDIAREQUIRECUDA` `nvidia.require.driver` => `NVIDIAREQUIREDRIVER` Add support for storage volume snapshots. They work like container snapshots, only for volumes. This adds the following new endpoint (see for details): `GET /1.0/storage-pools/<pool>/volumes/<type>/<name>/snapshots` `POST /1.0/storage-pools/<pool>/volumes/<type>/<name>/snapshots` `GET /1.0/storage-pools/<pool>/volumes/<type>/<volume>/snapshots/<name>` `PUT /1.0/storage-pools/<pool>/volumes/<type>/<volume>/snapshots/<name>` `POST /1.0/storage-pools/<pool>/volumes/<type>/<volume>/snapshots/<name>` `DELETE /1.0/storage-pools/<pool>/volumes/<type>/<volume>/snapshots/<name>` Introduces a new `security.unmapped` Boolean on storage volumes. 
Setting it to `true` will flush the current map on the volume and prevent any further idmap tracking and remapping on the volume. This can be used to share data between isolated containers after attaching it to the container which requires write access. Add a new project API, supporting creation, update and deletion of projects. Projects can hold containers, profiles or images at this point and let you get a separate view of your Incus resources by switching to it. This adds a new `tunnel.NAME.ttl` network configuration option which makes it possible to raise the TTL on VXLAN tunnels. This adds support for incremental container copy. When copying a container using the `--refresh` flag, only the missing or outdated files will be copied over. Should the target container not exist yet, a normal copy operation is"
},
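A sketch of port-range proxying with privilege dropping and of the deletion protection flag; names and port ranges are hypothetical:

```sh
# Forward a whole TCP port range, with the proxy process dropping to UID/GID 1000.
incus config device add c1 ports proxy \
    listen=tcp:0.0.0.0:1000-2000 connect=tcp:127.0.0.1:1000-2000 \
    security.uid=1000 security.gid=1000

# Prevent the container from being deleted (snapshots are unaffected).
incus config set c1 security.protection.delete true
```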
{
"data": "As the name implies, the `vendorid` field on USB devices attached to containers has now been made optional, allowing for all USB devices to be passed to a container (similar to what's done for GPUs). This adds support for snapshot scheduling. It introduces three new configuration keys: `snapshots.schedule`, `snapshots.schedule.stopped`, and `snapshots.pattern`. Snapshots can be created automatically up to every minute. Snapshot schedule can be configured by a comma-separated list of schedule aliases. Available aliases are `<@hourly> <@daily> <@midnight> <@weekly> <@monthly> <@annually> <@yearly> <@startup>` for instances, and `<@hourly> <@daily> <@midnight> <@weekly> <@monthly> <@annually> <@yearly>` for storage volumes. Introduces a `project` field to the container source JSON object, allowing for copy/move of containers between projects. This adds support for configuring a server network address which differs from the REST API client network address. When bootstrapping a new cluster, clients can set the new `cluster.https_address` configuration key to specify the address of the initial server. When joining a new server, clients can set the `core.https_address` configuration key of the joining server to the REST API address the joining server should listen at, and set the `server_address` key in the `PUT /1.0/cluster` API to the address the joining server should use for clustering traffic (the value of `server_address` will be automatically copied to the `cluster.https_address` configuration key of the joining server). Enable image replication across the nodes in the cluster. A new `cluster.imagesminimalreplica` configuration key was introduced can be used to specify to the minimal numbers of nodes for image replication. Enables setting the `security.protection.shift` option which prevents containers from having their file system shifted. This adds support for snapshot expiration. The task is run minutely. The configuration option `snapshots.expiry` takes an expression in the form of `1M 2H 3d 4w 5m 6y` (1 minute, 2 hours, 3 days, 4 weeks, 5 months, 6 years), however not all parts have to be used. Snapshots which are then created will be given an expiry date based on the expression. This expiry date, defined by `expires_at`, can be manually edited using the API or . Snapshots with a valid expiry date will be removed when the task in run. Expiry can be disabled by setting `expires_at` to an empty string or `0001-01-01T00:00:00Z` (zero time). This is the default if `snapshots.expiry` is not set. This adds the following new endpoint (see for details): `PUT /1.0/containers/<name>/snapshots/<name>` Adds `expires_at` to container creation, allowing for override of a snapshot's expiry at creation time. Introduces a `Location` field in the leases list. This is used when querying a cluster to show what node a particular lease was found on. Add Socket field to CPU resources in case we get out of order socket information. Add a new GPU struct to the server resources, listing all usable GPUs on the system. Shows the NUMA node for all CPUs and GPUs. Exposes the state of optional kernel features through the server environment. This introduces a new internal `volatile.idmap.current` key which is used to track the current mapping for the container. 
This effectively gives us: `volatile.last_state.idmap` => On-disk idmap `volatile.idmap.current` => Current kernel map `volatile.idmap.next` => Next on-disk idmap This is required to implement environments where the on-disk map isn't changed but the kernel map is (e.g. `idmapped mounts`). Expose the location of the generation of API events. This allows migrating storage volumes including their snapshots. This introduces the `ipv4.nat.address` and `ipv6.nat.address` configuration keys for Incus bridges. Those keys control the source address used for outbound traffic from the bridge. This introduces the `ipv4.routes` and `ipv6.routes` properties on `nic` type devices. This allows adding static routes on host to container's"
},
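Snapshot scheduling and expiry as described above, shown as a sketch against a hypothetical container `c1`:

```sh
# Take a snapshot every hour, even while the container is stopped.
incus config set c1 snapshots.schedule @hourly
incus config set c1 snapshots.schedule.stopped true

# Name snapshots auto-0, auto-1, ... and expire them after two weeks.
incus config set c1 snapshots.pattern auto-%d
incus config set c1 snapshots.expiry 2w
```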
{
"data": "This makes it possible to do a normal `POST /1.0/containers` to copy a container between cluster nodes with Incus internally detecting whether a migration is required. If the kernel supports `seccomp`-based syscall interception Incus can be notified by a container that a registered syscall has been performed. Incus can then decide to trigger various actions. This introduces the `lxc_features` section output from the command via the `GET /1.0` route. It outputs the result of checks for key features being present in the underlying LXC library. This introduces the `ipvlan` `nic` device type. This introduces VLAN (`vlan`) and MAC filtering (`security.mac_filtering`) support for SR-IOV devices. Add support for CephFS as a storage pool driver. This can only be used for custom volumes, images and containers should be on Ceph (RBD) instead. This introduces container IP filtering (`security.ipv4filtering` and `security.ipv6filtering`) support for `bridged` NIC devices. Rework the resources API at `/1.0/resources`, especially: CPU Fix reporting to track sockets, cores and threads Track NUMA node per core Track base and turbo frequency per socket Track current frequency per core Add CPU cache information Export the CPU architecture Show online/offline status of threads Memory Add huge-pages tracking Track memory consumption per NUMA node too GPU Split DRM information to separate struct Export device names and nodes in DRM struct Export device name and node in NVIDIA struct Add SR-IOV VF tracking Adds support for specifying `User`, `Group` and `Cwd` during `POST /1.0/containers/NAME/exec`. Adds the `security.syscalls.intercept.*` configuration keys to control what system calls will be intercepted by Incus and processed with elevated permissions. Adds the `shift` property on `disk` devices which controls the use of the `idmapped mounts` overlay. Introduces a new `security.shifted` Boolean on storage volumes. Setting it to `true` will allow multiple isolated containers to attach the same storage volume while keeping the file system writable from all of them. This makes use of `idmapped mounts` as an overlay file system. Export InfiniBand character device information (`issm`, `umad`, `uverb`) as part of the resources API. This introduces two new configuration keys `storage.images_volume` and `storage.backups_volume` to allow for a storage volume on an existing pool be used for storing the daemon-wide images and backups artifacts. This introduces the concept of instances, of which currently the only type is `container`. This introduces support for a new Type field on images, indicating what type of images they are. Extends the disk resource API struct to include: Proper detection of SATA devices (type) Device path Drive RPM Block size Firmware version Serial number This adds a new `roles` attribute to cluster entries, exposing a list of roles that the member serves in the cluster. This allows for editing of the expiry date on images. Adds a `FirmwareVersion` field to network card entries. This adds support for a `compression_algorithm` property when creating a backup (`POST /1.0/containers/<name>/backups`). Setting this property overrides the server default value (`backups.compression_algorithm`). This adds support for an optional argument (`ceph.osd.datapoolname`) when creating storage pools using Ceph RBD, when this argument is used the pool will store it's actual data in the pool specified with `datapoolname` while keeping the metadata in the pool specified by `pool_name`. 
Adds the `security.syscalls.intercept.mount`, `security.syscalls.intercept.mount.allowed`, and `security.syscalls.intercept.mount.shift` configuration keys to control whether and how the `mount` system call will be intercepted by Incus and processed with elevated permissions. Adds support for importing/exporting of images/backups using SquashFS file system format. This adds support for passing in raw mount options for disk devices. This introduces the `routed` `nic` device type. Adds the `security.syscalls.intercept.mount.fuse`"
},
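The exec identity options and the disk `shift` property, sketched with placeholder paths and IDs:

```sh
# Run a command inside the container as UID/GID 1000 with a specific working directory.
incus exec c1 --user 1000 --group 1000 --cwd /tmp -- id

# Share a host directory and shift ownership to match the container's idmap.
incus config device add c1 data disk source=/srv/data path=/data shift=true
```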
{
"data": "It can be used to redirect file-system mounts to their fuse implementation. To this end, set e.g. `security.syscalls.intercept.mount.fuse=ext4=fuse2fs`. This allows for existing a Ceph RBD or CephFS to be directly connected to an Incus container. Add virtual machine support. Allows a list of profiles to be applied to an image when launching a new container. This adds a new `architecture` attribute to cluster members which indicates a cluster member's architecture. Add a new `device_id` field in the disk entries on the resources API. This adds the ability to use LVM stripes on normal volumes and thin pool volumes. Adds a `boot.priority` property on NIC and disk devices to control the boot order. Adds support for Unix char and block device hotplugging. Adds support for filtering the result of a GET request for instances and images. Adds support for the `network` property on a NIC device to allow a NIC to be linked to a managed network. This allows it to inherit some of the network's settings and allows better validation of IP settings. Support specifying a custom values for database voters and standbys. The new `cluster.maxvoters` and `cluster.maxstandby` configuration keys were introduced to specify to the ideal number of database voter and standbys. Adds the `Firewall` property to the `ServerEnvironment` struct indicating the firewall driver being used. Introduces the ability to create a storage pool from an existing non-empty volume group. This option should be used with care, as Incus can then not guarantee that volume name conflicts won't occur with non-Incus created volumes in the same volume group. This could also potentially lead to Incus deleting a non-Incus volume should name conflicts occur. When mount syscall interception is enabled and `hugetlbfs` is specified as an allowed file system type Incus will mount a separate `hugetlbfs` instance for the container with the UID and GID mount options set to the container's root UID and GID. This ensures that processes in the container can use huge pages. This allows to limit the number of huge pages a container can use through the `hugetlb` cgroup. This means the `hugetlb` cgroup needs to be available. Note, that limiting huge pages is recommended when intercepting the mount syscall for the `hugetlbfs` file system to avoid allowing the container to exhaust the host's huge pages resources. This introduces the `ipv4.gateway` and `ipv6.gateway` NIC configuration keys that can take a value of either `auto` or `none`. The default value for the key if unspecified is `auto`. This will cause the current behavior of a default gateway being added inside the container and the same gateway address being added to the host-side interface. If the value is set to `none` then no default gateway nor will the address be added to the host-side interface. This allows multiple routed NIC devices to be added to a container. This introduces support for the `restricted` configuration key on project, which can prevent the use of security-sensitive features in a project. This allows custom volume snapshots to expiry. Expiry dates can be set individually, or by setting the `snapshots.expiry` configuration key on the parent custom volume which then automatically applies to all created snapshots. This adds support for custom volume snapshot scheduling. It introduces two new configuration keys: `snapshots.schedule` and `snapshots.pattern`. Snapshots can be created automatically up to every minute. 
This allows for checking client certificates trusted by the provided CA (`server.ca`). It can be enabled by setting `core.trust_ca_certificates` to `true`. If enabled, it will perform the check, and bypass the trusted password if"
},
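Mount-syscall interception with a FUSE redirection, plus a NIC tied to a managed network with a boot priority; all names are placeholders:

```sh
# Intercept the mount syscall and redirect ext4 mounts to fuse2fs.
incus config set c1 security.syscalls.intercept.mount true
incus config set c1 security.syscalls.intercept.mount.fuse ext4=fuse2fs

# NIC inheriting its settings from a managed network, booted first in a VM.
incus config device add v1 eth0 nic network=incusbr0 boot.priority=10
```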
{
"data": "An exception will be made if the connecting client certificate is in the provided CRL (`ca.crl`). In this case, it will ask for the password. This adds a new `size` field to the output of `/1.0/instances/<name>/snapshots/<snapshot>` which represents the disk usage of the snapshot. This adds a writable endpoint for cluster members, allowing the editing of their roles. This introduces the `ipv4.hostaddress` and `ipv6.hostaddress` NIC configuration keys that can be used to control the host-side `veth` interface's IP addresses. This can be useful when using multiple routed NICs at the same time and needing a predictable next-hop address to use. This also alters the behavior of `ipv4.gateway` and `ipv6.gateway` NIC configuration keys. When they are set to `auto` the container will have its default gateway set to the value of `ipv4.hostaddress` or `ipv6.hostaddress` respectively. The default values are: `ipv4.host_address`: `169.254.0.1` `ipv6.host_address`: `fe80::1` This is backward compatible with the previous default behavior. This introduces the `ipv4.gateway` and `ipv6.gateway` NIC configuration keys that can take a value of either `auto` or `none`. The default value for the key if unspecified is `auto`. This will cause the current behavior of a default gateway being added inside the container and the same gateway address being added to the host-side interface. If the value is set to `none` then no default gateway nor will the address be added to the host-side interface. This allows multiple IPVLAN NIC devices to be added to a container. This adds USB and PCI devices to the output of `/1.0/resources`. This indicates that the `numa_node` field is now recorded per-thread rather than per core as some hardware apparently puts threads in different NUMA domains. Exposes the `die_id` information on each core. This introduces two new fields in `/1.0`, `os` and `os_version`. Those are taken from the OS-release data on the system. This introduces the `ipv4.hosttable` and `ipv6.hosttable` NIC configuration keys that can be used to add static routes for the instance's IPs to a custom policy routing table by ID. This introduces the `ipv4.hosttable` and `ipv6.hosttable` NIC configuration keys that can be used to add static routes for the instance's IPs to a custom policy routing table by ID. This introduces the `mode` NIC configuration key that can be used to switch the `ipvlan` mode into either `l2` or `l3s`. If not specified, the default value is `l3s` (which is the old behavior). In `l2` mode the `ipv4.address` and `ipv6.address` keys will accept addresses in either CIDR or singular formats. If singular format is used, the default subnet size is taken to be /24 and /64 for IPv4 and IPv6 respectively. In `l2` mode the `ipv4.gateway` and `ipv6.gateway` keys accept only a singular IP address. This adds system information to the output of `/1.0/resources`. This adds the push and relay modes to image copy. It also introduces the following new endpoint: `POST 1.0/images/<fingerprint>/export` This introduces the `dns.search` configuration option on networks. This introduces `limits.ingress`, `limits.egress` and `limits.max` for routed NICs. This introduces the `vlan` and `vlan.tagged` settings for `bridged` NICs. `vlan` specifies the non-tagged VLAN to join, and `vlan.tagged` is a comma-delimited list of tagged VLANs to join. This adds a `bridge` and `bond` section to the `/1.0/networks/NAME/state` API. Those contain additional state information relevant to those particular types. 
Bond: Mode Transmit hash Up delay Down delay MII frequency MII state Lower devices Bridge: ID Forward delay STP mode Default VLAN VLAN filtering Upper devices Add an `Isolated` property on CPU threads to indicate if the thread is physically `Online` but is configured not to accept"
},
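A sketch of a routed NIC with a predictable host-side address and of VLAN membership on a bridged NIC. It assumes `eth1`/`eth0` are instance-local devices (not inherited from a profile) and that multiple `key=value` pairs are accepted by `config device set`:

```sh
# Routed NIC with a fixed next-hop address on the host side.
incus config device add c1 eth1 nic nictype=routed parent=eth0 \
    ipv4.address=192.0.2.10 ipv4.host_address=169.254.0.1

# Bridged NIC: untagged VLAN 100 plus tagged VLANs 200 and 300.
incus config device set c1 eth0 vlan=100 vlan.tagged=200,300
```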
{
"data": "This extension indicates that `UsedBy` should now be consistent with suitable `?project=` and `?target=` when appropriate. The 5 entities that have `UsedBy` are: Profiles Projects Networks Storage pools Storage volumes This adds support for creating and attaching custom block volumes to instances. It introduces the new `--type` flag when creating custom storage volumes, and accepts the values `fs` and `block`. This extension adds a new `failure_domain` field to the `PUT /1.0/cluster/<node>` API, which can be used to set the failure domain of a node. A number of new syscalls related container configuration keys were updated. `security.syscalls.deny_default` `security.syscalls.deny_compat` `security.syscalls.deny` `security.syscalls.allow` Expose available mediated device profiles and devices in `/1.0/resources`. This extends the `/1.0/console` endpoint to take a `?type=` argument, which can be set to `console` (default) or `vga` (the new type added by this extension). When doing a `POST` to `/1.0/<instance name>/console?type=vga` the data WebSocket returned by the operation in the metadata field will be a bidirectional proxy attached to a SPICE Unix socket of the target virtual machine. Add `limits.disk` to the available project configuration keys. If set, it limits the total amount of disk space that instances volumes, custom volumes and images volumes can use in the project. Adds support for additional network type `macvlan` and adds `parent` configuration key for this network type to specify which parent interface should be used for creating NIC device interfaces on top of. Also adds `network` configuration key support for `macvlan` NICs to allow them to specify the associated network of the same type that they should use as the basis for the NIC device. Adds support for additional network type `sriov` and adds `parent` configuration key for this network type to specify which parent interface should be used for creating NIC device interfaces on top of. Also adds `network` configuration key support for `sriov` NICs to allow them to specify the associated network of the same type that they should use as the basis for the NIC device. This adds support to intercept the `bpf` syscall in containers. Specifically, it allows to manage device cgroup `bpf` programs. Adds support for additional network type `ovn` with the ability to specify a `bridge` type network as the `parent`. Introduces a new NIC device type of `ovn` which allows the `network` configuration key to specify which `ovn` type network they should connect to. Also introduces two new global configuration keys that apply to all `ovn` networks and NIC devices: `network.ovn.integration_bridge` - the OVS integration bridge to use. `network.ovn.northbound_connection` - the OVN northbound database connection string. Adds the `features.networks` configuration key to projects and the ability for a project to hold networks. Adds the `restricted.networks.uplinks` project configuration key to indicate (as a comma-delimited list) which networks the networks created inside the project can use as their uplink network. Add custom volume backup support. 
This includes the following new endpoints (see for details): `GET /1.0/storage-pools/<pool>/<type>/<volume>/backups` `POST /1.0/storage-pools/<pool>/<type>/<volume>/backups` `GET /1.0/storage-pools/<pool>/<type>/<volume>/backups/<name>` `POST /1.0/storage-pools/<pool>/<type>/<volume>/backups/<name>` `DELETE /1.0/storage-pools/<pool>/<type>/<volume>/backups/<name>` `GET /1.0/storage-pools/<pool>/<type>/<volume>/backups/<name>/export` The following existing endpoint has been modified: `POST /1.0/storage-pools/<pool>/<type>/<volume>` accepts the new source type `backup` Adds `Name` field to `InstanceBackupArgs` to allow specifying a different instance name when restoring a backup. Adds `Name` and `PoolName` fields to `StoragePoolVolumeBackupArgs` to allow specifying a different volume name when restoring a custom volume backup. Adds `rsync.compression` configuration key to storage pools. This key can be used to disable compression in `rsync` while migrating storage pools. Adds support for additional network type `physical` that can be used as an uplink for `ovn`"
},
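Custom block volumes and an OVN network, sketched with hypothetical names; note that current CLI syntax references the uplink bridge through the `network` key:

```sh
# Create a custom block volume and attach it to a virtual machine.
incus storage volume create default blockvol --type=block size=10GiB
incus storage volume attach default blockvol v1

# Create an OVN network using an existing managed bridge as its uplink.
incus network create ovn0 --type=ovn network=incusbr0
```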
{
"data": "The interface specified by `parent` on the `physical` network will be connected to the `ovn` network's gateway. Adds support for `ovn` networks to use external subnets from uplink networks. Introduces the `ipv4.routes` and `ipv6.routes` setting on `physical` networks that defines the external routes allowed to be used in child OVN networks in their `ipv4.routes.external` and `ipv6.routes.external` settings. Introduces the `restricted.networks.subnets` project setting that specifies which external subnets are allowed to be used by OVN networks inside the project (if not set then all routes defined on the uplink network are allowed). Adds support for `ipv4.nat` and `ipv6.nat` settings on `ovn` networks. When creating the network if these settings are unspecified, and an equivalent IP address is being generated for the subnet, then the appropriate NAT setting will added set to `true`. If the setting is missing then the value is taken as `false`. Removes the settings `ipv4.routes.external` and `ipv6.routes.external` from `ovn` networks. The equivalent settings on the `ovn` NIC type can be used instead for this, rather than having to specify them both at the network and NIC level. This introduces the `tpm` device type. This introduces `rebase` as a value for `zfs.clone_copy` causing Incus to track down any `image` dataset in the ancestry line and then perform send/receive on top of that. This adds support for virtual GPUs. It introduces the `mdev` configuration key for GPU devices which takes a supported `mdev` type, e.g. `i915-GVTgV54`. This adds the `IOMMUGroup` field for PCI entries in the resources API. Adds the `usb_address` field to the network card entries in the resources API. Adds the `usbaddress` and `pciaddress` fields to the disk entries in the resources API. Adds `ovn.ingress_mode` setting for `physical` networks. Sets the method that OVN NIC external IPs will be advertised on uplink network. Either `l2proxy` (proxy ARP/NDP) or `routed`. Adds `ipv4.dhcp` and `ipv6.dhcp` settings for `ovn` networks. Allows DHCP (and RA for IPv6) to be disabled. Defaults to on. Adds `ipv4.routes.anycast` and `ipv6.routes.anycast` Boolean settings for `physical` networks. Defaults to `false`. Allows OVN networks using physical network as uplink to relax external subnet/route overlap detection when used with `ovn.ingress_mode=routed`. Adds `limits.instances` to the available project configuration keys. If set, it limits the total number of instances (VMs and containers) that can be used in the project. This adds a `vlan` section to the `/1.0/networks/NAME/state` API. Those contain additional state information relevant to VLAN interfaces: `lower_device` `vid` This adds the `security.port_isolation` field for bridged NIC instances. Adds the following endpoint for bulk state change (see for details): `PUT /1.0/instances` This adds an optional `gvrp` property to `macvlan` and `physical` networks, and to `ipvlan`, `macvlan`, `routed` and `physical` NIC devices. When set, this specifies whether the VLAN should be registered using GARP VLAN Registration Protocol. Defaults to `false`. This adds a `pool` field to the `POST /1.0/instances/NAME` API, allowing for easy move of an instance root disk between pools. This adds support for SR-IOV enabled GPUs. It introduces the `sriov` GPU type property. This introduces the `pci` device type. Add new `/1.0/storage-pools/POOL/volumes/VOLUME/state` API endpoint to get usage data on a volume. 
This adds the concept of network ACLs to the API under the endpoint prefix `/1.0/network-acls`. Adds a new `migration.stateful` configuration key. This introduces the `size.state` device configuration key on `disk` devices. Adds a new `ceph.rbd.features` configuration key on storage pools to control the RBD features used for new volumes. Adds new `backups.compression_algorithm` and `images.compression_algorithm` configuration keys which allow configuration of backup and image compression per-project. Add new"
},
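Project limits, bridged port isolation and a network ACL, as a sketch (project, instance and ACL names are placeholders):

```sh
# Cap the number of instances allowed in a project.
incus project set myproject limits.instances 10

# Isolate a bridged NIC from other ports on the same bridge.
incus config device set c1 eth0 security.port_isolation=true

# Create a network ACL, managed under /1.0/network-acls.
incus network acl create myacl
```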
{
"data": "configuration key to projects, allowing for set number of days after which an unused cached remote image will be flushed. Adds a new `restricted` property to certificates in the API as well as `projects` holding a list of project names that the certificate has access to. Adds a new `security.acls` property to OVN networks and OVN NICs, allowing Network ACLs to be applied. Adds new `images.autoupdatecached` and `images.autoupdateinterval` configuration keys which allows configuration of images auto update in projects Adds new `restricted.cluster.target` configuration key to project which prevent the user from using --target to specify what cluster member to place a workload on or the ability to move a workload between members. Adds new `images.default_architecture` global configuration key and matching per-project key which lets user tell Incus what architecture to go with when no specific one is specified as part of the image request. Adds new `security.acls.default.{in,e}gress.action` and `security.acls.default.{in,e}gress.logged` configuration keys for OVN networks and NICs. This replaces the removed ACL `default.action` and `default.logged` keys. This adds support for NVIDIA MIG. It introduces the `mig` GPU type and associated configuration keys. Adds an API endpoint to get current resource allocations in a project. Accessible at API `GET /1.0/projects/<name>/state`. Adds a new `security.acls` configuration key to `bridge` networks, allowing Network ACLs to be applied. Also adds `security.acls.default.{in,e}gress.action` and `security.acls.default.{in,e}gress.logged` configuration keys for specifying the default behavior for unmatched traffic. Warning API for Incus. This includes the following endpoints (see for details): `GET /1.0/warnings` `GET /1.0/warnings/<uuid>` `PUT /1.0/warnings/<uuid>` `DELETE /1.0/warnings/<uuid>` Adds new `restricted.backups` and `restricted.snapshots` configuration keys to project which prevents the user from creation of backups and snapshots. Adds `POST /1.0/cluster/members` API endpoint for requesting a join token used when adding new cluster members without using the trust password. Adds an editable description to the cluster members. This introduces support for `core.httpstrustedproxy` which has Incus parse a HAProxy style connection header on such connections and if present, will rewrite the request's source address to that provided by the proxy server. Adds `PUT /1.0/cluster/certificate` endpoint for updating the cluster certificate across the whole cluster This adds support for copy/move custom storage volumes between projects. This modifies the `driver` output for the `/1.0` endpoint to only include drivers which are actually supported and operational on the server (as opposed to being included in Incus but not operational on the server). This adds supported storage driver info to server environment info. Adds a new address field to `lifecycle` requestor. Add a new `USBAddress` (`usb_address`) field to `ResourcesGPUCard` (GPU entries) in the resources API. Adds `POST /1.0/cluster/members/<name>/state` endpoint for evacuating and restoring cluster members. It also adds the configuration keys `cluster.evacuate` and `volatile.evacuate.origin` for setting the evacuation method (`auto`, `stop` or `migrate`) and the origin of any migrated instance respectively. This introduces the `ipv4.nat.address` and `ipv6.nat.address` configuration keys for Incus `ovn` networks. 
Those keys control the source address used for outbound traffic from the OVN virtual network. These keys can only be specified when the OVN network's uplink network has `ovn.ingress_mode=routed`. This introduces support for Incus acting as a BGP router to advertise routes to `bridge` and `ovn` networks. This comes with the addition to global configuration of: `core.bgp_address` `core.bgp_asn` `core.bgp_routerid` The following network configurations keys (`bridge` and `physical`): `bgp.peers.<name>.address` `bgp.peers.<name>.asn` `bgp.peers.<name>.password` The `nexthop` configuration keys (`bridge`): `bgp.ipv4.nexthop` `bgp.ipv6.nexthop` And the following NIC-specific configuration keys (`bridged` NIC type): `ipv4.routes.external` `ipv6.routes.external` This introduces the networking address forward functionality. Allowing for `bridge` and `ovn` networks to define external IP addresses that can be forwarded to internal IP(s) inside their respective networks. Adds support for refresh during volume"
},
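Applying an ACL to an OVN network, evacuating a cluster member and creating an address forward; server, network and address values are placeholders:

```sh
# Apply a network ACL to an OVN network.
incus network set ovn0 security.acls myacl

# Evacuate a cluster member before maintenance, then bring it back.
incus cluster evacuate server2
incus cluster restore server2

# Forward an external address into the network (network address forwards).
incus network forward create incusbr0 198.51.100.10
```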
{
"data": "This adds the received and sent errors as well as inbound and outbound dropped packets to the network counters. This adds metrics to Incus. It returns metrics of running instances using the OpenMetrics format. This includes the following endpoints: `GET /1.0/metrics` Adds a new `project` field to `POST /1.0/images` allowing for the source project to be set at image copy time. Adds new `config` property to cluster members with configurable key/value pairs. This adds network peering to allow traffic to flow between OVN networks without leaving the OVN subsystem. Adds new `linux.sysctl.*` configuration keys allowing users to modify certain kernel parameters within containers. Introduces a built-in DNS server and zones API to provide DNS records for Incus instances. This introduces the following server configuration key: `core.dns_address` The following network configuration key: `dns.zone.forward` `dns.zone.reverse.ipv4` `dns.zone.reverse.ipv6` And the following project configuration key: `restricted.networks.zones` A new REST API is also introduced to manage DNS zones: `/1.0/network-zones` (GET, POST) `/1.0/network-zones/<name>` (GET, PUT, PATCH, DELETE) Adds new `acceleration` configuration key to OVN NICs which can be used for enabling hardware offloading. It takes the values `none` or `sriov`. This adds support for renewing a client's own trust certificate. This adds a `project` field to the `POST /1.0/instances/NAME` API, allowing for easy move of an instance between projects. This adds support for moving storage volume between projects. This adds a new `cloud-init` configuration key namespace which contains the following keys: `cloud-init.vendor-data` `cloud-init.user-data` `cloud-init.network-config` It also adds a new endpoint `/1.0/devices` to `/dev/incus` which shows an instance's devices. This introduces `network.nat` as a configuration option on network zones (DNS). It defaults to the current behavior of generating records for all instances NICs but if set to `false`, it will instruct Incus to only generate records for externally reachable addresses. Adds new `database-leader` role which is assigned to cluster leader. This adds support for displaying instances from all projects. Add support for grouping cluster members. This introduces the following new endpoints: `/1.0/cluster/groups` (GET, POST) `/1.0/cluster/groups/<name>` (GET, POST, PUT, PATCH, DELETE) The following project restriction is added: `restricted.cluster.groups` Adds a new `ceph.rbd.du` Boolean on Ceph storage pools which allows disabling the use of the potentially slow `rbd du` calls. This introduces a new `recursion=1` mode for `GET /1.0/instances/{name}` which allows for the retrieval of all instance structs, including the state, snapshots and backup structs. This adds a new `security.agent.metrics` Boolean which defaults to `true`. When set to `false`, it doesn't connect to the `incus-agent` for metrics and other state information, but relies on stats from QEMU. Adds support for the new MIG UUID format used by NVIDIA `470+` drivers (for example, `MIG-74c6a31a-fde5-5c61-973b-70e12346c202`), the `MIG-` prefix can be omitted This extension supersedes old `mig.gi` and `mig.ci` parameters which are kept for compatibility with old drivers and cannot be set together. Expose the project an API event belongs to. This adds `live-migrate` as a configuration option to `cluster.evacuate`, which forces live-migration of instances during cluster evacuation. 
Adds `allow_inconsistent` field to instance source on `POST /1.0/instances`. If `true`, `rsync` will ignore the `Partial transfer due to vanished source files` (code 24) error when creating an instance from a copy. This adds an `ovn` section to the `/1.0/networks/NAME/state` API which contains additional state information relevant to OVN networks: chassis Adds support for filtering the result of a GET request for storage volumes. This extension adds on to the image properties to include image restrictions/host requirements. These requirements help determine the compatibility between an instance and the host system. Introduces the ability to disable zpool export when unmounting pool by setting"
},
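DNS zone serving and cloud-init seeding, sketched with placeholder zone, port and file names:

```sh
# Serve a forward DNS zone for a managed bridge over the built-in DNS server.
incus network zone create example.net
incus network set incusbr0 dns.zone.forward example.net
incus config set core.dns_address :8853

# Provide cloud-init user data to an instance.
incus config set c1 cloud-init.user-data "$(cat user-data.yaml)"
```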
{
"data": "This extends the network zones (DNS) API to add the ability to create and manage custom records. This adds: `GET /1.0/network-zones/ZONE/records` `POST /1.0/network-zones/ZONE/records` `GET /1.0/network-zones/ZONE/records/RECORD` `PUT /1.0/network-zones/ZONE/records/RECORD` `PATCH /1.0/network-zones/ZONE/records/RECORD` `DELETE /1.0/network-zones/ZONE/records/RECORD` This adds support for listing network zones across all projects through the `all-projects` parameter on the `GET /1.0/network-zones`API. Adds ability to set the `reservation`/`refreservation` ZFS property along with `quota`/`refquota`. Adds a new `GET /1.0/networks-acls/NAME/log` API to retrieve ACL firewall logs. Introduces a new `zfs.blocksize` property for ZFS storage volumes which allows to set volume block size. This is used to detect whether Incus was fixed to output used CPU time in seconds rather than as milliseconds. Adds a `@never` option to `snapshots.schedule` which allows disabling inheritance. This adds token-based certificate addition to the trust store as a safer alternative to a trust password. It adds the `token` field to `POST /1.0/certificates`. This adds the ability to disable the `routed` NIC IP neighbor probing for availability on the parent network. Adds the `ipv4.neighborprobe` and `ipv6.neighborprobe` NIC settings. Defaulting to `true` if not specified. This adds support for `event-hub` cluster member role and the `ServerEventMode` environment field. If set to `true`, on VM start-up the `incus-agent` will apply NIC configuration to change the names and MTU of the instance NIC devices. Adds new `restricted.container.intercept` configuration key to allow usually safe system call interception options. Introduces a new `core.metrics_authentication` server configuration option to allow for the `/1.0/metrics` endpoint to be generally available without client authentication. Adds ability to copy image to a project different from the source. This adds support for listing images across all projects through the `all-projects` parameter on the `GET /1.0/images`API. Adds `allow_inconsistent` field to `POST /1.0/instances/<name>`. Set to `true` to allow inconsistent copying between cluster members. Introduces a new `ovn-chassis` cluster role which allows for specifying what cluster member should act as an OVN chassis. Adds the `security.syscalls.intercept.sched_setscheduler` to allow advanced process priority management in containers. Introduces the ability to specify the thin pool metadata volume size via `storage.thinpoolmetadatasize`. If this is not specified then the default is to let LVM pick an appropriate thin pool metadata volume size. This adds `total` field to the `GET /1.0/storage-pools/{name}/volumes/{type}/{volume}/state` API. Implements HEAD on `/1.0/instances/NAME/file`. This introduces the `instances.nic.host_name` server configuration key that can take a value of either `random` or `mac`. The default value for the key if unspecified is `random`. If it is set to random then use the random host interface names. If it's set to `mac`, then generate a name in the form `inc1122334455`. Adds ability to modify the set of profiles when image is copied. Adds the `security.syscalls.intercept.sysinfo` to allow the `sysinfo` syscall to be populated with cgroup-based resource usage information. This introduces a `mode` field to the evacuation request which allows for overriding the evacuation mode traditionally set through `cluster.evacuate`. Adds a new VPD struct to the PCI resource entries. 
This struct extracts vendor provided data including the full product name and additional key/value configuration pairs. Introduces a `raw.qemu.conf` configuration key to override select sections of the generated `qemu.conf`. Add support for `fscache`/`cachefilesd` on CephFS pools through a new `cephfs.fscache` configuration option. This introduces the networking load balancer functionality. Allowing `ovn` networks to define port(s) on external IP addresses that can be forwarded to one or more internal IP(s) inside their respective networks. This introduces a bidirectional `vsock` interface which allows the `incus-agent` and the Incus server to communicate better. This introduces a new `Ready` state for instances which can be set using `/dev/incus`. This introduces a new"
},
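Unauthenticated metrics scraping and a custom DNS zone record, as a sketch; the HTTPS address and record name are assumptions:

```sh
# Allow /1.0/metrics to be scraped without client authentication.
incus config set core.metrics_authentication false
curl -k https://incus.example.com:8443/1.0/metrics

# Add a custom record to a DNS zone (POST /1.0/network-zones/ZONE/records).
incus network zone record create example.net www
```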
{
"data": "configuration key to control the BGP hold time for a particular peer. This introduces the ability to list storage volumes from all projects. This introduces a new `incusmemoryOOMkillstotal` metric to the `/1.0/metrics` API. It reports the number of times the out of memory killer (`OOM`) has been triggered. This introduces the storage bucket API. It allows the management of S3 object storage buckets for storage pools. This updates the storage bucket API to return initial admin credentials at bucket creation time. This introduces a new `incuscpueffective_total` metric to the `/1.0/metrics` API. It reports the total number of effective CPUs. Adds the `restricted.networks.access` project configuration key to indicate (as a comma-delimited list) which networks can be accessed inside the project. If not specified, all networks are accessible (assuming it is also allowed by the `restricted.devices.nic` setting, described below). This also introduces a change whereby network access is controlled by the project's `restricted.devices.nic` setting: If `restricted.devices.nic` is set to `managed` (the default if not specified), only managed networks are accessible. If `restricted.devices.nic` is set to `allow`, all networks are accessible (dependent on the `restricted.networks.access` setting). If `restricted.devices.nic` is set to `block`, no networks are accessible. This introduces the ability to use storage buckets on local storage pools by setting the new `core.storagebucketsaddress` global configuration setting. This adds support for sending life cycle and logging events to a Loki server. It adds the following global configuration keys: `loki.api.ca_cert`: CA certificate which can be used when sending events to the Loki server `loki.api.url`: URL to the Loki server (protocol, name or IP and port) `loki.auth.username` and `loki.auth.password`: Used if Loki is behind a reverse proxy with basic authentication enabled `loki.labels`: Comma-separated list of values which are to be used as labels for Loki events. `loki.loglevel`: Minimum log level for events sent to the Loki server. `loki.types`: Types of events which are to be sent to the Loki server (`lifecycle` and/or `logging`). This adds ACME support, which allows or other ACME services to issue certificates. It adds the following global configuration keys: `acme.domain`: The domain for which the certificate should be issued. `acme.email`: The email address used for the account of the ACME service. `acme.ca_url`: The directory URL of the ACME service, defaults to `https://acme-v02.api.letsencrypt.org/directory`. It also adds the following endpoint, which is required for the HTTP-01 challenge: `/.well-known/acme-challenge/<token>` This adds internal metrics to the list of metrics. These include: Total running operations Total active warnings Daemon uptime in seconds Go memory stats Number of goroutines This adds an expiry to cluster join tokens which defaults to 3 hours, but can be changed by setting the `cluster.jointokenexpiry` configuration key. This adds an expiry to remote add join tokens. It can be set in the `core.remotetokenexpiry` configuration key, and default to no expiry. This change adds support for storing the creation date and time of storage volumes and their snapshots. This adds the `CreatedAt` field to the `StorageVolume` and `StorageVolumeSnapshot` API types. This adds CPU hotplugging for VMs. 
Hotplugging is disabled when using CPU pinning, because this would require hotplugging NUMA devices as well, which is not possible. This adds support for the `features.networks.zones` project feature, which controls which project network zones are associated with when they are created. Previously, network zones were tied to the value of `features.networks`, meaning they were created in the same project as networks were. Now this has been decoupled from `features.networks` to allow projects that share a network in the default project (i.e. those with"
},
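Local storage buckets and Loki forwarding; the key names are written here with their underscores (dropped in the text above) and the addresses are placeholders:

```sh
# Enable S3-compatible buckets on local pools, then create one
# (admin credentials are returned at creation time).
incus config set core.storage_buckets_address :8555
incus storage bucket create default mybucket

# Forward lifecycle and logging events to a Loki server.
incus config set loki.api.url https://loki.example.com:3100
incus config set loki.types lifecycle,logging
```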
{
"data": "to have their own project level DNS zones that give a project oriented \"view\" of the addresses on that shared network (which only includes addresses from instances in their project). This also introduces a change to the network `dns.zone.forward` setting, which now accepts a comma-separated of DNS zone names (a maximum of one per project) in order to associate a shared network with multiple zones. No change to the `dns.zone.reverse.*` settings have been made, they still only allow a single DNS zone to be set. However the resulting zone content that is generated now includes `PTR` records covering addresses from all projects that are referencing that network via one of their forward zones. Existing projects that have `features.networks=true` will have `features.networks.zones=true` set automatically, but new projects will need to specify this explicitly. Adds a `txqueuelen` key to control the `txqueuelen` parameter of the NIC device. Adds `GET /1.0/cluster/members/<member>/state` API endpoint and associated `ClusterMemberState` API response type. Adds support for a Starlark scriptlet to be provided to Incus to allow customized logic that controls placement of new instances in a cluster. The Starlark scriptlet is provided to Incus via the new global configuration option `instances.placement.scriptlet`. Adds support for a `source.wipe` Boolean on the storage pool, indicating that Incus should wipe partition headers off the requested disk rather than potentially fail due to pre-existing file systems. This adds support for using ZFS block {spellexception}`filesystem` volumes allowing the use of different file systems on top of ZFS. This adds the following new configuration options for ZFS storage pools: `volume.zfs.block_mode` `volume.block.mount_options` `volume.block.filesystem` Adds support for instance generation ID. The VM or container generation ID will change whenever the instance's place in time moves backwards. As of now, the generation ID is only exposed through to VM type instances. This allows for the VM guest OS to reinitialize any state it needs to avoid duplicating potential state that has already occurred: `volatile.uuid.generation` This introduces a new `io.cache` property to disk devices which can be used to override the VM caching behavior. Adds support for AMD SEV (Secure Encrypted Virtualization) that can be used to encrypt the memory of a guest VM. This adds the following new configuration options for SEV encryption: `security.sev` : (bool) is SEV enabled for this VM `security.sev.policy.es` : (bool) is SEV-ES enabled for this VM `security.sev.session.dh` : (string) guest owner's `base64`-encoded Diffie-Hellman key `security.sev.session.data` : (string) guest owner's `base64`-encoded session blob This allows growing loop file backed storage pools by changing the `size` setting of the pool. This adds support for performing VM QEMU to QEMU live migration for both shared storage (clustered Ceph) and non-shared storage pools. This also adds the `CRIUTypeVMQEMU` value of `3` for the migration `CRIUType` `protobuf` field. This adds support for nesting an `ovn` NIC inside another `ovn` NIC on the same instance. This allows for an OVN logical switch port to be tunneled inside another OVN NIC using VLAN tagging. This feature is configured by specifying the parent NIC name using the `nested` property and the VLAN ID to use for tunneling with the `vlan` property. This adds support for OpenID Connect (OIDC) authentication. 
This adds the following new configuration keys: `oidc.issuer` `oidc.client.id` `oidc.audience` This adds the ability to set an `ovn` network into \"layer 3 only\" mode. This mode can be enabled at IPv4 or IPv6 level using `ipv4.l3only` and `ipv6.l3only` configuration options respectively. With this mode enabled the following changes are made to the network: The virtual router's internal port address will be configured with a single host netmask (e.g. /32 for IPv4 or /128 for"
},
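OIDC login, layer-3-only OVN mode and the VM disk cache override, sketched with placeholder issuer and client values:

```sh
# Point Incus at an OpenID Connect provider.
incus config set oidc.issuer https://auth.example.com/
incus config set oidc.client.id incus-clients

# Put an OVN network into IPv6 "layer 3 only" mode.
incus network set ovn0 ipv6.l3only true

# Override the caching behaviour of a VM's root disk.
incus config device override v1 root io.cache=none
```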
{
"data": "Static routes for active instance NIC addresses will be added to the virtual router. A discard route for the entire internal subnet will be added to the virtual router to prevent packets destined for inactive addresses from escaping to the uplink network. The DHCPv4 server will be configured to indicate that a netmask of 255.255.255.255 be used for instance configuration. This updates the `ovnnicacceleration` API extension. The `acceleration` configuration key for OVN NICs can now takes the value `vdpa` to support Virtual Data Path Acceleration (VDPA). This adds cluster healing which automatically evacuates offline cluster members. This adds the following new configuration key: `cluster.healing_threshold` The configuration key takes an integer, and can be disabled by setting it to 0 (default). If set, the value represents the threshold after which an offline cluster member is to be evacuated. In case the value is lower than `cluster.offline_threshold`, that value will be used instead. When the offline cluster member is evacuated, only remote-backed instances will be migrated. Local instances will be ignored as there is no way of migrating them once the cluster member is offline. This extension adds a new `total` field to `InstanceStateDisk` and `InstanceStateMemory`, both part of the instance's state API. Add current user details to the main API endpoint. This introduces: `authusername` `authusermethod` Introduce a new `security.csm` configuration key to control the use of `CSM` (Compatibility Support Module) to allow legacy operating systems to be run in Incus VMs. This extension adds the ability to rebuild an instance with the same origin image, alternate image or as empty. A new `POST /1.0/instances/<name>/rebuild?project=<project>` API endpoint has been added as well as a new CLI command . This adds the possibility to place a set of CPUs in a desired set of NUMA nodes. This adds the following new configuration key: `limits.cpu.nodes` : (string) comma-separated list of NUMA node IDs or NUMA node ID ranges to place the CPUs (chosen with a dynamic value of `limits.cpu`) in. This adds the possibility to import ISO images as custom storage volumes. This adds the `--type` flag to . This adds the possibility to list an Incus deployment's network allocations. Through the command and the `--project <PROJECT> | --all-projects` flags, you can list all the used IP addresses, hardware addresses (for instances), resource URIs and whether it uses NAT for each `instance`, `network`, `network forward` and `network load-balancer`. This implements a new `zfs.delegate` volume Boolean for volumes on a ZFS storage driver. When enabled and a suitable system is in use (requires ZFS 2.2 or higher), the ZFS dataset will be delegated to the container, allowing for its use through the `zfs` command line tool. This allows copying storage volume snapshots to and from remotes. This introduces support for the `all-projects` query parameter for the GET API calls to both `/1.0/operations` and `/1.0/operations?recursion=1`. This parameter allows bypassing the project name filter. Adds the `GET /1.0/metadata/configuration` API endpoint to retrieve the generated metadata configuration in a JSON format. The JSON structure adopts the structure ```\"configs\" > `ENTITY` > `ENTITYSECTION` > \"keys\" > [<CONFIGOPTION0>, <CONFIGOPTION_1>, ...]```. Check the list of {doc}`configuration options </config-options>` to see which configuration options are included. 
This introduces a syslog socket that can receive syslog formatted log messages. These can be viewed in the events API and `incus monitor`, and can be forwarded to Loki. To enable this feature, set `core.syslog_socket` to `true`. This adds the fields `Name` and `Project` to `lifecycle` events. This introduces a new per-NIC"
},
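Cluster healing, NUMA placement and ZFS delegation as a sketch; the threshold and node IDs are arbitrary examples:

```sh
# Automatically evacuate cluster members that stay offline past the threshold.
incus config set cluster.healing_threshold 30

# Place the instance's CPUs on NUMA nodes 0 and 1.
incus config set c1 limits.cpu.nodes 0,1

# Delegate the ZFS dataset of a custom volume to the container (ZFS 2.2+).
incus storage volume set default myvol zfs.delegate true
```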
{
"data": "option that works with both cgroup1 and cgroup2 unlike the deprecated `limits.network.priority` instance setting, which only worked with cgroup1. This API extension provides the capability to set initial volume configurations for instance root devices. Initial volume configurations are prefixed with `initial.` and can be specified either through profiles or directly during instance initialization using the `--device` flag. Note that these configuration are applied only at the time of instance creation and subsequent modifications have no effect on existing devices. This API extension indicates that the `/1.0/operations/{id}/wait` endpoint exists on the server. This indicates to the client that the endpoint can be used to wait for an operation to complete rather than waiting for an operation event via the `/1.0/events` endpoint. This extension adds a new image restriction, `requirements.privileged` which when `false` indicates that an image cannot be run in a privileged container. This extension adds support for copying and moving custom storage volumes within a cluster with a single API call. Calling `POST /1.0/storage-pools/<pool>/custom?target=<target>` will copy the custom volume specified in the `source` part of the request. Calling `POST /1.0/storage-pools/<pool>/custom/<volume>?target=<target>` will move the custom volume from the source, specified in the `source` part of the request, to the target. This introduces a new `io.bus` property to disk devices which can be used to override the bus the disk is attached to. This introduces the configuration keys `cephfs.createmissing`, `cephfs.osdpgnum`, `cephfs.metapool` and `cephfs.osd_pool` to be used when adding a `cephfs` storage pool to instruct Incus to create the necessary entities for the storage pool, if they do not exist. This API extension provides the ability to use flags `--profile`, `--no-profile`, `--device`, and `--config` when moving an instance between projects and/or storage pools. This introduces new server configuration keys to provide the SSL CA and client key pair to access the OVN databases. The new configuration keys are `network.ovn.cacert`, `network.ovn.clientcert` and `network.ovn.client_key`. Adds a `description` field to certificate. Adds a new `virtio-blk` value for `io.bus` on `disk` devices which allows for the attached disk to be connected to the `virtio-blk` bus. Adds a new `loki.instance` server configuration key to customize the `instance` field in Loki events. This can be used to expose the name of the cluster rather than the individual system name sending the event as that's usually already covered by the `location` field. Adds a new `start` field to the `POST /1.0/instances` API which when set to `true` will have the instance automatically start upon creation. In this scenario, the creation and startup is part of a single background operation. This introduces new options for the `cluster.evacuate` option: `stateful-stop` has the instance store its state to disk to be resume on restore. `force-stop` has the instance immediately stopped without waiting for it to shut down. This introduces a new `boot.hostshutdownaction` instance configuration key which can be used to override the default `stop` behavior on system shutdown. It supports the value `stop`, `stateful-stop` and `force-stop`. This introduces a new `agent:config` disk `source` which can be used to expose an ISO to the VM guest containing the agent and its configuration. 
Adds a new `LogicalRouter` field to the `NetworkStateOVN` struct which is part of the `GET /1.0/networks/NAME/state` API. This is used to get the OVN logical router name. This adds `uid`, `gid` and `mode` fields to the image metadata template entries. Add storage bucket backup support. This includes the following new endpoints (see for details): `GET /1.0/storage-pools/<pool>/buckets/<bucket>/backups` `POST /1.0/storage-pools/<pool>/buckets/<bucket>/backups` `GET /1.0/storage-pools/<pool>/buckets/<bucket>/backups/<name>` `POST /1.0/storage-pools/<pool>/buckets/<bucket>/backups/<name>` `DELETE /1.0/storage-pools/<pool>/buckets/<bucket>/backups/<name>` `GET /1.0/storage-pools/<pool>/buckets/<bucket>/backups/<name>/export` This adds a new `lvmcluster` storage driver which makes use of LVM shared VG through"
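As a rough illustration of the `/1.0/operations/{id}/wait` endpoint described above, a client can block until a background operation finishes instead of watching `/1.0/events`. The socket path and operation UUID below are illustrative assumptions, not values from this document:

```bash
# Wait up to 30 seconds for a background operation to complete
# (socket path and UUID are placeholders).
curl --unix-socket /var/lib/incus/unix.socket \
  "http://incus/1.0/operations/<operation-uuid>/wait?timeout=30"
```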
},
{
"data": "With this, it's possible to have a single shared LVM pool across multiple servers so long as they all see the same backing device(s). This adds a new configuration key `security.shared` to custom block volumes. If unset or `false`, the custom block volume cannot be attached to multiple instances. This feature was added to prevent data loss which can happen when custom block volumes are attached to multiple instances at once. This adds the ability to use a signed `JSON Web Token` (`JWT`) instead of using the TLS client certificate directly. In this scenario, the client derives a `JWT` from their own TLS client certificate providing it as a `bearer` token through the `Authorization` HTTP header. The `JWT` must have the certificate's fingerprint as its `Subject` and must be signed by the client's private key. This introduces a new `oidc.claim` server configuration key which can be used to specify what OpenID Connect claim to use as the username. This adds a new configuration key `serial` for device type `usb`. Feature has been added, to make it possible to distinguish between devices with identical `vendorid` and `productid`. This adds `balanced` as a new value for `limits.cpu.nodes`. When set to `balanced`, Incus will attempt to select the least busy NUMA node at startup time for the instance, trying to keep the load spread across NUMA nodes on the system. This extension adds a new image restriction, `requirements.nesting` which when `true` indicates that an image cannot be run without nesting. Adds the concept of network integrations and initial support for OVN Interconnection. New API: `/1.0/network-integrations` (GET, POST) `/1.0/network-integrations/NAME` (GET, PUT, PATCH, DELETE, POST) Each integration is made of: name description type (only `ovn` for now) configuration `ovn.northbound_connection` (database connection string for the OVN Interconnection database) `ovn.ca_cert` (optional, SSL CA certificate for the OVN Interconnection database) `ovn.client_cert` (optional, SSL client certificate to connect to the OVN Interconnection database) `ovn.client_key` (optional, SSL client key to connect to the OVN Interconnection database) `ovn.transit.pattern` (Pongo2 template to generate the transit switch name) Those integrations attach to network peers through some new fields: `type` (`local` for current behavior, `remote` for integrations) `target_integration` (reference to the integration) This extends `limits.memory.swap` to allow for a total limit in bytes. This adds the ability for `bridge.external_interfaces` to create a parent interface using a `interface/parent/vlan` syntax. This adds support for `mirror`, `raidz1` and `raidz2` ZFS `vdev` types by extending storage `source` configuration. A `migration.stateful` configuration key was introduced. It's a Boolean flag set to true whenever the container is in a stateful mode during the start, stop, and snapshot functions. This makes it less likely for users to run into CRIU errors when copying containers to another system. This adds support for listing profiles across all projects through the `all-projects` parameter on the `GET /1.0/profiles`API. This allows the instance scriptlet to fetch a list of instances given an optional Project or Location filter. This allows the instance scriptlet to fetch a list of cluster members given an optional cluster group. This allows the instance scriptlet to fetch a project given name of a project. This adds support for stateless rules in network ACLs. 
This adds a `started_at` timestamp to the instance state API. This adds support for listing networks across all projects through the `all-projects` parameter on the `GET /1.0/networks` API. This adds support for listing network ACLs across all projects through the `all-projects` parameter on the `GET /1.0/network-acls` API. This adds support for listing storage buckets across all projects through the `all-projects` parameter on the `GET /1.0/storage-pools/POOL/buckets` API. Adds a new Load section to the resources API."
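A minimal sketch of the `all-projects` parameter in use, again assuming the default Incus unix socket path:

```bash
# List networks across every project instead of only the current one.
curl --unix-socket /var/lib/incus/unix.socket \
  "http://incus/1.0/networks?all-projects=true"
```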
}
] |
{
"category": "Runtime",
"file_name": "api-extensions.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This article aims to help community developers to get the full picture of the Curve project and better participate in the development and evolution of the Curve project. This article will describe how to better participate in the development of the Curve project from the perspective of community participants. The premise of participating in an open source project is to understand it, especially for a large and complex project such as Curve, it is difficult to get started. Here are some information to help those who are interested in the Curve project: Through the study of the above materials, I believe that you already have an overall understanding of the Curve project, and there may also be some curiosity and doubts. At this time, you can deploy a Curve experience environment, which is conducive to a more intuitive perception of the Curve system. If you encounter a problem or want a new feature, you can track the relevant code. In the process, it is easy to understand the relevant modules. This is lots of contributors completed their first contributions. The Curve community has multiple , and there will be an online community meeting every two weeks. The regular meeting will synchronize the recent progress of the Curve project and answer your questions. You are also welcome to participate in the Curve community regular meeting. We will communicate at the meeting, so that we can quickly answer your questions, improve our understanding of the Curve project and synchronize what we are doing at this stage. After you have a certain understanding of the Curve project, if you are interested, you can choose the starting point to participate in the Curve. You can choose from the following aspects: Start by selecting a interested one from the of the Curve project. You can pay attention to issues with the tag which we have assessed as relatively good starters. Based on a certain understanding of the Curve project, you can also choose from . Selecting form and of Curve Operation and maintenance tools . Here are In addition to the existing issues, you are also welcome to submit issues that you have discovered or new features you hope for and resolve them. You can pay attention to the TODO in the existing Curve code, most of which are code logic optimization and features to be supplemented, choose the ones you are interested in and raise relevant issues to follow up and try to solve. For commit messages: A good commit message needs to contain the following elements: What is your change? (required) Why this change was made? (required) What effect does the commit have? (Optional) The footer is optional and is used to record issues that can be closed due to these changes, such as `Close issue-12345` Explain what changes have been made to the submitted PR: performance optimization? Fix bugs? add feature? and why. Finally, describe the impact of the following modifications, including performance and so on. Of course, for some simple modifications, the reasons and effects of the modifications can be"
},
{
"data": "Try to follow the following principles in the message: Summarize the function and role of PR Use short sentences and simple verbs Avoid long compound words and abbreviations Please follow the following format as much as possible when submitting: ``` [type]<scope>:<description> <BLANK LINE> [body] <BLANK LINE> [footer] ``` type can be one of the following types: build: Affects system builds and external dependencies ci: Affects continuous inheritance related functions docs: Documentation related changes feat: Add new features fix: A bug fix perf: Performance improvement refactor: Refactor related code without adding functionality or fixing bugs style: Modifications that do not affect the meaning of the code, only modify the code style test: Modifications related to unit testing The first line indicates that the title should be kept within 70 characters as much as possible, explaining the modified module and content, multiple modules can be represented by `*`, and the modified module is explained in the text stage. The footer is optional and is used to record issues that can be closed due to these changes, such as `Close #12345` After you find a point of interest, you can discuss it through an issue. If it is a small bug-fix or a feature point, you can start development after a brief discussion. Even if it is a simple problem, it is recommended to communicate with us first, so as to avoid a deviation in the understanding of the problem or a problem with the solution, and useless efforts are made. If the work to be done is more complicated, you need to write a detailed design document and submit it to curve/docs. The existing design solutions in this directory are for your reference. A good design plan should clearly write the following points: Background description what problem to solve Scheme design (scheme description, correctness and feasibility), preferably including the comparison of several schemes Compatibility with existing systems Concrete realization of the scheme It is also recommended to communicate with us before starting the plan writing. If you are not sure about the feasibility of the plan, you can provide a simple plan description first, and then refine and develop it after our evaluation. The curve docs path is in , please use markdown format about the docs contribution except ppt. Document contribution don't need to trigger CI. please add ```[skipci]``` in your github pr title. Once you've finished writing the code, you can submit a PR. Of course, if the development has not been completed, you can submit PR first in some cases. For example, if you want to let the community take a look at the general solution, you can raise the price of PR after completing the code framework. Please use clangd for your code completions. More details is in . Curve Ci use ```cpplint``` check what your changed. install ``cpplint``` (need root) ```bash $ pip install cpplint ``` check your changed local ```bash $ cpplint --filter=-build/c++11 --quiet --recursive your_path ``` For PR we have the following requirements: The Curve coding standard strictly follows the , but we use 4 spaces to indent, Clang-format will more helpful for"
},
{
"data": "Of course, CI will check what your changed. The code must have test cases, excluding documentation, unit tests (incremental lines cover more than 80%, and incremental branches cover more than 70%); integration tests (merge statistics with unit tests, and meet the unit test coverage requirements). Please fill in the description of the PR as detailed as possible, associate with the relevant issues, and the PR commit message can clearly see the resolved issues. After submitting to the Curve master branch, Curve CI will be triggered automatically. It is necessary to ensure that the CI is passed, and the Jenkins username and password of the CI is netease/netease, if the CI fails to run, you can log in to the Jenkins platform to view the reason for the failure. After the CI is passed, the review can start, and each PR needs to get at least two LGTMs of Committer/Maintainer before merging. PR code requires a certain amount of comments to make the code easy to understand, and all comments and review comments and replies are required to be in English. Please make sure your changed can pass the compile locally. We usually merge only one commit after you rebase. Trigger CI please comment ```cicheck```. CI checkpoints: The changed code style check(cpplint). Branch test coverage testing. Unit tests(If failed, run the failed tests locally). Chaos tests. Repush will trigger CI, If github page have no reaction. Please wait. If CI is not stabled, repeatedly comment ```cicheck``` will trigger CI again. At present, the Curve community has multiple communication channels, please choose an appropriate and efficient communication method according to your needs: : For questions related to the function and performance of the Curve project, please choose this channel first, which is convenient for continuous tracking of problems and timely responses from R&D students. : It is mainly organized according to the common problems in the Curve User Group, covering the issues that most users care about, including Curve's product positioning, application scenarios, application status and open source goals. Check out the Frequently Asked Questions to learn more about the Curve project. : The Curve forum is also an important communication channel, where the Curve team will publish technical articles and discuss technical issues with everyone. If you have questions about the Curve project, you can post a discussion topic here. Curve WeChat public account: OpenCurveYou can subscribe to Curve's WeChat official account. We will publish articles every week. In addition to technical articles, there will be Curve's application progress and big news, so that everyone can understand the current situation of Curve. If you have any questions or want to know about the published content, you can give feedback through private messages. slack: cloud-native.slack.comchannel #project_curve Curve User Group: In order to facilitate instant communication with everyone, the Curve User Group is currently a WeChat group. Due to the large number of people in the group, it is necessary to add OpenCurve_bot WeChat first, and then invite into the group. In the user group, everyone can freely communicate about Curve and storage-related topics, and get immediate feedback on problems. <img src=\"docs/images/curve-wechat.jpeg\" style=\"zoom: 75%;\" />"
}
] |
{
"category": "Runtime",
"file_name": "developers_guide.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "work in progress Reduce the barrier of entry. Create a stable community. Release virtual kubelet 1.0. Core. Split out providers from the virtual kubelet core tree. Today the provider dependencies within the virtual kubelet cause more harm than good. Interface. Stablize the virtual kubelet interface so minimal changes to no changes will be needed in the future. Developer. Reduce the barrier of entry to create with virtual kubelet. Community. Grow and nurture the community for virtual kubelet core. Includes compiling use-cases past the cloud providers interests. IoT use-cases will be a big focus. Improve our e2e testing in virtual kubelet core. Add options & integration points to include any provider. Create a baseline for how quickly virtual kubelet can process requests. Test at high scale throughput. Work with sig-architecture in Kubernetes to develop a conformance test profile for virtual kubelet. Explore what it means to use virtual kubelet in an IoT Edge usecases and work with wg-io-edge in Kubernetes to develop a standard within virtual kubelet to enable those usecases. Create provider agnostic tools to scale into virtual kubelet. tba"
}
] |
{
"category": "Runtime",
"file_name": "virtual-kubelet-2019.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: CephClient CRD Rook allows creation and updating clients through the custom resource definitions (CRDs). For more information about user management and capabilities see the . Use Client CRD in case you want to integrate Rook with applications that are using LibRBD directly. For example for OpenStack deployment with Ceph backend use Client CRD to create OpenStack services users. The Client CRD is not needed for Flex or CSI driver users. The drivers create the needed users automatically. This guide assumes you have created a Rook cluster as explained in the main . To get you started, here is a simple example of a CRD to configure a Ceph client with capabilities. ```yaml apiVersion: ceph.rook.io/v1 kind: CephClient metadata: name: example namespace: rook-ceph spec: caps: mon: 'profile rbd, allow r' osd: 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' ``` To use `CephClient` to connect to a Ceph cluster: Once your `CephClient` has been processed by Rook, it will be updated to include your secret: ```console kubectl -n rook-ceph get cephclient example -o jsonpath='{.status.info.secretName}' ``` Extract Ceph cluster credentials from the generated secret (note that the subkey will be your original client name): ```console kubectl --namespace rook-ceph get secret rook-ceph-client-example -o jsonpath=\"{.data.example}\" | base64 -d ``` The base64 encoded value that is returned is the password for your ceph client. To send writes to the cluster, you must retrieve the mons in use: ```console kubectl --namespace rook-ceph get configmap rook-ceph-mon-endpoints -o jsonpath='{.data.data}' | sed 's/.=//g'` ``` This command should produce a line that looks somewhat like this: ```console 10.107.72.122:6789,10.103.244.218:6789,10.99.33.227:6789 ``` If you choose to generate files for Ceph to use you will need to generate the following files: General configuration file (ex. `ceph.conf`) Keyring file (ex. `ceph.keyring`) Examples of the files follow: `ceph.conf` ```ini [global] mon_host=10.107.72.122:6789,10.103.244.218:6789,10.99.33.227:6789 log file = /tmp/ceph-$pid.log ``` `ceph.keyring` ```ini [client.example] key = < key, decoded from k8s secret> caps mon = 'allow r' caps osd = 'profile rbd pool=<your pool>, profile rb pool=<another pool>' ``` With the files we've created, you should be able to query the cluster by setting Ceph ENV variables and running `ceph status`: ```console export CEPH_CONF=/libsqliteceph/ceph.conf; export CEPH_KEYRING=/libsqliteceph/ceph.keyring; export CEPH_ARGS=--id example; ceph status ``` With this config, the ceph tools (`ceph` CLI, in-program access, etc) can connect to and utilize the Ceph cluster. The Ceph project contains a that interacts with RADOS directly, called . First, on your workload ensure that you have the appropriate packages installed that make `libcephsqlite.so` available: Without the appropriate package (or a from-scratch build of SQLite), you will be unable to load `libcephsqlite.so`. After creating a `CephClient` similar to and retrieving it's credentials, you may set the following ENV variables: ```console export CEPH_CONF=/libsqliteceph/ceph.conf; export CEPH_KEYRING=/libsqliteceph/ceph.keyring; export CEPH_ARGS=--id sqlitevfs ``` Then start your SQLite database: ```console sqlite> .load libcephsqlite.so sqlite> .open file:///poolname:/test.db?vfs=ceph sqlite> ``` If those lines complete without error, you have successfully set up SQLite to access Ceph. 
See for more information on the VFS and database URL format."
}
] |
{
"category": "Runtime",
"file_name": "ceph-client-crd.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "A Rook Ceph cluster. Ideally a ceph-object-realm resource would have been started up already. The resource described in this design document represents the zone group in the . When the storage admin is ready to create a multisite zone group for object storage, the admin will name the zone group in the metadata section on the configuration file. In the config, the admin must configure the realm the zone group is in. The first ceph-object-zone-group resource created in a realm is designated as the master zone group in the Ceph cluster. This example `ceph-object-zone-group.yaml`, names a zone group `my-zonegroup`. ```yaml apiVersion: ceph.rook.io/v1alpha1 kind: CephObjectZoneGroup metadata: name: zone-group-a namespace: rook-ceph spec: realm: my-realm ``` Now create the ceph-object-zone-group. ```bash kubectl create -f ceph-object-zone-group.yaml ``` At this point the Rook operator recognizes that a new ceph-object-zone-group resource needs to be configured. The operator will start creating the resource to start the ceph-object-zone-group. After these steps the admin should start up: A with the name of the zone group in the `zoneGroup` section. A referring to the newly started up ceph-object-zone resource. A , with the same name as the `realm` field, if it has not already been started up already. The order in which these resources are created is not important. Once all of the resources in #2 are started up, the operator will create a zone group on the Rook Ceph cluster and the ceph-object-zone-group resource will be running. The realm named in the `realm` section must be the same as the ceph-object-realm resource the zone group is a part of. When resource is deleted, zone group are not deleted from the cluster. Zone group deletion must be done through toolboxes. At the moment creating an ceph-object-zone-group realm resource only handles Day 1 initial configuration for the realm. Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. To be clear, when the ceph-object-zone group resource is deleted or modified, the zone group is not deleted from the Ceph cluster. Zone Group deletion must be done through the toolbox. The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. The following command, run via the toolbox, deletes the zone group. ```bash ``` The ceph-object-zone-group settings are exposed to Rook as a Custom Resource Definition (CRD). The CRD is the Kubernetes-native means by which the Rook operator can watch for new resources. The name of the resource provided in the `metadata` section becomes the name of the zone group. The following variables can be configured in the ceph-zone-group resource. `realm`: The realm named in the `realm` section of the ceph-realm resource the zone group is a part of. ```yaml apiVersion: ceph.rook.io/v1alpha1 kind: CephObjectZoneGroup metadata: name: zone-group-b namespace: rook-ceph spec: realm: my-realm ```"
}
] |
{
"category": "Runtime",
"file_name": "zone-group.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The flag `--pod-manifest` can be passed to rkt's and subcommands. It allows users to specify a to run as a pod. A pod manifest completely specifies how the pod will be run, overriding any configuration present in the individual images of the apps in the pod. Thus, by encoding how a rkt pod should be executed in a file, it can then be saved in version control and users don't need to deal with very long CLI arguments everytime they want to run a complicated pod. The most convenient way to generate a pod manifest is running a pod by using the rkt CLI (without `--pod-manifest`), exporting the resulting pod manifest, and then tweaking it until satisfied with it. For example: ```bash $ sudo rkt run coreos.com/etcd:v2.0.10 kinvolk.io/aci/busybox:1.24 -- -c 'while true; do date; sleep 1; done' ... ^]^]Container rkt-07f1cfdc-950b-4b6e-a2c0-8fb1ed37f98b terminated by signal KILL. $ rkt cat-manifest 07f1cfdc > pod-manifest.json ``` The resulting pod manifest file is: ```json { \"acVersion\": \"1.30.0\", \"acKind\": \"PodManifest\", \"apps\": [ { \"name\": \"etcd\", \"image\": { \"name\": \"coreos.com/etcd\", \"id\": \"sha512-c03b055d02e51e36f44a2be436eb77d5b0fbbbe37c00851188d8798912e8508a\", \"labels\": [ { \"name\": \"os\", \"value\": \"linux\" }, { \"name\": \"arch\", \"value\": \"amd64\" }, { \"name\": \"version\", \"value\": \"v2.0.10\" } ] }, \"app\": { \"exec\": [ \"/etcd\" ], \"user\": \"0\", \"group\": \"0\" } }, { \"name\": \"busybox\", \"image\": { \"name\": \"kinvolk.io/aci/busybox\", \"id\": \"sha512-140375b2a2bd836559a7c978f36762b75b80a7665e5d922db055d1792d6a4182\", \"labels\": [ { \"name\": \"version\", \"value\": \"1.24\" }, { \"name\": \"os\", \"value\": \"linux\" }, { \"name\": \"arch\", \"value\": \"amd64\" } ] }, \"app\": { \"exec\": [ \"sh\", \"-c\", \"while true; do date; sleep 1; done\" ], \"user\": \"0\", \"group\": \"0\", \"ports\": [ { \"name\": \"nc\", \"protocol\": \"tcp\", \"port\": 1024, \"count\": 1, \"socketActivated\": false } ] } } ], \"volumes\": null, \"isolators\": null, \"annotations\": [ { \"name\": \"coreos.com/rkt/stage1/mutable\", \"value\": \"false\" } ], \"ports\": [] } ``` From there, you can edit the pod manifest following its . For example, we can add a memory isolator to etcd: ```json ... \"exec\": [ \"/etcd\" ], \"isolators\": [ { \"name\": \"resource/memory\", \"value\": {\"limit\": \"1G\"} } ], ... ``` Then, we can just run rkt with that pod manifest: ``` $ sudo rkt run --pod-manifest=pod-manifest.json ... ``` Note Images used by a pod manifest must be store in the local store, `--pod-manifest` won't do discovery or fetching. Another option is running rkt with different CLI arguments until we have a configuration we like, and then just save the resulting pod manifest to use it later."
}
] |
{
"category": "Runtime",
"file_name": "pod-manifest.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "For optimal production setup MinIO recommends Linux kernel version 4.x and later. | Item | Specification | |:-|:--| | Maximum number of servers per cluster | no-limit | | Minimum number of servers | 02 | | Minimum number of drives per server when server count is 1 | 02 | | Minimum number of drives per server when server count is 2 or 3 | 01 | | Minimum number of drives per server when server count is 4 | 01 | | Maximum number of drives per server | no-limit | | Read quorum | N/2 | | Write quorum | N/2+1 | | Item | Specification | |:--|:--| | Maximum number of buckets | unlimited (we recommend not beyond 500000 buckets) - see NOTE: | | Maximum number of objects per bucket | no-limit | | Maximum object size | 50 TiB | | Minimum object size | 0 B | | Maximum object size per PUT operation | 5 TiB | | Maximum number of parts per upload | 10,000 | | Part size range | 5 MiB to 5 TiB. Last part can be 0 B to 5 TiB | | Maximum number of parts returned per list parts request | 10000 | | Maximum number of objects returned per list objects request | 1000 | | Maximum number of multipart uploads returned per list multipart uploads request | 1000 | | Maximum length for bucket names | 63 | | Maximum length for object names | 1024 | | Maximum length for '/' separated object name segment | 255 | | Maximum number of versions per object | 10000 (can be configured to higher values but we do not recommend beyond 10000) | NOTE: While MinIO does not implement an upper boundary on buckets, your cluster's hardware has natural limits that depend on the workload and its scaling patterns. We strongly recommend for architecture and sizing guidance for your production use case. We found the following APIs to be redundant or less useful outside of AWS S3. If you have a different view on any of the APIs we missed, please consider opening a with relevant details on why MinIO must implement them. BucketACL (Use instead) BucketCORS (CORS enabled by default on all buckets for all HTTP verbs, you can optionally restrict the CORS domains) BucketWebsite (Use or ) BucketAnalytics, BucketMetrics, BucketLogging (Use APIs) ObjectACL (Use instead) Object name restrictions on MinIO are governed by OS and filesystem limitations. For example object names that contain characters `^*|\\/&\";` are unsupported on Windows platform or any other file systems that do not support filenames with special characters. This list is non exhaustive, it depends on the operating system and filesystem under use - please consult your operating system vendor for a more comprehensive list of special characters. MinIO recommends using Linux operating system for production workloads. Objects must not have conflicting objects as parent objects, applications using this behavior should change their behavior and use non-conflicting unique keys, for example situations such as following conflicting key patterns are not supported. ``` PUT <bucketname>/a/b/1.txt PUT <bucketname>/a/b ``` ``` PUT <bucketname>/a/b PUT <bucketname>/a/b/1.txt ```"
}
] |
{
"category": "Runtime",
"file_name": "minio-limits.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Installing Weave Net menu_order: 30 search_type: Documentation Install Weave Net from the command line on its own or if you are using Docker, Kubernetes or Mesosphere as a Docker or a CNI plugin. * * * * * * *"
}
] |
{
"category": "Runtime",
"file_name": "install.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This document further describes the developer workflow and how issues are managed as introduced in . Please read first before proceeding. <!-- toc --> - - - - - - - - - <!-- /toc --> The purpose of this workflow is to formalize a lightweight set of processes that will optimize issue triage and management which will lead to better release predictability and community responsiveness for support and feature enhancements. Additionally, Antrea must prioritize issues to ensure interlock alignment and compatibility with other projects including Kubernetes. The processes described here will aid in accomplishing these goals. Creating new issues and PRs is covered in detail in . We use `good first issue` and `help wanted` labels to indicate issues we would like contribution on. These two labels were borrowed from the Kubernetes project and represent the same context as described in [Help Wanted and Good First Issue Labels](https://www.kubernetes.dev/docs/guide/help-wanted/). We do not yet support the automation mentioned in the Kubernetes help guild. To summarize: `good first issue` -- issues intended for first time contributors. Members should keep an eye out for these pull requests and shepherd it through our processes. `help wanted` -- issues that represent clearly laid out tasks that are generally tractable for new contributors. The solution has already been designed and requires no further discussion from the community. This label indicates we need additional contributors to help move this task along. When new issues or PRs are created, the maintainers must triage the issue to ensure the information is valid, complete, and properly categorized and prioritized. An issue is triaged in the following way: Ensure the issue is not a duplicate. Do a quick search against existing issues to determine if the issue has been or is currently being worked on. If you suspect the issue is a duplicate, apply the label. Ensure that the issue has captured all the information required for the given issue . If information or context is needed, apply the `triage/needs-information`. Apply any missing labels. An issue can relate to more than one area. Apply a label. This may require further discussion during the community meeting if the priority cannot be determined. If undetermined, do not apply a priority. Issues with unassigned priorities will be selected for review. Apply a label if known. This may require further discussion, a research spike or review by the assigned contributor who will be working on this issue. This is only an estimate of the complexity and size of the issue. Once an issue has been triaged, a comment should be left for original submitter to respond to any applied triage labels. If all triage labels have been addressed and the issue is ready to be worked, apply the label `ready-to-work` so the issue can be assigned to a milestone and worked by a contributor. If it is determined an issue will not be resolved or not fixed, apply the `triage/unresolved` label and leave a reason in a comment for the original submitter. Unresolved issues can be closed after giving the original submitter an opportunity to appeal the reason supplied. A PR is triaged in the following way: Automation will ensure that the submitter has signed the . Automation will run CI tests against the submission to ensure compliance. Apply label to the submission. (TODO: we plan to automate this with a GitHub action and apply size based on lines of code). 
Ensure that the PR references an existing issue (exceptions to this should be"
},
{
"data": "If the PR is missing this or needs any additional information, note it in the comment and apply the `triage/needs-information` label. The PR should have the same `area/<area>`, `kind/<kind>`, and `lifecycle/<lifecycle>` labels as that of the referenced issue. (TODO: we plan to automate this with a GitHub action and apply labels automatically) When starting work on an issue, assign the issue to yourself if it has not already been assigned and apply the `lifecycle/active` label to signal that the issue is actively being worked on. Making code changes is covered in detail in . If the issue kind is a `kind/bug`, ensure that the issue can be reproduced. If not, assign the `triage/not-reproducible` and request feedback from the original submitter. This section describes the label metadata we use to track issues and PRs. For a definitive list of all GitHub labels used within this project, please see . An issue kind describes the kind of contribution being requested or submitted. In some cases, the kind will also influence how the issue or PR is triaged and worked. A `kind/api-change` label categorizes an issue or PR as related to adding, removing, or otherwise changing an API. All API changes must be reviewed by maintainers in addition to the standard code review and approval workflow. To create an API change issue or PR: label your issue or PR with `kind/api-change` describe in the issue or PR body which API you are changing, making sure to include API endpoint and schema (endpoint, Version, APIGroup, etc.) Is this a breaking change? Can new or older clients opt-in to this API? Is there a fallback? What are the implications of not supporting this API version? How is an upgrade handled? If automatically, we need to ensure proper tests are created. If we require a manual upgrade procedure, this needs to be noted so that the release notes and docs can be updated appropriately. Before starting any work on an API change it is important that you have proper review and approval from the project maintainers. A `kind/bug` label categorizes an issue or PR as related to a bug. Any problem encountered when building, configuring, or running Antrea could be a potential case for submitting a bug. To create a bug issue or bug fix PR: label your issue or PR with `kind/bug` describe your bug in the issue or PR body making sure to include: version of Antrea version of Kubernetes version of OS and any relevant environment or system configuration steps and/or configuration to reproduce the bug any tests that demonstrate the presence of the bug please attach any relevant logs or diagnostic output A `kind/cleanup` label categorizes an issue or PR as related to cleaning up code, process, or technical debt. To create a cleanup issue or PR: label your issue or PR with `kind/cleanup` describe your cleanup in the issue or PR body being sure to include what is being cleaned for what reason it is being cleaned (technical debt, deprecation, etc.) Examples of a cleanup include: Adding comments to describe code execution Making code easier to read and follow Removing dead code related to deprecated features or implementations A `kind/feature` label categorizes an issue or PR as related to a new"
},
{
"data": "To create a feature issue or PR: label your issue or PR with `kind/feature` describe your proposed feature in the issue or PR body being sure to include a use case for the new feature list acceptance tests for the new feature describe any dependencies for the new feature depending on the size and impact of the feature a design proposal may need to be submitted the feature may need to be discussed in the community meeting Before you begin work on your feature it is import to ensure that you have proper review and approval from the project maintainers. Examples of a new feature include: Adding a new set of metrics for enabling additional telemetry. Adding additional supported transport layer protocol options for network policy. Adding support for IPsec. A `kind/deprecation` label categorizes an issue or PR as related to feature marked for deprecation. To create a deprecation issue or PR: label your issue or PR with `kind/deprecation` title the issue or PR with the feature you are deprecating describe the deprecation in the issue or PR body making sure to: explain why the feature is being deprecated discuss time-to-live for the feature and when deprecation will take place discuss any impacts to existing APIs A `kind/task` label categorizes an issue or PR as related to a \"routine\" maintenance task for the project, e.g. upgrading a software dependency or enabling a new CI job. To create a task issue or PR: label your issue or PR with `kind/task` describe your task in the issue or PR body, being sure to include the reason for the task and the possible impacts of the change A `kind/design` label categorizes issue or PR as related to design. A design issue or PR is for discussing larger architectural and design proposals. Approval of a design proposal may result in multiple additional feature, api-change, or cleanup issues being created to implement the design. To create a design issue: label your issue or PR with `kind/design` describe the design in the issue or PR body Before creating additional issues or PRs that implement the proposed design it is important to get feedback and approval from the maintainers. Design feedback could include some of the following: needs additional detail no, this problem should be solved in another way this is desirable but we need help completing other issues or PRs first; then we will consider this design A `kind/documentation` label categorizes issue or PR as related to a documentation. To create a documentation issue or PR: label your issue or PR with `kind/documentation` title the issue with a short description of what you are documenting provide a brief summary in the issue or PR body of what you are documenting. In some cases, it might be useful to include a checklist of changed documentation files to indicate your progress. A `kind/failing-test` label categorizes issue or PR as related to a consistently or frequently failing test. To create a failing test issue or PR: label your issue or PR with `kind/failing-test` TODO: As more automation is used in the continuous integration pipeline, we will be able to automatically generate an issue for failing tests. A `kind/support` label categorizes issue as related to a support request. 
To create a support issue or PR: label your issue or PR with `kind/support` title the issue or PR with a short description of your support request answer all of the questions in the support issue template to provide comprehensive information about your cluster that will be useful in identifying and resolving the issue, you may want to consider producing a and uploading it to a publicly-accessible location. Be aware that the generated support bundle includes a lot of information, including logs, so please ensure that you do not share anything"
},
{
"data": "Area labels begin with `area/<area>` and identify areas of interest or functionality to which an issue relates. An issue or PR could have multiple areas. These labels are used to sort issues and PRs into categories such as: operating systems cloud platform, functional area, operating or legal area (i.e., licensing), etc. A list of areas is maintained in . An area may be changed, added or deleted during issue or PR triage. Size labels begin with `size/<size>` and estimate the relative complexity or work required to resolve an issue or PR. TODO: For submitted PRs, the size can be automatically calculated and the appropriate label assigned. Size labels are specified according to lines of code; however, some issues may not relate to lines of code submission such as documentation. In those cases, use the labels to apply an equivalent complexity or size to the task at hand. Size labels include: `size/XS` -- denotes a extra small issue, or PR that changes 0-9 lines, ignoring generated files `size/S` -- denotes a small issue, or PR that changes 10-29 lines, ignoring generated files `size/M` -- denotes a medium issue, or PR that changes 30-99 lines, ignoring generated files `size/L` -- denotes a large issue, or PR that changes 100-499 lines, ignoring generated files `size/XL` -- denotes a very large issue, or PR that changes 500+ lines, ignoring generated files Size labels are defined in . As soon as new issues are submitted, they must be triaged until they are ready to work. The maintainers may apply the following labels during the issue triage process: `triage/duplicate` -- indicates an issue is a duplicate of other open issue `triage/needs-information` -- indicates an issue needs more information in order to work on it `triage/not-reproducible` -- indicates an issue can not be reproduced as described `triage/unresolved` -- indicates an issue that can not or will not be resolved Triage labels are defined in . To track the state of an issue, the following labels will be assigned. `lifecycle/active` -- indicates that an issue or PR is actively being worked on by a contributor `lifecycle/frozen` -- indicates that an issue or PR should not be auto-closed due to staleness `lifecycle/stale` -- denotes an issue or PR has remained open with no activity and has become stale The following schedule will be used to determine an issue's lifecycle: after 180 days of inactivity, an issue will be automatically marked as `lifecycle/stale` after an extra 180 days of inactivity, an issue will be automatically closed any issue marked as `lifecycle/frozen` will prevent automatic transitions to stale and prevent auto-closure commenting on an issue will remove the `lifecycle/stale` label Issue lifecycle management ensures that the project backlog remains fresh and relevant. Project maintainers and contributors will need to revisit issues to periodically assess their relevance and progress. TODO: Additional CI automation (GitHub actions) will be used to automatically apply and manage some of these lifecycle labels. Lifecycle labels are defined in . A priority label signifies the overall priority that should be given to an issue or PR. Priorities are considered during backlog grooming and help to determine the number of features included in a milestone. `priority/awaiting-more-evidence` -- lowest priority. Possibly useful, but not yet enough support to actually get it done. `priority/backlog` -- higher priority than priority/awaiting-more-evidence. `priority/critical-urgent` -- highest priority. 
Must be actively worked on as someone's top priority right now. `priority/important-longterm` -- important over the long term, but may not be staffed and/or may need multiple releases to complete. `priority/important-soon` -- must be staffed and worked on either currently, or very soon, ideally in time for the"
}
] |
{
"category": "Runtime",
"file_name": "issue-management.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "!!! note Deprecated in Rook v1.11 due to lack of usage and maintainership. Use the MachineDisruptionBudget to ensure that OCP 4.x style fencing does not cause data unavailability and loss. Openshift uses `Machines` and `MachineSets` from the to dynamically provisions nodes. Fencing is a remediation method that reboots/deletes `Machine` CRDs to solve problems with automatically provisioned nodes. Once detects that a node is `NotReady` (or some other configured condition), it will remove the associated `Machine` which will cause the node to be deleted. The `MachineSet` controller will then replace the `Machine` via the machine-api. The exception is on baremetal platforms where fencing will reboot the underlying `BareMetalHost` object instead of deleting the `Machine`. Fencing does not use the eviction api. It is for `Machine`s and not `Pod`s. Hopefully not. On cloud platforms, the OSDs can be rescheduled on new nodes along with their backing PVs, and on baremetal where the local PVs are tied to a node, fencing will simply reboot the node instead of destroying it. We need to ensure that only one node can be fenced at a time and that Ceph is fully recovered (has PGs clean) before any fencing is initiated. The available pattern for limiting fencing is the MachineDisruptionBudget which allows us to specify maxUnavailable. However, this wont be sufficient to ensure that Ceph has recovered before fencing is initiated as MachineHealthCheck does not check anything other than the node state. Therefore, we will control how many nodes match the MDB by dynamically adding and removing labels as well as dynamically updating the MDB. By manipulating the MDB into a state where desiredHealthy > currentHealthy, we can disable fencing on the nodes the MDB points to. We will implement two controllers `machinedisruptionbudget-controller` and the `machine-controller` to be implemented through the controller pattern describere . Each controller watches a set of object kinds and reconciles one. The bottom line is that fencing is blocked if the PG state in not active+clean, but fencing continues on `Machine`s without the label which indicates that OSD resources are running"
},
{
"data": "This controller watches ceph PGs and CephClusters. We will ensure the reconciler is enqueued every 60s. It ensures that each CephCluster has a MDB created, and the MDB's value of maxUnvailable reflects the health of the Ceph Cluster's PGs. If all PGs are clean, maxUnavailable = 1. else, maxUnavailable = 0. We can share a ceph health cache with the other controller-runtime reconcilers that have to watch the PG \"cleanliness\". The MDB will target `Machine`s selected by a label maintained by the `machine-controller`. The label is `fencegroup.rook.io/<cluster-name>`. This controller watches OSDs and `Machine`s. It ensures that each `Machine` with OSDs from a `CephCluster` have the label `fencegroup.rook.io/<cluster-name>`, and those that do not have running OSDs do not have label. This will ensure that no `Machine` without running OSDs will be protected by the MDB. We assume that the controllers will be able to reconcile multiple times in < 5 minutes as we know that fencing will happen only after a configurable timeout. The default timeout is 5 minutes. This is important because the MDB must be reconciled based on an accurate ceph health state in that time. Node needs to be fenced, the OSDs on the node are down too Node has NotReady condition. Some Ceph PGs are not active+clean. machinedisruptionbudget-controller sets maxUnavailable to 0 on the MachineDisruptionBudget. MachineHealthCheck sees NotReady and attempts to fence after 5 minutes, but can't due to MDB machine-controller notices all OSDs on the affected node are down and removes the node from the MDB. MDB no longer covers the affected node, and MachineHealthCheck fences it. Node needs to be fenced, but the OSDs on the node are up Node has NotReady condition. Ceph PGs are all active+clean so maxUnavailable remains 1 on the MDB. MachineHealthCheck fences the Node. Ceph resources on the node go down. Some Ceph PGs are now not active+clean. machinedisruptionbudget-controller sets maxUnavailable to 0 on the MachineDisruptionBudget. If another labeled node needs to be fenced, it will only happen after the Ceph PGs become active+clean again when the OSDs are rescheduled and backfilled."
}
] |
{
"category": "Runtime",
"file_name": "ceph-openshift-fencing-mitigation.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document lists the organizations that use Kuasar on production or testing environment and projects that adopts Kuasar as their upstream. Please make a pull request on this document if any organization or project would like to be added or removed. | Adopter name | Adopter Type | Usage Scenario | |||-| | Menging Software | Service provider | FaaS platform based on WebAssembly | | Huawei Cloud Native | Service provider | Provide Kubernetes experience with kuasar | | iSulad | OSS project | Use Kuasar as low-level container runtime |"
}
] |
{
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "Kuasar",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Kata Containers creates a VM in which to run one or more containers. It does this by launching a to create the VM. The hypervisor needs two assets for this task: a Linux kernel and a small root filesystem image to boot the VM. The is passed to the hypervisor and used to boot the VM. The default kernel provided in Kata Containers is highly optimized for kernel boot time and minimal memory footprint, providing only those services required by a container workload. It is based on the latest Linux LTS (Long Term Support) . The hypervisor uses an image file which provides a minimal root filesystem used by the guest kernel to boot the VM and host the Kata Container. Kata Containers supports both initrd and rootfs based minimal guest images. The provide both an image and an initrd, both of which are created using the tool. Notes: Although initrd and rootfs based images are supported, not all support both types of image. The guest image is unrelated to the image used in a container workload. For example, if a user creates a container that runs a shell in a BusyBox image, they will run that shell in a BusyBox environment. However, the guest image running inside the VM that is used to host that BusyBox image could be running Clear Linux, Ubuntu, Fedora or any other distribution potentially. The `osbuilder` tool provides which can be built into either initrd or rootfs guest images. If you are using a [packaged version of Kata Containers](../../install), you can see image details by running the script as `root` and looking at the \"Image details\" section of the output. The default packaged rootfs image, sometimes referred to as the _mini O/S_, is a highly optimized container bootstrap system. If this image type is , when the user runs the : The will launch the configured . The hypervisor will boot the mini-OS image using the . The kernel will start the init daemon as PID 1 (`systemd`) inside the VM root environment. `systemd`, running inside the mini-OS context, will launch the in the root context of the VM. The agent will create a new container environment, setting its root filesystem to that requested by the user (Ubuntu in ). The agent will then execute the command (`sh(1)` in ) inside the new"
},
{
"data": "The table below summarises the default mini O/S showing the environments that are created, the services running in those environments (for all platforms) and the root filesystem used by each service: | Process | Environment | systemd service? | rootfs | User accessible | Notes | |-|-|-|-|-|-| | systemd | VM root | n/a | | | The init daemon, running as PID 1 | | | VM root | yes | | | Runs as a systemd service | | `chronyd` | VM root | yes | | | Used to synchronise the time with the host | | container workload (`sh(1)` in ) | VM container | no | User specified (Ubuntu in ) | | Managed by the agent | See also the . Notes: The \"User accessible\" column shows how an administrator can access the environment. The container workload is running inside a full container environment which itself is running within a VM environment. See the for details of the default distribution for platforms other than Intel x86_64. The initrd image is a compressed `cpio(1)` archive, created from a rootfs which is loaded into memory and used as part of the Linux startup process. During startup, the kernel unpacks it into a special instance of a `tmpfs` mount that becomes the initial root filesystem. If this image type is , when the user runs the : The will launch the configured . The hypervisor will boot the mini-OS image using the . The kernel will start the init daemon as PID 1 (the ) inside the VM root environment. The will create a new container environment, setting its root filesystem to that requested by the user (`ubuntu` in ). The agent will then execute the command (`sh(1)` in ) inside the new container. The table below summarises the default mini O/S showing the environments that are created, the processes running in those environments (for all platforms) and the root filesystem used by each service: | Process | Environment | rootfs | User accessible | Notes | |-|-|-|-|-| | | VM root | | | Runs as the init daemon (PID 1) | | container workload | VM container | User specified (Ubuntu in this example) | | Managed by the agent | Notes: The \"User accessible\" column shows how an administrator can access the environment. It is possible to use a standard init daemon such as systemd with an initrd image if this is desirable. See also the . | Image type | Default distro | Init daemon | Reason | Notes | |-|-|-|-|-| | | (for x86_64 systems)| systemd | Minimal and highly optimized | systemd offers flexibility | | | | Kata (as no systemd support) | Security hardened and tiny C library | See also: The tool This is used to build all default image types. The The `default-image-name` and `default-initrd-name` options specify the default distributions for each image type."
}
] |
{
"category": "Runtime",
"file_name": "guest-assets.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "(projects-work)= If you have more projects than just the `default` project, you must make sure to use or address the correct project when working with Incus. ```{note} If you have projects that are {ref}`confined to specific users <projects-confined>`, only users with full access to Incus can see all projects. Users without full access can only see information for the projects to which they have access. ``` To list all projects (that you have permission to see), enter the following command: incus project list By default, the output is presented as a list: ```{terminal} :input: incus project list :scroll: +-+--+-+--+--+-++++ | NAME | IMAGES | PROFILES | STORAGE VOLUMES | STORAGE BUCKETS | NETWORKS | NETWORK ZONES | DESCRIPTION | USED BY | +-+--+-+--+--+-++++ | default | YES | YES | YES | YES | YES | YES | Default Incus project | 19 | +-+--+-+--+--+-++++ | my-project (current) | YES | NO | NO | NO | YES | YES | | 0 | +-+--+-+--+--+-++++ ``` You can request a different output format by adding the `--format` flag. See for more information. By default, all commands that you issue in Incus affect the project that you are currently using. To see which project you are in, use the command. To switch to a different project, enter the following command: incus project switch <project_name> Instead of switching to a different project, you can target a specific project when running a command. Many Incus commands support the `--project` flag to run an action in a different project. ```{note} You can target only projects that you have permission for. ``` The following sections give some typical examples where you would typically target a project instead of switching to it. To list the instances in a specific project, add the `--project` flag to the command. For example: incus list --project my-project To move an instance from one project to another, enter the following command: incus move <instancename> <newinstancename> --project <sourceproject> --target-project <target_project> You can keep the same instance name if no instance with that name exists in the target project. For example, to move the instance `my-instance` from the `default` project to `my-project` and keep the instance name, enter the following command: incus move my-instance my-instance --project default --target-project my-project If you create a project with the default settings, profiles are isolated in the project ( is set to `true`). Therefore, the project does not have access to the default profile (which is part of the `default` project), and you will see an error similar to the following when trying to create an instance: ```{terminal} :input: incus launch images:ubuntu/22.04 my-instance Creating my-instance Error: Failed instance creation: Failed creating instance record: Failed initializing instance: Failed getting root disk: No root device could be found ``` To fix this, you can copy the contents of the `default` project's default profile into the current project's default profile. To do so, enter the following command: incus profile show default --project default | incus profile edit default"
}
] |
{
"category": "Runtime",
"file_name": "projects_work.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
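The project commands from the excerpt above can be combined into one short session. This is a sketch: `my-project`, `my-instance`, and the temporary file-free profile copy are taken from the excerpt, while `incus project create` is an assumed (standard) way to create the project in the first place.

```bash
# Create a project and switch to it.
incus project create my-project
incus project switch my-project

# A new project has isolated profiles, so seed its default profile from the
# "default" project before launching anything (avoids the "No root device
# could be found" error shown above).
incus profile show default --project default | incus profile edit default

# Work against another project without switching.
incus list --project default

# Move an instance from the default project into this one, keeping its name.
incus move my-instance my-instance --project default --target-project my-project
```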
[
{
"data": "In this tutorial you'll deploy a simple application in Kubernetes. We'll start by invoking a trivial Kanister action, then incrementally use more of Kanister's features to manage the application's data. Kubernetes `1.16` or higher. For cluster version lower than `1.16`, we recommend installing Kanister version `0.62.0` or lower. installed and setup installed and initialized using the command [helm init]{.title-ref} docker A running Kanister controller. See `install`{.interpreted-text role=\"ref\"} Access to an S3 bucket and credentials. This tutorial begins by deploying a sample application. The application is contrived, but useful for demonstrating Kanister\\'s features. The application appends the current time to a log file every second. The application\\'s container includes the aws command-line client which we\\'ll use later in the tutorial. The application is installed in the `default` namespace. ``` yaml $ cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: time-logger spec: replicas: 1 template: metadata: labels: app: time-logger spec: containers: name: test-container image: containerlabs/aws-sdk command: [\"sh\", \"-c\"] args: [\"while true; do for x in $(seq 1200); do date >> /var/log/time.log; sleep 1; done; truncate /var/log/time.log --size 0; done\"] EOF ``` Kanister CustomResources are created in the same namespace as the Kanister controller. The first Kanister CustomResource we\\'re going to deploy is a Blueprint. Blueprints are a set of instructions that tell the controller how to perform actions on an application. An action consists of one or more phases. Each phase invokes a . All Kanister functions accept a list of strings. The `args` field in a Blueprint\\'s phase is rendered and passed into the specified Function. For more on CustomResources in Kanister, see . The Blueprint we\\'ll create has a single action called `backup`. The action `backup` has a single phase named `backupToS3`. `backupToS3` invokes the Kanister function `KubeExec`, which is similar to invoking `kubectl exec ...`. At this stage, we\\'ll use `KubeExec` to echo our time log\\'s name and to specify the container with our log. ``` yaml $ cat <<EOF | kubectl create -f - apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: time-log-bp namespace: kanister actions: backup: phases: func: KubeExec name: backupToS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c echo /var/log/time.log EOF ``` Once we create a Blueprint, we can see its events by using the following command: ``` yaml $ kubectl --namespace kanister describe Blueprint time-log-bp Events: Type Reason Age From Message - - - Normal Added 4m Kanister Controller Added blueprint time-log-bp ``` When a blueprint resource is created, it goes through a [validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook) controller that validates the resource. Refer to documentation for more details. The next CustomResource we\\'ll deploy is an ActionSet. An ActionSet is created each time you want to execute any Kanister actions. The ActionSet contains all the runtime information the controller needs during execution. It may contain multiple actions, each acting on a different Kubernetes object. The ActionSet we\\'re about to create in this tutorial specifies the `time-logger` Deployment we created earlier and selects the `backup` action inside our Blueprint. 
``` yaml $ cat <<EOF | kubectl create -f - apiVersion: cr.kanister.io/v1alpha1 kind: ActionSet metadata: generateName: s3backup- namespace: kanister spec: actions: name: backup blueprint: time-log-bp object: kind: Deployment name: time-logger namespace: default EOF ``` The controller watches its namespace for any ActionSets we create. Once it sees a new ActionSet, it will start executing each action. Since our example is pretty simple, it\\'s probably done by the time you finished reading"
},
{
"data": "Let\\'s look at the updated status of the ActionSet and tail the controller logs. ``` bash $ kubectl --namespace kanister get actionsets.cr.kanister.io -o yaml $ kubectl --namespace kanister get pod -l app=kanister-operator ``` ActionSet\\'s `Status.Progress.RunningPhase` field can be used to figure out the phase being run currently, for a particular action. Once the ActionSet has completed, this value is set to `\"\"`. During execution, Kanister controller emits events to the respective ActionSets. The execution transitions of an ActionSet can be seen by using the following command: ``` bash $ kubectl --namespace kanister describe actionset <ActionSet Name> Events: Type Reason Age From Message - - - Normal Started Action 23s Kanister Controller Executing action backup Normal Started Phase 23s Kanister Controller Executing phase backupToS3 Normal Update Complete 19s Kanister Controller Updated ActionSet 'ActionSet Name' Status->complete Normal Ended Phase 19s Kanister Controller Completed phase backupToS3 ``` In case of an action failure, the Kanister controller will emit failure events to both the ActionSet and its associated Blueprint. Congrats on running your first Kanister action! We were able to get data out of time-logger, but if we want to really protect time-logger\\'s precious log, you\\'ll need to back it up outside Kubernetes. We\\'ll choose where to store the log based on values in a ConfigMap. ConfigMaps are referenced in an ActionSet, which are fetched by the controller and made available to Blueprints through parameter templating. For more on templating in Kanister, see `templates`{.interpreted-text role=\"ref\"}. In this section of the tutorial, we\\'re going to use a ConfigMap to choose where to backup our time log. We\\'ll name our ConfigMap and consume it through argument templating in the Blueprint. We\\'ll map the name to a ConfigMap reference in the ActionSet. We create the ConfigMap with an S3 path where we\\'ll eventually push our time log. Please change the bucket path in the following ConfigMap to something you have access to. ``` yaml $ cat <<EOF | kubectl create -f - apiVersion: v1 kind: ConfigMap metadata: name: s3-location namespace: kanister data: path: s3://time-log-test-bucket/tutorial EOF ``` We modify the Blueprint to consume the path from the ConfigMap. We give it a name `location` in the `configMapNames` section. We can access the values in the map through Argument templating. For now we\\'ll just print the path name to stdout, but eventually we\\'ll backup the time log to that path. ``` yaml cat <<EOF | kubectl apply -f - apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: time-log-bp namespace: kanister actions: backup: configMapNames: location phases: func: KubeExec name: backupToS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c | echo /var/log/time.log echo \"{{ .ConfigMaps.location.Data.path }}\" EOF ``` We create a new ActionSet that maps the name in the Blueprint, `location`, to a reference to the ConfigMap we just created. ``` yaml $ cat <<EOF | kubectl create -f - apiVersion: cr.kanister.io/v1alpha1 kind: ActionSet metadata: generateName: s3backup- namespace: kanister spec: actions: name: backup blueprint: time-log-bp object: kind: Deployment name: time-logger namespace: default configMaps: location: name: s3-location namespace: kanister EOF ``` You can check the controller logs to see if your bucket path rendered successfully. 
In order for us to actually push the time log to S3, we\\'ll need to use AWS credentials. In Kubernetes, credentials are stored in secrets. Kanister supports Secrets in the same way it supports ConfigMaps. The secret is named and rendered in the Blueprint. The name to reference mapping is created in the"
},
{
"data": "In our example, we\\'ll need to use secrets to push the time log to S3. ::: tip WARNING Secrets may contain sensitive information. It is up to the author of each Blueprint to guarantee that secrets are not logged. ::: This step requires a bit of homework. You\\'ll need to create aws credentials that have read/write access to the bucket you specified in the ConfigMap. Base64 credentials and put them below. ``` bash echo -n \"YOUR_KEY\" | base64 ``` ``` yaml apiVersion: v1 kind: Secret metadata: name: aws-creds namespace: kanister type: Opaque data: awsaccesskey_id: XXXX awssecretaccess_key: XXXX ``` Give the secret the name `aws` in the Blueprint the secret in the `secretNames` section. We can then consume it through templates and assign it to bash variables. Because we now have access to the bucket in the ConfigMap, we can also push the log to S3. In this Secret, we store the credentials as binary data. We can use the templating engine `toString` and `quote` functions, courtesy of sprig. For more on this templating, see `templates`{.interpreted-text role=\"ref\"} ``` yaml cat <<EOF | kubectl apply -f - apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: time-log-bp namespace: kanister actions: backup: configMapNames: location secretNames: aws phases: func: KubeExec name: backupToS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c | AWSACCESSKEYID={{ .Secrets.aws.Data.awsaccesskeyid | toString }} \\ AWSSECRETACCESSKEY={{ .Secrets.aws.Data.awssecretaccesskey | toString }} \\ aws s3 cp /var/log/time.log {{ .ConfigMaps.location.Data.path | quote }} EOF ``` Create a new ActionSet that has the name-to-Secret reference in its action\\'s `secrets` field. ``` yaml cat <<EOF | kubectl create -f - apiVersion: cr.kanister.io/v1alpha1 kind: ActionSet metadata: generateName: s3backup- namespace: kanister spec: actions: name: backup blueprint: time-log-bp object: kind: Deployment name: time-logger namespace: default configMaps: location: name: s3-location namespace: kanister secrets: aws: name: aws-creds namespace: kanister EOF ``` At this point, we have successfully backed up our application\\'s data to S3. In order to retrieve the information we have pushed to S3, we must store a reference to that data. In Kanister we call these references Artifacts. Kanister\\'s Artifact mechanism manages data we have externalized. Once an artifact has been created, it can be consumed in a Blueprint to retrieve data from external sources. Any time Kanister is used to protect data, it creates a corresponding Artifact. An Artifact is a set of key-value pairs. It is up to the Blueprint author to ensure that the data referenced by Artifacts is valid. Artifacts passed into Blueprints are Input Artifacts and Artifacts created by Blueprints are output Artifacts. In our example, we\\'ll create an outputArtifact called `timeLog` that contains the full path of our data in S3. This path\\'s base will be configured using a ConfigMap. 
``` yaml cat <<EOF | kubectl apply -f - apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: time-log-bp namespace: kanister actions: backup: configMapNames: location secretNames: aws outputArtifacts: timeLog: keyValue: path: '{{ .ConfigMaps.location.Data.path }}/time-log/' phases: func: KubeExec name: backupToS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c | AWSACCESSKEYID={{ .Secrets.aws.Data.awsaccesskeyid | toString }} \\ AWSSECRETACCESSKEY={{ .Secrets.aws.Data.awssecretaccesskey | toString }} \\ aws s3 cp /var/log/time.log {{ .ConfigMaps.location.Data.path }}/time-log/ EOF ``` If you re-execute this Kanister Action, you\\'ll be able to see the Artifact in the ActionSet status. If you use a `DeferPhase`, below is how you can set the output artifact from the output that is being generated from `DeferPhase` as shown"
},
{
"data": "``` yaml cat <<EOF | kubectl apply -f - apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: time-log-bp namespace: kanister actions: backup: configMapNames: location secretNames: aws outputArtifacts: timeLog: keyValue: path: '{{ .ConfigMaps.location.Data.path }}/time-log/' deferPhaseArt: keyValue: time: \"{{ .DeferPhase.Output.bkpCompletedTime }}\" phases: func: KubeExec name: backupToS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c | echo \"Main Phase\" deferPhase: func: KubeExec name: saveBackupTime args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c | echo \"DeferPhase\" kando output bkpCompletedTime \"10Minutes\" EOF ``` Output from the previous phases can also be used in the `DeferPhase` like it is used in normal scenarios. Kanister can consume artifacts it creates using `inputArtifacts`. `inputArtifacts` are named in Blueprints and are explicitly listed in the ActionSet. In our example we\\'ll restore an older time log. We have already pushed one to S3 and created an Artifact using the backup action. We\\'ll now restore that time log by using a new restore action. We create a new ActionSet on our `time-logger` deployment with the action name `restore`. This time we also include the full path in S3 as an Artifact. ``` yaml cat <<EOF | kubectl create -f - apiVersion: cr.kanister.io/v1alpha1 kind: ActionSet metadata: generateName: s3restore namespace: kanister spec: actions: name: restore blueprint: time-log-bp object: kind: Deployment name: time-logger namespace: default secrets: aws: name: aws-creds namespace: kanister artifacts: timeLog: keyValue: path: s3://time-log-test-bucket/tutorial/time-log/time.log EOF ``` We add a restore action to the Blueprint. This action does not need the ConfigMap because the `inputArtifact` contains the fully specified path. ``` yaml cat <<EOF | kubectl apply -f - apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: time-log-bp namespace: kanister actions: backup: configMapNames: location secretNames: aws outputArtifacts: timeLog: keyValue: path: '{{ .ConfigMaps.location.Data.path }}/time-log/' phases: func: KubeExec name: backupToS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c | AWSACCESSKEYID={{ .Secrets.aws.Data.awsaccesskeyid | toString }} \\ AWSSECRETACCESSKEY={{ .Secrets.aws.Data.awssecretaccesskey | toString }} \\ aws s3 cp /var/log/time.log {{ .ConfigMaps.location.Data.path }}/time-log/ restore: secretNames: aws inputArtifactNames: timeLog phases: func: KubeExec name: restoreFromS3 args: namespace: \"{{ .Deployment.Namespace }}\" pod: \"{{ index .Deployment.Pods 0 }}\" container: test-container command: sh -c | AWSACCESSKEYID={{ .Secrets.aws.Data.awsaccesskeyid | toString }} \\ AWSSECRETACCESSKEY={{ .Secrets.aws.Data.awssecretaccesskey | toString }} \\ aws s3 cp {{ .ArtifactsIn.timeLog.KeyValue.path | quote }} /var/log/time.log EOF ``` We can check the controller logs to see that the time log was restored successfully. It is often useful to include the current time as parameters to an action. Kanister provides the job\\'s start time in UTC. 
We can modify the Blueprint\\'s output artifact to include the day the backup was taken: ``` yaml outputArtifacts: timeLog: path: '{{ .ConfigMaps.location.Data.path }}/time-log/{{ toDate \"2006-01-02T15:04:05.999999999Z07:00\" .Time | date \"2006-01-02\" }}' ``` For more on using the time template parameter, see . So far in this tutorial, we have shown you how to manually create action sets via YAML files. In some cases, an action depends on a previous action, and manually updating the action set to use artifacts created by the previous action set can be cumbersome. In situations like this, it is useful to instead use `kanctl`. To learn how to leverage `kanctl` to create action sets, see . Congratulations! You have reached the end of this long tutorial! Don\\'t stop here. There are many more example blueprints on the Kanister GitHub to explore. Use them to help you define your next blueprint. We would love to hear from you. If you have any feedback or questions, find us on Slack at ."
}
] |
{
"category": "Runtime",
"file_name": "tutorial.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
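While an action runs, the ActionSet state and the `Status.Progress.RunningPhase` field mentioned in the tutorial can be polled from the command line. A rough sketch; the JSONPath field names are assumed to mirror the status fields described above, and the ActionSet name is whatever `generateName` produced.

```bash
# List ActionSets and pick the most recently created one.
kubectl --namespace kanister get actionsets.cr.kanister.io
AS=$(kubectl --namespace kanister get actionsets.cr.kanister.io \
      --sort-by=.metadata.creationTimestamp -o name | tail -n 1)

# Show the overall state and the currently running phase.
kubectl --namespace kanister get "$AS" \
  -o jsonpath='{.status.state}{"  "}{.status.progress.runningPhase}{"\n"}'

# Follow the controller logs while the phases execute.
kubectl --namespace kanister logs -l app=kanister-operator --tail=50 -f
```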
[
{
"data": "``` bash $ git clone https://github.com/cubefs/cubefs.git ``` CubeFS supports deploying a basic cluster with a single script. The basic cluster includes components such as `Master`, `MetaNode`, and `DataNode`, with the option to additionally start a `client` and `ObjectNode`. The steps are as follows: ```bash cd ./cubefs make sh ./shell/deploy.sh /home/data bond0 sh ./shell/deploy_client.sh /home/data sh ./shell/deploy_object.sh /home/data ``` `bond0`: The name of the local network card, fill in according to the actual situation `/home/data`: A local directory used to store cluster running logs, data, and configuration files. The directory should be the same for all three `sh` commands. Machine requirements Need root permission Able to use `ifconfig` Memory of 4G or more Remaining disk space corresponding to `/home/data` is more than 20G Check the cluster status ```bash ./build/bin/cfs-cli cluster info [Cluster] Cluster name : cfs_dev Master leader : 172.16.1.101:17010 Master-1 : 172.16.1.101:17010 Master-2 : 172.16.1.102:17010 Master-3 : 172.16.1.103:17010 Auto allocate : Enabled MetaNode count (active/total) : 4/4 MetaNode used : 0 GB MetaNode available : 21 GB MetaNode total : 21 GB DataNode count (active/total) : 4/4 DataNode used : 44 GB DataNode available : 191 GB DataNode total : 235 GB Volume count : 2 ... ``` For using the file system, refer to the . For using object storage, refer to the . ::: tip Note Optional section. If you need to use the erasure-coded volume, you need to deploy it. ::: ``` bash $> cd cubefs/blobstore $> ./run.sh --consul ... start blobstore service successfully, wait minutes for internal state preparation $> ``` After the erasure code subsystem is deployed successfully, modify the `ebsAddr` configuration item in the Master configuration file () to the Consul address registered by the Access node, which is `http://localhost:8500` by default. The script will stop the server and the mount point ```bash sh ./shell/stop.sh ``` In the docker directory, the run_docker.sh tool is used to facilitate running the CubeFS docker-compose trial cluster, including the `Master`, `MetaNode`, `DataNode`, and `ObjectNode`"
},
{
"data": "::: tip Note Please make sure that docker and docker-compose are installed, and before executing the docker deployment, make sure that the firewall is turned off to avoid permission issues causing container startup failures. ::: Execute the following command to create a minimal CubeFS cluster. ::: warning Note `/data/disk` is the data root directory and requires at least 10GB of free space. ::: ```bash $ docker/run_docker.sh -r -d /data/disk ``` After the client is started, use the `mount` command in the client docker container to check the directory mounting status: ```bash $ mount | grep cubefs ``` Open `http://127.0.0.1:3000` in a browser, log in with `admin/123456`, and you can view the Grafana monitoring indicator interface of CubeFS. Or use the following command to run step by step: ```bash $ docker/run_docker.sh -b $ docker/run_docker.sh -s -d /data/disk $ docker/run_docker.sh -c $ docker/run_docker.sh -m ``` For more commands, please refer to the help: ```bash $ docker/run_docker.sh -h ``` The Prometheus and Grafana related configurations for monitoring are located in the `docker/monitor` directory. ::: warning Note The erasure code docker deployment method has not been unified with other modules (such as Master) for the time being. This section is currently only used to experience the function of the erasure code subsystem itself, and will be improved later. ::: Support the following docker image deployment methods: Remote pull build [`recommended`] ``` bash $> docker pull cubefs/cubefs:blobstore-v3.3.0 # Pull the image $> docker run cubefs/cubefs:blobstore-v3.3.0 # Run the image $> docker container ls # View running containers CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 76100321156b blobstore:v3.3.0 \"/bin/sh -c /apps/...\" 4 minutes ago Up 4 minutes thirsty_kare $> docker exec -it thirsty_kare /bin/bash # Enter the container ``` Local script compilation and build ``` bash $> cd blobstore $> ./run_docker.sh -b # Compile and build &> Successfully built 0b29fda1cd22 Successfully tagged blobstore:v3.3.0 $> ./run_docker.sh -r # Run the image $> ... # The subsequent steps are the same as those for remote pull build ```"
}
] |
{
"category": "Runtime",
"file_name": "node.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
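After `deploy_client.sh` (or the docker client container) reports a mount, a quick smoke test confirms that data can actually be written through the file system. This is a sketch; the mount point is discovered from the `mount` output shown above and may differ in your setup.

```bash
# Find the CubeFS mount point reported by `mount | grep cubefs`.
MNT=$(mount | awk '/cubefs/ {print $3; exit}')
echo "CubeFS mounted at: ${MNT}"

# Write and read back a small file through the mount.
echo "hello cubefs" > "${MNT}/smoke-test.txt"
cat "${MNT}/smoke-test.txt"

# Re-check overall cluster health with the CLI shown above.
./build/bin/cfs-cli cluster info
```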
[
{
"data": "(devices-unix-char)= ```{note} The `unix-char` device type is supported for containers. It supports hotplugging. ``` Unix character devices make the specified character device appear as a device in the instance (under `/dev`). You can read from the device and write to it. `unix-char` devices have the following device options: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group devices-unix-char-block start --> :end-before: <!-- config group devices-unix-char-block end --> ``` (devices-unix-char-hotplugging)= % Include content from ```{include} devicesunixblock.md :start-after: Hotplugging ```"
}
] |
{
"category": "Runtime",
"file_name": "devices_unix_char.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
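A typical use of this device type is passing a host character device, such as a serial port, into a container. A sketch using the stock CLI; the instance name, device name, and `/dev/ttyS0` are examples, and `source`/`path` are the usual options for `unix-char` devices.

```bash
# Make the host's /dev/ttyS0 appear as /dev/ttyS0 inside the container.
lxc config device add my-container serial0 unix-char source=/dev/ttyS0 path=/dev/ttyS0

# Since unix-char supports hotplugging, the device can be removed again
# while the container keeps running.
lxc config device remove my-container serial0
```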
[
{
"data": "This is a collection of project ideas for . These projects are intended to be relatively self-contained and should be good starting projects for new contributors to gVisor. We expect individual contributors to be able to make reasonable progress on these projects over the course of several weeks. Familiarity with Golang and knowledge about systems programming in Linux will be helpful. If you're interested in contributing to gVisor through Google Summer of Code 2021, but would like to propose your own idea for a project, please see our for areas of development, and get in touch through our or ! Estimated complexity: easy This project involves implementing the syscall. gVisor currently supports manipulation of namespaces through the `clone` and `unshare` syscalls. These two syscalls essentially implement the requisite logic for `setns`, but there is currently no way to obtain a file descriptor referring to a namespace in gVisor. As described in the `setns` man page, the two typical ways of obtaining such a file descriptor in Linux are by opening a file in `/proc/[pid]/ns`, or through the `pidfd_open` syscall. For gVisor, we recommend implementing the `/proc/[pid]/ns` mechanism first, which would involve implementing a trivial namespace file type in procfs. Estimated complexity: medium Implement in gVisor, which is a filesystem event notification mechanism. gVisor currently supports `inotify`, which is a similar mechanism with slightly different capabilities, but which should serve as a good reference. The `fanotify` interface adds two new syscalls: `fanotify_init` creates a new notification group, which is a collection of filesystem objects watched by the kernel. The group is represented by a file descriptor returned by this syscall. Events on the watched objects can be retrieved by reading from this file descriptor. `fanotify_mark` adds a filesystem object to a watch group, or modifies the parameters of an existing watch. Unlike `inotify`, `fanotify` can set watches on filesystems and mount points, which will require some additional data tracking on the corresponding filesystem objects within the sentry. A well-designed implementation should reuse the notifications from `inotify` for files and directories (this is also how Linux implements these mechanisms), and should implement the necessary tracking and notifications for filesystems and mount points. Estimated complexity: hard `io_uring` is the latest asynchronous I/O API in Linux. This project will involve implementing the system interfaces required to support `io_uring` in gVisor. A successful implementation should have similar relatively performance and scalability characteristics compared to synchronous I/O syscalls, as in Linux. The core of the `io_uring` interface is deceptively simple, involving only three new syscalls: `iouringsetup(2)` creates a new `io_uring` instance represented by a file descriptor, including a set of request submission and completion queues backed by shared memory ring buffers. `iouringregister(2)` optionally binds kernel resources such as files and memory buffers to handles, which can then be passed to `io_uring`"
},
{
"data": "Pre-registering resources in this way moves the cost of looking up and validating these resources to registration time rather than paying the cost during the operation. `iouringenter(2)` is the syscall used to submit queued operations and wait for completions. This is the most complex part of the mechanism, requiring the kernel to process queued request from the submission queue, dispatching the appropriate I/O operation based on the request arguments and blocking for the requested number of operations to be completed before returning. An `io_uring` request is effectively an opcode specifying the I/O operation to perform, and corresponding arguments. The opcodes and arguments closely relate to the corresponding synchronous I/O syscall. In addition, there are some `io_uring`-specific arguments that specify things like how to process requests, how to interpret the arguments and communicate the status of the ring buffers. For a detailed description of the `io_uring` interface, see the by the `io_uring` authors. Due to the complexity of the full `io_uring` mechanism and the numerous supported operations, it should be implemented in two stages: In the first stage, a simplified version of the `iouringsetup` and `iouringenter` syscalls should be implemented, which will only support a minimal set of arguments and just one or two simple opcodes. This simplified implementation can be used to figure out how to integrate `io_uring` with gVisor's virtual filesystem and memory management subsystems, as well as benchmark the implementation to ensure it has the desired performance characteristics. The goal in this stage should be to implement the smallest subset of features required to perform a basic operation through `io_uring`s. In the second stage, support can be added for all the I/O operations supported by Linux, as well as advanced `io_uring` features such as fixed files and buffers (via `iouringregister`), polled I/O and kernel-side request polling. A single contributor can expect to make reasonable progress on the first stage within the scope of Google Summer of Code. The second stage, while not necessarily difficult, is likely to be very time consuming. However it also lends itself well to parallel development by multiple contributors. Estimated complexity: hard Linux provides two alternate message queues: and . gVisor currently doesn't implement either. Both mechanisms add multiple syscalls for managing and using the message queues, see the relevant man pages above for their full description. The core of both mechanisms are very similar, it may be possible to back both mechanisms with a common implementation in gVisor. Linux however has two distinct implementations. An individual contributor can reasonably implement a minimal version of one of these two mechanisms within the scope of Google Summer of Code. The System V queue may be slightly easier to implement, as gVisor already implements System V semaphores and shared memory regions, so the code for managing IPC objects and the registry already exist."
}
] |
{
"category": "Runtime",
"file_name": "gsoc-2021-ideas.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
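For the `setns` project described above, the interface being targeted is easy to see from a normal Linux shell: the file descriptors that `setns(2)` consumes are obtained by opening entries under `/proc/[pid]/ns`. A small illustration on plain Linux (inside gVisor, these procfs entries are exactly what the project would add):

```bash
# Each entry is a "magic" symlink naming a namespace type and inode.
ls -l /proc/$$/ns

# Two processes are in the same namespace iff the link targets match.
readlink /proc/$$/ns/net /proc/1/ns/net
```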
[
{
"data": "This document outlines sample topics for a training course on operating Manta. Docs: Manta Overview: https://github.com/TritonDataCenter/manta Manta Ops Guide: https://github.com/TritonDataCenter/manta/blob/master/docs/manta-ops.md Manatee Users Guide: https://github.com/TritonDataCenter/manatee/blob/master/docs/user-guide.md Manatee Troubleshooting Guide: https://github.com/TritonDataCenter/manatee/blob/master/docs/trouble-shooting.md Architecture Review Using Manta Get new users set up for Manta `m tools`: `mmkdir`, `mrmdir`, `mls`, `mfind`, `mput`, `mget`, `msign`, `mrm`, Basic Map/reduce patterns Discovery/Moving around `sdc-cnapi` `sdc-vmapi` `sdc-sapi` `sdc-login` `manta-login` - type, zonename, etc. `manta-adm` - show, cn Deployments `manta-adm` - `-s -j`, edit, `update -n`, `update` `manta-deploy` `manta-undeploy` `sapiadm reprovision ...` Upgrades Operations Typical Zone setup setup, `mdata:execute`, sapi_manifests, config-agent svcs, svcadm, svccfg Dashboards Via ssh tunnels like: `ssh -o TCPKeepAlive=yes -N -n root@[Headnode] -L 5555:[MadtomAdminIp]:80` Madtom: hint: sometimes it lies Marlin Dashboard Alarms mantamon Known issues Zookeeper Leader goes down Postgres needs vacuum/analyze Powering down Manta Command Line Tools/Where stuff is General `json` `bunyan` Zookeeper/binder `zkCli.sh` `dig @localhost` Postgres (Manatee) `manatee-adm` `psql moray` Moray/Electric Moray `getobject` `getbucket` Storage `/manta/[owner]/[object_id]` `/manta/tombstone/[date]/[object_id]`"
}
] |
{
"category": "Runtime",
"file_name": "sample-training.md",
"project_name": "Triton Object Storage",
"subcategory": "Cloud Native Storage"
}
|
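For the `manta-adm` deployment topics listed above, the usual edit/apply loop looks roughly like the following. This is a sketch reconstructed from the flags named in the outline (`show -s -j`, `update -n`, `update`); the temporary file path is arbitrary.

```bash
# Dump the current service layout, edit it, dry-run, then apply.
manta-adm show -s -j > /var/tmp/manta-config.json
vi /var/tmp/manta-config.json
manta-adm update -n /var/tmp/manta-config.json   # dry run: review planned changes
manta-adm update /var/tmp/manta-config.json      # apply
```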
[
{
"data": "The basic idea of `virtio-mem` is to provide a flexible, cross-architecture memory hot plug and hot unplug solution that avoids many limitations imposed by existing technologies, architectures, and interfaces. More details can be found in https://lkml.org/lkml/2019/12/12/681. Kata Containers with `virtio-mem` supports memory resize. Kata Containers just supports `virtio-mem` with QEMU. Install and setup Kata Containers as shown . The `virtio-mem` config of the x86_64 Kata Linux kernel is open. Enable `virtio-mem` as follows: ``` $ sudo sed -i -e 's/^#enablevirtiomem.*$/enablevirtiomem = true/g' /etc/kata-containers/configuration.toml ``` The `virtio-mem` config of the others Kata Linux kernel is not open. You can open `virtio-mem` config as follows: ``` CONFIGVIRTIOMEM=y ``` Then you can build and install the guest kernel image as shown . Use following command to enable memory over-commitment of a Linux kernel. Because QEMU `virtio-mem` device need to allocate a lot of memory. ``` $ echo 1 | sudo tee /proc/sys/vm/overcommit_memory ``` Use following command to start a Kata Container. ``` $ pod_yaml=pod.yaml $ container_yaml=container.yaml $ image=\"quay.io/prometheus/busybox:latest\" $ cat << EOF > \"${pod_yaml}\" metadata: name: busybox-sandbox1 uid: $(uuidgen) namespace: default EOF $ cat << EOF > \"${container_yaml}\" metadata: name: busybox-killed-vmm image: image: \"$image\" command: top EOF $ sudo crictl pull $image $ podid=$(sudo crictl runp $pod_yaml) $ cid=$(sudo crictl create $podid $containeryaml $podyaml) $ sudo crictl start $cid ``` Use the following command to set the container memory limit to 2g and the memory size of the VM to its default_memory + 2g. ``` $ sudo crictl update --memory $((210241024*1024)) $cid ``` Use the following command to set the container memory limit to 1g and the memory size of the VM to its default_memory + 1g. ``` $ sudo crictl update --memory $((110241024*1024)) $cid ```"
}
] |
{
"category": "Runtime",
"file_name": "how-to-use-virtio-mem-with-kata.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This feature replaces the support bundle mechanism with the general purpose . Currently, the Longhorn support bundle file is hard to understand, and analyzing it is difficult. With the new support bundle, the user can simulate a mocked Kubernetes cluster that is interactable with `kubectl`. Hence makes the analyzing process more intuitive. https://github.com/longhorn/longhorn/issues/2759 Replace the Longhorn support-bundle generation mechanism with the support bundle manager. Keep the same support bundle HTTP API endpoints. Executing the` support-bundle-kit simulator` on the support bundle can start a mocked Kubernetes API server that is interactable using `kubectl`. Introduce a new `support-bundle-manager-image` setting for easy support-bundle manager image replacement. Introduce a new `support-bundle-failed-history-limit` setting to avoid unexpected increase of failed support bundles. `None` Introduce the new `SupportBundle` custom resource definition. Creating a new custom resource triggers the creation of a support bundle manager deployment. The support-bundle manager is responsible for support bundle collection and exposes it to `https://<ip>:8080/bundle`. Deleting the SupportBundle custom resource deletes its owning support bundle manager deployment. Introduce a new `longhorn-support-bundle` controller. Responsible for SupportBundle custom resource status updates and event recordings. - Responsible for cleaning up the support bundle manager deployment when the owner `SupportBundle` custom resource is tagged for deletion. There is no change to the HTTP API endpoints. This feature replaces the handler function logic. Introduce a new `longhorn-support-bundle` service account with `cluster-admin` access. The current `longhorn-service-account` service account cannot generate the following resources. 
``` Failed to get /api/v1/componentstatuses Failed to get /apis/authentication.k8s.io/v1/tokenreviews Failed to get /apis/authorization.k8s.io/v1/selfsubjectrulesreviews Failed to get /apis/authorization.k8s.io/v1/subjectaccessreviews Failed to get /apis/authorization.k8s.io/v1/selfsubjectaccessreviews Failed to get /apis/certificates.k8s.io/v1/certificatesigningrequests Failed to get /apis/networking.k8s.io/v1/ingressclasses Failed to get /apis/policy/v1beta1/podsecuritypolicies Failed to get /apis/rbac.authorization.k8s.io/v1/clusterroles Failed to get /apis/rbac.authorization.k8s.io/v1/clusterrolebindings Failed to get /apis/node.k8s.io/v1/runtimeclasses Failed to get /apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations Failed to get /apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas Failed to get /api/v1/namespaces/default/replicationcontrollers Failed to get /api/v1/namespaces/default/bindings Failed to get /api/v1/namespaces/default/serviceaccounts Failed to get /api/v1/namespaces/default/resourcequotas Failed to get /api/v1/namespaces/default/limitranges Failed to get /api/v1/namespaces/default/podtemplates Failed to get /apis/apps/v1/namespaces/default/replicasets Failed to get /apis/apps/v1/namespaces/default/controllerrevisions Failed to get /apis/events.k8s.io/v1/namespaces/default/events Failed to get /apis/authorization.k8s.io/v1/namespaces/default/localsubjectaccessreviews Failed to get /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers Failed to get /apis/networking.k8s.io/v1/namespaces/default/ingresses Failed to get /apis/networking.k8s.io/v1/namespaces/default/networkpolicies Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/default/roles Failed to get /apis/storage.k8s.io/v1beta1/namespaces/default/csistoragecapacities Failed to get /apis/discovery.k8s.io/v1/namespaces/default/endpointslices Failed to get /apis/helm.cattle.io/v1/namespaces/default/helmcharts Failed to get /apis/helm.cattle.io/v1/namespaces/default/helmchartconfigs Failed to get /apis/k3s.cattle.io/v1/namespaces/default/addons Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/ingressroutetcps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/ingressroutes Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/serverstransports Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/traefikservices Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/middlewaretcps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/middlewares Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/tlsstores Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/tlsoptions Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/default/ingressrouteudps Failed to get /api/v1/namespaces/kube-system/bindings Failed to get /api/v1/namespaces/kube-system/resourcequotas Failed to get /api/v1/namespaces/kube-system/serviceaccounts Failed to get /api/v1/namespaces/kube-system/podtemplates Failed to get /api/v1/namespaces/kube-system/limitranges Failed to get /api/v1/namespaces/kube-system/replicationcontrollers Failed to get /apis/apps/v1/namespaces/kube-system/controllerrevisions Failed to get /apis/apps/v1/namespaces/kube-system/replicasets Failed to get /apis/events.k8s.io/v1/namespaces/kube-system/events Failed to get 
/apis/authorization.k8s.io/v1/namespaces/kube-system/localsubjectaccessreviews Failed to get /apis/autoscaling/v1/namespaces/kube-system/horizontalpodautoscalers Failed to get /apis/networking.k8s.io/v1/namespaces/kube-system/networkpolicies Failed to get /apis/networking.k8s.io/v1/namespaces/kube-system/ingresses Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles Failed to get /apis/storage.k8s.io/v1beta1/namespaces/kube-system/csistoragecapacities Failed to get /apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices Failed to get /apis/helm.cattle.io/v1/namespaces/kube-system/helmchartconfigs Failed to get /apis/helm.cattle.io/v1/namespaces/kube-system/helmcharts Failed to get /apis/k3s.cattle.io/v1/namespaces/kube-system/addons Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/serverstransports Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/middlewaretcps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/middlewares Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/tlsstores Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/ingressrouteudps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/ingressroutes Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/ingressroutetcps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/kube-system/traefikservices Failed to get"
},
{
"data": "Failed to get /api/v1/namespaces/cattle-system/limitranges Failed to get /api/v1/namespaces/cattle-system/podtemplates Failed to get /api/v1/namespaces/cattle-system/resourcequotas Failed to get /api/v1/namespaces/cattle-system/serviceaccounts Failed to get /api/v1/namespaces/cattle-system/replicationcontrollers Failed to get /api/v1/namespaces/cattle-system/bindings Failed to get /apis/apps/v1/namespaces/cattle-system/replicasets Failed to get /apis/apps/v1/namespaces/cattle-system/controllerrevisions Failed to get /apis/events.k8s.io/v1/namespaces/cattle-system/events Failed to get /apis/authorization.k8s.io/v1/namespaces/cattle-system/localsubjectaccessreviews Failed to get /apis/autoscaling/v1/namespaces/cattle-system/horizontalpodautoscalers Failed to get /apis/networking.k8s.io/v1/namespaces/cattle-system/networkpolicies Failed to get /apis/networking.k8s.io/v1/namespaces/cattle-system/ingresses Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/cattle-system/roles Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/cattle-system/rolebindings Failed to get /apis/storage.k8s.io/v1beta1/namespaces/cattle-system/csistoragecapacities Failed to get /apis/discovery.k8s.io/v1/namespaces/cattle-system/endpointslices Failed to get /apis/helm.cattle.io/v1/namespaces/cattle-system/helmchartconfigs Failed to get /apis/helm.cattle.io/v1/namespaces/cattle-system/helmcharts Failed to get /apis/k3s.cattle.io/v1/namespaces/cattle-system/addons Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/tlsoptions Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/traefikservices Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/middlewares Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/ingressroutetcps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/serverstransports Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/ingressroutes Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/middlewaretcps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/tlsstores Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/cattle-system/ingressrouteudps Failed to get /api/v1/namespaces/longhorn-system/limitranges Failed to get /api/v1/namespaces/longhorn-system/podtemplates Failed to get /api/v1/namespaces/longhorn-system/resourcequotas Failed to get /api/v1/namespaces/longhorn-system/replicationcontrollers Failed to get /api/v1/namespaces/longhorn-system/serviceaccounts Failed to get /api/v1/namespaces/longhorn-system/bindings Failed to get /apis/apps/v1/namespaces/longhorn-system/replicasets Failed to get /apis/apps/v1/namespaces/longhorn-system/controllerrevisions Failed to get /apis/events.k8s.io/v1/namespaces/longhorn-system/events Failed to get /apis/authorization.k8s.io/v1/namespaces/longhorn-system/localsubjectaccessreviews Failed to get /apis/autoscaling/v1/namespaces/longhorn-system/horizontalpodautoscalers Failed to get /apis/networking.k8s.io/v1/namespaces/longhorn-system/ingresses Failed to get /apis/networking.k8s.io/v1/namespaces/longhorn-system/networkpolicies Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/longhorn-system/rolebindings Failed to get /apis/rbac.authorization.k8s.io/v1/namespaces/longhorn-system/roles Failed to get /apis/storage.k8s.io/v1beta1/namespaces/longhorn-system/csistoragecapacities Failed to get 
/apis/discovery.k8s.io/v1/namespaces/longhorn-system/endpointslices Failed to get /apis/helm.cattle.io/v1/namespaces/longhorn-system/helmchartconfigs Failed to get /apis/helm.cattle.io/v1/namespaces/longhorn-system/helmcharts Failed to get /apis/k3s.cattle.io/v1/namespaces/longhorn-system/addons Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/serverstransports Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/ingressroutes Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/tlsstores Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/traefikservices Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/tlsoptions Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/middlewares Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/ingressroutetcps Failed to get /apis/traefik.containo.us/v1alpha1/namespaces/longhorn-system/middlewaretcps ``` This feature does not alter how the user generates the support bundle on UI. The user can simulate a mocked cluster with the support bundle and interact using `kubectl`. User clicks `Generate Support BundleFile` in Longhorn UI. Longhorn creates a `SupportBundle` custom resource. Longhorn creates a support bundle manager deployment. User downloads the support bundle as same as before. Longhorn deletes the `SupportBundle` custom resource. Longhorn deletes the support bundle manager deployment. User clicks `Generate Support BundleFile` in Longhorn UI. Longhorn creates a SupportBundle custom resource. Longhorn creates a support bundle manager deployment. The SupportBundle goes into an error state. User sees an error on UI. Longhorn retains the failed SupportBundle and its support-bundle manager deployment. User analyzes the failed SupportBundle on the cluster. Or generate a new support bundle so the failed SupportBundle can be analyzed off-site. User deletes the failed SupportBundle when done with the analysis. Or have Longhorn automatically purge all failed SupportBundles by setting to 0. Longhorn deletes the SupportBundle custom resource. Longhorn deletes the support bundle manager deployment. There will be no change to the HTTP API endpoints. This feature replaces the handler function logic. | Method | Path | Description | | -- | - | -- | | POST | `/v1/supportbundles` | Creates SupportBundle custom resource | | GET | `/v1/supportbundles/{name}/{bundleName}` | Get the support bundle details from the SuppotBundle custom resource | | GET | `/v1/supportbundles/{name}/{bundleName}/download` | Get the support bundle file from `https://<support-bundle-manager-ip>:8080/bundle` | Collecting the support bundle requires complete cluster access. Hence Longhorn will have a service account dedicated at deployment. ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: longhorn-support-bundle namespace: longhorn-system apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: longhorn-support-bundle roleRef: apiGroup:"
},
{
"data": "kind: ClusterRole name: cluster-admin subjects: kind: ServiceAccount name: longhorn-support-bundle namespace: longhorn-system ``` ```go type SupportBundle struct { client.Resource NodeID string `json:\"nodeID\"` State longhorn.SupportBundleState `json:\"state\"` Name string `json:\"name\"` ErrorMessage string `json:\"errorMessage\"` ProgressPercentage int `json:\"progressPercentage\"` } ``` Creates a new `SupportBundle` custom resource. Gets the `SupportBundle` custom resource and returns . Get the support bundle from . Copy the support bundle to the response writer. Delete the `SupportBundle` custom resource. ```yaml apiVersion: v1 items: apiVersion: longhorn.io/v1beta2 kind: SupportBundle metadata: creationTimestamp: \"2022-11-10T02:35:45Z\" generation: 1 name: support-bundle-2022-11-10t02-35-45z namespace: longhorn-system resourceVersion: \"97016\" uid: a5169448-a6e5-4637-b99a-63b9a9ea0b7f spec: description: \"123\" issueURL: \"\" nodeID: \"\" status: conditions: lastProbeTime: \"\" lastTransitionTime: \"2022-11-10T03:35:29Z\" message: done reason: Create status: \"True\" type: Manager filename: supportbundle08ccc085-641c-4592-bb57-e054562412042022-11-10T02-36-13Z.zip filesize: 502608 image: rancher/support-bundle-kit:master-head managerIP: 10.42.2.54 ownerID: ip-10-0-1-113 progress: 100 state: ReadyForDownload kind: List metadata:master-head resourceVersion: \"\" ``` The support bundle manager image for the support bundle generation. ``` Category = general Type = string Default = rancher/support-bundle-kit:master-head ``` This setting specifies how many failed support bundles can exist in the cluster. The retained failed support bundle is for analysis purposes and needs to clean up manually. Set this value to 0 to have Longhorn automatically purge all failed support bundles. ``` Category = general Type = integer Default = 1 ``` Block creation if the number of failed SupportBundle exceeds the . Block creation if there is another SupportBundle is in progress. However, skip checking the SupportBundle that is in an error state. We will leave the user to decide what to do with the failed SupportBundles. Add finalizer. This controller handles the support bundle in phases depending on its custom resource state. At the end of each phase will update the SupportBundle custom resource state and then returns the queue. The controller picks up the update and enqueues again for the next phase. When there is no state update, the controller automatically queues the handling custom resource until the state reaches `ReadyForDownload` or `Error`. State: None(\"\") Update the custom resource image with the setting value. Update the custom resource state to `Started`. State: Started Update the state to `Generating` when the support bundle manager deployment exists. Create support bundle manager deployment and requeue this phase to check support bundle manager deployment. State: Generating Update the status base on the support manager : IP file name progress filesize Update the custom resource state to `ReadyForDownload` when progress reached 100. Update the state to `Error` and record the error type condition when the phase encounters unexpected failure. When the is 0, update the state to `Purging`. Purging Delete all failed SupportBundles in the state `Error`. When the SupportBundle gets marked with `DeletionTimestamp`, the controller updated its state to `Deleting`. Deleting Delete its support bundle manager deployment. Remove the SupportBundle finalizer. 
If the is 0, update all failed SupportBundle state to `Purging`. Test support bundle generation should be successful. Test support bundle should be cleaned up after download. Test support bundle should retain when generation failed. Test support bundle should generate when the cluster has an existing `SupportBundle` in an error state. Test support bundle should purge when `support bundle failed history limit` is set to 0. Test support bundle cluster simulation. `None` `None`"
}
] |
{
"category": "Runtime",
"file_name": "20221109-support-bundle-enhancement.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Copyright (c) 2016-2017, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the University of California, Lawrence Berkeley National Laboratory, U.S. Dept. of Energy nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit other to do so."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE-LBNL.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
}
|
[
{
"data": "| Maintainer | GitHub ID | Affiliation | ||--|-| | Zhang Zhenhua | | Bocloud | | Antmoveh | | Bocloud | | Fan Hao | | Shopee | | Maintainer | GitHub ID | Affiliation | | -- | - | -- |"
}
] |
{
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Learn how to create snapshots of your data, and how you can restore the data in a snapshot. Snapshots create a copy of the volume content at a particular point in time. This copy remains untouched when you make modifications to the volume content. This, for example, enables you to create backups of your data before performing modifications or deletion on your data. Since a backup is useless, unless you have a way to restore it, this tutorial will teach you both how to create a snapshot, and how to restore in the case of accidental deletion of your data. An installed and configured Piraeus Datastore. Learn how to get started in our A storage pool supporting snapshots. LINSTOR supports snapshots for `LVMTHIN`, `FILETHIN`, `ZFS` and `ZFS_THIN` pools. If you followed , you are using the supported `FILE_THIN` pool. A cluster with deployed. To check if it is already deployed, try running: ``` $ kubectl api-resources --api-group=snapshot.storage.k8s.io -oname volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io volumesnapshots.snapshot.storage.k8s.io ``` If your output looks like above, you are good to go. If your output is empty, you should deploy a snapshot controller. You can quickly deploy it by using: ``` kubectl apply -k https://github.com/kubernetes-csi/external-snapshotter//client/config/crd kubectl apply -k https://github.com/kubernetes-csi/external-snapshotter//deploy/kubernetes/snapshot-controller ``` We will be using the same workload as in the . This workload will save the Pods name, the node it is running on, and a timestamp to our volume. By logging this information to our volume, we can easily keep track of our data. First, we create our `StorageClass`, `PersistentVolumeClaim` and `Deployment`: ``` $ kubectl apply -f - <<EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: piraeus-storage provisioner: linstor.csi.linbit.com allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer parameters: linstor.csi.linbit.com/storagePool: pool1 apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-volume spec: storageClassName: piraeus-storage resources: requests: storage: 1Gi accessModes: ReadWriteOnce apiVersion: apps/v1 kind: Deployment metadata: name: volume-logger spec: selector: matchLabels: app.kubernetes.io/name: volume-logger strategy: type: Recreate template: metadata: labels: app.kubernetes.io/name: volume-logger spec: terminationGracePeriodSeconds: 0 containers: name: volume-logger image: busybox args: sh -c | echo \"Hello from \\$HOSTNAME, running on \\$NODENAME, started at \\$(date)\" >> /volume/hello tail -f /dev/null env: name: NODENAME valueFrom: fieldRef: fieldPath: spec.nodeName volumeMounts: mountPath: /volume name: data-volume volumes: name: data-volume persistentVolumeClaim: claimName: data-volume EOF ``` Then, we wait for the Pod to start, and verify that the expected information was logged to our volume: ``` $ kubectl wait pod --for=condition=Ready -l app.kubernetes.io/name=volume-logger pod/volume-logger-cbcd897b7-jrmks condition met $ kubectl exec deploy/volume-logger -- cat /volume/hello Hello from volume-logger-cbcd897b7-jrmks, running on n3.example.com, started at Mon Feb 13 15:32:46 UTC 2023 ``` Creating a snapshot requires the creation of a first. The `SnapshotClass` specifies our `linstor.csi.linbit.com` provisioner, and sets the clean-up policy for our snapshots to `Delete`, meaning deleting the Kubernetes resources will also delete the snapshots in LINSTOR. 
``` $ kubectl apply -f - <<EOF apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: piraeus-snapshots driver: linstor.csi.linbit.com deletionPolicy: Delete EOF ``` Next, we will request the creation of a snapshot using a"
},
{
"data": "The `VolumeSnapshot` resource references the `PersistentVolumeClaim` resource we created initially, as well as our newly created `VolumeSnapshotClass`. ``` $ kubectl apply -f - <<EOF apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: data-volume-snapshot-1 spec: volumeSnapshotClassName: piraeus-snapshots source: persistentVolumeClaimName: data-volume EOF ``` Now, we need to wait for the snapshot to be created. We can then also verify its creation in LINSTOR: ``` $ kubectl wait volumesnapshot --for=jsonpath='{.status.readyToUse}'=true data-volume-snapshot-1 volumesnapshot.snapshot.storage.k8s.io/data-volume-snapshot-1 condition met $ kubectl get volumesnapshot data-volume-snapshot-1 NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE data-volume-snapshot-1 true data-volume 1Gi piraeus-snapshots snapcontent-a8757c1d-cd37-42d2-9557-a24b1222d118 15s 15s $ kubectl -n piraeus-datastore exec deploy/linstor-controller -- linstor snapshot list ++ | ResourceName | SnapshotName | NodeNames | Volumes | CreatedOn | State | |=========================================================================================================================================================| | pvc-9c04b307-d22d-454f-8f24-ed5837fe4426 | snapshot-a8757c1d-cd37-42d2-9557-a24b1222d118 | n3.example.com | 0: 1 GiB | 2023-02-13 15:36:18 | Successful | ++ ``` Now we want to simulate a situation where we accidentally delete some important data. The important data in our example workload is the log of Pod name, Node name and timestamp. We will manually delete the file on the volume to simulate accidental removal of an important file on a persistent volume: ``` $ kubectl exec deploy/volume-logger -- rm /volume/hello $ kubectl exec deploy/volume-logger -- cat /volume/hello cat: can't open '/volume/hello': No such file or directory command terminated with exit code 1 ``` This is the exact situation where snapshots can come in handy. Since we created a snapshot before we made removed the file, we can create a new volume to recover our data. We will be replacing the existing `data-volume` with a new version based on the snapshot. First, we will stop the Deployment by scaling it down to zero Pods, so that it does not interfere with our next steps: ``` $ kubectl scale deploy/volume-logger --replicas=0 deployment.apps \"volume-logger\" deleted $ kubectl rollout status deploy/volume-logger deployment \"volume-logger\" successfully rolled out ``` Next, we will also remove the `PersistentVolumeClaim`. We still have the snapshot which contains the data we want to restore, so we can safely remove the volume. ``` $ kubectl delete pvc/data-volume persistentvolumeclaim \"data-volume\" deleted ``` Now, we will create a new `PersistentVolumeClaim`, referencing our snapshot. This will create a volume, using the data from the snapshot. 
``` kubectl apply -f - <<EOF apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-volume spec: storageClassName: piraeus-storage resources: requests: storage: 1Gi dataSource: apiGroup: snapshot.storage.k8s.io kind: VolumeSnapshot name: data-volume-snapshot-1 accessModes: ReadWriteOnce EOF ``` Since we named this new volume `data-volume`, we can just scale up our Deployment again, and the new Pod will start using the restored volume: ``` $ kubectl scale deploy/volume-logger --replicas=1 deployment.apps/volume-logger scaled ``` After the Pod started, we can once again verify the content of our volume: ``` $ kubectl wait pod --for=condition=Ready -l app.kubernetes.io/name=volume-logger pod/volume-logger-cbcd897b7-5qjbz condition met $ kubectl exec deploy/volume-logger -- cat /volume/hello Hello from volume-logger-cbcd897b7-jrmks, running on n3.example.com, started at Mon Feb 13 15:32:46 UTC 2023 Hello from volume-logger-cbcd897b7-gr6hh, running on n3.example.com, started at Mon Feb 13 15:42:17 UTC 2023 ``` You have now successfully created a snapshot, and used it to back up and restore a volume after accidental deletion."
}
] |
{
"category": "Runtime",
"file_name": "snapshots.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "New versions of the [OpenTelemetry Semantic Conventions] mean new versions of the `semconv` package need to be generated. The `semconv-generate` make target is used for this. Checkout a local copy of the [OpenTelemetry Semantic Conventions] to the desired release tag. Pull the latest `otel/semconvgen` image: `docker pull otel/semconvgen:latest` Run the `make semconv-generate ...` target from this repository. For example, ```sh export TAG=\"v1.21.0\" # Change to the release version you are generating. export OTELSEMCONVREPO=\"/absolute/path/to/opentelemetry/semantic-conventions\" docker pull otel/semconvgen:latest make semconv-generate # Uses the exported TAG and OTELSEMCONVREPO. ``` This should create a new sub-package of . Ensure things look correct before submitting a pull request to include the addition. You can run `make gorelease` that runs to ensure that there are no unwanted changes done in the public API. You can check/report problems with `gorelease` . First, decide which module sets will be released and update their versions in `versions.yaml`. Commit this change to a new branch. Update go.mod for submodules to depend on the new release which will happen in the next step. Run the `prerelease` make target. It creates a branch `prerelease<module set><new tag>` that will contain all release changes. ``` make prerelease MODSET=<module set> ``` Verify the changes. ``` git diff ...prerelease<module set><new tag> ``` This should have changed the version for all modules to be `<new tag>`. If these changes look correct, merge them into your pre-release branch: ```go git merge prerelease<module set><new tag> ``` Update the . Make sure all relevant changes for this release are included and are in language that non-contributors to the project can understand. To verify this, you can look directly at the commits since the `<last tag>`. ``` git --no-pager log --pretty=oneline \"<last tag>..HEAD\" ``` Move all the `Unreleased` changes into a new section following the title scheme (`[<new tag>] - <date of release>`). Update all the appropriate links at the"
},
{
"data": "Push the changes to upstream and create a Pull Request on GitHub. Be sure to include the curated changes from the in the description. Once the Pull Request with all the version changes has been approved and merged it is time to tag the merged commit. *IMPORTANT*: It is critical you use the same tag that you used in the Pre-Release step! Failure to do so will leave things in a broken state. As long as you do not change `versions.yaml` between pre-release and this step, things should be fine. *IMPORTANT*: . It is critical you make sure the version you push upstream is correct. . For each module set that will be released, run the `add-tags` make target using the `<commit-hash>` of the commit on the main branch for the merged Pull Request. ``` make add-tags MODSET=<module set> COMMIT=<commit hash> ``` It should only be necessary to provide an explicit `COMMIT` value if the current `HEAD` of your working directory is not the correct commit. Push tags to the upstream remote (not your fork: `github.com/open-telemetry/opentelemetry-go.git`). Make sure you push all sub-modules as well. ``` git push upstream <new tag> git push upstream <submodules-path/new tag> ... ``` Finally create a Release for the new `<new tag>` on GitHub. The release body should include all the release notes from the Changelog for this release. After releasing verify that examples build outside of the repository. ``` ./verify_examples.sh ``` The script copies examples into a different directory removes any `replace` declarations in `go.mod` and builds them. This ensures they build with the published release, not the local copy. Once verified be sure to that uses this release. Update the [Go instrumentation documentation] in the OpenTelemetry website under [content/en/docs/instrumentation/go]. Importantly, bump any package versions referenced to be the latest one you just released and ensure all code examples still compile and are accurate. Bump the dependencies in the following Go services:"
}
] |
{
"category": "Runtime",
"file_name": "RELEASING.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(import-machines-to-instances)= Incus provides a tool (`incus-migrate`) to create an Incus instance based on an existing disk or image. You can run the tool on any Linux machine. It connects to an Incus server and creates a blank instance, which you can configure during or after the migration. The tool then copies the data from the disk or image that you provide to the instance. ```{note} If you want to configure your new instance during the migration process, set up the entities that you want your instance to use before starting the migration process. By default, the new instance will use the entities specified in the `default` profile. You can specify a different profile (or a profile list) to customize the configuration. See {ref}`profiles` for more information. You can also override {ref}`instance-options`, the {ref}`storage pool <storage-pools>` to be used and the size for the {ref}`storage volume <storage-volumes>`, and the {ref}`network <networking>` to be used. Alternatively, you can update the instance configuration after the migration is complete. ``` The tool can create both containers and virtual machines: When creating a container, you must provide a disk or partition that contains the root file system for the container. For example, this could be the `/` root disk of the machine or container where you are running the tool. When creating a virtual machine, you must provide a bootable disk, partition or image. This means that just providing a file system is not sufficient, and you cannot create a virtual machine from a container that you are running. It is also not possible to create a virtual machine from the physical machine that you are using to do the migration, because the migration tool would be using the disk that it is copying. Instead, you could provide a bootable image, or a bootable partition or disk that is currently not in use. ````{tip} If you want to convert a Windows VM from a foreign hypervisor (not from QEMU/KVM with Q35/`virtio-scsi`), you must install the `virtio-win` drivers to your Windows. Otherwise, your VM won't boot. <details> <summary>Expand to see how to integrate the required drivers to your Windows VM</summary> Install the required tools on the host: Install `virt-v2v` version >= 2.3.4 (this is the minimal version that supports the `--block-driver` option). Install the `virtio-win` package, or download the image and put it into the `/usr/share/virtio-win` folder. You might also need to install . Now you can use `virt-v2v` to convert images from a foreign hypervisor to `raw` images for Incus and include the required drivers: ``` sudo virt-v2v --block-driver virtio-scsi -o local -of raw -os ./os -i vmx ./test-vm.vmx sudo virt-v2v --block-driver virtio-scsi -o local -of raw -os ./os -if qcow2 -i disk test-vm-disk.qcow2 ``` You can find the resulting image in the `os` directory and use it with `incus-migrate` on the next steps. </details> ```` Complete the following steps to migrate an existing machine to an Incus instance: Download the `bin.linux.incus-migrate` tool ( or ) from the Assets section of the latest . Place the tool on the machine that you want to use to create the instance. Make it executable (usually by running `chmod u+x bin.linux.incus-migrate`). Make sure that the machine has `rsync` installed. If it is missing, install it (for example, with `sudo apt install rsync`). Run the tool: sudo"
},
{
"data": "The tool then asks you to provide the information required for the migration. ```{tip} As an alternative to running the tool interactively, you can provide the configuration as parameters to the command. See `./bin.linux.incus-migrate --help` for more information. ``` Specify the Incus server URL, either as an IP address or as a DNS name. ```{note} The Incus server must be {ref}`exposed to the network <server-expose>`. If you want to import to a local Incus server, you must still expose it to the network. You can then specify `127.0.0.1` as the IP address to access the local server. ``` Check and confirm the certificate fingerprint. Choose a method for authentication (see {ref}`authentication`). For example, if you choose using a certificate token, log on to the Incus server and create a token for the machine on which you are running the migration tool with . Then use the generated token to authenticate the tool. Choose whether to create a container or a virtual machine. See {ref}`containers-and-vms`. Specify a name for the instance that you are creating. Provide the path to a root file system (for containers) or a bootable disk, partition or image file (for virtual machines). For containers, optionally add additional file system mounts. For virtual machines, specify whether secure boot is supported. Optionally, configure the new instance. You can do so by specifying {ref}`profiles <profiles>`, directly setting {ref}`configuration options <instance-options>` or changing {ref}`storage <storage>` or {ref}`network <networking>` settings. Alternatively, you can configure the new instance after the migration. When you are done with the configuration, start the migration process. <details> <summary>Expand to see an example output for importing to a container</summary> ```{terminal} :input: sudo ./bin.linux.incus-migrate Please provide Incus server URL: https://192.0.2.7:8443 Certificate fingerprint: xxxxxxxxxxxxxxxxx ok (y/n)? y 1) Use a certificate token 2) Use an existing TLS authentication certificate 3) Generate a temporary TLS authentication certificate Please pick an authentication mechanism above: 1 Please provide the certificate token: xxxxxxxxxxxxxxxx Remote Incus server: Hostname: bar Version: 5.4 Would you like to create a container (1) or virtual-machine (2)?: 1 Name of the new instance: foo Please provide the path to a root filesystem: / Do you want to add additional filesystem mounts? [default=no]: Instance to be created: Name: foo Project: default Type: container Source: / Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 3 Please specify config keys and values (key=value ...): limits.cpu=2 Instance to be created: Name: foo Project: default Type: container Source: / Config: limits.cpu: \"2\" Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 4 Please provide the storage pool to use: default Do you want to change the storage size? 
[default=no]: yes Please specify the storage size: 20GiB Instance to be created: Name: foo Project: default Type: container Source: / Storage pool: default Storage pool size: 20GiB Config:"
},
{
"data": "\"2\" Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 5 Please specify the network to use for the instance: incusbr0 Instance to be created: Name: foo Project: default Type: container Source: / Storage pool: default Storage pool size: 20GiB Network name: incusbr0 Config: limits.cpu: \"2\" Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 1 Instance foo successfully created ``` </details> <details> <summary>Expand to see an example output for importing to a VM</summary> ```{terminal} :input: sudo ./bin.linux.incus-migrate Please provide Incus server URL: https://192.0.2.7:8443 Certificate fingerprint: xxxxxxxxxxxxxxxxx ok (y/n)? y 1) Use a certificate token 2) Use an existing TLS authentication certificate 3) Generate a temporary TLS authentication certificate Please pick an authentication mechanism above: 1 Please provide the certificate token: xxxxxxxxxxxxxxxx Remote Incus server: Hostname: bar Version: 5.4 Would you like to create a container (1) or virtual-machine (2)?: 2 Name of the new instance: foo Please provide the path to a root filesystem: ./virtual-machine.img Does the VM support UEFI Secure Boot? [default=no]: no Instance to be created: Name: foo Project: default Type: virtual-machine Source: ./virtual-machine.img Config: security.secureboot: \"false\" Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 3 Please specify config keys and values (key=value ...): limits.cpu=2 Instance to be created: Name: foo Project: default Type: virtual-machine Source: ./virtual-machine.img Config: limits.cpu: \"2\" security.secureboot: \"false\" Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 4 Please provide the storage pool to use: default Do you want to change the storage size? 
[default=no]: yes Please specify the storage size: 20GiB Instance to be created: Name: foo Project: default Type: virtual-machine Source: ./virtual-machine.img Storage pool: default Storage pool size: 20GiB Config: limits.cpu: \"2\" security.secureboot: \"false\" Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 5 Please specify the network to use for the instance: incusbr0 Instance to be created: Name: foo Project: default Type: virtual-machine Source: ./virtual-machine.img Storage pool: default Storage pool size: 20GiB Network name: incusbr0 Config: limits.cpu: \"2\" security.secureboot: \"false\" Additional overrides can be applied at this stage: 1) Begin the migration with the above configuration 2) Override profile list 3) Set additional configuration options 4) Change instance storage pool or volume size 5) Change instance network Please pick one of the options above [default=1]: 1 Instance foo successfully created ``` </details> When the migration is complete, check the new instance and update its configuration to the new environment. Typically, you must update at least the storage configuration (`/etc/fstab`) and the network configuration."
}
] |
{
"category": "Runtime",
"file_name": "import_machines_to_instances.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Benchmark with mdtest sidebar_position: 8 slug: /mdtest :::tip Trash is enabled in JuiceFS v1.0+ by default. As a result, temporary files are created and deleted in the file system during the benchmark, and these files will be eventually dumped into a directory named `.trash`. To avoid storage space being occupied by `.trash`, you can run command `juicefs config META-URL --trash-days 0` to disable Trash before benchmark. See for details. ::: Perform a metadata test on JuiceFS, and with . The following tests are performed with `mdtest` 3.4. The arguments of `mdtest` are tuned to ensure that the command will finish within 5 minutes. ``` ./mdtest -d /s3fs/mdtest -b 6 -I 8 -z 2 ./mdtest -d /efs/mdtest -b 6 -I 8 -z 4 ./mdtest -d /jfs/mdtest -b 6 -I 8 -z 4 ``` All the following tests are performed using `mdtest` on a c5.large EC2 instance (2 CPU, 4G RAM) with Ubuntu 18.04 LTS (Kernel 5.4.0) operating system. The Redis (version 4.0.9) which JuiceFS uses runs on a c5.large EC2 instance in the same available zone to store metadata. JuiceFS mount command: ``` ./juicefs format --storage=s3 --bucket=https://<BUCKET>.s3.<REGION>.amazonaws.com localhost benchmark nohup ./juicefs mount localhost /jfs & ``` EFS mount command (the same as the configuration page): ``` mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport, <EFS-ID>.efs.<REGION>.amazonaws.com:/ /efs ``` S3FS (version 1.82) mount command: ``` s3fs <BUCKET>:/s3fs /s3fs -o host=https://s3.<REGION>.amazonaws.com,endpoint=<REGION>,passwd_file=${HOME}/.passwd-s3fs ``` ``` mdtest-3.4.0+dev was launched with 1 total task(s) on 1 node(s) Command line used: ./mdtest '-d' '/s3fs/mdtest' '-b' '6' '-I' '8' '-z' '2' WARNING: Read bytes is 0, thus, a read test will actually just open/close. Path : /s3fs/mdtest FS : 256.0 TiB Used FS: 0.0% Inodes: 0.0 Mi Used Inodes: -nan% Nodemap: 1 1 tasks, 344 files/directories SUMMARY rate: (of 1 iterations) Operation Max Min Mean Std Dev - Directory creation : 5.977 5.977 5.977 0.000 Directory stat : 435.898 435.898 435.898 0.000 Directory removal : 8.969"
},
{
"data": "8.969 0.000 File creation : 5.696 5.696 5.696 0.000 File stat : 68.692 68.692 68.692 0.000 File read : 33.931 33.931 33.931 0.000 File removal : 23.658 23.658 23.658 0.000 Tree creation : 5.951 5.951 5.951 0.000 Tree removal : 9.889 9.889 9.889 0.000 ``` ``` mdtest-3.4.0+dev was launched with 1 total task(s) on 1 node(s) Command line used: ./mdtest '-d' '/efs/mdtest' '-b' '6' '-I' '8' '-z' '4' WARNING: Read bytes is 0, thus, a read test will actually just open/close. Path : /efs/mdtest FS : 8388608.0 TiB Used FS: 0.0% Inodes: 0.0 Mi Used Inodes: -nan% Nodemap: 1 1 tasks, 12440 files/directories SUMMARY rate: (of 1 iterations) Operation Max Min Mean Std Dev - Directory creation : 192.301 192.301 192.301 0.000 Directory stat : 1311.166 1311.166 1311.166 0.000 Directory removal : 213.132 213.132 213.132 0.000 File creation : 179.293 179.293 179.293 0.000 File stat : 915.230 915.230 915.230 0.000 File read : 371.012 371.012 371.012 0.000 File removal : 217.498 217.498 217.498 0.000 Tree creation : 187.906 187.906 187.906 0.000 Tree removal : 218.357 218.357 218.357 0.000 ``` ``` mdtest-3.4.0+dev was launched with 1 total task(s) on 1 node(s) Command line used: ./mdtest '-d' '/jfs/mdtest' '-b' '6' '-I' '8' '-z' '4' WARNING: Read bytes is 0, thus, a read test will actually just open/close. Path : /jfs/mdtest FS : 1024.0 TiB Used FS: 0.0% Inodes: 10.0 Mi Used Inodes: 0.0% Nodemap: 1 1 tasks, 12440 files/directories SUMMARY rate: (of 1 iterations) Operation Max Min Mean Std Dev - Directory creation : 1416.582 1416.582 1416.582 0.000 Directory stat : 3810.083 3810.083 3810.083 0.000 Directory removal : 1115.108 1115.108 1115.108 0.000 File creation : 1410.288 1410.288 1410.288 0.000 File stat : 5023.227 5023.227 5023.227 0.000 File read : 3487.947 3487.947 3487.947 0.000 File removal : 1163.371 1163.371 1163.371 0.000 Tree creation : 1503.004 1503.004 1503.004 0.000 Tree removal : 1119.806 1119.806 1119.806 0.000 ```"
}
] |
{
"category": "Runtime",
"file_name": "mdtest.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. -->"
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "sidebar_position: 3 sidebar_label: \"Disk Storage Node\" Raw disk storage nodes provide applications with raw disk data volumes and maintain the mapping between raw disks and raw disk data volumes on the storage node. Add the node to the Kubernetes cluster or select a Kubernetes node. For example, suppose you have a new node with the following information: name: k8s-worker-2 devPath: /dev/sdb diskType: SSD disk After the new node is already added into the Kubernetes cluster, make sure the following HwameiStor pods are already running on this node. ```bash $ kubectl get node NAME STATUS ROLES AGE VERSION k8s-master-1 Ready master 96d v1.24.3-2+63243a96d1c393 k8s-worker-1 Ready worker 96h v1.24.3-2+63243a96d1c393 k8s-worker-2 Ready worker 96h v1.24.3-2+63243a96d1c393 $ kubectl -n hwameistor get pod -o wide | grep k8s-worker-2 hwameistor-local-disk-manager-sfsf1 2/2 Running 0 19h 10.6.128.150 k8s-worker-2 <none> <none> $ kubectl get localdisknode k8s-worker-2 NAME FREECAPACITY TOTALCAPACITY TOTALDISK STATUS AGE k8s-worker-2 Ready 21d ``` First, change the `owner` information of the disk sdb to local-disk-manager as below: ```console $ kubectl edit ld localdisk-2307de2b1c5b5d051058bc1d54b41d5c apiVersion: hwameistor.io/v1alpha1 kind: LocalDisk metadata: name: localdisk-2307de2b1c5b5d051058bc1d54b41d5c spec: devicePath: /dev/sdb nodeName: k8s-worker-2 owner: local-disk-manager ... ``` Create the storage pool of the node by adding a LocalStorageClaim CR as below: ```console $ kubectl apply -f - <<EOF apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskClaim metadata: name: k8s-worker-2 spec: nodeName: k8s-worker-2 owner: local-disk-manager description: diskType: SSD EOF ``` Finally, check if the node has created the storage pool by checking the LocalDiskNode CR. ```bash kubectl get localstoragenode k8s-worker-2 -o yaml ``` The output may look like: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskNode metadata: name: k8s-worker-2 spec: nodeName: k8s-worker-2 status: pools: LocalDisk_PoolSSD: class: SSD disks: capacityBytes: 214744170496 devPath: /dev/sdb state: Available type: SSD freeCapacityBytes: 214744170496 freeVolumeCount: 1 totalCapacityBytes: 214744170496 totalVolumeCount: 1 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 214744170496 volumes: state: Ready ```"
}
] |
{
"category": "Runtime",
"file_name": "disk_nodes.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "](https://gitter.im/datenlord/datenlord?utmsource=badge&utmmedium=badge&utmcampaign=pr-badge&utmcontent=badge) ](https://codecov.io/gh/datenlord/datenlord) DatenLord is a next-generation cloud-native distributed storage platform, which aims to meet the performance-critical storage needs from next-generation cloud-native applications, such as microservice, serverless, AI, etc. On one hand, DatenLord is designed to be a cloud-native storage system, which itself is distributed, fault-tolerant, and graceful upgrade. These cloud-native features make DatenLord easy to use and easy to maintain. On the other hand, DatenLord is designed as an application-orientated storage system, in that DatenLord is optimized for many performance-critical scenarios, such as databases, AI machine learning, big data. Meanwhile, DatenLord provides high-performance storage service for containers, which facilitates stateful applications running on top of Kubernetes (K8S). The high performance of DatenLord is achieved by leveraging the most recent technology revolution in hardware and software, such as NVMe, non-volatile memory, asynchronous programming, and the native Linux asynchronous IO support. Why do we build DatenLord? The reason is two-fold: Firstly, the recent computer hardware architecture revolution stimulates storage software refractory. The storage related functionalities inside Linux kernel haven't changed much in recent 10 years, whenas hard-disk drive (HDD) was the main storage device. Nowadays, solid-state drive (SSD) becomes the mainstream, not even mention the most advanced SSD, NVMe and non-volatile memory. The performance of SSD is hundreds of times faster than HDD, in that the HDD latency is around 1~10 ms, whereas the SSD latency is around 50150 s, the NVMe latency is around 25 s, and the non-volatile memory latency is 350 ns. With the performance revolution of storage devices, traditional blocking-style/synchronous IO in Linux kernel becomes very inefficient, and non-blocking-style/asynchronous IO is much more applicable. The Linux kernel community already realized that, and recently Linux kernel has proposed native-asynchronous IO mechanism, io_uring, to improve IO performance. Beside blocking-style/synchronous IO, the context switch overhead in Linux kernel becomes no longer negligible w.r.t. SSD latency. Many modern programming languages have proposed asynchronous programming, green thread or coroutine to manage asynchronous IO tasks in user space, in order to avoid context switch overhead introduced by blocking IO. Therefore we think its time to build a next-generation storage system that takes advantage of the storage performance revolution as far as possible, by leveraging non-blocking/asynchronous IO, asynchronous programming, NVMe, and even non-volatile memory, etc. Secondly, most distributed/cloud-native systems are computing and storage isolated, that computing tasks/applications and storage systems are of dedicated clusters, respectively. This isolated architecture is best to reduce maintenance, that it decouples the maintenance tasks of computing clusters and storage clusters into separate ones, such as upgrade, expansion, migration of each cluster respectively, which is much simpler than of coupled clusters. Nowadays, however, applications are dealing with much larger datasets than ever before. One notorious example is that an AI training job takes one hour to load data whereas the training job itself finishes in only 45 minutes. 
Therefore, isolating computing and storage makes IO very inefficient, as transferring data between applications and storage systems via network takes quite a lot of time."
},
{
"data": "Further, with the isolated architecture, applications have to be aware of the different data location, and the varying access cost due to the difference of data location, network distance, etc. DatenLord tackles the IO performance issue of isolated architecture in a novel way, which abstracts the heterogeneous storage details and makes the difference of data location, access cost, etc, transparent to applications. Furthermore, with DatenLord, applications can assume all the data to be accessed are local, and DatenLord will access the data on behalf of applications. Besides, DatenLord can help K8S to schedule jobs close to cached data, since DatenLord knows the exact location of all cached data. By doing so, applications are greatly simplified w.r.t. to data access, and DatenLord can leverage local cache, neighbor cache, and remote cache to speed up data access, so as to boost performance. The main scenario of DatenLord is to facilitate high availability across multi-cloud, hybrid-cloud, multiple data centers, etc. Concretely, there are many online business providers whose business is too important to afford any downtime. To achieve high availability, the service providers have to leverage multi-cloud, hybrid-cloud, and multiple data centers to hopefully avoid single point failure of each single cloud or data center, by deploying applications and services across multiple clouds or data centers. It's relatively easier to deploy applications and services to multiple clouds and data centers, but it's much harder to duplicate all data to all clouds or all data centers in a timely manner, due to the huge data size. If data is not equally available across multiple clouds or data centers, the online business might still suffer from single point failure of a cloud or a data center, because data unavailability resulted from a cloud or a data center failure. DatenLord can alleviate data unavailable of cloud or data center failure by caching data to multiple layers, such as local cache, neighbor cache, remote cache, etc. Although the total data size is huge, the hot data involved in online business is usually of limited size, which is called data locality. DatenLord leverages data locality and builds a set of large scale distributed and automatic cache layers to buffer hot data in a smart manner. The benefit of DatenLord is two-fold: DatenLord is transparent to applications, namely DatenLord does not need any modification to applications; DatenLord is high performance, that it automatically caches data by means of the data hotness, and it's performance is achieved by applying different caching strategies according to target applications. For example, least recent use (LRU) caching strategy for some kind of random access, most recent use (MRU) caching strategy for some kind of sequential access, etc. DatenLord provides 3 kinds of user interfaces: KV interface, S3 interface and file interface. The backend storage is supported by the underlying distributed cache layer which is strong consistent. The strong consistency is guaranteed by the metadata management module which is built on high performance consensus protocol. The persistence storage layer can be local disk or S3"
},
{
"data": "For the network, RDMA is used to provide high throughput and low latency networks. If RDMA is not supported, TCP is an alternative option. For the multiple data center and hybrid clouds scenario, there will be a dedicated metadata server which supports metadata requests within the same data center. While in the same data center scenario, the metadata module can run on the same machine as the cache node. The network between data centers and public clouds are managed by a private network to guarantee high quality data transfer. <! DatenLord is of master-slave architecture. To achieve better storage performance, DatenLord has a coupled architecture with K8S, that DatenLord can be deployed within a K8S cluster, in order to leverage data locality to speed up data access. The above figure is the overall DatenLord architecture, the green parts are DatenLord components, the blue parts are K8S components, the yellow part represents containerized applications. There are several major components of DatenLord: master node (marked as DatenLord), slave node (marked as Daten Sklavin), and K8S plugins. The master node has three parts: S3 compatible interface (S3I), Lord, and Meta Storage Engine (MSE). S3I provides a convenient way to read and write data in DatenLord via S3 protocol, especially for bulk upload and download scenarios, e.g. uploading large amounts of data for big data batch jobs or AI machine learning training jobs. Lord is the overall controller of DatenLord, which controls all the internal behaviors of DatenLord, such as where and how to write data, synchronize data, etc. MSE stores all the meta information of DatenLord, such as the file paths of all the data stored in each slave node, the user-defined labels of each data file, etc. MSE is similar to HDFS namenode. The slave node has four parts: Data Storage Engine (DSE), Sklavin, Meta Storage Engine (MSE), S3/P2P interface. DSE is the distributed cache layer, which is in charge of local IO and network IO, that it not only reads/writes data from/to memory or local disks, but also queries neighbor nodes to read neighbor cached data, further if local and neighbor cache missed, it reads data from remote persistent storage, and it can write data back to remote storage if necessary. More specifically, DatenLord sets up a filesystem in userspace (FUSE) in a slave node. DSE implements the FUSE API's, executing all the underlying FUSE operations, such as open, create, read, and write, etc. DSE also functions as a distributed cache, every DSE of slave nodes can communicate with each other via TCP/IP or RDMA to exchange cached data. Sklavin is to communicate with the Lord of the master node and handle the requests from the Lord and CSI driver, such as health check report, data synchronization, data consistency inspection, Lord election, etc. The MSE of the slave node is a local copy of the MSE from the master node. S3 interface provides a convenient way to read, write and synchronize data in a slave"
},
{
"data": "The K8S plugins include a container storage interface (CSI) driver and a customer filter. The CSI driver is for DatenLord to work with K8S to manage volumes for container tasks, such as loading a read-only volume, creating a read-write volume. The customer filter is to help K8S to schedule tasks to data nearby based on the meta-information in MSE of the master node. --> <! In general, there are two kinds of storage needs from an application perspective: one is latency-sensitive, and the other is throughput-sensitive. As for latency-sensitive applications, such as database applications, like MySQL, MongoDB, and ElasticSearch, etc, their performance relies on how fast a single I/O-request got handled. As for throughput-sensitive applications, such as big data and AI machine learning jobs, like Spark, Hadoop, and TensorFlow, etc, the more data load per unit time, the better their performance. DatenLord is crafted to fit the aforementioned two scenarios. Specifically, to reduce latency, DatenLord caches in memory as much data as possible, in order to minimize disk access; to improve throughput (we focus on reading throughput currently), DatenLord, on one hand, prefetches data in memory to speed up access, on the other hand, leverages K8S to schedule tasks to data nearby, so as to minimize data transfer cost. DatenLord has several target scenarios, which fall into two main categories: Latency-sensitive cases, that DatenLord will coordinate with K8S to schedule containers close to data to minimize latency: Containerized applications, especially stateful applications; Serverless, Lambda, FaaS, etc, event-driven tasks; Throughput-sensitive cases, that DatenLord will pre-load remote data into local clusters to speed up access: AI and big-data jobs, especially training tasks; Multi-cloud storage unified management, to facilitate application migration across clouds. --> Currently DatenLord has been built as Docker images and can be deployed via K8S. To deploy DatenLord via K8S, just simply run: `sed -e 's/e2e_test/latest/g' scripts/setup/datenlord.yaml > datenlord-deploy.yaml` `kubectl apply -f datenlord-deploy.yaml` To use DatenLord, just define PVC using DatenLord Storage Class, and then deploy a Pod using this PVC: ``` cat <<EOF >datenlord-demo.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-datenlord-test spec: accessModes: ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Mi storageClassName: csi-datenlord-sc apiVersion: v1 kind: Pod metadata: name: mysql-datenlord-test spec: containers: name: mysql image: mysql env: name: MYSQLROOTPASSWORD value: \"rootpasswd\" volumeMounts: mountPath: /var/lib/mysql name: data subPath: mysql volumes: name: data persistentVolumeClaim: claimName: pvc-datenlord-test EOF kubectl apply -f datenlord-demo.yaml ``` DatenLord provides a customized scheduler which implements K8S . The scheduler will try to schedule a pod to the node that has the volume that it requests. To use the scheduler, add `schedulerName: datenlord-scheduler` to the spec of your pod. Caveat: dangling docker image may cause `failed to parse request` error. Doing `docker image prune` on each K8S node is a way to fix it. 
You may need to install the snapshot CRD and controller on K8S if the K8S CSI snapshot feature is used: `kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml` `kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml` `kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml` `kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml` `kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml` Datenlord monitoring guideline is in"
},
{
"data": "We provide both `YAML` and `Helm` method to deploy the monitoring system. To use `YAML` method, just run ``` sh ./scripts/setup/datenlord-monitor-deploy.sh ``` To use `Helm` method, run ``` sh ./scripts/setup/datenlord-monitor-deploy.sh helm ``` Performance test is done by and is used to plot performance histograms. To run performance test, ``` sudo apt-get update sudo apt-get install -y fio python3-pip sudo pip3 install matplotlib numpy fio-plot sh ./scripts/perf/fio-perf-test.sh TEST_DIR ``` Four histograms will be generated. Random read IOPS and latency for different block sizes Random write IOPS and latency for different block sizes Random read IOPS and latency for different read thread numbers with 4k block size Random write IOPS and latency for different write thread numbers with 4k block size Performance test is added to GitHub Action() and performance report is generated and archived as artifacts() for every four hours. Anyone interested in DatenLord is welcomed to contribute. Please follow the . Meanwhile, DatenLord adopts very strict clippy linting, please fix every clippy warning before submit your PR. Also please make sure all CI tests are passed. The CI of DatenLord leverages GitHub Action. There are two CI flows for DatenLord, is for Rust cargo test, clippy lints, and standard filesystem E2E checks; is for CSI related tests, such as CSI sanity test and CSI E2E test. The CSI E2E test setup is a bit complex, its action script is quite long, so let's explain it in detail: First, it sets up a test K8S cluster with one master node and three slave nodes, using Kubernetes in Docker (KinD); Second, CSI E2E test requires no-password SSH login to each K8S slave node, since it might run some commands to prepare test environment or verify test result, so it has to setup SSH key to each Docker container of KinD slave nodes; Third, it builds DatenLord container images and loads to KinD, which is a caveat of KinD, in that KinD puts K8S nodes inside Docker containers, thus kubelet cannot reach any resource of local host, and KinD provides load operation to make the container images from local host visible to kubelet; At last, it deploys DatenLord to the test K8S cluster, then downloads pre-build K8S E2E binary, runs in parallel by involking `ginkgo -p`, and only selects `External.Storage` related CSI E2E testcases to run. DatenLord has several related sub-projects, mostly working in progress, listed alphabetically: Native async Rust library for FUSE; Async and safe Rust library for RDMA; Async etcd client SDK in Rust; Lock-free hashmap in Rust; Pure containerized Linux distribution; Async and safe Rust library for io_uring; S3 server in Rust. [ ] 0.1 Refactor async fuse lib to provide clear async APIs, which is used by the datenlord filesystem. [ ] 0.2 Support all Fuse APIs in the datenlord fs. [ ] 0.3 Make fuse lib fully asynchronous. Switch async fuse lib's device communication channel from blocking I/O to `io_uring`. [ ] 0.4 Complete K8S integration test. [ ] 0.5 Support RDMA. [ ] 1.0 Complete Tensorflow K8S integration and finish performance comparison with raw fs."
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "DatenLord",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-operator-aws completion fish | source To load completions for every new session, execute once: cilium-operator-aws completion fish > ~/.config/fish/completions/cilium-operator-aws.fish You will need to start a new shell for this setup to take effect. ``` cilium-operator-aws completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-aws_completion_fish.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Version: v3.3.1 | Node Type | Number of Nodes | CPU | Memory | Storage | Network | Remarks | |--|--|--||||| | Management Node | 3 | 8 Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz | 32GiB DDR4 2666 MHz | 197GiB HDD SCSI | 10 Gb/s | Docker container | | Metadata Node | 5 | 80 Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz | 377GiB DDR4 2933MHz | 4 x 3.7TiB SSD NVMe | 50 Gb/s | Mixed deployment | | Data Node | 5 | 80 Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz | 377GiB DDR4 2933MHz | 4 x 3.7TiB SSD NVMe | 50 Gb/s | Mixed deployment | No quota is set, and no flow control rate limit is set on DataNode. | Parameter | Default Value | Recommended Value | Description | |-||-|-| | FollowerRead | False | False | Whether to enable FollowerRead | | Capacity | 10 GB | 300,000,000 GB | Capacity | | Data Replica Number | 3 | 500 | Number of data replicas | | Meta Replica Number | 3 | 3 | Number of metadata replicas | | Data Partition Size | 120 GB | 120 GB | Theoretical upper limit, no space is pre-allocated | | Data Partition Count | 10 | 1500 | Number of data partitions | | Meta Partition Count | 3 | 10 | Number of metadata partitions | | Cross Zone | False | False | Whether to cross zones | Setting method: ```bash ./cfs-cli volume create test-vol {owner} --capacity=300000000 --mp-count=10 Create a new volume: Name : test-vol Owner : {owner} capacity : 300000000 G deleteLockTime : 0 h crossZone : false DefaultPriority : false description : mpCount : 10 replicaNum : size : 120 G volType : 0 followerRead : false readOnlyWhenFull : false zoneName : cacheRuleKey : ebsBlkSize : 8388608 byte cacheCapacity : 0 G cacheAction : 0 cacheThreshold : 10485760 byte cacheTTL : 30 day cacheHighWater : 80 cacheLowWater : 60 cacheLRUInterval : 5 min TransactionMask : TransactionTimeout : 1 min TxConflictRetryNum : 0 TxConflictRetryInterval : 0 ms Confirm (yes/no)[yes]: yes Create volume success. ``` Expand dp to 500 ```bash curl -v \"http://10.196.59.198:17010/dataPartition/create?count=32&name=test-vol\" ```"
}
] |
{
"category": "Runtime",
"file_name": "env.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Currently, Longhorn's recurring job automatically cleans up older snapshots of volumes to retain no more than the defined snapshot number. However, this is limited to the snapshot created by the recurring job. For the non-recurring volume snapshots or snapshots created by backups, the user needs to clean them manually. Having periodic snapshot cleanup could help to delete/purge those extra snapshots regardless of the creation method. https://github.com/longhorn/longhorn/issues/3836 Introduce new recurring job types: `snapshot-delete`: periodically remove and purge all kinds of snapshots that exceed the retention count. `snapshot-cleanup`: periodically purge removable or system snapshots. `None` Introduce two new `RecurringJobType`: snapshot-delete snapshot-cleanup Recurring job periodically deletes and purges the snapshots for RecurringJob using the `snapshot-delete` task type. Longhorn will retain snapshots based on the given retain number. Recurring job periodically purges the snapshots for RecurringJob using the `snapshot-cleanup` task type. The user can create a RecurringJob with `spec.task=snapshot-delete` to instruct Longhorn periodically delete and purge snapshots. The user can create a RecurringJob with `spec.task=snapshot-cleanup` to instruct Longhorn periodically purge removable or system snapshots. Have some volume backups and snapshots. Create RecurringJob with the `snapshot-delete` task type. ```yaml apiVersion: longhorn.io/v1beta2 kind: RecurringJob metadata: name: recurring-snap-delete-per-min namespace: longhorn-system spec: concurrency: 1 cron: ' *' groups: [] labels: {} name: recurring-snap-delete-per-min retain: 2 task: snapshot-delete ``` Assign the RecurringJob to volume. Longhorn deletes all expired snapshots. As a result of the above example, the user will see two snapshots after the job completes. Have some system snapshots. Create RecurringJob with the `snapshot-cleanup` task type. ```yaml apiVersion: longhorn.io/v1beta2 kind: RecurringJob metadata: name: recurring-snap-cleanup-per-min namespace: longhorn-system spec: concurrency: 1 cron: ' *' groups: [] labels: {} name: recurring-snap-cleanup-per-min task: snapshot-cleanup ``` Assign the RecurringJob to volume. Longhorn deletes all expired system snapshots. As a result of the above example, the user will see 0 system snapshot after the job completes. `None` List all expired snapshots (similar to the current `listSnapshotNamesForCleanup` implementation), and use as the in `doSnapshotCleanup`. Continue with the current implementation to purge snapshots. Do snapshot purge only in `doSnapshotCleanup`. Mutate the `Recurringjob.Spec.Retain` to 0 when the task type is `snapshot-cleanup` since retain value has no effect on the purge. Create volume. Create 2 volume backups. Create 2 volume snapshots. Create a snapshot RecurringJob with the `snapshot-delete` task type. Assign the RecurringJob to volume. Wait until the recurring job is completed. Should see the number of snapshots matching the Recurring job `spec.retain`. Create volume. Create 2 volume system snapshots, ex: delete replica, online expansion. Create a snapshot RecurringJob with the `snapshot-cleanup` task type. Assign the RecurringJob to volume. Wait until the recurring job is completed. Should see the volume has 0 system snapshots. `None` `None`"
}
] |
{
"category": "Runtime",
"file_name": "20230103-recurring-snapshot-cleanup.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This guide shows you how to deploy the [LINSTOR Affinity Controller] for Piraeus Datastore. The LINSTOR Affinity Controller keeps the affinity of your volumes in sync between Kubernetes and LINSTOR. When using a strict volume affinity setting, such as `allowRemoteVolumeAccess: false`, the Persistent Volume (PV) resource created by Piraeus Datastore will have a fixed affinity. When the volume is moved to a different node, for example because one of the existing replicas is being evacuated, the PV is not updated automatically. The LINSTOR Affinity controller watches PVs and LINSTOR resource and keeps the affinity up-to-date. To complete this guide, you should be familiar with: Deploying workloads in Kubernetes using Piraeus Datastore maintains a helm chart repository for commonly deployed components, including the LINSTOR Affinity Controller. To add the repository to your helm configuration, run: ``` $ helm repo add piraeus-charts https://piraeus.io/helm-charts/ ``` After adding the repository, deploy the LINSTOR Affinity Controller: ``` $ helm install linstor-affinity-controller piraeus-charts/linstor-affinity-controller NAME: linstor-affinity-controller LAST DEPLOYED: Mon Dec 4 09:14:07 2023 NAMESPACE: piraeus-datastore STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: LINSTOR Affinity Controller deployed. Used LINSTOR URL: http://linstor-controller.piraeus-datastore.svc:3370 ``` If you deploy the LINSTOR Affinity Controller to the same namespace as the Piraeus Operator, the deployment will automatically determine the necessary parameters for connecting to LINSTOR. In some cases, helm may not be able to determine the connection parameters. In this case, you need to manually provide the following values: ```yaml linstor: endpoint: http://linstor-controller.piraeus-datastore.svc:3370 clientSecret: \"\" options: propertyNamespace: Aux/topology ```"
}
] |
{
"category": "Runtime",
"file_name": "linstor-affinity-controller.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "restruct is an open source project that anyone can contribute to. This file contains a list of all contributors up to this point. This list is obtained by running `git shortlog -s` and is listed in alphabetical order. If this file falls out of date and is missing a name, or an entry should be changed, please . *"
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTORS.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "sidebar_position: 3 description: This article will guide you through building a distributed, shared-access JuiceFS file system using cloud-based object storage and database. introduces how to create a file system that can be mounted on any host by using an \"object storage\" and a \"SQLite\" database. Thanks to the feature that the object storage is accessible by any computer with privileges on the network, we can also access the same JuiceFS file system on different computers by simply copying the SQLite database file to any computer that needs to access the storage. However, the real-time availability of the files is not guaranteed if the file system is shared by the above approach. Since SQLite is a single file database that cannot be accessed by multiple computers at the same time, a database that supports network access is needed, such as Redis, PostgreSQL, MySQL, etc., which allows a file system to be mounted and read by multiple computers in a distributed environment. In this document, a multi-user \"cloud database\" is used to replace the single-user \"SQLite\" database used in the previous document, aiming to implement a distributed file system that can be mounted on any computer on the network for reading and writing. The meaning of \"Network Database\" here refers to the database that allows multiple users to access it simultaneously through the network. From this perspective, the database can be simply divided into: Standalone Database: which is a single-file database and is usually only accessed locally, such as SQLite, Microsoft Access, etc. Network Database: which usually has complex multi-file structures, provides network-based access interfaces and supports simultaneous access by multiple users, such as Redis, PostgreSQL, etc. JuiceFS currently supports the following network-based databases. Key-Value Database: Redis, TiKV, etcd, FoundationDB Relational Database: PostgreSQL, MySQL, MariaDB Different databases have different performance and stability. For example, Redis is an in-memory key-value database with an excellent performance but a relatively weak reliability, while PostgreSQL is a relational database which is more reliable but has a less excellent performance than the in-memory database. The document that specifically introduces how to select database will come soon. Cloud computing platforms usually offer a wide variety of cloud database, such as Amazon RDS for various relational database versions and Amazon ElastiCache for Redis-compatible in-memory database products, which allows to create a multi-copy and highly available database cluster by a simple initial setup. Of course, you can also build your own database on the server. For simplicity, we take Amazon ElastiCache for Redis as an example. The most basic information of a network database consists of the following 2 items. Database Address: the access address of the database; the cloud platform may provide different links for internal and external networks. Username and Password: authentication information used to access the database. Install the JuiceFS client on all computers that need to mount the file system, refer to for"
},
{
"data": "Here is a pseudo sample with Amazon S3 as an example. You can also switch to other object storage (refer to for details). Bucket Endpoint: `https://myjfs.s3.us-west-1.amazonaws.com` Access Key ID: `ABCDEFGHIJKLMNopqXYZ` Access Key Secret: `ZYXwvutsrqpoNMLkJiHgfeDCBA` Here is a pseudo sample with Amazon ElastiCache for Redis as an example. You can also switch to other types of databases (refer to for details). Database Address: `myjfs-sh-abc.apse1.cache.amazonaws.com:6379` Database Username: `tom` Database Password: `mypassword` The format for using a Redis database in JuiceFS is as follows. ``` redis://<username>:<password>@<Database-IP-or-URL>:6379/1 ``` :::tip Redis versions lower than 6.0 do not take username, so omit the `<username>` part in the URL, e.g. `redis://:[email protected]:6379/1` (please note that the colon in front of the password is a separator and needs to be preserved). ::: The following command creates a file system that supports cross-network, multi-machine simultaneous mounts, and shared reads and writes using an object storage and a Redis database. ```shell juicefs format \\ --storage s3 \\ --bucket https://myjfs.s3.us-west-1.amazonaws.com \\ --access-key ABCDEFGHIJKLMNopqXYZ \\ --secret-key ZYXwvutsrqpoNMLkJiHgfeDCBA \\ redis://tom:[email protected]:6379/1 \\ myjfs ``` Once the file system is created, the terminal will output something like the following. ```shell 2021/12/16 16:37:14.264445 juicefs[22290] <INFO>: Meta address: redis://@myjfs-sh-abc.apse1.cache.amazonaws.com:6379/1 2021/12/16 16:37:14.277632 juicefs[22290] <WARNING>: maxmemory_policy is \"volatile-lru\", please set it to 'noeviction'. 2021/12/16 16:37:14.281432 juicefs[22290] <INFO>: Ping redis: 3.609453ms 2021/12/16 16:37:14.527879 juicefs[22290] <INFO>: Data uses s3://myjfs/myjfs/ 2021/12/16 16:37:14.593450 juicefs[22290] <INFO>: Volume is formatted as {Name:myjfs UUID:4ad0bb86-6ef5-4861-9ce2-a16ac5dea81b Storage:s3 Bucket:https://myjfs AccessKey:ABCDEFGHIJKLMNopqXYZ SecretKey:removed BlockSize:4096 Compression:none Shards:0 Partitions:0 Capacity:0 Inodes:0 EncryptKey:} ``` :::info Once a file system is created, the relevant information including name, object storage, access keys, etc. are recorded in the database. In the current example, the file system information is recorded in the Redis database, so any computer with the database address, username, and password information can mount and read the file system. ::: Since the \"data\" and \"metadata\" of this file system are stored in cloud services, the file system can be mounted on any computer with a JuiceFS client installed for shared reads and writes at the same time. For example: ```shell juicefs mount redis://tom:[email protected]:6379/1 ~/jfs ``` JuiceFS guarantees a \"close-to-open\" consistency, which means that when two or more clients read and write the same file at the same time, the changes made by client A may not be immediately visible to client B. Other client is guaranteed to see the latest data when they re-opens the file only if client A closes the file, no matter whether the file is on the same node with A or not. Since object storage is a network-based storage service, it will inevitably encounter access latency. To solve this problem, JuiceFS provides and enables caching mechanism by default, i.e. allocating a part of local storage as a buffer layer between data and object storage, and caching data asynchronously to local storage when reading files. Please refer to for more details. 
JuiceFS will set 100GiB cache in `$HOME/.juicefs/cache` or `/var/jfsCache` directory by"
},
{
"data": "Setting a larger cache space on a faster SSD can effectively improve read and write performance of JuiceFS even more . You can use `--cache-dir` to adjust the location of the cache directory and `--cache-size` to adjust the size of the cache space, e.g.: ```shell juicefs mount --background \\ --cache-dir /mycache \\ --cache-size 512000 \\ redis://tom:[email protected]:6379/1 \\ ~/jfs ``` :::note The JuiceFS process needs permission to read and write to the `--cache-dir` directory. ::: The above command sets the cache directory in the `/mycache` directory and specifies the cache space as 500GiB. In a Linux environment, you can set up automatic mounting when mounting a file system via the `--update-fstab` option, which adds the options required to mount JuiceFS to `/etc/fstab`. For example: :::note This feature requires JuiceFS version 1.1.0 and above ::: ```bash $ sudo juicefs mount --update-fstab --max-uploads=50 --writeback --cache-size 204800 redis://tom:[email protected]:6379/1 <MOUNTPOINT> $ grep <MOUNTPOINT> /etc/fstab redis://tom:[email protected]:6379/1 <MOUNTPOINT> juicefs _netdev,max-uploads=50,writeback,cache-size=204800 0 0 $ ls -l /sbin/mount.juicefs lrwxrwxrwx 1 root root 29 Aug 11 16:43 /sbin/mount.juicefs -> /usr/local/bin/juicefs ``` Refer to for more details. After the file system is mounted, you can use the `juicefs bench` command to perform basic performance tests and functional verification of the file system to ensure that the JuiceFS file system can be accessed normally and its performance meets expectations. :::info The `juicefs bench` command can only complete basic performance tests. If you need a more complete evaluation of JuiceFS, please refer to . ::: ```shell juicefs bench ~/jfs ``` After running the `juicefs bench` command, N large files (1 by default) and N small files (100 by default) will be written to and read from the JuiceFS file system according to the specified concurrency (1 by default), and statistics the throughput of read and write and the latency of a single operation, as well as the latency of accessing the metadata engine. If you encounter any problems during the verification of the file system, please refer to the document for troubleshooting first. You can unmount the JuiceFS file system (assuming the mount point path is `~/jfs`) by the command `juicefs umount`. ```shell juicefs umount ~/jfs ``` If the command fails to unmount the file system after execution, it will prompt `Device or resource busy`. ```shell 2021-05-09 22:42:55.757097 I | fusermount: failed to unmount ~/jfs: Device or resource busy exit status 1 ``` This failure happens probably because some programs are reading or writing files in the file system when executing `unmount` command. To avoid data loss, you should first determine which processes are accessing files in the file system (e.g. via the command `lsof`) and try to release the files before re-executing the `unmount` command. :::caution The following command may result in file corruption and loss, so be careful to use it! ::: You can add the option `--force` or `-f` to force the file system unmounted if you are clear about the consequence of the operation. ```shell juicefs umount --force ~/jfs ```"
}
] |
{
"category": "Runtime",
"file_name": "for_distributed.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
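To complement the JuiceFS guide above, here is a minimal sketch of the shared-access workflow it describes, reusing the example Redis URL, credentials, and mount point from that guide (all placeholder values); the host A / host B split is purely illustrative:

```shell
# On host B (any machine with the JuiceFS client installed), mount the same volume
# by pointing at the same Redis metadata URL used in the guide above.
juicefs mount --background \
  redis://tom:[email protected]:6379/1 \
  ~/jfs

# On host A, create a file in the shared file system...
echo "hello from host A" > ~/jfs/hello.txt

# ...then on host B, re-open and read it. Per close-to-open consistency,
# the content is visible once host A has closed the file.
cat ~/jfs/hello.txt
```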
[
{
"data": "```bash kubectl apply -f https://github.com/k8up-io/k8up/releases/download/k8up-{{ template \"chart.version\" . }}/k8up-crd.yaml ``` <! The README.md file is automatically generated with helm-docs! Edit the README.gotmpl.md template instead. --> Always upgrade the CRDs before upgrading the Helm release. Watch out for breaking changes in the {{ title .Name }} release notes. {{ template \"chart.sourcesSection\" . }} {{ template \"chart.requirementsSection\" . }} <! The values below are generated with helm-docs! Document your changes in values.yaml and let `make docs:helm` generate this section. --> {{ template \"chart.valuesSection\" . }} In `image.repository` the registry domain was moved into its own parameter `image.registry`. K8up 1.x features leader election, this enables rolling updates and multiple replicas. `k8up.enableLeaderElection` defaults to `true`. Disable this for older Kubernetes versions (<= 1.15) `replicaCount` is now configurable, defaults to `1`. Note: Deployment strategy type has changed from `Recreate` to `RollingUpdate`. CRDs need to be installed separately, they are no longer included in this chart. Note: `image.repository` changed from `vshn/k8up` to `k8up-io/k8up`. Note: `image.registry` changed from `quay.io` to `ghcr.io`. Note: `image.tag` changed from `v1.x` to `v2.x`. Please see the . `metrics.prometheusRule.legacyRules` has been removed (no support for OpenShift 3.11 anymore). Note: `k8up.backupImage.repository` changed from `quay.io/vshn/wrestic` to `ghcr.io/k8up-io/k8up` (`wrestic` is not needed anymore in K8up v2). Due to the migration of the chart from to this repo, we decided to make a breaking change for the chart. Only chart archives from version 3.x can be downloaded from the https://k8up-io.github.io/k8up index. No 2.x chart releases will be migrated from the APPUiO Helm repo. Some RBAC roles and role bindings have change the name. In most cases this shouldn't be an issue and Helm should be able to cleanup the old resources without impact on the RBAC permissions. New parameter: `podAnnotations`, default `{}`. New parameter: `service.annotations`, default `{}`. Parameter changed: `image.tag` now defaults to `v2` instead of a pinned version. Parameter changed: `image.pullPolicy` now defaults to `Always` instead of `IfNotPresent`. Note: Renamed ClusterRole `${release-name}-manager-role` to `${release-name}-manager`. Note: Spec of ClusterRole `${release-name}-leader-election-role` moved to `${release-name}-manager`. Note: Renamed ClusterRoleBinding `${release-name}-manager-rolebinding` to `${release-name}`. Note: ClusterRoleBinding `${release-name}-leader-election-rolebinding` removed (not needed anymore). Note: Renamed ClusterRole `${release-name}-k8up-view` to `${release-name}-view`. Note: Renamed ClusterRole `${release-name}-k8up-edit` to `${release-name}-edit`. The image tag is now pinned again and not using a floating tag. Parameter changed: `image.tag` now defaults to a pinned version. Each new K8up version now requires also a new chart version. Parameter changed: `image.pullPolicy` now defaults to `IfNotPresent` instead of `Always`. Parameter changed: `k8up.backupImage.repository` is now unset, which defaults to the same image as defined in `image.{registry/repository}`. Parameter changed: `k8up.backupImage.tag` is now unset, which defaults to the same image tag as defined in `image.tag`."
}
] |
{
"category": "Runtime",
"file_name": "README.gotmpl.md",
"project_name": "K8up",
"subcategory": "Cloud Native Storage"
}
|
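As a hedged sketch of the upgrade order described in the chart README above (CRDs first, then the Helm release), the commands below assume the chart is named `k8up` and served from the 3.x repository index mentioned in the entry; substitute the real chart version for `<version>`:

```shell
# Always apply the matching CRDs before upgrading the Helm release.
kubectl apply -f https://github.com/k8up-io/k8up/releases/download/k8up-<version>/k8up-crd.yaml

# Add the chart repository and install/upgrade the release, overriding a couple of
# the documented values (replicaCount, image.pullPolicy) as an example.
helm repo add k8up-io https://k8up-io.github.io/k8up
helm repo update
helm upgrade --install k8up k8up-io/k8up \
  --set replicaCount=2 \
  --set image.pullPolicy=IfNotPresent
```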
[
{
"data": "Name | Type | Description | Notes | - | - | - Id | Pointer to string | | [optional] Resources | Pointer to []map[string]interface{} | | [optional] Children | Pointer to []string | | [optional] PciBdf | Pointer to string | | [optional] `func NewDeviceNode() *DeviceNode` NewDeviceNode instantiates a new DeviceNode object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewDeviceNodeWithDefaults() *DeviceNode` NewDeviceNodeWithDefaults instantiates a new DeviceNode object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *DeviceNode) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o DeviceNode) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceNode) SetId(v string)` SetId sets Id field to given value. `func (o *DeviceNode) HasId() bool` HasId returns a boolean if a field has been set. `func (o *DeviceNode) GetResources() []map[string]interface{}` GetResources returns the Resources field if non-nil, zero value otherwise. `func (o DeviceNode) GetResourcesOk() ([]map[string]interface{}, bool)` GetResourcesOk returns a tuple with the Resources field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceNode) SetResources(v []map[string]interface{})` SetResources sets Resources field to given value. `func (o *DeviceNode) HasResources() bool` HasResources returns a boolean if a field has been set. `func (o *DeviceNode) GetChildren() []string` GetChildren returns the Children field if non-nil, zero value otherwise. `func (o DeviceNode) GetChildrenOk() ([]string, bool)` GetChildrenOk returns a tuple with the Children field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceNode) SetChildren(v []string)` SetChildren sets Children field to given value. `func (o *DeviceNode) HasChildren() bool` HasChildren returns a boolean if a field has been set. `func (o *DeviceNode) GetPciBdf() string` GetPciBdf returns the PciBdf field if non-nil, zero value otherwise. `func (o DeviceNode) GetPciBdfOk() (string, bool)` GetPciBdfOk returns a tuple with the PciBdf field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DeviceNode) SetPciBdf(v string)` SetPciBdf sets PciBdf field to given value. `func (o *DeviceNode) HasPciBdf() bool` HasPciBdf returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "DeviceNode.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
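Since the entry above only lists the generated `DeviceNode` API, here is a small, hedged Go usage sketch built from the documented constructors, setters, and getters; the import path and the field values are assumptions for illustration only:

```go
package main

import (
	"fmt"

	// Assumed import path: use the actual generated Cloud Hypervisor client
	// package from the Kata Containers source tree.
	chclient "github.com/kata-containers/kata-containers/src/runtime/virtcontainers/pkg/cloud-hypervisor/client"
)

func main() {
	// Build a DeviceNode with defaults, then fill in optional fields
	// through the documented setters.
	node := chclient.NewDeviceNodeWithDefaults()
	node.SetId("_disk0")                       // example device id
	node.SetPciBdf("0000:00:04.0")             // example PCI bus/device/function
	node.SetChildren([]string{"_disk0_part1"}) // example child node

	// The *Ok getters report whether an optional field has been set.
	if id, ok := node.GetIdOk(); ok {
		fmt.Println("device node id:", *id)
	}
	if node.HasPciBdf() {
		fmt.Println("PCI BDF:", node.GetPciBdf())
	}
}
```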
[
{
"data": "Status: Alternative Proposal Velero should introduce a new type of plugin that runs when a backup is deleted. These plugins will delete any external resources associated with the backup so that they will not be left orphaned. With the CSI plugin, Velero developers introduced a pattern of using BackupItemAction and RestoreItemAction plugins tied to PersistentVolumeClaims to create other resources to complete a backup. In the CSI plugin case, Velero does clean up of these other resources, which are Kubernetes Custom Resources, within the core Velero server. However, for external plugins that wish to use this same pattern, this is not a practical solution. Velero's core cannot be extended for all possible Custom Resources, and not external resources that get created are Kubernetes Custom Resources. Therefore, Velero needs some mechanism that allows plugin authors who have created resources within a BackupItemAction or RestoreItemAction plugin to ensure those resources are deleted, regardless of what system those resources reside in. Provide a new plugin type in Velero that is invoked when a backup is deleted. Implementations of specific deletion plugins. Rollback of deletion plugin execution. Velero will provide a new plugin type that is similar to its existing plugin architecture. These plugins will be referred to as `DeleteAction` plugins. `DeleteAction` plugins will receive the `Backup` CustomResource being deleted on execution. `DeleteAction` plugins cannot prevent deletion of an item. This is because multiple `DeleteAction` plugins can be registered, and this proposal does not include rollback and undoing of a deletion action. Thus, if multiple `DeleteAction` plugins have already run but another would request the deletion of a backup stopped, the backup that's retained would be inconsistent. `DeleteActions` will apply to `Backup`s based on a label on the `Backup` itself. In order to ensure that `Backup`s don't execute `DeleteAction` plugins that are not relevant to them, `DeleteAction` plugins can register an `AppliesTo` function which will define a label selector on Velero backups. `DeleteActions` will be run in alphanumerical order by plugin name. This order is somewhat arbitrary, but will be used to give authors and users a somewhat predictable order of events. The `DeleteAction` plugins will implement the following Go interface, defined in `pkg/plugin/velero/deletion_action.go`: ```go type DeleteAction struct { // AppliesTo will match the DeleteAction plugin against Velero Backups that it should operate against. AppliesTo() // Execute runs the custom plugin logic and may connect to external services. Execute(backup *api.backup) error } ``` The following methods would be added to the `clientmgmt.Manager` interface in `pkg/pluginclientmgmt/manager.go`: ``` type Manager interface { ... // GetDeleteActions returns the registered DeleteActions. //TODO: do we need to get these by name, or can we get them all? GetDeleteActions([]velero.DeleteAction, error) ... ``` TODO TODO Backwards compatibility should be straight-forward; if there are no installed `DeleteAction` plugins, then the backup deletion process will proceed as it does today. TODO In order to add a custom label to the backup, the backup must be modifiable inside of the `BackupItemActon` and `RestoreItemAction` plugins, which it currently is not. A work around for now is for the user to apply a label to the backup at creation time, but that is not ideal."
}
] |
{
"category": "Runtime",
"file_name": "deletion-plugins.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
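To make the proposal above more concrete, here is a hedged sketch of what a `DeleteAction` implementation might look like; the import path, the type name, the labels it inspects, and the external cleanup call are all assumptions, since the proposal deliberately leaves registration and `AppliesTo` details open:

```go
package snapshotcleanup

import (
	"fmt"

	// Assumed import path for the Velero v1 API types.
	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

// SnapshotCleanupAction is a hypothetical DeleteAction that removes external
// snapshots created by a matching BackupItemAction when a backup is deleted.
type SnapshotCleanupAction struct{}

// AppliesTo would register the label selector that matches backups this plugin
// cares about; the exact signature is left open by the proposal.
func (a *SnapshotCleanupAction) AppliesTo() {
	// e.g. match backups labeled "example.io/has-external-snapshots=true" (hypothetical label).
}

// Execute receives the Backup being deleted and cleans up the associated
// external resources; per the proposal it cannot block the deletion itself.
func (a *SnapshotCleanupAction) Execute(backup *api.Backup) error {
	snapshotID, ok := backup.Labels["example.io/external-snapshot-id"] // hypothetical label
	if !ok {
		return nil // nothing to clean up for this backup
	}
	fmt.Printf("deleting external snapshot %s for backup %s\n", snapshotID, backup.Name)
	// Call the external snapshot service here.
	return nil
}
```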
[
{
"data": "(howto-storage-move-volume)= You can {ref}`copy <storage-copy-volume>` or {ref}`move <storage-move-volume>` custom storage volumes from one storage pool to another, or copy or rename them within the same storage pool. To move instance storage volumes from one storage pool to another, {ref}`move the corresponding instance <storage-move-instance>` to another pool. When copying or moving a volume between storage pools that use different drivers, the volume is automatically converted. (storage-copy-volume)= Use the following command to copy a custom storage volume: incus storage volume copy <sourcepoolname>/<sourcevolumename> <targetpoolname>/<targetvolumename> Add the `--volume-only` flag to copy only the volume and skip any snapshots that the volume might have. If the volume already exists in the target location, use the `--refresh` flag to update the copy. Specify the same pool as the source and target pool to copy the volume within the same storage pool. You must specify different volume names for source and target in this case. When copying from one storage pool to another, you can either use the same name for both volumes or rename the new volume. (storage-move-volume)= Before you can move or rename a custom storage volume, all instances that use it must be {ref}`stopped <instances-manage-stop>`. Use the following command to move or rename a storage volume: incus storage volume move <sourcepoolname>/<sourcevolumename> <targetpoolname>/<targetvolumename> Specify the same pool as the source and target pool to rename the volume while keeping it in the same storage pool. You must specify different volume names for source and target in this case. When moving from one storage pool to another, you can either use the same name for both volumes or rename the new volume. For most storage drivers (except for `ceph` and `ceph-fs`), storage volumes exist only on the cluster member for which they were created. To copy or move a custom storage volume from one cluster member to another, add the `--target` and `--destination-target` flags to specify the source cluster member and the target cluster member, respectively. Add the `--target-project` to copy or move a custom storage volume to a different project. You can copy or move custom storage volumes between different Incus servers by specifying the remote for each pool: incus storage volume copy <sourceremote>:<sourcepoolname>/<sourcevolumename> <targetremote>:<targetpoolname>/<targetvolumename> incus storage volume move <sourceremote>:<sourcepoolname>/<sourcevolumename> <targetremote>:<targetpoolname>/<targetvolumename> You can add the `--mode` flag to choose a transfer mode, depending on your network setup: `pull` (default) : Instruct the target server to pull the respective storage volume. `push` : Push the storage volume from the source server to the target server. `relay` : Pull the storage volume from the source server to the local client, and then push it to the target server. (storage-move-instance)= To move an instance storage volume to another storage pool, make sure the instance is stopped. Then use the following command to move the instance to a different pool: incus move <instancename> --storage <targetpool_name>"
}
] |
{
"category": "Runtime",
"file_name": "storage_move_volume.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
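As a small supplement to the Incus entry above, the following sketch strings together the documented flags; the pool, volume, member, and remote names are placeholders:

```shell
# Copy a custom volume between two cluster members (different target volume name,
# since source and target share the same pool).
incus storage volume copy my-pool/vol1 my-pool/vol1-copy \
  --target member1 --destination-target member2

# Move a custom volume to another Incus server, pushing from the source side.
incus storage volume move local:my-pool/vol1 remote:other-pool/vol1 --mode push

# Move a stopped instance's storage to a different pool.
incus stop my-instance
incus move my-instance --storage faster-pool
```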
[
{
"data": "Kata Containers on Amazon Web Services (AWS) makes use of instances. Most of the installation procedure is identical to that for Kata on your preferred distribution, except that you have to run it on bare metal instances since AWS doesn't support nested virtualization yet. This guide walks you through creating an i3.metal instance. Python: Python 2 version 2.6.5+ Python 3 version 3.3+ Install with this command: ```bash $ pip install awscli --upgrade --user ``` First, verify it: ```bash $ aws --version ``` Then configure it: ```bash $ aws configure ``` Specify the required parameters: ``` AWS Access Key ID []: <your-key-id-from-iam> AWS Secret Access Key []: <your-secret-access-key-from-iam> Default region name []: <your-aws-region-for-your-i3-metal-instance> Default output format [None]: <yaml-or-json-or-empty> ``` Alternatively, you can create the files: `~/.aws/credentials` and `~/.aws/config`: ```bash $ cat <<EOF > ~/.aws/credentials [default] awsaccesskey_id = <your-key-id-from-iam> awssecretaccess_key = <your-secret-access-key-from-iam> EOF $ cat <<EOF > ~/.aws/config [default] region = <your-aws-region-for-your-i3-metal-instance> EOF ``` For more information on how to get AWS credentials please refer to . Alternatively, you can ask the administrator of your AWS account to issue one with the AWS CLI: ```sh $ aws_username=\"myusername\" $ aws iam create-access-key --user-name=\"$aws_username\" ``` More general AWS CLI guidelines can be found . You will need this to access your instance. To create: ```bash $ aws ec2 create-key-pair --key-name MyKeyPair | grep KeyMaterial | cut -d: -f2- | tr -d ' \\n\\\"\\,' > MyKeyPair.pem $ chmod 400 MyKeyPair.pem ``` Alternatively to import using your public SSH key: ```bash $ aws ec2 import-key-pair --key-name \"MyKeyPair\" --public-key-material file://MyKeyPair.pub ``` Get the latest Bionic Ubuntu AMI (Amazon Image) or the latest AMI for the Linux distribution you would like to use. For example: ```bash $ aws ec2 describe-images --owners 099720109477 --filters \"Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server*\" --query 'sort_by(Images, &CreationDate)[].ImageId ' ``` This command will produce output similar to the following: ``` [ ... \"ami-063aa838bd7631e0b\", \"ami-03d5270fcb641f79b\" ] ``` Launch the EC2 instance and pick IP the `INSTANCEID`: ```bash $ aws ec2 run-instances --image-id ami-03d5270fcb641f79b --count 1 --instance-type i3.metal --key-name MyKeyPair --associate-public-ip-address > /tmp/aws.json $ export INSTANCEID=$(grep InstanceId /tmp/aws.json | cut -d: -f2- | tr -d ' \\n\\\"\\,') ``` Wait for the instance to come up, the output of the following command should be `running`: ```bash $ aws ec2 describe-instances --instance-id=${INSTANCEID} | grep running | cut -d: -f2- | tr -d ' \\\"\\,' ``` Get the public IP address for the instances: ```bash $ export IP=$(aws ec2 describe-instances --instance-id=${INSTANCEID} | grep PublicIpAddress | cut -d: -f2- | tr -d ' \\n\\\"\\,') ``` Refer to for more details on how to launch instances with the AWS CLI. SSH into the machine ```bash $ ssh -i MyKeyPair.pem ubuntu@${IP} ``` Go onto the next step. The process for installing Kata itself on bare metal is identical to that of a virtualization-enabled VM. For detailed information to install Kata on your distribution of choice, see the ."
}
] |
{
"category": "Runtime",
"file_name": "aws-installation-guide.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
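Two optional additions to the AWS walkthrough above, not part of the original guide: the AWS CLI waiter can replace the manual polling of `describe-instances`, and the bare metal instance should be cleaned up when you are done to avoid ongoing charges:

```bash
# Block until the instance reaches the "running" state (alternative to grep polling).
$ aws ec2 wait instance-running --instance-ids ${INSTANCEID}

# When finished, terminate the instance and remove the key pair created earlier.
$ aws ec2 terminate-instances --instance-ids ${INSTANCEID}
$ aws ec2 delete-key-pair --key-name MyKeyPair
```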
[
{
"data": "title: Common Issues To help troubleshoot your Rook clusters, here are some tips on what information will help solve the issues you might be seeing. If after trying the suggestions found on this page and the problem is not resolved, the Rook team is very happy to help you troubleshoot the issues in their Slack channel. Once you have , proceed to the General channel to ask for assistance. For common issues specific to Ceph, see the page. Kubernetes status and logs are the main resources needed to investigate issues in any Rook cluster. Kubernetes status is the first line of investigating when something goes wrong with the cluster. Here are a few artifacts that are helpful to gather: Rook pod status: `kubectl get pod -n <cluster-namespace> -o wide` e.g., `kubectl get pod -n rook-ceph -o wide` Logs for Rook pods Logs for the operator: `kubectl logs -n <cluster-namespace> -l app=<storage-backend-operator>` e.g., `kubectl logs -n rook-ceph -l app=rook-ceph-operator` Logs for a specific pod: `kubectl logs -n <cluster-namespace> <pod-name>`, or a pod using a label such as mon1: `kubectl logs -n <cluster-namespace> -l <label-matcher>` e.g., `kubectl logs -n rook-ceph -l mon=a` Logs on a specific node to find why a PVC is failing to mount: Connect to the node, then get kubelet logs (if your distro is using systemd): `journalctl -u kubelet` Pods with multiple containers For all containers, in order: `kubectl -n <cluster-namespace> logs <pod-name> --all-containers` For a single container: `kubectl -n <cluster-namespace> logs <pod-name> -c <container-name>` Logs for pods which are no longer running: `kubectl -n <cluster-namespace> logs --previous <pod-name>` Some pods have specialized init containers, so you may need to look at logs for different containers within the pod. `kubectl -n <namespace> logs <pod-name> -c <container-name>` Other Rook artifacts: `kubectl -n <cluster-namespace> get all`"
}
] |
{
"category": "Runtime",
"file_name": "common-issues.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
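The commands listed in the Rook entry above can be bundled into a small collection script; this is a hedged sketch (not an official Rook tool), assuming the default `rook-ceph` namespace:

```shell
#!/usr/bin/env bash
# Gather the troubleshooting artifacts listed above in one pass.
NS=${1:-rook-ceph}

kubectl get pod -n "$NS" -o wide
kubectl -n "$NS" get all
kubectl logs -n "$NS" -l app=rook-ceph-operator --tail=500

# Logs from every container of every pod, including previously crashed containers.
for pod in $(kubectl get pod -n "$NS" -o name); do
  echo "===== logs for $pod ====="
  kubectl -n "$NS" logs "$pod" --all-containers
  kubectl -n "$NS" logs "$pod" --all-containers --previous 2>/dev/null || true
done
```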
[
{
"data": "CubeFS consists of a metadata subsystem, a data subsystem, a resource management node (Master), and an object gateway (Object Subsystem), which can access stored data through the POSIX/HDFS/S3 interface. Resource Management Node: Composed of multiple Master nodes, it is responsible for asynchronously processing different types of tasks, such as managing data shards and metadata shards (including creation, deletion, updating, and consistency checks), checking the health status of data or metadata nodes, and maintaining volume information. ::: tip Note There can be multiple Master nodes, and the consistency of metadata is ensured through the Raft algorithm and persisted to RocksDB. ::: Metadata Subsystem: Composed of multiple Meta Node nodes, multiple metadata shards (Meta Partition), and Raft instances (based on the Multi-Raft replication protocol), each metadata shard represents an Inode range metadata, which contains two in-memory B-Tree structures: inode B-Tree and dentry B-Tree. ::: tip Note At least 3 metadata instances are required, and horizontal scaling is supported. ::: Data Subsystem: Divided into Replica Subsystem and Erasure Code Subsystem, both subsystems can coexist or exist independently: The Replica Subsystem consists of DataNodes, with each node managing a set of data shards. Multiple nodes' data shards form a replica group. The Erasure Code Subsystem (Blobstore) is mainly composed of BlobNode modules, with each node managing a set of data blocks. Multiple nodes' data blocks form an erasure-coded stripe. ::: tip Note DataNode support horizontal scaling. ::: Object Subsystem: Composed of object nodes, it provides an access protocol compatible with standard S3 semantics and can be accessed through tools such as Amazon S3 SDK or s3cmd. Volume: A logical concept composed of multiple metadata and data shards. From the client's perspective, a volume can be seen as a file system instance that can be accessed by containers. From the perspective of object storage, a volume corresponds to a bucket. A volume can be mounted in multiple containers, allowing files to be accessed by different clients simultaneously."
}
] |
{
"category": "Runtime",
"file_name": "architecture.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "[TOC] gVisor has the ability to checkpoint a process, save its current state in a state file, and restore into a new container using the state file. Checkpoint/restore functionality is currently available via raw `runsc` commands. To use the checkpoint command, first run a container. ```bash runsc run <container id> ``` To checkpoint the container, the `--image-path` flag must be provided. This is the directory path within which the checkpoint related files will be created. All necessary directories will be created if they do not yet exist. Note: Two checkpoints cannot be saved to the same directory; every image-path provided must be unique. ```bash runsc checkpoint --image-path=<path> <container id> ``` There is also an optional `--leave-running` flag that allows the container to continue to run after the checkpoint has been made. (By default, containers stop their processes after committing a checkpoint.) Note: All top-level runsc flags needed when calling run must be provided to checkpoint if `--leave-running` is used. Note: `--leave-running` functions by causing an immediate restore so the container, although will maintain its given container id, may have a different process id. ```bash runsc checkpoint --image-path=<path> --leave-running <container id> ``` To restore, provide the image path to the directory containing all the files created during the checkpoint. Because containers stop by default after checkpointing, restore needs to happen in a new container (restore is a command which parallels start). ```bash runsc create <container id> runsc restore --image-path=<path> <container id> ``` Note: All top-level runsc flags needed when calling run must be provided to `restore`. Run a container: ```bash docker run [options] --runtime=runsc --name=<container-name> <image> ``` Checkpoint the container: ```bash docker checkpoint create <container-name> <checkpoint-name> ``` Restore into the same container: ```bash docker start --checkpoint <checkpoint-name> <container-name> ``` : Docker version 18.03.0-ce and earlier hangs when checkpointing and does not create the checkpoint. To successfully use this feature, install a custom version of docker-ce from the moby repository. This issue is caused by an improper implementation of the `--leave-running` flag. This issue is fixed in newer releases. Docker does not support restoration into new containers: Docker currently expects the container which created the checkpoint to be the same container used to restore. This is needed to support container migration. : Docker does not currently support the `--checkpoint-dir` flag but this will be required when restoring from a checkpoint made in another container."
}
] |
{
"category": "Runtime",
"file_name": "checkpoint_restore.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage IP addresses and associated information ``` -h, --help help for ip ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - List IP addresses in the userspace IPcache"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_ip.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Users can add annotation to pod to limit bandwidth or IOPS. Example: `kubectl apply -f deployment.yaml` ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: carina-deployment-speed-limit namespace: carina labels: app: web-server-speed-limit spec: replicas: 1 selector: matchLabels: app: web-server-speed-limit template: metadata: annotations: carina.storage.io/blkio.throttle.readbpsdevice: \"10485760\" carina.storage.io/blkio.throttle.readiopsdevice: \"10000\" carina.storage.io/blkio.throttle.writebpsdevice: \"10485760\" carina.storage.io/blkio.throttle.writeiopsdevice: \"100000\" labels: app: web-server-speed-limit spec: containers: name: web-server image: nginx:latest imagePullPolicy: \"IfNotPresent\" volumeMounts: name: mypvc mountPath: /var/lib/www/html volumes: name: mypvc persistentVolumeClaim: claimName: csi-carina-pvc-speed-limit readOnly: false apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-carina-pvc-speed-limit namespace: carina spec: accessModes: ReadWriteOnce resources: requests: storage: 17Gi storageClassName: csi-carina-sc volumeMode: Filesystem ``` Carina will convert those anntoations into pod's cgroupfs hierarchy. ```shell cgroup v1 /sys/fs/cgroup/blkio/kubepods/burstable/pod0b0e005c-39ec-4719-bbfe-78aadbc3e4ad/blkio.throttle.readbpsdevice /sys/fs/cgroup/blkio/kubepods/burstable/pod0b0e005c-39ec-4719-bbfe-78aadbc3e4ad/blkio.throttle.readiopsdevice /sys/fs/cgroup/blkio/kubepods/burstable/pod0b0e005c-39ec-4719-bbfe-78aadbc3e4ad/blkio.throttle.writebpsdevice /sys/fs/cgroup/blkio/kubepods/burstable/pod0b0e005c-39ec-4719-bbfe-78aadbc3e4ad/blkio.throttle.writeiopsdevice ``` ```shell cgroup v2 /sys/fs/cgroup/kubepods/burstable/pod0b0e005c-39ec-4719-bbfe-78aadbc3e4ad/io.max ``` Users can add one or more annotations. Adding or removing annotations will be synced to cgroupfs in about 60s. Currently, only block device disk speed limit is supported. User can test io throttling with command `dd if=/dev/zero of=out.file bs=1M count=512 oflag=dsync`. Carina can automatically decide whether to use cgroup v1 or cgroup v2 according to the system environment. If the system uses cgroup v2, it supports buffer io speed limit (you need to enable io and memory controllers at the same time), otherwise only direct io speed limit is supported. If user can set io throttling too low, it may cause the procedure of formating filesystem hangs there and then the pod will be in pending state forever."
}
] |
{
"category": "Runtime",
"file_name": "disk-speed-limit.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
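To check that the annotations from the Carina entry above actually reached the kernel, a hedged verification sketch is shown below; it reuses the example pod UID and cgroup paths from that entry, which will differ on a real cluster:

```shell
# On the node running the pod, inspect the pod's blkio cgroup (cgroup v1).
PODCG=/sys/fs/cgroup/blkio/kubepods/burstable/pod0b0e005c-39ec-4719-bbfe-78aadbc3e4ad
cat "$PODCG"/blkio.throttle.read_bps_device   # expect "<major>:<minor> 10485760"
cat "$PODCG"/blkio.throttle.write_bps_device

# On cgroup v2 systems the limits are consolidated into io.max instead.
cat /sys/fs/cgroup/kubepods/burstable/pod0b0e005c-39ec-4719-bbfe-78aadbc3e4ad/io.max

# Inside the pod, exercise the limit with direct IO (same dd command as the entry above).
dd if=/dev/zero of=/var/lib/www/html/out.file bs=1M count=512 oflag=dsync
```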